Basket Drawers|What's In The 1st Drawer|Palettes, Lipsticks & Lipgloss

Hi everybody, I hope you're keeping well, and welcome to today's video. A little while ago I was asked to film a makeup collection; I did one of these about six months ago and haven't done one for a while, so I thought I would do an updated video for you today sharing what's in my little basket drawers. I also thought I would change the format slightly: rather than filming it as one big, super-long video, I'm going to split it up into a little series, maybe called something like Makeup Mondays, and video what's in my basket drawers, in my little organiser and on my table over a few weeks, so that you get a little bit more depth and detail on some of the things I have in my collection. I hope you enjoy it, and if you'd like to see what I have in the first of my basket drawers, please keep on watching.

Okay, so here's the first of the drawers. I'm just going to pull it out and tuck it over to the window so I can make the most of the natural light. In this drawer there's a mixture of eyeshadow palettes, some autumn and winter lipsticks, and I think some autumn and winter lip glosses as well, or there might be a little bit of a mix in there, I'm not quite sure. I do swap and change my lipsticks and lip glosses around when spring and summer come, and then again moving into autumn and winter. Apologies as well if you hear any background noise: although it's dull and cloudy, it's quite a humid day today, so I've got the windows open. I hope it's not too distracting for you.

I'll start with this palette. I will show everything that I have in here in a little more depth and detail, but I won't actually do any swatches, as that would take quite a long time. This is the first palette, and it's the W7 In the Night smoky shades eye colour palette. Inside, the shades are really, really pretty, and I think it also has the names on the back. I haven't actually tried this one yet, but I do like purple and green on the eye, so I look forward to trying it.

The next one is a little Urban Decay palette, and I really like the mesh detail on the front; I think this is called the Deluxe palette. If I just quickly open it to show you the colours: there they are, really pretty, quite bright for me, but lovely for when I'm feeling a little bit brave with colour. Then I have this one, which is quite a recent purchase, and this is the MAC Amber Times Nine eyeshadow palette; these are just lovely colours, and I really like the golds in here. Then I have this one from Cargo, and this is the Vintage Escape eyeshadow palette; these are lovely autumn and winter colours, so I can't wait to use it nearer the time. There are the colours, and you have a little mirror there as well; it also comes with a little double-ended applicator. I then have another W7 palette, and this is In the Nude, natural nudes. Again, I think it has all the colours on the back, and if I just quickly open it up... there we go. I think this is a dupe, an alternative, for the Naked 3, and again it's really, really pretty. I next have this palette, and I think I got

this from eBay, and it's the JC Nadia Pariss Cosmetics 26-colour shadow and blush combo. I'll just open it to show you, and there are the colours. I have used this one, and they are really nice, very blendable and very pigmented as well, and I just love the blushes here at the bottom, especially this one for the autumn/winter. Then these are also just lovely for the spring and summer, although they're nice all year round really; these two are nice for the autumn/winter, and this one, I just love this plummy kind of colour. So I have that one as well, which is really nice, and it wasn't that expensive; I think it was only about six pounds, so well worth picking up if you can find it online.

Then I have this one from Makeup Revolution, and this is the Awesome eyeshadow collection in the nudes and smoky shades. Sorry, it's a little bit... you'll see, I'll just open it up... I can't open it... there we go. This has a mixture of mattes and shimmers in here, really lovely, and most of the colours you'd probably ever need, really. I just love these colours here, which are lovely nudie kind of colours for me, the beiges and browns, and then this lovely pop of colour here on the right-hand side, you know, to create some really colourful and dramatic looks. I think these are a nice range of colours for daytime and night-time, so you can create some really nice, natural eyes or nude eyes, and then in the evening you could move into something really dark and quite smoky. So that's really lovely. I think this was about twelve pounds, but when I got it, it was actually reduced, it was on offer, and I think I picked it up for probably eight pounds or something like that; again, a really nice eyeshadow palette.

The next one I have in here is from I Heart Makeup, and this is the Naked Chocolate palette, which is really lovely; you still have your little cellophane there, and there are the colours, and this is just really pretty as well, I really like that. Then I also have this one, again from I Heart Makeup, and this is the I Heart Chocolate; I have used this one, and it's lovely as well, really nice, blendable, very pigmented, and again you can create some really pretty looks with it, so I really like that. Whoops, I'll just move that; I nearly dropped everything there.

Then I have this one, which is the Pulp Fiction palette by Urban Decay. I'll just open it up, and this is another lovely palette as well. I really like the little quad, you know, the four shades; I'm very much into getting little, small palettes as well, which is nice. So I have that one, and then I have this one, which is a little Clinique palette, and it's called 06 Pink Chocolate. I wore this, I think, for the wedding that we went to recently; this was the one I had on, and these are just lovely: you've got pinky shades and then these lovely brown colours as well, so I really like that.

Then I have this one, which is a No7 palette that I got at Christmas last year; this is the No7 Mini Eye Palette, and it has essential No7 eye colours. It just says: 'This beautiful compact contains a selection of eight popular No7 eyeshadow colours along with a handy double-ended applicator. Create a natural daytime look or night-time smoky eye.' These are really nice as well, but I haven't tried this one; I've got quite a few palettes. I'm still doing a

little challenge to myself to wear a different palette every weekend. I think I've been doing that for over a year now, and I'm still enjoying it, so I might wear one of these this coming weekend. It's a way to go through my stash and see which ones I haven't used yet, so I can rotate them and get the most out of my makeup, which I'm really enjoying. But these are just lovely colours as well, really, really pretty.

I next have this one, and this is the Urban Decay Fun palette. I just love the packaging on here, it's really nice, and there are lovely colours in there; I love this purple one and this golden light teal-coloured one here, really, really pretty.

Then I have some smaller palettes here. I have this one, which is a No7 eyeshadow palette that I got in a swap, and I love this; I've used it several times and it's definitely one of my favourites. There are the colours: some really nice blues, a pink, this beige colour, and then a really nice highlight in there, so I really do like that. Another one I have is this one, which is the Next Natural eyeshadow palette, and this is lovely as well. I love peachy shades, and I went mad at one point trying to get peachy palettes with peachy colours in; this is one I picked up, I think, either from Amazon or eBay, and it's really lovely.

Another one I have is from Smashbox, and this is an eyeshadow quad in Medela. I've kept the box; I don't usually keep the packaging, but I really like this little box, so I've decided to keep it in here. I think this has got a feature shade in; that's what it says on the back. This is just lovely, really, really pretty. Whoops. There are the shades, and it has a lovely little mirror there. You've got a really nice blue colour; I don't think it's actually coming up with the light on camera, but it's a really nice soft blue. They're all shimmery apart from this one, which is a nice peach, and this is like a yellowy kind of colour, which I think would be nice for a highlight. I'll just wipe that one and put it back in there.

I then have this, which is the Makeup Revolution London Ultra Eyeshadows, and this is the Flawless palette. This is one of my favourite palettes and I've used it quite a lot. You have your little plastic cellophane there, and there are the colours, really, really pretty. I have done a little review on this as well, which, if I remember, I will link in the description in case you're interested; I really do like this one. I've also done, I think, an autumn get-ready-with-me with this palette, which I'll also leave a link to in the description below; if I do forget, please let me know and I will pop it in. But I love this; it's definitely one of my favourite palettes.

I then have this one, which is from theBalm, and it's the Nude 'tude palette. There are the colours on the back, and I'll just quickly show you. I've kept the packaging of this one, as I really do like it; I don't often keep packaging. There are the colours. Mine is the version with just the bedroom scene, not the ladies, on it. I think I picked this up from Feelunique at the time, and I can't remember how much it was now, but it was a little bit cheaper than the one with the ladies on because it was just the bedroom scene, which I didn't mind too much, as it was the shadows I really wanted. I really do like that one, it's really pretty.

I then have this one, which has been another favourite, and this is the one from Accessorize; it's called Musk Red, and I actually won this in a giveaway

and I really do like that; I've used it quite a few times. I don't know if I did a video on this or not, I'm not quite sure, but it is really lovely. It's a shame, as I don't think you can get it now, but you could probably still find it online, on eBay or Amazon. It's such a lovely palette: you've got your eyeshadows here, and then you've got three blushes, a bronzer and a powder, and it's just stunning. I really do like that one and have used it a lot.

Next I have an e.l.f. eyeshadow palette. This one I haven't used quite so much; I picked it up quite a while ago, probably a year or two ago, in TK Maxx, but there are really nice colours in here: you've got a mixture of nudie colours, and then a pop of colour with the pinks, the purple and this lovely green here, which is really pretty. So I have that one.

Then I just have some little ones here. I have this little Urban Decay one, I think it's called the Rollergirl palette, and this is lovely as well; I've used it a few times. You've got some nice light brown kind of shades here, quite shimmery, and then this lovely vibrant pink, almost fuchsia, colour, which is lovely too. Then I have this one by Benefit, and this is the Big Beautiful Eyes contour kit. I'll just open it up to show you; I have used this a few times, so it's a little bit messy, but I really do like it as well. Then I have this little quad, I think from Clinique, and it's the All About Shadow quad; I think the shade name has 'martini' in it. This is just lovely as well. There are the shades, really, really pretty: a peachy shade, then a nice highlight, then a bronzy kind of colour, and then a really nice brown in there, which is lovely, so I love that.

Then I have another little Urban Decay palette, and I think this is the Mini Deluxe. I'll just open it up; I don't know if you can hear the thunder outside, it's actually thundering a little bit. This is, I think, the mini Deluxe palette, and in here you've got a sparkly one, which is Oil Slick; Stray Dog, which is a really nice goldie colour, a bronzy gold; Midnight Cowboy; and then Flipside. Really pretty colours in there as well.

Then I have this one from e.l.f., and this is the e.l.f. Beauty Book Bronzed Eye set, and I have used this quite a few times as well. In here you get six eyeshadows, one primer, one eyeliner and one eyeshadow applicator. Sorry, it's a little bit messy. I wasn't so keen on this one when I first got it, and I was actually going to give it away to a friend, but in the end I kept it, as I really like the gold and browny shades, and the more I used it, the more I liked it, so I decided to keep it. Those are the colours in there.

And then the last palette in here is one that I got from TK Maxx, and this is by Lauren Luke; it's My Fierce Violets, and it just tells you what the palette contains on the back. I'll just open it up; I have used this quite a few times too. You get two lip colours here, three eyeshadows, a cream eyeliner, a blush, and then you have a shadow primer in there as well, so it's really nice, an all-in-one kind of palette. I really do like that, really lovely.

So that's everything palette-wise in here. Then, moving on to the little box in here, I have some lipsticks and some lip glosses. I've actually got some pinky colours in here which shouldn't be in here; I probably put them in because I didn't have any room for them on my little table. Mainly it should be my autumn and winter lip glosses in this little section, but I do have some pinky ones in there as well. I'll start with this one: this is a Rimmel lipstick in 161 Starstruck, one of my favourites, and this is just a lovely

pinky-red colour, really, really pretty. I have a Revlon one, and this is one of my favourites as well in the autumn/winter: this is Cherries in the Snow, and it's a lovely red, like a pinky red. I have an Outdoor Girl lipstick; this is a lovely purple, although when you put it on your lips it's not quite as dark as it looks in the bullet, but it is lovely. I have a Boots Natural Collection one, a Moisture Shine lipstick in Heather, and this is lovely, more of a mauve kind of colour; it's looking brown when I look at it off camera, but on camera it looks more mauve. It's really pretty, a lovely lip colour. I have this one from Avon, from the Color Trend range; I've had this for years, and I don't think they do this kind of packaging now, it's the old packaging. This is called Sparkle Shine, and it's a lovely pinky-red lipstick as well. I'm sorry there are no swatches, only I don't want the video to be super long, and I think it's going to be quite long anyway, so apologies for that.

This one is from Nivea, and it's a chocolate shade, just a lovely light brown colour. Looking at it off camera it's like a brown, but on camera I'm noticing it looks quite like a purpley kind of colour in the viewfinder; it's really lovely, a very vampy kind of colour. Then I have this one from Laura Page, and this is called Red Passion, a lovely pinky red, really lovely colours for the autumn and winter. Then I have this one from CoverGirl, called Hot Passion; this is a lovely red as well, like a true red, really, really pretty. Then I have this one from Lacura, the beauty range that you can buy in Aldi; sorry about that, I nearly dropped it there. So this is the Lacura lipstick that you can get in Aldi, and it's called 150 Very Cherry, a really nice dark reddish colour. Then I have a W7 lipstick, and this is called Chestnut; this is just a lovely deep chestnut colour, really, really pretty.

I then have another Laura Page lipstick, and this is in Plum Spice; I've actually got two of these Plum Spices. This is a lovely purpley kind of colour, really, really nice. I then have another Nivea lipstick, and this is in Mauve, a lovely purple as well; it's looking a little bit darker on camera, it's more of a lighter purple, but it's still really, really pretty. I then have a lipstick from Appeal Cosmetics called Red Light District, which I got in some friend mail from America, and this is gorgeous as well, a really nice red colour, as I do like my red lipsticks. Then I have a Rimmel Moisture Renew; this one shouldn't really be in here, although it is quite a dark pink, so that's probably why it is. This is called Vintage Pink, and it's a lovely pinky colour which I do tend to wear in the autumn/winter. Then I have this one from Almay, called Plum, and this is another really dark, browny-coloured lipstick, really pretty. And I have two NYC lip balms; the first

one is Garnet Gala, and that's the colour there, a really nice pinky kind of colour, with a little apple in the middle. These are really nice and quite moisturising. Then I have this one, which is Applelicious Pink; my camera is not focusing again... there we go... and that's the colour there, like a reddish kind of colour, really, really pretty. Then I have a Wet n Wild lipstick in Wine Room, and this is like a dusky pink colour. Then I have a MAC lipstick, and this is Girl About Town, just a lovely fuchsia-pink colour; I really do like that colour. I did do a little dupe/alternative video for this, and this is the Revlon lipstick that I think is a really nice alternative to the MAC one; it's called Fuchsia, and I've worn it quite a few times (it's worn down on one side), and it is really lovely.

Another favourite lipstick in the autumn/winter is this one by Kate Moss for Rimmel, and it's the 107; that's the colour there, a really lovely dark, plummy colour. I then have an Avon lipstick, another lovely one, a plum shade; this is a really dark one. I think I've worn it once, and I'm not too sure whether it suits me or not, but I keep trying. I then have another W7 lipstick called Forever Red, just a nice deep red colour, really, really pretty. Then I have another Nivea lipstick called Raspberry; this is a nice one as well, with a nice sheen, a shimmer, to it. Then I have one from Jordana called Bahama Bronze, and this is lovely as well, really, really pretty. The last lipstick I have in here is from Bourjois, and it's called Vanille, I think, I'm not quite sure, but it's just a nice dark pinky kind of colour. So that's all my lipsticks.

I'll just quickly move on to the lip glosses in here. The first I have is an Almay lip gloss, a nice pink, and this is called 500 Pink, which is really pretty. I have a Rimmel gloss, and this is called Chill Out, a nice clear one. I have a Barry M one, and there's no name on this; I think it might have come in a little gift set quite a while back. It's just a pink with some little sparkles running through it. This one is from Claire's, and it has some lovely pink sparkles running through it; I don't think this has a name either. I have this one from Milani, which I would say is like a rose-gold kind of colour; it doesn't have a name either, I don't think, just some pretty little sparkles in there. I have this one from Collection 2000, in the old packaging; it's one of the Lock 'n' Hold lip glosses, and it's called Body Pop. And I have this one from Zara, and this is one of my favourite lip glosses, I really do like it; oh, it does have a name, it's called Deja Vu. My camera is not focusing, but it is really lovely on the lips. Then I have this one from Benefit, which I got in a little set; this is Dandelion. I have this one from Lauren Luke, a really nice lip gloss again, that I got in a little set; it might have actually come with the palette, I think it did. So I got

that one. I have this one, which I wore a couple of months ago, and it's one of the L'Oréal splash ones. I'll just quickly show you its little doe-foot applicator; it really is a lovely pinky colour, really pretty, and I love the smell of these as well. This one is in Rome. I have this lip gloss from Wet n Wild, and it's called Untamed, with some lovely sparkles there. I have this one, and I don't know what it's actually called, as it's all rubbed off now; it's quite old and quite sheer, and it smells really sweet. It's a lovely lip gloss, but I can't remember where I got it from. I then think I have a couple of Nivea ones somewhere in here, but I have this Nivea wand with SPF 15, and this is called Mango. I have this Rimmel lip gloss, which is a really pretty rose shade. And I have this one, and I can't remember where it's from; it's just a lovely pink colour, which is nice.

I have this one as well, which is a favourite, and this is from Smashbox; it's called Illume, I think, and I got it in some friend mail, and I love it. I like to wear it over these two lipsticks here, which are the Color Sensational ones: this one is Coffee Craze and this one is Tinted Taupe, and I love it over both of those. Then I have an Avon lip gloss with some light pink sparkles in, and this is called Delicious, and it's lovely. And I have two NYC lip glosses somewhere... I can't find them... oh, there they are. Wait, that one shouldn't be in there, that's a coral one. These are the NYC Liquid Lip Shine, and I love both of these: this round one is called Honey on the Hudson, and this pinky red is called Rockefeller Red. They are really, really nice, and they're non-sticky as well; I absolutely love them both. I have this one as well, in the old packaging from Collection 2000, and it's called Lollipop, which is lovely. I have this one too, just a nice lip gloss.

Then I have a Rimmel one, sorry, a Revlon one, in Midnight Swirl, a really nice dark, purpley colour; it's actually called Currant Affair, and it's just got some lovely sparkles in there, really pretty. Then I have this one, which is a Rimmel Moisture Renew, and this is called Pink Spa, so another lovely pink colour. These little ones are from Sleek, and I can't remember which online website I picked them up from, but this is the Sleek lip gloss in Magic Strawberry, and then I have In Your Dreams, and then Sweet 16, which is really pretty. I have an Urban Decay lip gloss that came with a little set. Then I have this one, which I haven't used yet, and it's the Revlon lip gloss in Coral Reef, and that is stunning, really lovely; I have had another one of these before, and this is a backup, so I might want to leave it out on my dressing table, actually. I then have this, which is a Rimmel Volume Boost lip-plumping gloss, and this is in Seduce, a nice taupey, browny colour, lovely stuff. Then I have this one from YSL, and it's number 9, really pretty; it's another doe-foot applicator with a lovely red colour, very similar to the L'Oréal splash one

but yeah, I love that, it's such a lovely red. Then, excuse that one, I have two of the Rimmel Apocalips: this one is in Big Bang and this one is in Galaxy, a lovely red and a mauvey colour. I then have this Rimmel Provocalips, and this one is in Kiss Me You Fool, and it's gorgeous; I highly recommend these if you haven't tried them. You get your colour at one end and then your balm at the other, and they are so long-wearing; I really do like them, they last for ages. I then have another double-ended gloss, where you have your colour at one end and your gloss at the other, and this is by Nivea. Then I have a lip gloss in here from Barry M; this is a limited edition, just a clear gloss with silver sparkles in it, and I got it in an Impulse set one Christmas. Then I have the MUA Makeup Academy Lip Boom, so you have your lipstick at one end, which is really dark and lovely, and then you have your little lip gloss there on the other end; these smell really sweet, like toffee, and I really do like it. I then have a little lip whip from Inika called Apricot, just a lovely bronzy colour, and then the last one in here is another Nivea lip gloss in Natural, and that's that.

So that's everything; I hope you've enjoyed having a peep in the first of my little basket drawers, and I will see you again in another Makeup Monday next week. Thank you for watching, take care, and I'll see you again very soon. Bye!


My Foundation Collection + Favourites: MAC, NARS, etc.

Hey guys, I hope everyone's well. As you can tell by the title of the video, today I'm finally going to do my foundation collection and favourites video. It's going to feature all the different types of foundations I have, and I'm obviously going to dwell on the ones that are my favourites or that I love. I hope this video helps someone if you're looking to purchase any of the foundations I'm going to talk about.

I often get asked, 'What is the best foundation to use, the best foundation in the world?' and my answer normally is that there's no such thing as the best foundation, because people wear foundations for different reasons. What one person wants a foundation to do can be totally different to what another person may want from their foundation, or why they wear it at all. It all comes down to why you're wearing foundation, what your skin type is, and what finish you want: some are matte finish, some are dewy; some are light, some are high coverage, some low coverage. There are so many different things about foundations that you have to bear in mind when choosing one that will suit you best. With such a wide range of different skin tones and skin colours, sometimes it can be hard and quite tricky choosing the right shade to fit your skin tone. Normally I always say it's best to go for a tester: if you go to the shop, for instance MAC, and you're not sure what colour to get, do ask for a tester. That way you can take it home, go outside in natural light, and see which shade suits you best, instead of buying a foundation, wearing it several times, then taking pictures and realising it's not your colour.

The first foundations I'm going to show you are actually the MAC foundations, because I think those are the most talked-about makeup products, and MAC is what comes to mind anyway when I think of makeup. So I'm going to start with my MAC ones and then go on to different brands, and so on and so forth. I just wanted to say this before I start talking to you about MAC: dear MAC workers, all dark-skinned ladies are not an NW45. I feel like every single time you walk into MAC, if you're a dark-skinned lady, they just automatically throw that shade at you, and it's like, how can me and my friend, who is like three shades lighter than me, both be an NW45? It's not possible. Dark-skinned people come in different shades; we're not all one colour. Am I the only one that's experienced this? If you have experienced MAC workers always giving all of us the same shade, leave your angry experiences down below and let me know what happened.

But anyway, the first foundation I'm going to show you, which was actually my first ever foundation purchase (and this is the same bottle, it's not finished), is the MAC Studio Fix, and this, unsurprisingly, is in the shade NW45. I think it was about twenty-something pounds, I'm not too sure; I will link it down below. It did not come with this pump on top; that has to be bought separately in order to use it, and if not you're going to have to just pour it onto your palm, which can be quite messy and unhygienic, especially if you're going to be using the foundation on different people. I would consider this more or less a matte finish, and it is medium to high coverage. If you have blemishes that you want a foundation to cover, this would be good, because it will hide anything, I think. Having said all this, I know it's described as oil-free, but during the day I found my face looked like a sweat box; it just looked oily, and after a while I thought it oxidised on me. I found my face went quite orange, and you could obviously see I was wearing foundation, which I don't like in a foundation. However, with me personally, I just found that Studio Fix made my face look quite oily, so you should definitely have your pressed powder with you if you do want to use this foundation. I know some people have complained that it broke out their skin; everyone is different, but with me personally it did not break out my skin. The next foundation I have from MAC is actually called the Pro Longwear foundation, and this one, as you can see, comes with the pump, and this one

is in the shade NC50. To be honest, I feel like an NC50 looks better on my skin than the NW45, so the colour I always tend to get is NC50. The Pro Longwear is quite creamy in comparison to Studio Fix. I personally would say this is high coverage; it's the type of foundation where you can feel that you're wearing foundation, it feels quite heavy. However, it's good at covering scars and blemishes. I also felt the Pro Longwear tended to last the whole day: how my makeup looked in the morning was exactly how it looked when I was about to sleep, before actually taking my makeup off. So this is really good if you're looking for a high-coverage foundation that's going to cover everything and give a flawless finish. I will say the Pro Longwear is easy to apply, but as I said it's quite thick, so if you don't like that feeling of foundation, I would say just stay away from it.

The next foundation by MAC is called Face and Body. Now, this is a water-based foundation, so it's very, very light, very low coverage, and it has a dewy finish, so it looks like you're hardly wearing any makeup; it looks very young and fresh on your skin. If you're a person that likes foundation that feels like you're not wearing anything, that just looks natural, where you don't want it to cover blemishes but just to add something to your skin, I would say the Face and Body. However, the thing that I don't like about it is that I find it hard to find the right shade, because the shades are limited; they don't do an NW or NC range. This one I was given is the C7; the shade above was way too dark for me, and this one is a tiny bit light for me, so I have to put on powder afterwards for it to look okay. You can probably hear it; it just feels like you're putting water on your face. Any time I wear this I actually forget I'm even wearing any makeup, and then I jump into bed and I'm like, oh my god, I have foundation on, time to take it off. So this is a really nice foundation, I do like it; however, as I said, the colours are limited, so do make sure you test it out and see that you're going to get the right shade. Most people don't recommend this for oily skin, but personally I think after putting powder on top it can be fine. And it's quite a big tub compared to the rest, so you're getting quite a lot; it would last you a really, really long time.

The next foundation is called the Matchmaster foundation, and this is also by MAC. Now, this foundation bottle is quite big; it looks similar to the Pro Longwear container, but is bigger. This foundation blends really, really nicely on the skin and has a flawless finish, and I'd say it's medium to buildable coverage. I was given the wrong shade; this is way too dark for me. I was given the shade 8.5, but I've been told I'm an 8.0, so I would say with this foundation make sure you check that you get the right shade, because it can be quite tricky, as some of them are quite similar; this is not an NW/NC kind of foundation. I'd say this lasts a really long time, and it's high coverage, so it is a foundation that will cover blemishes if needed. Also, it does feel like you're wearing foundation, the same as the Pro Longwear, so it felt quite heavy on my skin; that's the only thing I would say I didn't like about it. However, if you do like that and that's not an issue for you, then also go for this. It's quite big, so it would obviously last a long time. I think those are my thoughts on it.

The last MAC foundation that I have is the Studio Tech foundation, which actually comes in a compact case like this, and this is in the shade NC50. Originally I had the NW43, but it was way too dark. This is what it looks like: it comes with a mirror on top, and it comes with a sponge and
then obviously the foundation in the middle with this one it’s really creamy and definitely a high coverage if you don’t like liquid foundation T once the foundation is high Qadri which I will last you a long time I would say it’s do tech however it does feel quite heavy so putting this on it glides on really smoothly when I wore this it did last the whole day it did cover blemishes as well and it was okay for my skinny bad oily skin I found it okay most of the times was using it I did often spray my sponge with fix+ whenever I thought they was too creamy

and did use that same sponge on my face. So yeah, this is a nice foundation if you do want to get a compact one.

I often get asked about MAC foundations: what is an NW, and what is an NC? From my knowledge, NC is for those that have cool tones and NW for those with warm tones. How I've translated it, or how I best understand it on people of colour, is that NC suits people with yellow undertones and NW suits people with deeper, more orange undertones; that's how I describe it personally. When I take off my makeup I seem to have a yellow tone, and that's why I think the NC looks better on my skin than the NW, where sometimes when the NWs oxidise they tend to look quite orange and cakey on my face. So those are my MAC foundations.

The next foundation I'm going to show you is the NARS Sheer Matte foundation. I've been told it's for oily skin, whereas the Sheer Glow is the one for those with dry skin. I am in the shade New Orleans, and that's actually what I'm wearing on my face today. I would describe it as medium to high coverage; it's good at covering blemishes, it lasts the whole day, and it's easy to apply and get onto your face. It's really nice, definitely one of my top three. It is pricey, but it's as nice as it is pricey. What I don't like about this foundation is that it doesn't come with a pump, so you're going to have to pour it onto your palm. I know you can purchase this from Selfridges or you can get it online, but I would say go to places like Selfridges where they stock it and test out different shades to get the right foundation for your skin. It's what I'm wearing now: it gives me a nice dewy finish, I don't feel oily during the day, which I do like, and it doesn't oxidise on my face as the day goes by, so it's really nice. I definitely do like this foundation.

Now moving on to talk to you about Make Up For Ever. This first one is called the Make Up For Ever HD foundation, also known as the High Definition foundation. I bought it in the shade 178, which is slightly too dark for me, so for my skin I should have got the shade 177. The HD foundation feels quite moisturising on the skin, and I would describe it as a medium to high coverage foundation. The one thing I will add is that when I used this, after a while my face felt oily, so I had to use my powder to control the oils. However, it does have a nice, dewy-looking finish on the skin. It comes with a pump, which you can see when you take the cap off, which obviously is good. In the UK it's quite hard to get this foundation; I'll put the places where you can purchase it in the UK down below this video. I mean, it's nice, but out of the Make Up For Ever foundations that I've tried I prefer the other one, the Mat Velvet, in comparison to this one, because I just felt oily when I was using this one. However, it does have a nice dewy finish when you apply it, and it's not too heavy on the skin.

The next one I wanted to show you is also by Make Up For Ever, and this is called the Mat Velvet, and I am in the shade 75. As most of you already know, this is my favourite foundation in the whole wide world. Simply put, I just like the way it finishes on my skin, giving a natural-looking finish, and it's not too heavy; I'm not one that likes foundations that feel really heavy on the skin, and with this one I don't have that feeling. It spreads evenly on my skin. The one thing about this is that while applying it to your face it does dry quickly, so it's best, I would say, to put it on your palm and do one side of your face first and then the other, or be quite fast with applying it; otherwise it will dry and be hard to blend out quickly, as it's oil-free. All in all, I would say it's really nice: the colour is not too orange and it's not too yellow. It's hard to purchase this in the UK, so it's best you do try and find somewhere that sells it, go in, and try it on. There's not anything else that I can say; it's just my number one, I just love it, and I just need to stock up on it.

The next foundation is by Revlon. It actually came with a pump top, which I can't find, but you can see it comes with a pump. I got the shade Cappuccino, and it's called the Revlon PhotoReady foundation. It's quite good, a nice foundation; I would describe it as medium to full coverage

which can be buildable. I find this quite similar to Studio Fix. It looks nice on the skin; however, after a while it makes me look quite orange, and I don't like that. Obviously, as a foundation in general it's quite nice, and it does cover up blemishes, if that's your main purpose for choosing a foundation. However, I would say the colours can be a bit tricky: it doesn't have a wide range of colours, at least in my experience, so do be sure to test it out so that you get the right colour if you're going to get the Revlon PhotoReady. I have this in the shade Cappuccino, which I'd say is a little bit dark for me, but I just mix it with a lighter concealer for it to look more wearable. Also, I can feel that I'm wearing some kind of foundation with this, because obviously it's not light. So those are my thoughts about the Revlon.

The next one is actually a tinted moisturiser, and this is by e.l.f. It just says Tinted Moisturizer SPF 20, and the shade is Espresso; I just opted for the darkest one. The thing about tinted moisturisers is that they're best for those that have really good, really clear skin; if you have acne scars and blemishes, a tinted moisturiser will not cover them up. With this one in particular, I found that if I did not shake it, it seemed to be a bit too pasty and quite sticky, so I didn't like the feeling of it. When I mixed it with a moisturiser it felt better, but without a moisturiser it was quite sticky, so I'm not too fond of this; for me personally it was a no-no. However, it is cheap and affordable, so if you just want something for the summer to give you an even skin tone, maybe do check it out, but in general I wasn't too impressed with this tinted moisturiser.

The last one is from Iman, and this is called the Second to None Luminous foundation; I was in the shade Earth 2. It comes with a mirror, the foundation, and then a sponge, so this is a powder foundation. With this one I was really excited, because I had seen reviews of how it applied on some models; however, I did not like this on my skin, because it had too many sparkly, shiny particles, so it just looked like I had dipped my face in glitter or something. I think I would like to use it as a highlighter, not so much as a foundation, because if you have scars or blemishes on your skin, all the shimmer is going to do is bring attention and shine light onto those imperfections. So I would not suggest getting this as a foundation, and I wouldn't really recommend it in general, because like I said, it just looks like you've put glitter on your skin; it's too sparkly.

As for my top three foundations, it has to be these three: number one would be the Make Up For Ever foundation, number two would be the Sheer Matte foundation, and my third one would have to be the Matchmaster by MAC. So those are my top three foundations.

Lastly, with all my foundations, as you know I have oily skin, and I always like to put on a powder afterwards for the foundation to set. I always use the Mineralize Skinfinish by MAC, and I am in the shade Deep Dark; as you can see, I've hit pan. Your skin tone determines the shade you get; some people have told me they think I'm the shade Dark, but I just opted for Deep Dark, as I normally always reach for the darkest shades when it comes to foundations. The second one that I use is by Inglot; I don't know what colour it is, but once again I opted for the darkest shade and just use that. And these are the two things I use to apply my foundation: either the Beautyblender or the e.l.f. powder brush.

I hope you found this helpful and that I've broken it down so it's easy for you to understand, and hopefully I've made it easier if you're looking for a foundation, or if you like any of these foundations and wanted to know my opinions and thoughts about them. Lastly, I want to say: if any company is watching this and you're a high-end brand that makes foundations, please do consider making shades for us darker-skinned ladies too, because we do exist, and I feel it's quite bizarre that sometimes you don't consider making shades for us. Sometimes you just want to try the different foundations that everyone is raving about, and then you go in and you're disappointed because they don't even

have your skin tone, or the darkest shade is still way too light for someone like me. So I hope you found this video helpful, and if you did find it helpful and you did like it, give it a thumbs up as always, and I shall speak to you guys later. I hope you have a good day.


Makeup Declutter! | Eyeshadow singles, bases & palettes!

hey guys, it's Jenna, what's up, and welcome back to my channel. Today I am doing another one of my decluttering series videos. I am taking on this drawer, which is all of my larger palettes, as well as this one right here, which pretty much consists of eyeshadow bases, smaller palettes, and some single eyeshadows. So let's get into it without further ado – does that make any sense? I don't really know.

I'm gonna start with my single eyeshadows; some of these are bases as well. These are the Color Tattoo ones, the cream ones, and I don't know if these are any good anymore... yeah, it doesn't look like they're drying out that much. So I have – what is it called – Barely Branded, and I have the bronze one, this one... okay, and that should be okay. Oh yeah, still good, so that's Bad to the Bronze. And then I have Inked in Pink, which, I don't know about this one – hmm, you can see that it's coming away from the edges a little bit, but it's still creamy in the middle, so I will keep it and make sure I get to using it up. Then I have two of these Batter Up cream eyeshadows from theBalm; I got these on a really, really good sale at Shoppers Drug Mart and they're still really, really nice. I don't know what colour this is... this is Base Hit, and that one's Home Plate Kate. Oh, Home Plate Kate and Base Hit, okay, so I have those two. I didn't actually check this one... oh, that's pretty, yep, that's still good. And then I have this little Smashbox Full Exposure; it has two of the eyeshadows from the palette, and I just got this in like a 500-point perk or something, so I'll keep it, they're pretty. We have a NYX eyeshadow; this is just like a grey colour, it's really pretty. All the way back here I have a Pacifica eyeshadow; I actually need to start using this, because this is just stunning, just stunning. I honestly think I'll be getting rid of these three; these are loose eyeshadows, which, again, if you've seen my other videos, you know that I'm not really liking the loose pigment kind of things. I just have like a really dark purple and a blue, so I'm gonna take those out of there.

Next I have this little theBalm Nude 'tude single, and this is Flirty, I think that's the colour; I got this in an Ipsy bag, and it's a pretty colour, just an interesting colour... yeah, we'll keep it, it's pretty. This one is absolutely gorgeous: this is a Teeny Beauty eyeshadow, and I got this in an Ipsy bag again, and this is literally stunning, like so, so pigmented – you can see it right there, that is just stunning. Okay, keeping that; I need to start using it. This is an Urban Decay single eyeshadow; I got this in the Sephora Favorites, and it's in Sin, which is stunning. I have Sin in a few of the palettes – I think it's actually just in the first one – so I honestly use that all the time, but once I run out of it I'll have this single eyeshadow. And then I have my three Milani eyeshadows that I love; I'm gonna keep those, obviously. They're in Bella Cappuccino, Bella Ivory and Bella Champagne, and they're all gorgeous, so I'm keeping those. I got them in Florida, because I can't seem to find anywhere in Canada that sells Milani, so that kind of sucks, but whatever. I have this Trestique eyeshadow stick, and it's in this really dark brown colour; it's supposed to be used as a cream eyeshadow or a cream base, but I never use it like that, because it's dark brown and I don't want a dark brown base, so I'm just gonna hand that along. I think a lot of the stuff in this front section is gonna be going, by the looks of it, since I don't use it very often. This is a shadow tint, and this is really cool because

it's like a liquid, but it dries as a powder, and it's this gorgeous coppery penny colour, so I'm gonna keep that one, that's just stunning, and I might use it eventually. This is a Pixi loose eyeshadow... yeah, I don't think I'll keep that; it's just not really anything fancy, and I got it in an Ipsy bag, yet again. And then I just have this right here: this is a Mary Kay At Play duo, and it's in a really, really pretty blue colour, so I'm gonna keep that in case I ever want to put blue under my eyes or anything like that.

And then I just have a few double-ended brushes: this is the Urban Decay Naked 2 one, this is just the Naked one, and then this is just a double-sided brush – I think this was from an Ipsy bag, and it's Crown Brush. I don't know where my other two brushes went; I had the ones for the Naked 3 and the Smoky, but I don't know where they went, so whatever. I just have a bunch of samples of the Urban Decay Primer Potion, which I really need to use, plus a full-size of just the original Urban Decay Primer Potion, which I love. This is the LORAC primer; it comes in the LORAC Pro palette. I have this MAC Paint Pot, this is in Painterly, and I'm keeping that. This Mecca Beauty cosmetics one – I think this is their eyeshadow primer – I really liked that, actually; I got it in an Ipsy bag and it was one of my favourite things that I've ever received from Ipsy. And then I just have some of the Urban Decay Primer Potion in this little – what is this called – little plastic container. I got a sample of something in here from Sephora and I decided to put all of it – I think it was the Eden primer potion – in there, just so that it's easier to get at and I can just squeeze all of the packets into one. Tell me down below if you guys know where I can buy these, just empty little containers like this, so that I can put all of my Urban Decay Primer Potion into them; please tell me down below if you know where I can get those, because I really want to put all of it into some of these.

And then lastly, we have all of my little eyeshadow palettes back here. This is one of my most recent things that I got in an Ipsy bag: this is a Nicka K New York quad, which is amazing; I'm gonna put them up here, but I'm not getting rid of them. I have this Revlon PhotoReady eyeshadow palette; I don't really reach for this often, but it's stunning – this colour right here is this gold colour and it's gorgeous, and I really need to reach for it more often because it's pretty. Yeah, I'm gonna keep that, because the eyeshadows in here are really good quality considering they're drugstore – not saying drugstore is bad or anything. I have this little trio from e.l.f.; I've had this forever, like literally, and everything is still pretty good – still very good, actually. They're just really nice, simple matte colours; you could bring this along with you travelling and it would be really, really good, so I'm gonna keep that. This is a palette that I got in my most recent Ipsy bag, and it's Ipsy and NYX together, and as you guys could see if you watched that Ipsy video, I wasn't very impressed with the pigmentation of this palette, so I'm gonna hand it along to someone that might be able to use it more than me, because I have a lot better eyeshadows than those. I have this little quad from Be A Bombshell, and I don't have a lot of eyeshadows in these colours, so I'm gonna keep this, because I really love this teal colour – just wait, it's stunning, just stunning, that is gorgeous – and then this dark bronzy one, oh yes, I love that, I need to use that more, that's so pretty. I've never seen a blue eyeshadow that bright before; that is just gorgeous, definitely keeping that. I got that in an Ipsy bag, by the way. I have this NYC trio here... mmm, they're not amazing, so I'm gonna pass that along; NYC is a very cheap brand, so usually their eyeshadows aren't amazing, but they're still good for the price, obviously. I have a bunch of these – how many do I have, five? Three? Yeah, I have three of these NYC custom compact individual eye kits, and they come with a primer, then four eyeshadows, and this shimmery inner-corner stuff, which is really nice, but I just find myself not reaching for these. I had this one for a really long time

and I loved it and used it all the time; as you guys can see, I completely used up all of that inner-corner shimmer stuff. But I think I just might hand these along to maybe my mom or someone that might be able to use them as well. Next I just have this NYX eyeshadow trio; I had a bunch of these and I actually put most of them into a bigger palette that you guys will see in a second, but I kept this one because I didn't really have a lot of these colours – like, again, that's stunning pigmentation right there. I love this palette; I don't really have a lot of these yellowy golden shades, so I really wanted to keep it just so that I could have some of those shades, and those are stunning, so I'm gonna keep that. This one right here is just stunning – you guys can see right there, like, what? That's amazing. This is a great palette; love this one. Next I have this little Revlon cream eyeshadow thing; I got this for free – I won it in a little raffle thing, and it was in a bag with a bunch of different things – but I have not reached for this whatsoever, so I'm gonna pass that along; it has some interesting colours in it, though. And then lastly is just this little duo; I think this is from Miss Adaro, and I got it as an Ipsy point perk thing. It's one of those baked eyeshadows, and, you know, I might actually... hmm, you can barely even see that... would it work as a highlight, maybe? Ooh, that might actually be a really good highlight. I'm gonna put this in my highlight drawer as a new highlight, cuz that is really pretty. So, from what I'm keeping, I can put all of these back in here – not many little palettes I'm keeping, but there we go. I got rid of actually a lot from this drawer; I'm quite pleased with myself. Boop.

Next, moving into this drawer: these are my larger palettes, and I do have a lot of high-end palettes in here, so I don't think I'll be getting rid of a lot, to be completely honest with you. There might be maybe one palette that I'm gonna get rid of, but let's get into it anyways; I'm sure you guys want to know what palettes I have. I have this here, this is the Boudoir Eyes; this one is really, really nice. I don't grab for it as much as I'd like to, but I think it's really, really pretty and kind of sexy in a way, and I think it would look really nice in the fall; it has some darker colours that look really nice. Then I have this one here, this is the Stila Eyes Are the Window – I think this is the Mind palette – and this one is one of my favourite palettes; I love it so much. This orange here is stunning in the crease; whenever you guys see me with an eye look that has orange in it, this is the palette that I've used, because it's just stunning. The next one I'm definitely not getting rid of – this is literally my favourite palette ever – is the Make Up For Ever Artist eyeshadow palette. This is the neutral one, and the majority of the shades are shimmery satin colours; just this black down here is matte. These are like the best eyeshadows I've ever used in my life; they are so, so pigmented, so, so gorgeous. I really want to show you guys what these look like... they're just, yes. Okay, that's not going anywhere. Next I have this: this is the Stila In the Light palette, and I've had this palette for so long; this is actually the first eyeshadow palette I ever got from Sephora, so that's really exciting. I have the LORAC Pro palette, just the first one; I got this in Florida, and I really like that one as well – it's really light and good for travelling. I have my Kat Von D Monarch palette, and I love that one a lot; I have the Shade + Light eyeshadow palette, love that one; I also have the Naked 2 and 3 – the Naked 1 is over in my everyday drawer, and I love that one; and then I have the Too Faced Semi-Sweet Chocolate Bar palette. Love all of these, definitely keeping

those. This is the NARSissist palette, and I love this one; it has stunning, stunning colours. This is a great palette if you like shimmery colours and getting all dressed up and going out and stuff; it's really, really nice for night looks. I have this here, a little YSL palette; I actually should have this in my other drawer, but I just like keeping it in this drawer because it's real fancy and expensive. That one is actually really, really stunning – obviously, it's YSL. And I have this Naked2 Basics; I was gonna get the first one, but I really like this one more because it has slightly darker colours, and a lot of the time with the first one, because it's really light, you can't even tell that I'm wearing any eyeshadow, so I didn't want to get that one. This is the Maybelline Nudes palette; to be honest, I don't really use this one that often and I'm tempted to get rid of it, but it's the only eyeshadow palette that I have that is drugstore, and it's still really good for drugstore. I just have so many high-end eyeshadow palettes that I really don't reach for this one that often, but I might keep it just to have some drugstore eyeshadows to use in drugstore tutorials and stuff, because if you are on a budget and you want good eyeshadows that are pretty cheap, I would definitely go for the Maybelline The Nudes, or get your hands on the Comfort Zone palette from Wet n Wild – any of the Wet n Wild shadows are really, really good. I have my MAC eyeshadow palette; this is 15 of them, and I love all of these. I will link the video down below that I did of this palette, where I went over all of these eyeshadows; they are all amazing, they're just my top 15 MAC eyeshadows, and I love them all so much. This is another palette that I definitely recommend if you're on a budget: this is the neutral palette from e.l.f. – oops, just hit my camera – this is the neutral palette from e.l.f., and these are gorgeous as well; they have really, really good pigmentation. Some of them are a little bit chalky, but some of them are amazing, so definitely, if you're looking for some good cheaper eyeshadow palettes, the e.l.f. neutrals palette is really good, as well as the Maybelline Nudes. I think I'm gonna keep both of those.

This one right here is what I was talking about before: all of these in the middle are all of my NYX eyeshadows that I depotted, as well as a bunch of Wet n Wild eyeshadows. Like I said, Wet n Wild has really, really good eyeshadows – definitely the Comfort Zone palette, which I believe is all of these right here; I have a few other trios and stuff too, but I'm gonna keep this. These three right here, all of the Wet n Wild shadows and the NYX ones, plus the e.l.f. neutrals and the Maybelline Nudes – if you're looking for drugstore eyeshadows, those right there are a definite must. And then lastly, this is the one that I'm thinking I might just get rid of. I've had this for so, so long, and I got it back when Coastal Scents palettes were a big thing; this is the Mirage palette, and they're stunning, stunning colours – I love all of these colours – I just have so many other eyeshadows that I never grab for this; it's always in the back of my drawer and I never, never grab for it. My sister – um, Helvetia – she'll probably really, really like this, so I'm gonna pass it on to her and see if she wants it. She likes doing eyeshadow and stuff, but she doesn't really have a lot of eyeshadows, so I'm sure if I give this to her she will love it, and all of these colours are really her style as well, so I will definitely pass that on to her. And then these ones can go back in here: I have my Z Palette – I got my Z Palette from Sephora, by the way – and I have my MAC one, and then my two back there. Actually, I'll probably move these up a little bit so you can see everything and I know what I have in my drawers... there we go, move that over, there we go.

So, like I said, I only got rid of one palette, but it was a big palette, and I think I did well even though I only got rid of one. So guys, that was everything for my decluttering of my larger eyeshadow palettes as well as my single eyeshadows and whatnot. I hope you guys enjoyed this video; be sure to stay tuned for my next one, which is going to be all of my lipsticks – I have all of them here, as well as some over there on my desk, so get excited for that one. It'll

be up in a few weeks, and I'm hoping that you guys aren't getting bored of these decluttering videos; I love doing them. And yeah, I hope you guys enjoyed this video; give it a thumbs up if you like my decluttering videos, and I will see you guys in my next video. Bye! Hey guys, it's Jenna, what's up, and welcome back to another Plan With Me Monday. I am so excited for this one – this was last week's, and this has probably been my most favourite spread I've ever done. I


Docker Full Course – Learn Docker in 5 Hours | Docker Tutorial For Beginners | Edureka

Docker is one of the leading containerization tools in today's market. Hi all, I welcome you to this full-course session on Docker, and what follows is a complete crash course on the same. But before we begin, let's take a look at today's agenda. We'll start off with an introduction to Docker, where we'll talk about what Docker is, its components, and its architecture. After that, we'll talk about how to install and set up Docker on a CentOS machine and on Windows. Later on in the session, we'll look into Dockerfiles and commands: we will understand how to create and run a Dockerfile and use various commands. After that, I'll talk about how to use Docker Compose and Docker Swarm; with Compose you'll understand how to run multiple containers to host a single application, and coming to Docker Swarm, you'll understand how to create a cluster to achieve high availability. Moving forward in the session, we'll look into Docker networking, where we will understand the various aspects of Docker networking, and after that I'll talk about dockerizing an application: you'll understand how to dockerize an application, be it an AngularJS application, a microservice application, or a Node.js application. And finally, I'll end this session by talking about the differences between Docker and virtual machines, and also comparing Docker versus Kubernetes. With that, I come to the end of today's agenda. But before we begin, I would like to request all of you to subscribe to our Edureka YouTube channel to get notified daily about the top trending technologies. On that note, let's get started.

Why do we need Docker? This is the most common problem that industries were facing: as you can see, there is a developer who has built an application that works fine in his own environment, but when it reached production there were certain issues with that application. Why does that happen? It happens because of the difference in the computing environment between dev and prod. So I hope you are clear
with the first problem I’ll move forward and we’ll see the second problem them before we proceed with the second problem It is very important for us to understand what our microservices consider a very large application that application is broken down into smaller Services Each of those Services can be termed as microservices or we can put it in another way as well microservices can be considered a small processes that communicates with each other over a network to fulfill one particular goal Let us understand this with an example as you can see that there is an online shopping service application It can be broken Goin Down into smaller micro services like account service product catalog card server and Order server Microsoft is architecture is gaining a lot of popularity nowadays even giants like Facebook and Amazon are adopting micro service architecture There are three major reasons for adopting microservice architecture, or you can say there are three major advantages of using Microsoft’s architecture first There are certain applications which are easier to build and maintain when they are broken down into smaller pieces or smaller Services Second reason is suppose if I want to update a particular software or I want a new technology stack Of my module on one of my so base so I can easily do that because the dependency concerns will be very less when compared to the application as a whole apart from that The third reason is if any of my module of or any of my service goes down, then my whole application remains largely unaffected, so I hope we are clear with what are microservices and what are their advantages? 
So we'll move forward and see what the problems are in adopting this microservice architecture. This is one way of implementing microservice architecture: over here, as you can see, there is a host machine, and on top of that host machine there are multiple virtual machines; each of these virtual machines contains the dependencies for one microservice. So you must be thinking, what is the disadvantage here? The major disadvantage here is that with virtual machines there is a lot of wastage of resources: resources such as RAM, processor, and disk space are not utilized completely by the microservice which is running in these virtual machines. So it is not an ideal way to implement microservice architecture. And I have just given an example of five microservices; what if there are more than five microservices? What if your application is so huge that it requires fifty microservices? At that time, using virtual machines doesn't make sense, because of the wastage of resources. So let us now discuss a better implementation of the microservice problem that we just saw. What is happening here? There's a host machine, and on top of that host machine there's a virtual machine, and on top of that virtual machine there are multiple Docker containers. And each of these Docker containers contains the dependencies for one microservice. So you must be thinking, what is the difference here? Earlier we were using virtual machines; now we are using Docker containers on top of virtual machines. Let me tell you guys, Docker containers are actually lightweight alternatives to virtual machines. What does that mean? With Docker containers, you don't need to pre-allocate any RAM or any disk space,

so a container will take RAM and disk space according to the requirements of the application. All right. Now let us see how Docker solves the problem of not having a consistent computing environment throughout the software delivery lifecycle. Let me tell you, first of all, Docker containers are actually developed by the developers. So now let us see how Docker solves the first problem that we saw, where an application works fine in the development environment but not in production. Docker containers can be used throughout the SDLC in order to provide a consistent computing environment, so the same environment will be present in dev, test, and prod; there won't be any difference in the computing environment. So let us move forward and understand what exactly Docker is. A Docker container does not use a guest operating system; it uses the host operating system. Let us refer to the diagram that is shown: there is the host operating system, and on top of that host operating system there's a Docker engine, and with the help of this Docker engine, Docker containers are formed, and these containers have applications running in them. And the requirements for those applications, such as all the binaries and libraries, are also packaged in the same container. All right, and there can be multiple containers running; as you can see, there are two containers here, one and two. So on top of the host machine is a Docker engine, and on top of the Docker engine there are multiple containers, and each of those containers will have an application running in it, and whatever binaries and libraries are required for that application are also packaged in the same container. So I hope you are clear. Now let us move forward and understand Docker in more detail. Here's a general workflow of Docker, or you can say one way of using Docker. Over here, what is happening is that a developer writes code that defines the application requirements, or the dependencies, in an easy-to-write Dockerfile, and this Dockerfile produces
Docker images. So whatever dependencies are required for a particular application are present inside this image. And what are Docker containers? Docker containers are nothing but the runtime instances of Docker images. This particular image is uploaded onto Docker Hub. Now, what is Docker Hub? Docker Hub is nothing but a Git-like repository for Docker images; it contains public as well as private repositories. So from public repositories you can pull images, and you can upload your own images onto Docker Hub as well. All right, from Docker Hub, various teams such as QA or the production team will pull the image and prepare their own containers, as you can see from the diagram. So what is the major advantage we get through this workflow? Whatever dependencies are required for your application are actually present throughout the software delivery lifecycle. If you can recall the first problem that we saw, where an application works fine in the development environment but when it reaches production it is not working properly: that particular problem is easily resolved with the help of this workflow, because you have the same environment throughout the software delivery lifecycle, be it dev, test, or prod. So I'll move forward, and for a better understanding of Docker we'll see another Docker example. This is another way of using Docker. In the previous example we saw that Docker images were used, and those images were uploaded onto Docker Hub, and from Docker Hub various teams were pulling those images and building their own containers. But Docker images are huge in size and require a lot of network bandwidth, so in order to save that network bandwidth, we use this kind of workflow: over here we use Jenkins servers, or any continuous integration server, to build an environment that contains all the dependencies for a particular application, and that built environment is deployed onto various teams, like testing, staging, and production. So let us move forward
and see what exactly is happening in this particular image. Over here, a developer has written the complex requirements for a microservice in an easy-to-write Dockerfile, and the code is then pushed onto the Git repository. From the GitHub repository, continuous integration servers like Jenkins will pull that code and build an environment that contains all the dependencies for that particular microservice, and that environment is deployed onto testing, staging, and production. So in this way, whatever requirements are there for your microservice are present throughout the software delivery lifecycle. If you can recall the first problem, where an application works fine in dev but does not work in prod: with this workflow we can completely remove that problem, because the requirements for the microservice are present throughout the software delivery lifecycle, and this image also explains how easy it is to implement a microservice architecture using Docker. Now, let us move forward and see how industries are adopting Docker. This is the case study of Indiana University. Before Docker, they were facing many problems, so let us have a look at those problems one by one. The first problem was that they were using custom scripts in order to deploy their applications onto various VMs; this requires a lot of manual steps. And the second problem was that their environment was optimized for legacy Java-based applications,

but their growing environment involves new apps that aren't solely Java-based. So in order to provide the students the best possible experience, they needed to begin modernizing their applications. Let us move forward and see what other problems Indiana University was facing. As we just saw, Indiana University wanted to start modernizing their applications, so for that they wanted to move from a monolithic architecture to a microservice architecture. In the previous slides we also saw that if you want to update a particular technology in one of your microservices, it is easy to do that, because there will be very few dependency constraints when compared to the whole application. Because of that reason they wanted to start modernizing their applications; they wanted to move to a microservice architecture. Let us move forward and see the other problems they were facing. Indiana University also needed security for their sensitive student data, such as SSNs and student health care data. So there were four major problems that they were facing before Docker. Now, let us see how they implemented Docker to solve all these problems. The solution to all these problems was Docker Datacenter, and Docker Datacenter has various components, which are there in front of your screen: first is Universal Control Plane, then come LDAP, Swarm, CS Engine, and finally Docker Trusted Registry. Now, let us move forward and see how they implemented Docker Datacenter in their infrastructure. This is the workflow of how Indiana University adopted Docker Datacenter. This is Docker Trusted Registry; it is nothing but the storage for all Docker images, and each of those images contains the dependencies for one microservice. As we saw, Indiana University wanted to move from a monolithic architecture to a microservice architecture, so because of that reason these Docker images contain the dependencies for one particular microservice, but not the
whole application. All right, after that comes Universal Control Plane. It is used to deploy services onto various hosts with the help of the Docker images that are stored in the Docker Trusted Registry. So the IT ops team can manage their entire infrastructure from one single place with the help of the Universal Control Plane web user interface. They can actually use it to provision Docker-installed software on various hosts and then deploy applications without doing a lot of manual steps. As we saw in the previous slides, Indiana University was earlier using custom scripts to deploy their applications onto VMs, which requires a lot of manual steps; that problem is completely removed here. When we talk about security, the role-based access controls within Docker Datacenter allowed Indiana University to define levels of access for various teams. For example, they can provide read-only access to Docker containers for the production team, and at the same time they can provide read and write access to the dev team. So I hope we are all clear with how Indiana University adopted Docker Datacenter. Now we will move forward and see the various Docker components. First is the Docker registry. A Docker registry is nothing but the storage for all your Docker images; your images can be stored either in public repositories or in private repositories. These repositories can be present locally, or they can be present on the cloud. Docker provides a cloud-hosted service called Docker Hub. Docker Hub has public as well as private repositories: from public repositories you can pull an image and prepare your own containers, and at the same time you can write an image and upload it onto Docker Hub. You can upload it into your private repository, or you can upload it to a public repository as well; that is totally up to you. So for a better understanding of Docker Hub, let me just show you how it looks. This is how Docker Hub looks: first you need to sign in with your own login
credentials. After that you will see a page like this, which says welcome to Docker Hub. Over here, as you can see, there is an option to create a repository, where you can create your own public or private repositories and upload images, and at the same time there's an option called explore repositories; this contains all the repositories which are available publicly. So let us go ahead and explore some of the publicly available repositories. We have repositories for nginx, redis, ubuntu; then we have Docker registry, alpine, mongo, mysql, swarm. What I'll do is show you the CentOS repository, the repository which contains the CentOS image. Later in the session, I'll actually pull a CentOS image from Docker Hub. Now, let us move forward and see what Docker images and containers are. Docker images are nothing but read-only templates that are used to create containers; these Docker images contain all the dependencies for a particular application or microservice. You can create your own image and upload it onto Docker Hub, and at the same time you can also pull the images which are available in the public repositories in Docker Hub. Let us move forward and see what Docker containers are. Docker containers are nothing but the runtime instances of Docker images; a container contains everything that is required to run an application

or a microservice, and at the same time it is also possible that more than one image is required to create one container. All right, so for a better understanding of Docker images and Docker containers, what I'll do on my Ubuntu box is pull a CentOS image, and I'll run a CentOS container from it. So let us move forward and first install Docker on my Ubuntu box. Guys, this is my Ubuntu box. Over here, first I'll update the packages; for that I'll type sudo apt-get update. It's asking for the password... it is done now. Before installing Docker I need to install the recommended packages; for that I'll type sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual, and here we go. Press Y. So we are done with the prerequisites, so let us go ahead and install Docker. For that I'll type sudo apt-get install docker-engine. So we have successfully installed Docker. If you want to install Docker on CentOS, you can refer to the CentOS Docker installation video. Now we need to start the Docker service; for that I'll type sudo service docker start. It says the job is already running. Now what I will do is pull a CentOS image from Docker Hub, and I will run a CentOS container. For that I will type sudo docker pull and the name of the image, that is, centos. First it will check the local registry for the CentOS image; if it doesn't find it there, then it will go to Docker Hub for the CentOS image and pull the image from there. So we have successfully pulled the CentOS image from Docker Hub. Now I'll run the CentOS container; for that I'll type sudo docker run -it centos (that is the name of the image), and here we go. So we are now in the CentOS container. Let me exit from this and clear my terminal. Let us now recall what we did: first we installed Docker on Ubuntu, after that we pulled a CentOS image from Docker Hub, and then we built a CentOS container using that CentOS image. Now I'll move
forward and I'll tell you what exactly Docker Compose is. So let us understand what exactly Docker Compose is. Suppose you have multiple applications in various containers, and all those containers are linked together. You don't want to execute each of those containers one by one, but you want to run all those containers at once with a single command; that's where Docker Compose comes into the picture. So let us proceed to the Docker installation on CentOS. First I'll make sure my existing packages are up to date; for that I will type sudo yum update, and here we go. So, no packages marked for update. I will clear my terminal now. Now I will run the Docker installation script; for that I'll type curl -fsSL and the link, https://get.docker.com/, piped to sh, and here we go. This script adds the Docker yum repository and installs Docker.
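To recap, here is the command sequence from the two installs so far in one place. It's written as a dry run (each command is echoed by a helper function, not executed) so it can be read without root access or a Docker daemon; the package names are the ones used in the video and may differ on newer systems:

```shell
# Dry-run recap of the install steps shown above.
# run() only echoes its arguments, so nothing is actually installed here.
run() { echo "+ $*"; }

# On Ubuntu:
run sudo apt-get update
run sudo apt-get install "linux-image-extra-$(uname -r)" linux-image-extra-virtual
run sudo apt-get install docker-engine      # newer setups use docker.io or docker-ce

# On CentOS:
run sudo yum update
run "curl -fsSL https://get.docker.com/ | sh"

# On either distribution, once installed:
run sudo service docker start
run sudo docker pull centos                 # checks the local registry first, then Docker Hub
run sudo docker run -it centos              # start an interactive CentOS container
```

Dropping the `run` prefix gives the real commands, in the same order the video executes them.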

It is done now. Our next step is to start the Docker service; for that I will type sudo service docker start, and here we go. So Docker has now started successfully. Now I will pull a Docker image for the Ubuntu operating system. Docker images are used to create containers; if the image is not present locally, Docker will pull the image from registry.hub.docker.com. Currently I don't have any images, so I'll pull an image for the Ubuntu operating system, and for that I'll use sudo docker run and the image name, that is, ubuntu, and here we go. As you can see, "unable to find image" — I can highlight that with my cursor as well. Just notice that it is unable to find the image locally; that means it is pulling from registry.hub.docker.com. So it has downloaded a newer image for Ubuntu. In order to start using the container, you need to type sudo docker run -it and the name of the image, which is ubuntu, and here we go. As you can see, we are in the Ubuntu container right now. I'll open one more tab. Over here, if you want to see all the running Docker containers, you can type sudo docker ps and it will display them for you. As you can see, the name of the image is ubuntu, and this is the container ID for that particular image. So why should we use Docker for Windows?
The first reason is that it avoids the "works on my machine but doesn't work in production" problem. All right, this problem occurs due to an inconsistent environment throughout the software development workflow. For example, let's say that a developer has built an application in a Windows environment, and when he sends the application to the testing server, it fails to run; this happens because the testing server operates on an outdated version of Windows. Now, obviously the application does not have the dependencies needed to run on the outdated version of Windows, so because of the difference in the software versions between the development and testing servers, the application will fail. But when it comes to Docker, we can run our application within a container which contains all the dependencies of the application, and the container can be run throughout the software development cycle. This practice provides a consistent environment throughout. Apart from that, it improves productivity. By installing Docker on Windows, we're running Docker natively. If you've been following Docker for a while, you know that Docker containers originally supported only Linux operating systems, but later Docker made its platform available for other operating systems, with a simple limitation: the limitation was that the Docker engine ran inside a Linux-based virtual machine image on top of the operating system. So basically you could run Docker from Windows or any other operating system, but Linux was still the middleman. Thanks to the recent release, though, Docker can now run natively on Windows, which means that Linux support is not needed; instead, the Docker container will run on the Windows kernel itself. All right, so guys, just like I mentioned earlier, Docker for Windows supports native networking. And not only the Docker container: the entire Docker toolset is now compatible with Windows. This includes the Docker CLI, Docker Compose, data volumes, and all of the other building blocks for
a dockerized infrastructure, which are now compatible with Windows. But what are the advantages? Since all the Docker components are natively compatible with Windows, they can run with minimal computational overhead. Now, let's move on to the prerequisites. Before you install Docker for Windows, you need to check that you're running a 64-bit Windows 10 Pro, Enterprise, Education, or Student edition system. Now guys, a point to note here is that Docker will not run on any other Windows version, so if you're running an older Windows version, you can install Docker Toolbox instead. Okay, now Docker for Windows requires a type-1 hypervisor, and in the case of Windows, it's called Hyper-V.
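If Hyper-V turns out not to be enabled on your machine, one way to switch it on is from an elevated PowerShell prompt. This command is my addition, not shown in the video, and it requires a reboot afterwards:

```powershell
# Enable the Hyper-V feature (run PowerShell as administrator; reboot when prompted)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```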

Now, what is Hyper-V? Hyper-V is basically a lightweight virtualization solution built on top of the hypervisor framework, so you don't need VirtualBox; you just have to enable the hypervisor. All right, and you also need to enable virtualization in the BIOS. Now, when you install Docker for Windows, by default all of this is enabled, but in case you're facing any issue during installation, please check whether your Hyper-V and your virtualization are enabled. Now, let's move on to the demo. We're going to begin with installing Docker for Windows. Now, before we go ahead, guys, you have to make sure that you're using a Windows 10 Pro, Enterprise, Education, or Student edition. One more important point to note here is that if you're using VirtualBox on your system, you won't be able to run it, because VirtualBox will not work with the hypervisor enabled; but in order for Docker for Windows to work on your system, the hypervisor must be enabled. So guys, basically you cannot run Docker for Windows and VirtualBox on the same system side by side. Okay, so if you have VirtualBox on your system, it's not going to work, because you'll be enabling your hypervisor. So let's get started by installing Docker for Windows. Now, in order to install Docker for Windows, you need the Docker for Windows installer; I'll leave a link in the description box so that you can download it. So guys, I have already downloaded the Docker for Windows installer; you all can go ahead and download it from the link in the description. Now, here you can see that I've run the installer, so let's just wait for the installation to complete, okay?
Now let us click on OK. All right, so it's unpacking files. All right, so the installation is completed. Guys, once you've installed it, just open the Docker for Windows app; it's here on my desktop. When you try to start the application, you'll see a whale icon in the status bar. All right, here you can see the whale icon. Now, when the whale icon becomes stable, it means that Docker has started and you can start working with it. Okay, so this icon needs to get stable; that means that Docker has started. All right, so you can see a message popped up like this. Okay, it says Docker is now up and running. All right, so guys, you can either log in to your Docker Hub account from here, or you can use the docker login command to log in. I'm going to go ahead and log into my Docker Hub account. So now you can open up any terminal and start running Docker commands. Guys, I'm going to be using Windows PowerShell; make sure you run it as administrator, because there are a lot of commands which require admin access. Okay, so, yes, all right. Now, in order to check whether we've successfully installed Docker, we're going to check the version of Docker. The command for checking the version is docker --version. All right, so it's returning the version of Docker that I've installed, which means that I've successfully installed Docker. Okay, so now that we know Docker is successfully installed, let's run a few basic Docker commands. Okay, so let me just clear the terminal. Now I'm going to run docker run hello-world. This is the most basic Docker command that's run once you install Docker. Okay, so I'm basically going to run the hello-world image; let's see what happens. So it's unable to find the image locally, so it's going to pull the hello-world image from Docker Hub. Okay, all right, so this basically prints a "Hello from Docker" message. So we've finished the first command; now let's try something different. You use the docker images command to check the
images that you have on your system; since we just ran this hello-world image from Docker Hub, we have this image in our repository. All right, now let's pull a Docker image from Docker Hub. Okay, in order to do that, you just use a simple command, docker pull, and the name of the image that you want to pull. So I'm going to pull an Ubuntu image; let's see how it works. So it's basically pulling an Ubuntu image from Docker Hub. All right, now let's run this image. So guys, do you remember that I said that whenever you run a Docker image, it runs as a Docker container? So I perform this command: docker run -it -d and the name of the image. All right, so whenever I use docker run and I run an image, it's basically going to create a container from this image. Okay, so it's going to create an Ubuntu container. All right, now the next command is docker ps -a; this should show all the containers. So basically we have two containers over here, because we ran both of these images. All right, so whenever you run an image it runs as a container; that's exactly what I told you earlier. Okay, let's clear this now.
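Here is a recap of the basic commands run so far, printed with a one-line note each. The note() helper only formats text, so this sketch runs without a Docker daemon; the commands themselves are the ones demonstrated above:

```shell
# Prints each basic Docker command next to a short description.
# Nothing here talks to the Docker daemon; it's just a readable recap.
note() { printf '%-26s %s\n' "$1" "# $2"; }

note "docker --version"         "check that Docker installed correctly"
note "docker run hello-world"   "run the standard test image"
note "docker images"            "list images stored locally"
note "docker pull ubuntu"       "fetch an image from Docker Hub"
note "docker run -it -d ubuntu" "start a detached, interactive container"
note "docker ps -a"             "list all containers, running or stopped"
```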

Let me type this out, and then I'll tell you what this does. All right, what I'm doing here is just accessing a running container. Okay, this is the container ID, which belongs to the Ubuntu image that we pulled, so I'm basically giving the container ID of this Ubuntu container. Now, within the container, you can perform commands like, let's say, echo hello. All right, so it says hello. Now what you can do is just exit from here, and you come out of the container. Okay, now let's try to stop a running container; let's do docker stop and the container ID. All right, so it stopped that container. Okay, all right, so the next command is docker commit. Okay, let me just type this out and then I'll tell you what it does. So basically I'm using the docker commit command, which is going to create a new image on the local system. After docker commit, I have the container ID, and I'm going to create an image out of this, and after a space I've written zuleika/ubuntu. Now, zuleika is basically the name of my Docker Hub repository, and ubuntu is the name of the image. All right, so let's see what happens. So basically we created a new image over here; here you can see that another image has been added, which is zuleika/ubuntu. Okay, it has a new image ID and so on. All right, now guys, if you perform this command without logging in to Docker Hub, it's going to ask you to log in first. Okay, and for that you can use the docker login command. All right, now, I'd logged in earlier in the session,
so that's why it says login succeeded; otherwise it's going to ask you for your credentials. All right, it's going to ask you for your username and your password. Okay, now what we're going to do is push this image to Docker Hub, so we're going to use the docker push command along with the name of my Docker Hub repository and the image name. All right, so it's preparing, and it's going to push this image to Docker Hub. All right, now let's say that you want to delete a container. What you can do is use the docker rm command; basically the command goes docker rm and the container ID. Okay, all right, now let's look at our containers: we have only one container now, so the container with that container ID got deleted. Okay, similarly, you can also remove Docker images. All right, so first let's look at the ID of the Docker image that you want to remove. Let's say I want to remove zuleika/ubuntu; okay, I'm just going to use this image ID, and the command is docker rmi and the image ID. Now let's look at the Docker images: you can see only ubuntu and hello-world are there. So this is how you remove Docker images, and I also showed you how to remove Docker containers. So those of you who weren't familiar with Docker now have a good idea of how simple Docker commands work. All right, so now I'm going to create a simple Python web application using Docker Compose. Okay, let me tell you a little bit about this application: it basically uses the Flask framework, and it maintains a hit counter in Redis. So guys, for those of you who don't know what Flask is, it is basically a web development framework written in Python, and Redis is an in-memory storage component; it is basically used as a database here. Okay, now guys, don't worry if you don't know Python; this program is very understandable. We're basically going to use Docker Compose to run two services, the web service and the Redis service. Now, what this application does is maintain a hit counter; every time
you access the web page, the hit counter gets incremented. Okay, it's simple logic: just increment the value of the hit counter when the web page is accessed. Okay, all right, so let's begin with creating a directory for your application; it's always a good practice to have a directory that stores all of your code. All right, so let's start by creating a directory, let's say web application. All right, now I'm going to change to that directory. So guys, I have already typed out the entire code, because I didn't want to waste a lot of time; what I'm going to do is just open up the files and explain what the code does. All right, so I have all of my code written in Notepad++, so I'm just opening up Notepad++. Also guys, I want to tell you that you don't have to install Python or Redis, because we're going to use Docker images for Python and Redis. Okay, so first what you do is create a Python file; okay, I've called it web app. I'm not going to spend a lot of time; I'll just tell you what we're doing. So first of all, we begin by importing the dependencies: we're going to import time, we need redis, and we also need flask. Okay, these are the requirements that we're going to import. After that, we're just initializing the name of the application, and here we're connecting to Redis using the port number

6379. All right, this is the default port. Then we define the get_hit_count function; this basically returns the number of hits. We are also setting the retries to 5, in case the page does not load. While all of this holds true, the incremented hits are returned, and if there's an error, then we have an exception; so we have also defined an exception in case of errors. This last function is basically there to display the hello world message along with the hit count. So this is the Python file; it's very simple, guys, very understandable. You don't have to be a pro in Python to understand this. All right, now the next file you're going to create is a txt file, which I've named requirements.txt. Okay, over here I'm just going to add my requirements, which are flask and redis. Next we have the Dockerfile. Now, this Dockerfile is used to create Docker images; okay, I mentioned this earlier in the session, that you require Dockerfiles to create Docker images. Okay, so first we're just setting the base image: we're building an image starting with Python 3.4. Now, in this line we're going to add the current directory into the /code path of the image; then we're going to change the working directory to this path. After this, we're going to use pip, the package manager of Python, to install the requirements that are mentioned in my requirements.txt file. Okay, so these two were the requirements, flask and redis. And then finally we're setting the default command for the containers to run the web app with Python; okay, so it's basically going to run my web app. Now, we finally have a Docker Compose file. Like I mentioned earlier, the Docker Compose YAML file is going to contain all of the services: there is a web service over here, and there is a redis service. So we're basically running two containers over here, or two services, which are web and redis. Now, the web service is basically building the Dockerfile in the current directory; all right, the dot signifies the
current directory, and it forwards the exposed port 5000 on the container to port 5000 on the host machine. Now, the redis service is basically using a redis image pulled from Docker Hub. So guys, this was all about the files you need: you create a web application file, which is a Python file, then you have a requirements.txt file, and then you have to have a Dockerfile and a Docker Compose file to run both of these services. So guys, now that I've explained the various files, what I'm going to do is run both of these services, or both of these containers, by using the docker-compose up command. All right guys, make sure to create all four of these files, and you have to create them, obviously, in the web application directory. So if I do ls, I can see that I have a docker-compose.yml file, I have a Dockerfile, I have requirements.txt, and I have a web app Python file. Now let's use docker-compose up to run all of these containers. So it's building from my Dockerfile; now it's installing my requirements over here; now it's running my web app Python file; now it's creating two services over here, the web service and the redis service. So what I'm going to do is look at the output by using Kitematic. Guys, I told you earlier that Kitematic is basically a UI tool for Docker for Windows. So just left-click on the Docker icon over here, and here you're going to see Kitematic; okay, click on it. But I think I'm facing an error, so I'm just going to go back to my files and see if I have missed out any line. All right, so over here I had written "rt"; this should actually be "import time". Okay, this was a simple mistake, so let me just save this, and let's try to run this again; now it should definitely work. I'll just clear the terminal, and we're going to use docker-compose up. All right, now what I'm going to do is show you the output using Kitematic. Here you can see an option, Kitematic; so click on this. Now, it shows two applications over here,
which are running one is the web service and the other is the redis service Now, when you go to the web service, you can see the output over here Let’s click on this So whenever you refresh the page the hit count increases So this is how the application works If you keep refreshing the headcount will keep increasing so guys, this was a simple web application and I also showed you all how to view this using kitematic Okay So now you can see that this is green, which means that it’s running All right, you can also be star the container you can stop it You can enter into the container and you can run a few commands Okay, you can use kitematic in a lot of other ways Let’s go about writing a Docker file First of all, your dockerfile is just gonna be a file Okay Just going to be a text file without any dot txt extension
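The hit-counter logic described at the top of this walkthrough (a get_hit_count function with retries set to 5) can be sketched in plain Python. Note this is a sketch, not the exact code from the video: the cache client is injected and faked here so the retry pattern can be shown without a running Redis server, and CacheConnectionError stands in for the redis package's connection exception.

```python
import time


class CacheConnectionError(Exception):
    """Stand-in for the redis package's connection error (assumption)."""


def get_hit_count(cache, retries=5):
    """Increment and return the page hit count.

    Retries the increment up to `retries` times when the cache
    raises a CacheConnectionError, then re-raises the error.
    """
    while True:
        try:
            return cache.incr("hits")
        except CacheConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)  # brief pause before retrying


class FakeCache:
    """Minimal in-memory stand-in for a Redis client, for illustration."""

    def __init__(self):
        self.hits = 0

    def incr(self, key):
        self.hits += 1
        return self.hits
```

In the actual Compose demo, the cache would be a real Redis client connected to the redis service on port 6379, and a Flask route handler would call get_hit_count to render the hello world message with the count.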

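For reference, a docker-compose.yml matching the two services described in the demo above might look roughly like this — a sketch following the Compose file format; the video's exact file may differ:

```yaml
version: "3"
services:
  web:
    build: .           # build the Dockerfile in the current directory
    ports:
      - "5000:5000"    # host port 5000 -> container port 5000
  redis:
    image: "redis"     # image pulled from Docker Hub
```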
Okay, your Dockerfile will basically contain commands and arguments, and that is all that is needed. Additionally, if you want to comment something — lines that should not be executed — you can write them using the hash symbol. So technically a Dockerfile involves commands and arguments: the commands and arguments are what customize my Docker image, and the comments are something I can write for my own explanation. If I put a # over here, then whatever comes in that line after the hash will be ignored. So if I write "# print Welcome to Edureka" — I'm just giving a sample here — this line will be completely ignored and not executed. However, in the second line, if I have "RUN echo Welcome to Edureka", then this line will be executed. In this case I have my command and my arguments: RUN is my command and "echo Welcome to Edureka" are my arguments. I can have argument one, argument two, argument three and many more.

So let me go into more detail — and by detail I mean the different syntax, the different functionalities, the different instructions you can use. Let me start with the most important one, which is the FROM instruction. FROM is the most important, because without it you cannot write a Dockerfile: the FROM instruction is what is used to specify a base Docker image. In my case I've specified Ubuntu, which means I will be using Ubuntu as my base image and all my customizations will be on top of that Ubuntu image. Think of it very much like working on a server or a Linux machine: you have an Ubuntu machine with you, and if you want to execute or deploy your application on that machine, you have to install everything on it, right? So far, by using FROM ubuntu, it's as if you simply have an Ubuntu machine with you — this is just the base image — and whatever you do after that, on top of that Docker image, depends on the other instructions.

Moving on, the next instruction is RUN. This is, I would say, the second most used instruction, because if you want to run a particular command while the image is being built, you use RUN. In my case, if I have an Ubuntu image and I want to install, say, Java or Jenkins or React or curl, I will be using RUN. So I have my RUN command and my arguments would be apt-get install with a -y flag, followed by whatever package I'm installing. That's what RUN does — it executes a command — but it has a slight difference when compared to CMD: RUN executes at build time, while the image is being built and customized, whereas CMD specifies the default command that runs when a container starts. With CMD you can again execute a shell command — I can say CMD echo Welcome to Edureka — but I cannot use CMD for building my Docker image. So if I just want to execute a shell command I can use either RUN or CMD, but if I want to bake something into my image, only RUN works; CMD doesn't work there.

Moving on, the next important instruction is ENTRYPOINT. The ENTRYPOINT instruction can basically override whatever your CMD instruction does: ENTRYPOINT says that when you've finished building your Docker image, the command specified with ENTRYPOINT will be the one executed first when you run a container from that image. So I can build a Docker image which has this ENTRYPOINT, and when I execute that image, the command specified with ENTRYPOINT will be the first one to run. The additional functionality ENTRYPOINT has is the one I already mentioned — it can override your CMD. Take this example: here I'm saying CMD Welcome to Edureka, and if I say ENTRYPOINT echo, then my ENTRYPOINT basically overrides this, because the contents of CMD get passed to the ENTRYPOINT as its arguments. In this case I have one command and one argument, but with ENTRYPOINT echo, the CMD text is used as the argument that echo executes. That is the whole point of ENTRYPOINT, and that is the subtle difference between ENTRYPOINT and CMD: ENTRYPOINT can basically override your CMD.

Next comes the ADD instruction. The ADD instruction — or the COPY instruction; these two can be used almost interchangeably — is used to copy files from one particular directory to another. It copies files from my host into my container. So I write ADD, then I specify the path of my source, and after a space I specify the path of the destination where I want the files copied. This is pretty self-explanatory.

Then there's the ENV instruction. If my application needs particular environment variables, I can tell my Docker container that this application needs certain environment variables and set them. An example: if you want to execute a Java program, you need Java, and you have to set your environment variables — so I can specify my Java environment variables like this inside my Docker container. ENV would be my command and these would be my arguments: the variable name is argument one and its value is argument two.

The next important instruction you have to be aware of is WORKDIR, the working directory instruction. A lot of times you would want to go to a particular place inside the container and start execution there, especially when you want to execute certain commands in the shell — if you use the CMD instruction in your Dockerfile, you basically want to execute a particular command on the shell, correct?
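Pulling the instructions covered so far into one illustrative Dockerfile — the package names, paths and values here are made-up examples for illustration, not taken from the video:

```dockerfile
# comment lines start with a hash and are ignored during the build
FROM ubuntu
# RUN executes at build time, while the image is being built
RUN apt-get update && apt-get install -y curl
# ADD copies files from the host into the image
ADD ./app /code
# ENV sets an environment variable inside the image
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
# CMD is the default command when a container starts; an ENTRYPOINT,
# if present, would receive CMD's words as its arguments
CMD ["echo", "Welcome to Edureka"]
```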
But where exactly do you want to execute that command? These commands will be executed from inside the container, and if you want to change the place where the CMD instruction executes its arguments, you have to set the working directory. So you write WORKDIR and set the path, and then whenever a CMD instruction gets executed, it will be executed in that particular path. Pretty simple, right?

Then we have the EXPOSE instruction, and EXPOSE is a very important instruction in the case of front-end applications, because with EXPOSE you can specify a port number and say that this application will be active on this particular port inside the container. Remember, this is only specific to your container — this port number is only used from inside your container. If you want to run the same application on a particular port on your host, you have to do the port mapping, but that comes later, at run time; inside the Dockerfile, this is how we specify it.

The next thing is the MAINTAINER instruction. It's not a very technical thing, but if you want to tag your name along with the image you are building, you can use this to specify who maintains this particular Docker image before you push it to Docker Hub. That way, whoever downloads your image from Docker Hub will know who built it. You just set your name over here, and this has to be present right after the FROM instruction — that's the point you have to note.

Then we have the USER instruction. If you want a particular user to run the container, you can use USER and specify the user ID of the user who should execute the Docker container. It's pretty simple, right? USER here is my command and 751 is the argument, and the user having this UID will be the one executing that particular Docker container.

And then we have the last instruction we're going to talk about, which is the VOLUME instruction. The VOLUME instruction is basically used to set a custom path where your container will store its files — this is the place where all the files related to your Docker container will be present. And if you want multiple containers to share the same path, you can use VOLUME: that path can be shared by multiple containers. Logically, if you have multiple containers hosting the same application, you might want them all to use the same storage path, and this is how you set it.

So that's it, and now let's move on to our demo. First I will show you how to install an Apache web server and how to write a Dockerfile for it. In this first demo I have a simple Dockerfile where I'm first of all using an Ubuntu image as my base image, then I'm saying MAINTAINER edureka, and then I'm running a few commands. Even if you were installing Apache on a local Ubuntu machine, you would probably run these same commands. First of all you do an apt-get update, which basically updates your apt repositories. Then you mainly have to install your Apache service — the command for that is apt-get install apache2, and Apache will be downloaded from your apt repository. Then you'll want to clean your apt cache, so you use the apt-get clean command. And most importantly, we are deleting the files under the /var/lib/apt/lists path. Whenever you use apt-get update and you get an error, that is often because of the files present in this particular path, so to avoid any error in the future we delete whatever is created over here — rm -rf is what is used for that. One RUN command performs all of these, so I have four functions, or four commands, which need to be done, and I'm using a single RUN command with ampersands (&&) to say that I have multiple commands which need to be run.

Next, for Apache to work I have to set my environment variables, and that's what I've done here. It's pretty simple and it's the same as a normal installation — it's just that I'm setting everything manually inside the Dockerfile on my own. For the Apache run user, the Apache run group, and the Apache log directory, there are specific paths and values, and that's what I've specified as arguments over here: these are my first arguments and these are my second arguments. Then I'm saying EXPOSE 80, which means my Apache service will be hosted on port number 80 — but remember, this is only from within the container. If I want to access it on my host machine, I have to do the port mapping while starting this particular container; inside my container, Apache will be on port number 80. And finally, if I want to start this Apache service, I have to go to a particular path and start the apache2 service, right? I'm doing the same thing here using CMD: with the CMD instruction I'm saying go to /usr/sbin/apache2, execute it, and run it in foreground mode — -D is the flag we have to specify, and then I'm saying FOREGROUND to basically get the server up and running and hosted.

Now, to show you the same demo, let me open up my virtual machine where I have prepared this Dockerfile. This is my VM — I hope you all can see it. I'll just open up my terminal and bring up my Mozilla Firefox. Let me do an ls — and I have my Documents folder, right?
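The Apache Dockerfile walked through above can be reconstructed roughly as follows. This is a sketch: the ENV values shown are the conventional Debian/Ubuntu ones, and the video's exact file may differ slightly.

```dockerfile
FROM ubuntu
MAINTAINER edureka

# update, install apache2, clean up, and delete the apt lists
# that can cause errors on a later apt-get update
RUN apt-get update && \
    apt-get install -y apache2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# environment variables Apache expects
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

# Apache listens on port 80 inside the container
EXPOSE 80

# start Apache in the foreground so the container keeps running
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```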
So let me do cd Documents and then ls — I have the Dockerfile here. Let me do cat Dockerfile and show you that I have the same code present over here: what I explained just now, about how to install Apache and use the various instructions, is copied into this particular Dockerfile of mine.

Now, the first thing I have to do is build a Docker image — my custom Docker image — out of this Dockerfile, and the second thing is to run that Docker image as a container. So let me get started with the first thing. If you want to build a Docker image from your Dockerfile, the command is docker build -t, and then you specify the name of your Docker image — I'm going to say my-apache-image — and then the place where the Dockerfile is present, by mentioning a period. The period means that the Dockerfile is present in this particular directory, and based on that Dockerfile this Docker image will be built. So let me hit enter and wait for the various steps to be performed. Step one, step two, step three — all the different steps I specified in my Dockerfile are being executed one after the other. My first step here is FROM ubuntu, which means I'm pulling a base Ubuntu image; step number two makes edureka the maintainer; and in step number three I'm installing the various packages. Let this complete — I think this part can be fast-forwarded. Okay, I think all the steps have been executed successfully, because I've got this message — "successfully built" — along with the ID of my Docker image. If you watched in between, steps four, five, six, seven and eight were all executed, so my Docker image has been built. I can verify the same by running the command docker images. Let me say sudo docker images, and as you can see here, my-apache-image with the latest tag was built seconds ago, and this is the size of this particular Docker image of mine.

Now let me use this Docker image and bring up a Docker container out of it. The command for that is sudo docker run, and now I have to specify the port mapping, because inside my Dockerfile I specified that the application will be active on port number 80, and if I want to access that application on my host machine, I have to map my host port to my container port. So I say -p 80:80. This means port number 80 of my container will be mapped to port number 80 of my host — first comes the host, then comes the container. After this I can simply specify the image I want to run, so I give the name of the image, my-apache-image, and I can also give a name to this particular container by saying --name=app1.

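For reference, here is the build-and-run sequence for this first demo collected in one place, with the image and container names as used in the video — these commands need a running Docker daemon:

```shell
# build the image from the Dockerfile in the current directory
sudo docker build -t my-apache-image .

# map host port 80 to container port 80 and name the container app1
sudo docker run -p 80:80 --name=app1 my-apache-image
```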
Okay, so I hit enter — and yes, you can ignore this message; anyway, my Apache server has been installed and started. So let me just go here, and if you remember, it was hosted on port number 80, right? Let me just type in localhost — and yes, it says "It works! This is the default page for the server"; the server is running but no content has been added yet. That's because I have not done anything manually — I've just hosted the service as-is. This is my Apache service which I have installed. So my service is running now, and I can verify that from a different terminal. Let me go here and say docker ps — you can see that my-apache-image is the name of the image, the name of the container is app1, it has been containerized, and it was created this many seconds ago. Now, if I want to stop the service, I can either stop this container — run a command to stop it — or I can simply use Ctrl+C, and with that I'm out and the container is not being executed anymore. That's a shortcut, but it's not advisable; the proper command to stop a container is docker stop followed by the container ID, and I will show you how to do that in the second demo. Let me just go back and verify: if I refresh the page, it is not accessible anymore, which means my container is not running and hence the server is not working. So that's the end of my first demo, which showed how to write a Dockerfile and install Apache. Let me do a sudo docker ps -a over here and show you that the same container, with the same ID, has exited. Okay, let me clear the screen here and over there too, and then get to my second demo.

My second demo is all about installing nginx. Again, to install my nginx server I'll follow the same steps: first of all I'm going to use a base image, and I'll be installing my nginx service on my Ubuntu machine — that's why I'm doing FROM ubuntu — then I'm specifying MAINTAINER edureka, and then similarly I'm running the commands RUN apt-get update and RUN apt-get install -y nginx, and then I'm doing ADD index.html. Now, first of all, let me tell you that in the previous demo I ran these commands on the same line, joined with ampersands; here I've just divided them into two lines. It's just to show you both the full form and the shortcut used in the previous demo — otherwise it's all pretty much the same. What is new here is the index.html, because with nginx an index.html is served by default. So I created my own index.html file, put my own code in it, and that is what I'm putting inside my container over here. If you remember, the ADD instruction basically copies what is in one path to a destination path. So this is my source path, which is my host path, and this is going to be my container path: the index.html which was in my local folder is copied inside my container, under /usr/share/nginx — inside that there's another folder called html, and into that folder the index.html file will be copied. Once it's copied over there, I'm using an ENTRYPOINT instruction so that whenever my container is run, this line is executed. The image will be built, the environment will be set up, and then this command — /usr/sbin/nginx — is the service which needs to be started. So my Docker container is going to go to this particular path and start my nginx service with the flag -g and "daemon off". Daemon off basically brings my application to the foreground — if it were daemon on, the application would run in my background, but since I've specified daemon off and brought it to my foreground, I can see the UI. And I can only see the UI if I expose the particular port number, and that is what I've done in my final line, where I've again said EXPOSE port number 80. If I have said EXPOSE 80, it means that inside my container the service is hosted on port number 80, and I can map this to my host port in my run command. And when I run the container — I'm going to repeat again — this ENTRYPOINT gets executed, and this is what brings up my nginx service.

So let me go back to my terminal and show you the files. However, they're not in the Documents folder, so let me go back and go to my Downloads folder. Here I have my index.html file, and then I have my Dockerfile for running nginx. First, let me do cat Dockerfile — it's the same code which I explained a few seconds back — and then let me also do cat index.html, and this is basically my HTML code which will be displayed on my UI. The title of my page is going to be "Edureka's Docker nginx tutorial", then in a heading tag I have a hello to Edurekans, and then I have a p tag where I've written something. So this is the source of the HTML file, and this is how any HTML file is built, right? This is how the back-end HTML of any web page looks; I've just created it, I had it on my machine, and I'm putting it inside my Docker container. When I start the nginx service, this index.html file will be picked up and used as the default page — the first page which comes up in my UI.

So let me clear the screen and first of all build from this Dockerfile. To build the Docker image out of this Dockerfile, the command is docker build -t, then the name of the image — let me say my-nginx-image — and then the path where the Dockerfile is present, which I can give by specifying a dot; and let me also prefix sudo. Okay, my first step executed, the second step also executed, and so did my third step; let's just wait for all the steps to execute so that my custom image is built — and trust me, this is my custom Ubuntu image. Okay, so that was my build command. Now, if I want to bring up the container out of this particular image, I can use the command docker run, with -p to specify the port mapping — let me again say 80:80 — followed by a name for this particular container, --name=app2, and then I'll specify the image which I'm going to use: my-nginx-image. And of course I also prefix sudo over here. When I hit enter, my container becomes active, and since I've specified an ENTRYPOINT, it basically goes to that particular path and starts my service. So let me just go to localhost and check if that's working — and yes, this is my nginx service. Of course it's a little different from my Apache service; I've customized this one. The page title says "Edureka's Docker nginx tutorial", and then I have the hello to Edurekans heading and the paragraph — this is what I showed you some time back in that index.html file.

This time, let me show you how to stop this particular container not by doing Ctrl+C, but by actually stopping the container in a clean way. Let me open up a second tab, and over here let me say sudo docker ps. This first of all lists the running nginx container of mine, with its container ID. I'm going to copy this container ID and then stop this particular container: I say sudo docker stop and then the container ID, and my container will have stopped by now. If I go back to the other tab, you can see that I've got control back here, which means my application has stopped being deployed. And if I refresh this page, you can see that I do not have access to the page anymore, right?
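Collected in one place, the nginx Dockerfile from this second demo looks roughly like this — a reconstruction from the walkthrough, not a verbatim copy:

```dockerfile
FROM ubuntu
MAINTAINER edureka

RUN apt-get update
RUN apt-get install -y nginx

# copy the custom index.html from the host into the
# nginx document root inside the container
ADD index.html /usr/share/nginx/html/

# start nginx in the foreground so the container keeps running
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]

EXPOSE 80
```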
So that's how you host any application of yours, and that's how you bring it down, with the help of your containers. In a matter of a few commands you can get anything done, and that's why Docker is really good and really useful. The commands you see on your screen here are the ones which are most commonly used, so if you are a DevOps engineer, or just someone working on Docker, you might have already used these commands or you might use them in the future. Some of those commands are: docker version, docker help, docker pull, docker run, docker build, docker login, docker push, docker ps — ps stands for processes, and this command is used to see which containers are currently active. Then we have docker images, docker stop, docker kill, docker rm (which of course stands for remove), docker rmi (which stands for remove images), docker exec (which is used to access the bash of any active container), and then docker commit, docker import, docker export, docker container, docker compose, docker swarm and docker service. So these are the 20-plus Docker commands which are most commonly used. Now, without wasting much time, let me get started and discuss each of these commands.

docker version: this command is used to find out the version of your Docker engine. Remember, there will be two dashes before "version". Then we have another command — docker, again with two dashes, followed by "help". This is basically used to list all the possible commands you can use with Docker: here docker is the parent command, and whatever child commands are possible as combinations, those will be listed.

Now let me quickly open my terminal and execute these two commands for you. Do remember that I will be using my Linux virtual machine — this Linux virtual machine of mine is an Ubuntu machine hosted on my VM, like I said. So I'm going to open my terminal over here, and the first command we were supposed to execute is docker --version, right? As you can see, the version of my Docker engine is 17.05 — so that's how this command works. The next command we were supposed to execute is docker --help; that of course also comes with the two hyphens. Like I told you, there are various commands in Docker — docker attach, docker build, docker commit, docker cp, docker create, docker diff — all of these are the child commands that can be used with docker as the primary command. I hope that was clear, and at any point of time, if you people have any doubt about the usage of any command in Docker, you can just use the help: it will tell you the different commands that are there along with a description, so it also explains what each and every command does. Say you look at docker build — you can see the explanation; it says "build an image from a Dockerfile", right?
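The two commands demonstrated above, for reference:

```shell
# print the Docker engine version (note the two dashes)
docker --version

# list all available child commands with one-line descriptions
docker --help
```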
So that's a good enough explanation — if I were someone working on Docker, I would know which option to use, and similarly for everything else. For docker rm it says "remove one or more containers", for docker start it says "start one or more stopped containers", and many more. So in your free time you can use the docker help command and see the different commands possible along with their explanations. Okay, so I am going to clear the screen, go back to my PPT and check what the next set of commands is that I can execute — and remember, these are all still basic Docker commands.

The next command is docker pull. The docker pull command is used to pull any image from Docker Hub. Then we have the docker images command, which of course lists all the images in your local repository. In the previous command we did a docker pull, right? The first time around, you will not have any image in your local repository; you will have to pull it from Docker Hub, and when you pull it, it gets stored in your local repository. Once it is there in your local repository, you can run the docker images command and check all the different images — all the images will be listed. So that's about these two commands, and then we have the docker run command. The docker run command is basically used to execute an image, and I'm pretty sure you are aware that whatever you download from Docker Hub are images, right? And if you want a running instance — if you want it to be active — then you have to run it, because what you will actually deal with are containers, right?
So to get containers running, you have to basically run those images, and that's what we are doing here. The command is docker run along with the image name. Supposing I am pulling an Ubuntu image from Docker Hub, I would use the command docker pull ubuntu, and if I want to execute this image and get a running container out of it, I would go here and run the command docker run along with that particular image name: docker run ubuntu. Okay guys, so I think you have a decent understanding of these three commands. Now let me again go back to my terminal, execute these three commands and show you how they work.

So I'm back at my terminal. Here let me write down the command docker pull ubuntu. By running this command I am pulling the latest Ubuntu image from Docker Hub. So hit enter — in spite of the fact that I did not specify any tag over here, it's pulling the latest image that is available on Docker Hub. Let the process happen, guys; give it a minute. Perfect — now you can see the status here, right? It says it has downloaded a newer image for ubuntu, the one with the tag "latest". Now, if I want to check whether this image has actually been pulled, I can run the command docker images. Let me run that command by first clearing the screen — docker images — and when you hit enter, like I said, you get the entire list of images available in your repository over here. So you have some custom images over here — those are other images which I created. If you want to check whether your image is the latest, look at ubuntu here: this has the tag "latest", and this was an image which was created ten days ago in my local repository, and it is about 112 MB.

Okay, and this is the one which we downloaded recently, and it has the latest tag. So this is how you check the different images that you have in your local repository, guys. Okay, so I'm going to clear the screen, and now it's all about executing a Docker image. For sample purposes I can run any kind of image and get a container, right? So before I execute an Ubuntu image, let me execute a simple hello-world container. For that I'm going to say docker run hello-world. Now remember, when we say docker run hello-world, you might ask me a question: is hello-world already present in my local repository? Well, the answer is that it's actually already present — I have an image of hello-world in my local repository. But even if I did not have the hello-world image in my local repository, this command would still run, because when you do a docker run command, it first of all looks for this particular image in your local repository; if the image is not present, it goes to Docker Hub, looks for an image with this particular name, and pulls that image with the latest tag. So run does two things: it pulls and it executes. Let me hit enter — and there you go, it says "Hello from Docker!", right?
So this is the hello-world container for Docker. Now, the reason I did not execute the Ubuntu image is because I want to make a few modifications to that image. But if you want to make a few modifications to an image, then there's a different set of commands involved. So let me go through those commands and then get back to what I was supposed to do. That was about docker run, and then we have something called docker build. Okay, and this docker build command is used to build a custom image of yours. Supposing you have an Ubuntu image, okay, but you do not want it exactly as it is and you want to make a few adjustments to it. One other example of that would be the Node image, right? In my previous sessions I've had a session on Docker Swarm, a session on Docker Compose and many more, right? So over there, what happens is I'm using a node.js image as my base image and then I'm building my entire application on top of that. So what do you have here? You have a base Node image, okay, and on that Node image you build your entire application, be it an Angular application or be it a MEAN stack application, and the command that you use to build the entire application is the docker build command. And as you can see, this is the syntax: we have to say docker build with the flag -t, and what the -t flag does is it basically lets you give your own name and tag to the image you're building, because this image is going to be your image, right? You're custom-building this image. So when you custom-build this image, you can give it your own name, and that's what this is. And followed by that, with a space, I have specified a dot here. Now the dot specifies that the Dockerfile which is needed to build this Docker image is present in the current directory where this command is being executed. Now, what if I wanted to specify the entire path of my Dockerfile? Then I wouldn't be specifying the dot over here, right?
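So the two ways of pointing docker build at a Dockerfile look like this (the image name and path here are placeholders for this sketch):

```shell
# Dockerfile in the current directory: the trailing dot is the build context
docker build -t my-custom-ubuntu .

# Dockerfile somewhere else: pass that directory's path instead of the dot
docker build -t my-custom-ubuntu /home/edureka/Downloads/demo
```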
But in that case, if I'm specifying the entire path, then that means that my Dockerfile is present in some other location, not necessarily in the same location where the command is being executed. Okay, I hope that was a little clearer for you people. So now, if you're still not there, let me give you a demonstration and then you'll be able to understand this in a better fashion. Okay, so let me open my terminal again. Currently we are in the /home/edureka directory. Now for the demo purpose I had created a new Dockerfile. Let me first open and show you that Dockerfile. Okay, and that Dockerfile is present in my Downloads/demo folder. If I do an ls, there is a Dockerfile, perfect. So let me say cat Dockerfile. In fact, let me open this in gedit, so sudo gedit Dockerfile. Yes. Now the Dockerfile is the most important file if you want to build your own custom images, because whatever you want for the application to run, those dependencies are specified in this file. We have FROM, which is the base image that you have to first of all download from Docker Hub and use as the basis of your application, and then you specify the other commands that you want to run. Now in this demo of mine, I'm simply downloading an Ubuntu image from Docker Hub and I'm just echoing this sentence: Hi, this is Vardhan from Edureka. So it's a very simple process, right? I'm pulling one Ubuntu image and I'm doing an echo on that particular image. So you can just save this, close this Dockerfile, and then execute this particular Dockerfile. Okay, and since I am in this folder, I can use the dot to specify that the Dockerfile is present in this directory. Now let me first clear the screen and then run that command again. So the command is docker build -t, let's give the name of the image

as my custom Ubuntu image: My-Custom-Ubuntu. Well, that's good enough, right? And then I'm going to say dot, because the Dockerfile to build this my-custom-ubuntu image is present in the same path. Okay, so it says my image name should be in lowercase. Okay, no problem. So let me just check my Dockerfile once, okay. Now the reason I got this error is because my image name cannot be in caps. So what I'm going to do is rerun the command with a different name, in small letters: my-custom-ubuntu. Okay, perfect, the command got executed. So if you can see here, it says 'Sending build context to the Docker daemon', and since I had specified only two steps in my Dockerfile, those two steps are being executed here. Step one is it's pulling the Ubuntu image from Docker Hub, and since it's already there in my local repository, it's using whatever is there. Okay, and step two is running the echo statement: Hi, this is Vardhan from Edureka, right? This is the second step, and the same echo command has been executed over here. Hi, this is Vardhan from Edureka, correct?
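The two build steps above come from a two-line Dockerfile; here is a sketch that recreates it and shows one way to normalize an image name to the lowercase form Docker insists on (the echo text and paths follow the narration, so treat them as examples, and the build command itself is left commented since it needs a running Docker daemon):

```shell
# Recreate the demo Dockerfile (the echo text is as spoken in the recording)
mkdir -p ~/Downloads/demo && cd ~/Downloads/demo
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN echo "Hi, this is Vardhan from Edureka"
EOF

# Image names must be lowercase, which is why My-Custom-Ubuntu was rejected;
# one way to normalize a name safely in the shell:
name="My-Custom-Ubuntu"
image=$(printf '%s' "$name" | tr '[:upper:]' '[:lower:]')
echo "$image"    # prints my-custom-ubuntu

# Build from the Dockerfile in the current directory (the trailing dot)
# docker build -t "$image" .
```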
Perfect. So I hope you guys got a good understanding of this particular command, because this is the most important command if you want to make it as a DevOps engineer or a person that's regularly working on Docker, because all of the images that you will be working on in your office or in your workspace, you will have to be working with custom images for your application. So remember how this command is used and how the applications are built. So let me just clear the screen and go back to my slides and see what is the next command in store for us. Okay, so the next command is the docker container command, and this docker container command is basically used to manage your containers. Now let's say you have a number of containers, and because so many containers are active at the same time, your system may be lagging, right? There's a performance issue. So at that time you might want to close or end certain containers, right? Kill their process. So at that point of time you can use the container command and kill the container straight away. So that's just one of the different options that we have. There are a number of other commands which can be used with docker container as the parent command, and I would request you to look up the set of commands on Docker docs. Okay, but for now, let me just go back to my terminal and execute one of these commands and show you how they work. So let me go back to my terminal here, and here I'm going to run that command: docker container, and let me run docker container logs. Okay, so here let me run the command docker container logs to basically find out the different logs that are associated with my container. Okay, now the thing is, in the arguments I have to specify the container name or the container ID, and since I don't know it right now, let me first find out what my container ID is. Okay, so I'm going to do a docker ps command to list down the different containers. Okay, since there are no active containers, I'm going to add the -a flag. So these two commands I
will explain in detail at a later point of time, guys. Okay, but anyway, getting back to our problem here, you can see that the hello-world container got executed, right? So I want to copy the container ID here, and now I'm going to find out the logs: docker container logs, and then I'm going to paste the container ID. This way, whatever logs were generated for this container, those would be displayed. Perfect, it worked, that's what was printed when the container executed. So the same thing can be done for any of the other containers. Okay, if I do a docker ps -a and see, there are so many other containers which are there, right, in my system. I can copy the container ID of any of these and execute the same thing again: docker container logs, right, and then I can paste it. So this time the logs of this particular container, the one with this entire ID, have come out. And like I said, with docker container you have various other options, correct? You have options like docker container kill, you have docker container rm and all those things. So I can use docker container rm and hit enter, and basically when I do that, this particular container is gone. So if you remember, this container is the hello-world container, and when I said rm, this container is removed. So if I go back and do docker ps -a, then the first entry for the hello-world container would not be present. And yes, as you can see, it's not present, right? The hello-world container is not present here. Now that's what I wanted to show you. So let me clear my screen and get back to my slides. So I basically executed the docker container logs command
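The docker container subcommands used so far, as a quick sketch (the container IDs are placeholders):

```shell
# Show everything a container wrote to stdout/stderr
docker container logs <container-id>

# Remove the (stopped) container afterwards
docker container rm <container-id>

# Another frequently used option under the same parent command
docker container kill <container-id>
```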

And the docker container rm command. So you have various other options, like I said: we have docker container kill, which can be used if you want to kill any one particular container. Okay, you can use the docker container run command to start a new container, and if you want to start a container again after it has been stopped, you can use the docker container start command. And these are just a few of the commands; the entire list of docker container subcommands can be found in the Docker docs. Okay, so I would request you to go to the Docker docs and see the entire list of commands if you want to learn more about this command. In the meanwhile, let me go to the next slide and continue with our session. The next command that we're going to talk about is the docker login command. Okay, and as simple as it sounds, this is used to log into your Docker Hub account. Can any of you guess why we would need to log in? Well, it's for the simple reason that you might want to push any of the images that you have created locally, right? So when you're working with a team who are all using Docker, then you can just pull a Docker image or create a new Docker image from scratch at your end and build a container, and if you want to share that container with other people in your team, then you can upload it to Docker Hub, right? So how do you upload it to Docker Hub?
So if you want to upload it, you don't have any other workaround: you do it through the terminal, and to do it through the terminal you have to first do a docker login. Once you have logged in using your Docker Hub credentials, then you can simply start pushing your Docker images to Docker Hub. Okay, so that's why this command is really important. So let me go to my terminal and show you this command. The command is docker login. When I hit enter, it says log in with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, it says head over to this website; this is where you can create a new Docker ID. Okay, and for the username, in brackets it says vardhanns — that's my username, because I'm already logged in. So I'm just going to hit enter without entering the username again, and for the password I can enter my password, which of course I'm not going to reveal to you people. But once you enter the password and hit enter, it says login succeeded, right? If your credentials are a match, then you are successfully logged in, and once you're logged in you can start pushing the Docker images which you built locally to your Docker Hub. Okay, perfect, right? So let's clear the screen and get back to our slides now. Like I said, the next command is basically to push your Docker image to your Docker Hub. Remember, the command should have your Docker ID, a slash, and then the image name.
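The login-then-push flow can be sketched like this (vardhanns is the instructor's Docker ID as heard in the recording; substitute your own):

```shell
# Log in with your Docker Hub credentials (once per session)
docker login

# Re-tag the local image as <docker-id>/<name>, then push it
docker tag my-custom-ubuntu vardhanns/my-custom-ubuntu
docker push vardhanns/my-custom-ubuntu
```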
Okay, my-ubuntu-image — this might be the name of the image that you created locally. Okay, but if you want to push it to Docker Hub, you have to tag it with a name, and that name should start with your Docker ID. Okay, so let me get to the terminal and show you how this command works. So let me first look for the image that I want to upload to my Docker Hub. Okay, so when I hit docker images, the list of all the images comes out, and if you remember, my-custom-ubuntu is the name of the image which I created. Now let me try pushing this image to Docker Hub. Okay, so I'm going to copy this, first clearing the screen, and here I need to tag this image with my Docker ID, right? Because right now it has the name my-custom-ubuntu and I cannot upload it to Docker Hub with this name. Now, since I have to tag it with my name, there's a command called docker tag, and here you have to specify which image you want to tag. So the image is my-custom-ubuntu, and here let me specify my Docker ID, slash, the image name. So I'm going to say vardhanns — okay, that's my Docker ID — slash my-custom-ubuntu, right? So this is the name of my image. I could even change the name, but I've just retained my-custom-ubuntu as the name of my image. So when I hit enter, this image gets tagged as vardhanns/my-custom-ubuntu. We can verify the same by running the command docker images, and as you can see here, there is one image with the name my-custom-ubuntu and then there is another image with vardhanns/my-custom-ubuntu, correct? Now this is what I have to upload. So now I can use the docker push command. So I'm going to say docker push and then simply specify the image that you want: docker push vardhanns/my-custom-ubuntu. Hit enter and the image gets uploaded to Docker Hub. And once, when you do it from your end,

after this command is executed successfully, you can go to your Docker Hub and check that your image, which you created locally, has been uploaded to Docker Hub, where it can be shared and accessed by other people. Okay? Okay, perfect. So this shows that my image has been uploaded. Let me just clear the screen, get back to my slides and move forward. And this next command is something that I already executed some time back, right? If you remember, I used the docker ps command to identify which containers are currently active in my system, right, in my Docker engine. So that's what this does. PS basically stands for processes, and when you hit docker ps, all the container processes which are currently running in your Docker engine are listed. However, if you append this command with the -a flag, right, then all the containers which are inactive, even those containers would be listed down. So that is the difference between these two commands: docker ps, and docker ps with the flag -a. Now let me go to my terminal and show you that. So docker ps first. Okay, and right now there are no entries because there are no containers which are currently active. But if you want to find out all the containers, irrespective of whether they are active or not, then it would list down all the containers in my system, right, or on my host, and that's what docker ps -a is going to do. And as you can see, there's an entire list of Docker containers over here. There is the custom image which I created, and then there are various other images over here which I used to build containers, and I showed you how they work in my previous sessions. So there's the test-angular container, and then there is a demo-app-1. These were images which were used for my Docker Swarm and Docker Compose videos respectively. So if you want to go and see those videos, the link will be there in the description below, guys, and I would request you to go through those videos to understand other
Docker concepts better. Okay, because Docker Compose and Docker Swarm are the advanced concepts in Docker, and they're a must-know if you want to make it as a Docker professional. The links for those videos are in the description below. So let me just clear the screen and get back to what I was doing. So the next command that we have is the docker stop command. Now the docker stop command is basically used to shut down any container. So if there's any container in your Docker engine which is running, right, on your host, and if you want to stop it, then you can use this command. And do note that the container would not be shut down right away; it might take a few seconds, because it is a graceful shutdown, waiting for the container's processes to finish first. Okay, it's not a force stop, it's a very gentle stop. That's what this command is. But we have something called the docker kill command, okay, and what this docker kill command does is it ungracefully stops your container. So if there is a container that is actively running, it would straight away kill it. It's something similar to a force kill, right?
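A detail worth knowing that the video only hints at (standard Docker behavior, not stated explicitly here): stop sends SIGTERM and escalates to SIGKILL after a grace period, while kill sends SIGKILL immediately. Container IDs below are placeholders:

```shell
# Graceful: SIGTERM first, then SIGKILL if the container hasn't exited in time
docker stop <container-id>

# The grace period can be widened from the default 10 seconds
docker stop -t 30 <container-id>

# Forceful: SIGKILL straight away
docker kill <container-id>
```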
So that is the difference between these two commands, docker stop and docker kill: kill would straight away kill your container. Now, before I show a demo of this, let me go forward and talk about a few more commands. There is something called docker rm, right? What this one does is it removes a container. At this point of time you have to remember that if you want to remove any container from your host, you have to first stop it. And how will you stop it? By the two commands that I explained in the previous two slides: you either force-kill it using the docker kill command, or you stop it gracefully using the docker stop command. And once you've used one of those two commands, you can remove the container. Okay, and we have another command, that is docker rmi. Okay, so docker rm removes containers, but if you want to remove images themselves from your repository, then you can use the docker rmi command. Okay guys, so these are the four different commands that we have here which are regularly used. Now let me open my terminal and show you how they work. First, let me do a docker ps, and since there are no containers which are currently active, what I'm going to do is start a service. Okay, I want to containerize a particular service and then I will show you how to stop it, kill it or remove it. Okay, there is one particular image, demo-app-1, okay, which I used to deliver my previous session — that was the Docker Compose session, right? Over there I used that particular image and I created an Angular application. So I'm going to first start that service, and the command for that is docker run --rm. Then I'm going to say the port mapping is 4200:4200, because it's an Angular application. Let's give it a name, that's --name, right? So let's give it a name: my-angular-application, or let's give the name my-demo-application. Okay, and demo-app-1 is the name of that image. So when you hit enter, first the image would be spun up and the container will come up. Let's just wait

for the container to become active. So let me first open a new tab of this terminal. Okay, and here let me run the command docker ps, and you can see that 42 seconds ago this app was created, right, the demo-app-1. Here it says the webpack has compiled successfully, so if I go to my Firefox, the service would be active, the Angular application would be active. Okay, but if I want to temporarily stop this container, or if I want to kill it, then I can use those commands: docker stop, or I can use docker kill. Okay, so let's use those commands and see how they work. I'm going to say docker stop followed by the container ID, correct? Hit enter. So the container has stopped. Now if I do a docker ps command, this container would not be active. Okay, and over here also you can see that the app which had compiled has ended, right? Here the service is not hosted anymore. So that's how the docker stop command works. So let me go to this terminal and restart the same service, and over here, this time, instead of using the docker stop command, let me say docker kill. Okay, sorry, I've just used the same container ID, right? So I need to do a docker ps first. Okay, and yeah, now this is the container ID which I have to kill. So I'm going to say docker kill, pasting this container ID, hit enter, and that container has also ended. So here the service has exited, right? So that's how you kill a container. That's the difference between the stop command and the kill command. Okay, so I'm going to clear the screen, and after these two commands there are two more commands, docker rm and docker rmi, right?
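The run command used to start the demo service, spelled out with its flags (the image and names follow the narration, and the -p flag is the usual way to express the 4200:4200 mapping described):

```shell
# --rm    : delete the container automatically when it exits
# -p      : map host port 4200 to container port 4200
# --name  : give the container a readable name instead of a random one
docker run --rm -p 4200:4200 --name my-demo-application demo-app-1
```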
They are used to remove containers and images respectively. So let me go ahead and do that. First, let's run the command docker rm. Okay, and now we have to specify which container you want to remove. So for that purpose, let me first find out which Docker containers are there in my system. So when I do a docker ps -a, there are a number of containers, and from here let me remove this test-angular container. Okay, this is the name of the image and this is the container ID. So I'm going to copy this container ID and go back here. Let me clear the screen, and here let me run docker rm with the container ID, and when this ID is returned, it means that my container has been deleted successfully. And the benefit of this is that I have freed up a little more space on my host, right, in my Docker engine. Now guys, similarly to how we removed a container, okay, let me go here and do a docker images. So this is the whole list of the different images that are there in my repository, and if I want to remove any of these images, then I can do docker rmi. And what we have here is a redis image and an alpine image, which I do not need. So let me copy this redis image name and remove this image from my repository. So the command is docker rmi this time, because 'remove image' is what it stands for, and I can specify the image name or I can even specify the image ID. The image name is good enough. So that's what's happening, right?
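The two cleanup commands side by side (the ID is a placeholder, and remember a container must be stopped before docker rm will remove it):

```shell
# Remove a stopped container, freeing its writable layer on the host
docker rm <container-id>

# Remove an image from the local repository, by name or by image ID
docker rmi redis
```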
It says untagged and deleted, perfect. Now I can clear the screen, and what I wanted to show you, I've shown you already, no? If I run the docker images command again, then redis would not be visible here. So you can see alpine, but you can't see redis, correct? So that's how it works. So let me go back to my slides and go to the next command. We spoke about stop, we spoke about kill, we spoke about docker rm, and we also spoke about docker rmi. Now the next command in question is the docker exec command. Okay, this command is used to access any active container, right? Any container that is actively running: if you want to access the bash of that particular container, then you can use this exec command. Okay, and we use the -it flag over here. So you can either use -it together or you can use -i space -t. Now what -i does is it basically says access your container in interactive mode. So that's the option this flag specifies, and that's why we're able to access the container. Okay, and you have to specify which container you want to access, followed by the word bash. So let me go back to my terminal and show you how that works. So over here, let me clear the screen and do a docker ps and check which containers are actively running. None of them are running right now, so let me start a container over here. Okay, let me do a docker run — in fact, I can start one of the containers I started some time back, the demo-app-1, right? The one I spoke about, the Angular application. Let me start this same container. Let's wait for it to come up. Perfect.
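The exec command about to be demonstrated, as a sketch (the container ID is a placeholder):

```shell
# -i keeps stdin open (interactive), -t allocates a terminal;
# together they give you a usable shell inside the running container
docker exec -it <container-id> bash

# Once inside, leave the container's shell with:
#   exit
```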

Now it says webpack compiled successfully. So now let me go to my browser and hit localhost:4200, because my Angular application is active on port number 4200, right? So this is that Angular application which I was talking about. So if I go back to my terminal, you can also see that I have specified 4200 as the port which is to be used to access that application on my host, and this is the port number it's running on internally in my container. So I'm mapping my container port to my host port, and because of this I could access that Angular application in my web browser. Now, getting back to our slides, we are supposed to use the docker exec command to access this container, right? So right now I cannot access this container over here; let me access this container from a new terminal. So this is the new terminal, and here, if I do the same docker ps command, the new container is active. So from here let me copy the container ID and then run the command docker exec with the flag -it, followed by the container ID and then bash. Bingo! So right now I am inside this container. So all this time, this was the user, right, edureka@ubuntu: this was my host machine and this was my username. Right now I'm logged in as the root user inside the container, with the hostname being this ID, because this is what I specified over here. So now we are not in my local system, we are inside the container. And what can we find inside the container?
We would basically find the dependencies, libraries and the actual application code of this particular Angular application, which is hosted over here, right? All the project code would be present inside this container, correct? So let's verify if that is the case by checking where we actually are, and by doing an ls you can see all the different files here. We have a Dockerfile, which was used to build this application, and then we have a package.json file, which is the most important file to build any Angular application or any MEAN stack application, and then we have protractor.conf.js, which is used to test Angular applications, and then we have so many others, right? We have an src folder, we have an e2e folder, and then you have the node_modules folder. So this is where all your project dependencies are stored, correct? So package.json specifies what the project dependencies are, and this is where they're all stored. So this is my Angular application, right? So if I go one directory back, I am in the parent folder now. Okay, let me do an ls. Let me go one path back again and do an ls, and here you can see that I have other folders like bin, games, include, lib, local, sbin, share and src. Now these are inside my container. I hope this was enough evidence for you, I hope it was. So I'm back here, and yeah, that's how you access your container. If you want to make changes, you can make changes here. Okay, and since we are inside the container, let's just create a new file. So let's just say touch f1. The touch command is used to create an empty file, right? So now if I do cat f1, of course there is nothing in it, but let me do a sudo gedit — okay, so I don't need to give a sudo because I'm already logged in as the root user. So I'm just going to do gedit f1. Okay, so it's not letting me access this command, right?
Okay guys, anyway, that's how you access the container. Okay, so let me just clear the screen, and if you want to exit the container, exit the bash, then you should use the command exit. So when you hit exit, you're back as the edureka user on your Ubuntu host system. Interesting, right? So I'm going to clear the screen and go back to my slides and check what's next. And then we have the docker commit command, and what this docker commit command does is that it basically creates a new image of an edited container in the local repository. In simple words, it creates a new image of any container which you have edited, correct? So let's execute this docker commit command and see how that works. Let me go to my terminal here. Let's first run the docker ps command. Check: this is the container ID. I accessed this Docker container, so I hope something would have changed in there, and I'm going to create a new image of that particular Docker container. Okay, so I'm going to copy it and run the command docker commit, then specify the container ID of your container, and followed by that you have to specify the name of your new image. So I can say vardhanns/my-angular-image, right, my Angular image. So this would basically create an image of this container which is running. And bingo, perfect, it's done. So if I run the command docker images, then there will be a new image with this name and tag. Let's verify that by going to docker images.
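The commit command just described, sketched out (the ID and image name are placeholders modeled on the demo):

```shell
# Snapshot the current state of a (possibly edited) container as a new image
docker commit <container-id> vardhanns/my-angular-image
```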

Let's go up, and as you can see, there is vardhanns/my-angular-image. Perfect, this is what we wanted to verify, correct? So let me clear the screen and go back here. And yes, 'webpack compiled successfully' — this was the message we got earlier. So anyway, let's not worry about that. That's what the docker commit command does. So if I want to stop this container service, then from the new terminal let me just kill that container service. So this is the ID — I'll copy this — and then I'm going to say docker container stop, and my container will have stopped. So here, yes, my service has stopped over here. Bingo. So I'm going to clear the screen in both the places. Okay, now let me get back to my slides. So the next command that we're going to talk about is the docker export command, correct? The docker export command is basically used to export any Docker container in your system into a tar file, correct? So this tar file is going to be saved in your local file system and it's not going to be inside Docker anymore. This is another way of sharing your Docker images, right? So one way was by uploading it to Docker Hub, but in case you don't want to do that — you don't want to upload it to Docker Hub because the image is very heavy — then this is an alternative which is used in the industry, where we do a docker export on one machine and we save that image as a tar file, and this tar file is imported on the destination system, and over there it can be accessed again and the container can be run. So let me show you an example of that by first of all getting to it. Okay, so it says docker export, right?
So this is the syntax for that, okay. You say docker export, you use the output flag with two hyphens, you can specify the name of the tar file that you want to store it as, and then you have to specify your container name over here. Okay, so what's written over here is 'my container', so you'll have to specify your container's name. So let me go to my virtual machine and see what Docker images I have available. There is vardhanns/my-angular-image, there is my-custom-ubuntu. So what I'll do is save this my-custom-ubuntu one. Okay, so I'll copy this, go back to the terminal, and what I'll do here is say docker export, double hyphen, which is the flag: I'm going to say output equals the name of my tar file. So I can call it my-docker-tarfile. And after that I have to specify the container. So docker images wouldn't do for that; I have to do a docker ps -a. So I have a custom container here, right? So let me save this particular one. I'm going to copy the container ID and paste it over here, which indicates that I will create a tar file of this particular container, and this tar file would be saved in this directory itself, in my working directory. Now, since it's a heavy container, it's going to take a few seconds. And it's done, and we can verify that by doing an ls. The name we gave was my-docker-tarfile, correct?
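The export command as used in the demo, sketched (the container ID is a placeholder, and the file name follows the narration):

```shell
# Write the container's filesystem out as a tar archive in the current directory
docker export --output=my-docker-tarfile <container-id>

# Confirm the archive exists
ls -lh my-docker-tarfile
```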
And then you can see there's a my-docker-tarfile, which is basically a tar file. So if you go to your file manager, you can see that there's a new tar file, my-docker-tarfile, that has been created, and you can move around the same my-docker-tarfile over here. This is the newly created tar file, so I can go back to my slides here, and let me just clear the screen. Okay, perfect. So, going back to my slides, I showed you how the docker export command works and what its benefit is. Now, in the next slide, we have the docker import command. The docker import command is basically used to import any tar file: if you have a tar file which has been given to you by a fellow developer and you want to create a container out of it, then you have to import it, right? So how is that possible? This is the syntax for that: the command is docker import and then the complete path of that tar file, okay? So for this particular purpose, I have already created one tar file, because I wanted one which can be imported straight away. So I created a tar file over here, demo.tar, and it is present inside my Downloads folder, correct? So let me import that file. I'm going to say docker import and then I have to specify the complete path, so it's /home/edureka/Downloads/demo.tar. Let's hit enter, and this particular image has been successfully imported. You can verify that by seeing the first few characters of the newly created image.
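And the matching import, sketched with the path used in the demo (the path reflects this particular machine's setup):

```shell
# Turn the tar archive back into a local image
docker import /home/edureka/Downloads/demo.tar

# The freshly imported (and so far unnamed) image appears at the top
docker images
```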

Okay, so let's run the command docker images over here, and you can see that just recently, 23 seconds ago, a new image was created, right, with the image ID starting 3ef, and that's the same image ID over here, right? It starts with the same sequence of characters, and right now it has no name. So that is how you easily import Docker images: first export, and then you can import. So let me just clear the screens of both tabs, and now, getting back to my slides, that was the docker import command, and now come the advanced Docker commands. So far we saw the Docker commands which are very basic and can be executed easily, but here comes the challenging part: Docker Compose and Docker Swarm. These are advanced concepts in Docker which solve a lot of business problems, and of course the commands are also of a slightly more advanced nature. First, let's start with Docker Compose. There are two variations to it, and the two syntaxes can be seen over here: docker-compose build and docker-compose up. These are the two commands which work very similarly to docker build and docker run, right? docker build is basically used to build a new image from a Dockerfile, correct? Similarly, docker-compose build is used to build your Docker Compose services by using your Docker YAML file. Now, YAML stands for "yet another markup language", and in the YAML file we can specify which containers we want to be active, and you have to specify the paths of the different Dockerfiles which will be used to create those containers, or those services. That's what Docker Compose does, right?
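The two Compose commands just introduced can be sketched like this; both assume a docker-compose.yml in the current directory:

```shell
# Build (or rebuild) the images for every service declared
# in the docker-compose.yml found in the current directory.
docker-compose build

# Create and start all the service containers in one go.
docker-compose up
```

On newer Docker versions, Compose ships as a CLI plugin and the hyphen is dropped: docker compose build and docker compose up.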
It creates multiple services, and it's basically used to manage all those services and start them in one go, so it would probably use more than one Dockerfile. If you go through my previous video on Docker Compose, I have explained it in detail over there. I have used three different Dockerfiles, and using those three Dockerfiles I have created three services, right? The three different services are the Angular service, the Express and Node service, and the MongoDB service. MongoDB was used as the database, Express and Node were used as my back-end server, and Angular was used for my front end. The link to this video is present in the description below. But let me just quickly wrap this up by saying that if you want to build, you use docker-compose build, and if you want to start your Docker Compose services and start the containers, then you can use docker-compose up. This is very similar to the docker run command. And that's what your Docker Compose does, right?
It creates multiple Docker services, containerizes each of them, and gets the different containers to work together. So, perfect, let me go back to my terminal and do that for you. docker ps: there is nothing. Right now we are in the /home/edureka folder, correct? So let me do ls, and there is a folder called mean-stack-app. I'm going to cd into this particular folder, and here, if I do an ls, you can see that there's a docker-compose file. So let me do a gedit docker-compose.yml. Here you can see that I have specified the instructions to create three different services: one is the Angular service, another is the Express service, and finally my database service. I've explained all these things in my previous video; I repeat, the link for that video is in the description below. So let me quickly execute this YAML file. If I do a docker-compose build, this command would look for the docker-compose file inside this directory, and then, once the images are built, I can execute them by using docker-compose up. So I'm just going to replace build with up; this way my Docker Compose services would also be up and running. Earlier I showed you an Angular application, and this time it's going to be an entire MEAN stack application, which is going to involve everything: MongoDB, Express, Angular, and Node.js. So my Express is up and running, my Angular is up and running; my MongoDB is active on port number 27017, my Express would be active on port number 3000, and Angular, as usual, would be active on port number 4200. So let's verify the same by going over here. It also says webpack compiled successfully, so this time, if I refresh this, a different application would come up, correct? So this is my MEAN stack application: port number 4200 is the front end; on port number 3000 is my server end, which simply says foobar; and on port number 27017 there is my MongoDB, right? So these are the three different services which are active on my various port numbers. Going back to my terminal, I can do a docker ps to verify that there are three different services of mine which are running. If I want to stop these services, I can simply do a Ctrl+C from here, and hopefully it stops. Yes, all three services have stopped. Let me execute the same command, and this time, yeah, they're all gone. Right, so the docker ps command shows no containers active. Bingo. So I'm going to clear the screen, go back to my slides, and go to the next command. The next advanced command that we have is the docker swarm command. Docker Compose, I told you, was to basically have a multi-container application, right? Docker Swarm, however, is used to manage multiple Docker engines on various hosts. So usually, you might be aware that your Docker engine is hosted on one particular host and you're executing your Docker commands over there, right? That's what we were doing all this time; even Docker Compose did that: on the same host, three different services were started. But with Docker Swarm, the benefit is that we can start those services on multiple machines. So you will have a master machine, which is nothing but the Docker manager, as visible from here, and then you will have different slaves, or what are called workers in Docker terms. So you have a manager and workers, and whatever service you start at the manager will be executed across all the machines which are there in that Docker Swarm cluster. So it says, right, it creates a network of Docker engines or hosts to execute the containers in parallel, and the biggest benefit of Docker
Swarm is scaling up and ensuring high availability. So some of the commands which are associated with Docker Swarm are these. If you want to start off by creating a Docker Swarm, then you use this command: docker swarm init, and you say advertise. And then you have 192.168.1.100. It's supposed to be two hyphens over here, okay? So this is how the syntax is supposed to be: docker swarm init --advertise-addr, and then you have to specify the IP address of your manager machine. So if I start the swarm from this particular host of mine, then this host would assume the role of my manager, and in this syntax, remember, I have to specify my own IP address, so that the other workers who will be joining my network would subscribe to my IP address over here. So let's quickly go and execute this first command. Let me show that to you. Let me open up the terminal, and the command is docker swarm init, where init stands for initialize, with the flag --advertise-addr and then the IP address; the IP address of my VM ends in 100. So when I hit enter, see what happens: it says the swarm is initialized, and this particular node is now a manager. And if you want other machines to join this particular manager as workers, then they have to use this token. So we have to just copy this, go to the other machines, and execute it. Supposing this is another machine of mine (I'm giving you an example), over here you would have to paste that token. So this is called the join token; you just hit enter, and then you will join as a worker.
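The init-and-join flow just shown can be sketched as follows. The IP address matches the one used in the demo; the token is a placeholder, since Docker generates a unique one when you run init:

```shell
# On the manager: initialize a swarm. This machine becomes the manager.
docker swarm init --advertise-addr 192.168.1.100

# The init output prints a full join command containing a token.
# Run it on each worker machine (<worker-token> is a placeholder).
docker swarm join --token <worker-token> 192.168.1.100:2377

# Back on the manager: list the nodes now in the cluster.
docker node ls
```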
So that's how the docker swarm command works. Now, I cannot go into too much detail with respect to how Docker Swarm works, because that again would take a lot of time, and if you want to actually learn Docker Swarm, you can go and watch the other video which I delivered a couple of months back. That video is called "Docker Swarm for High Availability"; it is a detailed video and you will enjoy it, because in that video I have shown how Docker Swarm can be used, and you will see the power of Docker there. So I would request you to go there, and the link for it is again below in the description, if you want to learn more about Docker Swarm. But getting back to our slides, we have other commands here, right? So docker swarm join is what I already explained to you: followed by this you will have a token, and if you give that, you can join a particular swarm cluster as a worker. If you want to regenerate that particular token, which is needed to join that particular cluster, then at the manager's end you can execute this command: docker swarm join-token. It would generate that token and give it to you. And similarly, if you want to leave the Docker Swarm cluster, then you can execute this command: docker swarm leave. So if you execute this command straight away at the worker's end, or on the nodes,

then it would simply leave, okay? But at the manager's end it would not leave just like that; you'd have to append the force flag. So let me show you that. Let me just execute the command docker swarm leave. If it were a worker, it would leave right away, but since it's a manager, like I said, it says use the force option. So let's use that: docker swarm leave with the double-hyphen force flag, and it says the node has left the swarm. Perfect, right? So this is all about Docker Swarm, guys. So let me go back to my slides and cover the one last command for today, and that command is the docker service command. This command is used to control any existing Docker service, be it any container, or your Docker Compose, or Docker Swarm, or anything else, right? So docker service is a very underutilized command, I would say. If you want to control your different nodes when you're in a Docker Swarm, then you use docker service: you use the docker service ls command to list the services running in your cluster, you use the docker service ps command to find out what containers are being executed for a particular service, and then, suppose you want to scale the number of containers: say you have a cluster of five machines with five containers running across them, and you want to scale those containers to 25, which means you will be executing five containers on each machine, right?
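The token regeneration and leave commands described here look like this as a sketch:

```shell
# On the manager: regenerate and display the worker join token.
docker swarm join-token worker

# On a worker node: leave the cluster.
docker swarm leave

# On the manager, leaving must be forced.
docker swarm leave --force
```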
So for that you have to use the command docker service scale. If you want to remove any service from any particular node, then you use the command docker service rm, and if you want to find out the logs, then you use the command docker service logs, and so on. Right, so the docker service command, let me repeat, is used in sync with your Docker Swarm and Docker Compose, primarily; that's why these form the advanced Docker commands. So let me go to my terminal and quickly show you a glimpse of this. So it's docker service: if we do an ls, you will not have anything listed, because it says this node is not a swarm manager currently. But if I start my Docker Swarm and then run the same command, docker service ls, then you can see that the output is different, right? I have a few attributes here, ID, name, and mode, which are basically details about the different services in my cluster. But since no service has been started on my cluster, there are no entries yet. So that's how it is. So that is docker service ls. If you want to find out the logs, then you can do that too: docker service logs. If you use docker service logs, you have to specify which service you want to check the logs of, and what the task is, right? So, which task and which service. It's that simple, guys; that's how docker service is used. Okay, guys, and again, if you want to stop or remove any service, you can scale it down or use docker service rm. What is Docker Compose? The definition says Docker Compose is used to run multi-container applications. Multi-container, right?
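The service subcommands mentioned here can be sketched as one session. These run only on a swarm manager node; the service name web and the nginx image are hypothetical examples, not from the video:

```shell
docker service ls                  # list services in the cluster

# Create a demo service (name, replica count, and image are examples).
docker service create --name web --replicas 5 -p 80:80 nginx

docker service ps web              # tasks/containers backing the service
docker service scale web=25        # scale from 5 to 25 replicas
docker service logs web            # aggregated logs for the service
docker service rm web              # remove the service
```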
So, well, the thing is, you usually use one container to host one service, right? That's what we discussed all this time. Now let's take the case of a massive application. It has multiple services, and in fact there are multiple web servers which need to be placed separately, each on a particular server or a particular VM, because it might cause an overhead; maybe two or three servers cannot be hosted on the same machine. So at that time, what we usually do is create a new VM and host it there, or we have a new server altogether. For example, if you want to monitor your application, then you might probably use Nagios. With Nagios there will be times that you'll have to host it separately on a different machine, and similarly you will have various other servers, like Jenkins, and many other web services. So at that time, instead of having different machines or different VMs, we can simply have different containers. You can have these multiple services hosted in these multiple containers: each container will be hosting one service, and then these containers would be run such that they can interact with one another, exactly how it works in the case of servers or VMs. It's exactly the same way, but it's just that it's going to be one very simple command, which is docker-compose up. It's like a grid, right? It will run all three containers at the same time, it will host all these services, and it will get them to interact with one another. That's the benefit of Docker Compose, and that's the whole point of today's session, right? I want to show you how awesome Docker Compose is. So, yeah, moving on to what I'm actually going to show

you in today's session: I'm going to show you how to set up a MEAN stack application, like I mentioned earlier. First of all, the MEAN in MEAN stack stands for four different things: the M stands for MongoDB, which is the database, E stands for Express, A stands for Angular, and N stands for Node.js. Together, this is a full stack application. Now, since we are using a combination of these tools, we call it a MEAN stack application; that's what the acronym stands for. So this full stack application is again a web service, such that you have a front-end client, you have a back-end server, and then you have a database. So whenever you have your clients or your customers interacting with your web application, they would be interacting with the client first, the front-end client. Whatever actions they perform or whatever requests they make would go to the client, and from there to the server, and the server would do the necessary function. Sometimes it would need to fetch data from the database; in that case it would fetch the data and provide a response, and sometimes it might have to perform other functions. So the actual functionality would be done by the server, the displaying part would be done by the client, and the actual data would be stored inside the database. That's how a full stack application works: it's a combination of these three services, the front-end client, the back-end server, and the database, and that's what I'm going to use. So, if I want to have these three services, then I would have to create three different containers, right?
So I have container number one, which I can use for MongoDB, which would be my database; I have container number two, which I can use for my back-end server (I'm going to use Express and Node.js in combination); and the third service is my front-end client, for which I'm going to use Angular. Now, I'll be hosting these three services inside these three containers, and each of these three containers would be built from its respective Dockerfile. As you can see, there's Dockerfile 1, Dockerfile 2, and Dockerfile 3. Now, in the same way that I explained in the previous slide, we have a Dockerfile: we build the image first, and then that would be spun into a container. The same process follows here also; it's just that each of these containers is built separately using its own Dockerfile, and each of these Dockerfiles would be called, one after the other, with the help of our Docker Compose file. So "compose" is the key term that you need to note here.
And the compose file is a YAML file, basically, yet another markup language, and in the compose file you specify the location where each Dockerfile is present, and then you also specify the port numbers that each container needs to use to interact with the other containers. And at times, if you have a database in place, you might also have to specify the link between the database server and the service the database will be connected to, and for that purpose you do that there too. So that's how Docker Compose works, and that's the overview I've given you: three containers built from three Dockerfiles, which would be called by the Docker Compose file, which is a YAML file, and there you go, you will have a web application hosted that's up and running. All right, so MEAN is nothing but a full stack application that involves the combination of these four technologies: Angular, Node.js, Express, and MongoDB. So the three services that my MEAN stack application involves are, primarily, the front-end client, the back-end web server, and the database. This is the same thing that I explained a couple of minutes earlier, but since you have a pictorial representation, I hope you can relate to this better, right? My front-end client is going to be Angular, the back-end server would be Node.js and Express, and the database is going to be MongoDB. So you guys shouldn't have any problems now. These three services would be hosted separately in three different containers, and those would be built from my Docker Compose file. So that's what I'm going to do now. Now, let's see how to containerize and deploy a MEAN app by using Docker Compose. Great, so first of all, let me open my virtual machine. I have my terminal here, and now I want to show you my project, okay?
So this mean-stack-app is the folder where I have my project present. As you can see, there is one angular-app folder, which basically contains all the code for my client, for the front end; this is the back-end server folder, where all my code is present again; and this is the compose file which I have written, and this Docker Compose file is what is going to do all the work for us, okay?

It really does the work, and one thing you might notice here is that I don't have a Dockerfile for my MongoDB, right? I mentioned earlier that I would be using a Dockerfile for creating each container, but in this case I don't really need to do that. A Dockerfile is a slightly involved procedure, but for my database I don't need to build something from scratch, and I don't need something customized, so I can simply use an existing MongoDB image which is there on Docker Hub. I can use that and link it with my back-end server. That's why I don't have a Dockerfile for it; instead, I have directly referenced that MongoDB image over here. So this is the YAML file, and if you guys are watching this video at a later point of time, you can also follow along and understand what I have specified here, because I have mentioned in comments what each and every line does. So it would be helpful for you people to come back and have a look at this later if you are having any problems. But in the meanwhile, let me explain each line here. In the first line, we are saying the version to be used is 3.0. So you have to install a version of Docker Compose separately (the Docker engine will anyway be there, right?), and you have to download a version of Docker Compose which matches your Docker engine version, because certain versions of Compose are not compatible with certain versions of the engine. You have to just look up the right version; I am using version 3.0 of Compose and I have version 1.16 of my Docker engine, so just make note of that. And yeah, you first specify the version; that's going to be our first line. After that you simply specify the different services that you want to run. The keyword for that is services: you give a colon here and you specify the three different container names.
Okay, and each of these containers will contain the actual services. So in the case of my Angular, angular is just going to be the name of container number 1, and here I'm saying build this container from the Dockerfile that's present in this particular folder, this particular path. Similarly, express is the name of the second container, and I'm asking it to build this container from the Dockerfile that's present in this particular path, the express folder. And in the case of my MongoDB, I'm creating the container with the name database, and I'm not giving a Dockerfile here; I'm just saying pull the image from Docker Hub. So it would use the mongo image with the latest tag. So let me just quickly go to my folder and show you where the Dockerfiles are present. This is the angular-app; now, since my compose file is here, relative to my compose file this is the path where my Dockerfile is present, right? So this is that Dockerfile. Let me just open this Dockerfile and keep it here, and similarly, if I go back, there's the Express server folder, right?
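Putting together what has been described so far, the compose file might look roughly like this. This is a sketch, not the exact file from the video; the folder names angular-app and express-server are assumptions reconstructed from the narration:

```yaml
# Minimal sketch of the docker-compose.yml described in this walkthrough.
version: "3.0"
services:
  angular:                  # container 1: front end, built from its Dockerfile
    build: ./angular-app
    ports:
      - "4200:4200"         # host:container
  express:                  # container 2: back-end server
    build: ./express-server
    ports:
      - "3000:3000"
    links:
      - database            # lets the server reach MongoDB by name
  database:                 # container 3: no Dockerfile, pulled from Docker Hub
    image: mongo:latest
    ports:
      - "27017:27017"
```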
So this is where my code for the server side is present, and my Dockerfile for that is present over here. Now, coming back to my YAML file: after specifying the path of each of these Dockerfiles, I'm specifying the port numbers on which they should be running, that is, how the port mapping happens. Whatever application you're hosting inside the Docker container will be hosted on one particular port number of that container, so you have to map it to one of the port numbers of your localhost machine. If you want to interact with it through the web browser, then you have to map it to a particular port number. So I have said this is going to be the local machine port number on which it would be visible, and this is the port number inside Docker where the application is going to be running. Similarly, for Express it's 3000:3000, and for MongoDB it's 27017:27017. Each of these port numbers is the default for these applications, so I haven't done anything unusual over here. Now that I've explained the compose file in a decent fashion, there's one more thing left: links. In this line, if you see, I'm linking my server side to my database. Since I have a database that it needs to fetch data from on a regular basis, we have to give the keyword links with a colon and then specify the container name. My third container is going to be the MongoDB container; it's going to have the container name database, and I'm linking that over here. All right, so it's pretty simple. And now that I've explained each part of the compose file, it's time I explain the Dockerfiles. This is the first Dockerfile, which I created for my front end, and it's very similar to the Dockerfile that I used in my previous session. If you people remember that session, you might recall that, first of all, I'm using a FROM command to pull the node image
with version 6. I'm pulling this from Docker Hub, and inside this image

I'm creating a new directory. mkdir is the Linux command that you use, and I'm doing that with the -p flag, so I'm creating the entire path, including the parent directories, /usr/src/... . So I'm creating this folder inside my Docker image, and I'm changing the working directory to the newly created folder, the project path. And what we need to do is copy all your dependencies, all your project code, and all these things, right? That's what I mentioned to you when I was delivering the slides: all your project code, all your application code, the dependencies and libraries will all be packaged together. So that's what we are doing here. First thing: copy the package.json file to the project path. Now, let me just show you the package.json file. This is the package.json file; first of all, I'm copying this one inside my Docker image. That's because this file is the most important file, which has details about which versions of dependencies are needed for your code, for my Angular code, which is present over here in the src/app path. Whatever versions of the dependencies I would need would have to be mentioned in the package.json file. So I'm copying this file inside my image first, and after I copy it, I'm running an npm cache clean command. npm stands for node package manager; it manages your application's packages here, and with cache clean, you understand, you're just removing the cache. It's not a very important command, but the important command is npm install. When you give the npm install command, what this does is first look for the package.json file (npm, the node package manager, would look for the package.json file), and whatever versions of dependencies are mentioned inside it would be downloaded, and they will
be present inside a new folder called node_modules. So that would be created, and it would have to be placed in this particular path inside your Docker image. So that's what I'm doing next. After downloading node_modules with all the actual dependencies, the actual project code also needs to go in, so I'm copying that by giving the COPY with a dot here. When I say dot, whatever is present in the host machine, everything in that particular path, would be copied to this path in my Docker container. And then I'm simply doing an EXPOSE 4200, indicating the fact that I want this container to have an open port at 4200. The same 4200 is what I'm using over here: since 4200 is where Angular is hosted, I'm mapping that to my host machine's port 4200. This I do in the YAML file. But anyway, once I've specified the port number where it's running, I can simply do an npm start command. When you run npm start, your node package manager would straight away look for your code. Your code would be present inside the src folder, so it would look for everything there, and whatever is present there, it would start executing. And yeah, of course, the dependencies would be present inside the same image, so your application will be successfully hosted that way. Similarly, going to the second Dockerfile: this one is for creating the server side, and if you notice, there's not much of a difference. Almost every step is the same, except for the port number I'm exposing: I'm saying my server would be hosted at port number 3000 of my Docker container, and this, again, I'm mapping in the YAML file to the host machine's port number 3000. So that's the only difference between the two Dockerfiles. And now that I have also
mentioned where these files are present inside my YAML file, I can simply execute this Docker Compose file to have my servers hosted. The command for that... let me check where I am right now. Okay, /home/edureka; I'm going to cd into the folder, and here we have the same set of files and folders, right? So this is the file that we want to execute, and the command to execute the Docker Compose file is docker-compose, space, up. It's a very simple command, and your compose file would basically be executed: as you can see, it's starting the angular container, the database container, and the server-side container. Great. So guys, this is going to take a minute. Okay, the Angular app's

development server is listening on localhost:4200. Great, this is the indication of my client side. It says webpack compiled successfully; that's great, my web services are hosted. Now what I can do is open my web browser here, go to those particular port numbers, and see if my services are up and running. If you remember, my client is hosted at port number 4200, right? So let's hit enter. I can either give localhost or the IP; either of those will do. And as you can see, my Angular client is up and ready, and my app is simply a form page. I can add details here, my first name, last name, and phone number, and just click on Add to submit the details, and those details would go to my database. So it's a very simple application which we have created. And similarly, you can verify that the server is running by going to the port number where it was hosted, and as you can recall, it simply says foobar. Great, so this was the confirmation that I needed. And localhost:27017 is where MongoDB is hosted. For the MongoDB container we get a message like this: "It looks like you are trying to access MongoDB over HTTP on the native driver port." If you get this message, it means your container is operating correctly. Very good. All right, so we can just start using the web application by entering a name. So for the first name, let me give my own name; for the last name, I'll give mine too; and for the phone number I can just give a random number here. And if I say Add, this data would go into my database, that's my MongoDB container. Great, so it shows the records I already have present: I had this one record, and now this record has been added. Now let's verify that by making an API call. Now, I have only explained the client aspect; the server-side aspect is something I didn't explain, right? You guys should know by now that the
server would take the request from the client and act on it, right? If there is a request to access the database, then it would fetch from the database and respond to the client. So let's do that. This, of course, is the UI that I created, which shows my database, but anyway, to verify that the same thing has gone into my database, we can do that by going to the server here and going to this particular URL; there's a /api/contacts path, right? So this is basically an API that my server is serving, and at this URL, /api/contacts, it shows what data is present inside my container. So it says that the first record present is this one: it has an ID, which was generated automatically, and of course the first name, last name, and phone number that were given. And this was the record that I created, and as you can see, it is present. So if you want to play around a little bit, you can do that, and let me just do that by deleting one of these records. I'm going to delete the first record, and now, if I just go back and refresh this page, you can see that the first record is gone. So we have only this name and phone number now; that's because we deleted that record from the database itself. I hit a delete button from my client, that request went to my server first, and the server would in turn go to the database and delete that particular record. And since I did a /api/contacts request as I refreshed this, it would return whatever is present inside the database, and that's what is visible here, right? So currently this is the only record that is present in my MongoDB database.
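The verification steps above can be sketched as a handful of requests from the host, once docker-compose up is running. The /api/contacts path is the one used in this demo; the "foobar" response is specific to this sample server:

```shell
docker ps                                   # should list the three containers

curl http://localhost:4200                  # Angular front end (HTML page)
curl http://localhost:3000                  # Express back end ("foobar" in this demo)
curl http://localhost:3000/api/contacts     # JSON records from MongoDB
curl http://localhost:27017                 # MongoDB warns about HTTP on the driver port
```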
So that's what's visible here, and in addition to the functionalities I showed you, there are a couple more functionalities that you can use with this image. You can retrieve one particular record if you want to — you can do all these things. Okay, and this is just my application; you can come up with your own customizations and build your application in your own way, correct? And you can do all of this. Of course, I cannot go into depth and teach you in detail what the different parts of this application are, but instead I can point you to one of our videos — a recorded video which has a complete tutorial on how to create a MEAN stack application.

Okay, so let me give you the link of that video in some time. But before that, I just want to quickly show you that this was the Express server page. Again, we had the package.json file here and app.js, which is basically the entry point into my server. In this app.js file, we have details about what APIs are there and what function calls can be made, and whenever a particular function call is made by the client, it is routed to the routes.js file inside the routes folder. Okay, so the definitions of those functionalities are present there; whatever actions need to be performed on a click will be specified there. So that's how the server communicates with the database, back and forth, right? So that's how it works. And yeah, that's the explanation of both parts of the MEAN stack — the Angular app and the Express server. What is Docker Swarm? A Docker swarm is a technique to create and maintain a cluster of Docker engines. Okay, now what I mean when I say a cluster of Docker engines is that there will be many Docker engines connected to each other, forming a network. Okay, now this network of Docker engines is what is called a Docker swarm cluster. And as you can see from the image over here, this is the architecture of a Docker swarm cluster. Okay, and there will always be one Docker manager. In fact, it is the Docker manager which basically initializes the whole swarm, and along with the manager there will be many other nodes on which the services are executing. There will be times when a service will also be executing at the manager's end, but basically the manager's primary role is to make sure that the services or the applications are running perfectly on the Docker nodes. Okay, now whatever applications or services are specified or requested, they will be divided and executed on the different Docker nodes. Now this aspect is called load balancing, right? The load is
balanced between all the other nodes. So that's what happens with Docker Swarm, and that's the role of a Docker manager. Now let's go and see what the features of Docker Swarm are, why it's really important, and why it's, you know, the go-to standard in the industry. That's because with Docker Swarm there is high availability of these services. Okay, so much so that there can be hundred-percent high availability all the time, right? That's what high availability means. So how is that possible? That's possible because at any point of time, even if one node goes down, then for the services which were running inside that node, the manager will make sure that those services are started on other nodes, right? So the service is not hampered even though the node may be down; the load is balanced between the other nodes which are active in the swarm. So that's what a Docker manager does, and that's why the Docker manager is the heart of the swarm cluster. Okay, that's one feature. The other feature is auto load balancing. Now again, auto load balancing is something that is related to high availability itself, where at any point of time, if there is any downtime, the manager will make sure that those services are not stopped and that they continue to be executed on other nodes. That's what the manager does. But along with that, load balancing also comes into the picture when you want to scale up your services. Suppose you have, say, three applications and you have bought three nodes for that, right?
So including the manager you will have four nodes, because the manager is also technically a node. Okay, so you have a manager node, and then you have three different worker nodes. In this case the three services which you deploy will be running on three different nodes, and if you want to scale them at a later point of time — let's say you want to scale up to ten services — then at that time the concept of auto load balancing would again come into the picture, where the ten services would be divided between the nodes. All right, so it would be such that you will probably have three services running on one node, three more services running on the second node, and the remaining three services on the third node. And the one service that is left out would, you know, sometimes be run on the manager, or it would be load balanced onto some other node. Okay, and the best part of Docker Swarm is that you don't need to do any load balancing yourself — it's all done on its own, right? There's an internal DNS server which the Docker manager manages, and the DNS server makes sure that all the nodes are connected in the cluster, and whenever any load comes in, it balances the traffic between the different nodes. Okay, so that's one big advantage with auto load balancing. And another feature is that of decentralized access.

So when we say decentralized access, it means that we can access these managers or these nodes from anywhere. So if you have these managers or nodes hosted on any server, then you can simply SSH into that particular server and get access to that particular manager or node. If you access the manager, then you can control what services are being deployed to which nodes. Okay, but if you log in or SSH into a server which is a node, then you can only control or see which services are running inside that node itself. Okay, you can't control the other nodes if you are inside a node — only the manager node can do that for you. But anyway, all that we need is to log into or SSH into a Docker manager and, you know, control which services are running, right? So that's all we need; that can happen this way. And of course, it's very easy to scale up deployments. I also spoke about that earlier: if you already have a certain number of servers and you want to suddenly scale up to 50 or say a hundred services, then what you can do is just buy a few more servers and deploy those hundred services onto them, right? It's a very simple functionality where you can do it with just one single command — one single command is all it takes to scale up your number of services or applications to the desired amount. Right, and you will have multiple services running inside the same Docker node; each node can probably have 10 or 15 services running, and it basically depends on the number of nodes you have. But ideally you shouldn't have too many services running inside the same node, because that causes performance issues, right?
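As a rough sketch of that decentralized access (the hostnames and user here are hypothetical — substitute your own machines):

```shell
# SSH into the manager host (hypothetical hostname/user) to control the swarm.
ssh ubuntu@manager1.example.com

# From the manager you can see every node and every service in the cluster:
docker node ls
docker service ls

# SSH into a worker instead, and you can only inspect that node's own containers:
ssh ubuntu@worker1.example.com
docker ps
```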
So all those things you can do. And finally there is this concept of rolling updates, and rolling updates are by far the most catchy feature, because when we say rolling updates, what we mean is that these applications or services which are running will have to be updated at one point of time or another — down the line you will have to update them. So at that time, what do you do? You cannot, you know, update manually on every single machine, right? If you don't have Docker — if you have hosted your web servers on either virtual machines or on actual web servers — then you would have to go to each and every system and update it everywhere, right? Or you might have to use other configuration management tools. But with the help of Docker, you don't have all those problems. You can simply use the rolling updates functionality, and you can specify a delay. With the delay, it would update each service which is hosted or deployed inside every node, one after the other, with a gap of the specified amount of time. Right, so even while one service is getting updated, the other services are not down, and because of that there is high availability — since the other services are still up and running, there is no downtime caused, right? So you can be sure of that. And rolling updates are very simple too: again, it's just one command and you're all done. These are the benefits of Docker Swarm, and these are the reasons why you should implement Docker Swarm in your organization if you have a massive web application deployed over multiple servers. So that's the big benefit with Docker Swarm, right?
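A minimal sketch of that rolling-update-with-a-delay idea, using Docker's `docker service update` command (the service name `angular-app` and image tag `demoapp1:v2` are hypothetical placeholders, not from the transcript):

```shell
# Roll out a new image version one task at a time, waiting 10 seconds
# between task updates so the service as a whole stays available.
docker service update \
  --image demoapp1:v2 \
  --update-parallelism 1 \
  --update-delay 10s \
  angular-app
```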
So moving on to the next slide. Okay, now it's time for the demo. Let's see how to achieve high availability with Docker Swarm. But before I get started with the hands-on part, where I will be showing you on my virtual machines, I want to first go through what I want to show you with respect to high availability and how to achieve it with Docker Swarm. Okay, so first of all, in terms of high availability, the ideal definition is that you have the application or the services deployed on each of the web servers. Okay, now look at this architecture, where I have two nodes and one manager, and I have a Docker engine running on each of these nodes, and each of these is highly available. Okay, so at this point of time I don't have any problem with respect to any services, and my application is deployed on each of these servers — each of these nodes. So at this point of time, if I try to access this port number in my browser, I can see my application running. Okay, now this is the application which I will be doing my demonstration on, and it is also the application which I executed a couple of sessions back. The link to the demo of this application I will share at the end of the session.

Okay, but don't worry about that, because this session is all about Docker Swarm. So getting back, what I was saying is: since these are hosted on each of these servers, I can access the application that I have deployed on each of these machines. But look at this scenario, where my service is hosted on only one particular node this time. Okay, I have the other servers connected to my cluster — this is my swarm cluster where it's all connected — but the application is not hosted on these two nodes. So at this time, can you guess what happens? Does anybody think that the application will not be accessible on these machines? Well, if you think like that, then you are wrong, because since they are connected in a cluster, whatever is hosted on one particular node can also be accessed from the other nodes. So even in spite of the fact that these servers do not have the application running, the port on which this application is hosted will be internally exposed to all the nodes inside this cluster. And since the port number on which it's running over here — that is 4200 — is exposed to the cluster, then on all the other nodes in the cluster, on the same port number 4200, the application would be accessible, right? Same thing with even this particular node: on 4200 you can access this Angular application. This is the second scenario of high availability. But that is just a scenario where a node doesn't have your application; this third scenario is where high availability is actually being implemented, okay?
Okay, now you have a scenario where you have your three nodes and one of your nodes goes down. Okay, so this time you don't have your node itself — forget about the fact that the application is not hosted there; think about the scenario where your node is not accessible, it's down for some reason, some natural calamity. At that point of time, do you think you can't access it? You can. That's because, again, the nodes are connected inside the Docker swarm cluster and the port number is exposed. For this reason you would still be able to view the Angular application on these servers, right? That's the benefit of having a Docker swarm cluster. All right, so this is how the high availability factor is achieved with the help of Docker Swarm, and this is what I'm going to show you in my hands-on part. But before I go to that part, let me just quickly run through these Docker Swarm commands. Okay, these commands are what I will be using extensively in my demo, and they're also the most common swarm commands that you need when you're starting with your Docker swarm cluster. Okay, so first of all, to initialize the swarm you use this command: you say docker swarm init, and you use the double-dash flag and say --advertise-addr, followed by the IP address of the manager machine — that is, the same machine where you are starting this service. Okay, so when you do this, whatever IP address is specified here, that particular machine will be acting as the manager. It is also ideally the same machine on which this command is running, right? The IP address you specify should be of that same machine. So that's the thing, and whenever you issue this command, the swarm would be initiated with the manager being this particular machine which has this IP address. Okay, that's what happens when
you initialize the swarm. And of course, when you initialize the swarm, you will get a token — it's more like a key, using which your other workers can join your Docker cluster. Okay, but getting back to our Docker Swarm: once you initialize your swarm, you can list the different services that are running inside that swarm. You can list the different nodes that are running — you can check which nodes are connected to your swarm cluster. You can check what tasks or services are running, you can create a new service (a new service as in a new container), you can remove that container, and you can scale them up using these commands. Okay, so use docker service ls to list the services that are running. Then, if you want to drill down on one particular service and check in which node it is running, you can use the docker service ps command — it lists down the tasks when you issue it with the name of the service that you want to check. And then if you want to create a new service, you use this command: docker service

create. Then you specify the name of the service, and you have to specify the image which you want to use to build that particular container and service. And to remove a service, you use docker service rm followed by the name of that particular service. And finally, if you want to scale your services, you can use this command: docker service scale. You just specify the name of the service and the number that you want to scale it up to. Okay, so in this case, if I had a service which had two replicas, then by simply specifying =5 I can scale it up to five different replicas, right? So these are the swarm commands which are applicable from the manager. And now, going to the nodes: if you want to list down all the nodes that are there in your swarm, you can use docker node ls. Okay, do note that so far it was all about the different services, and these commands cannot be run on the Docker nodes —
they can only be run on the Docker managers. Okay, so here you have docker node ls, which lists down all the managers and the nodes. And then if you do a docker node ps, it basically lists down all the containers or services that are running inside that machine. Okay, now this command can be run even on the nodes — but the node ls can only be run on the manager. And finally, if you want to remove a particular node from your cluster, you can run the command docker node rm followed by the ID of that particular node. Okay, but at times you might not be able to do that, because the node might still be connected to the cluster. In that case, what you have to do is use the docker swarm leave command. If you run this command from the nodes, then the nodes will leave the cluster, and then you can wind down your cluster. Right, and finally you can run docker swarm leave from the manager as well, and then the whole cluster itself ends. So even the manager would leave, and the manager would ideally be the last instance to leave, right? When there are nodes present, you cannot have the manager leave with the nodes still there. So that's one thing, and at times you would be given an error saying that you cannot leave the cluster because you're a manager. At that time you can use the --force flag. Okay, so this way you, as a manager, can leave the cluster, and your cluster session ends there, right?
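Put together, the swarm commands described above look roughly like this. The IP address, service name, and image name are the placeholders used in this demo — substitute your own:

```shell
# On the manager: initialize the swarm, advertising the manager's own IP.
docker swarm init --advertise-addr 192.168.1.100

# Manager-only inspection commands.
docker node ls                    # list managers and workers
docker service ls                 # list services in the swarm
docker service ps angular-app     # which node(s) a service's tasks run on

# Service lifecycle (manager-only).
docker service create --name angular-app -p 4200:4200 demoapp1
docker service scale angular-app=5
docker service rm angular-app

# Node management and teardown.
docker node rm <node-id>          # remove a node (from the manager)
docker swarm leave                # run on a worker to leave the cluster
docker swarm leave --force       # run on the last manager to end the swarm
```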
So those are the swarm commands in question. So yeah, I think it's time for me to go to my hands-on session, where I'll open up my virtual machines. For the demo purpose, I have got three different VMs, and inside these VMs I have three different Docker engines. I will basically be using two of the Docker engines as my nodes, and I will be using one of them as my manager. Okay, so this is my manager-1, as you can see over here, and this is the password. So this is the manager from which I'm going to start the whole swarm and the services. Okay, and if I go here, this is my worker-1, as you can see, and this is worker-2. Now, on these two nodes I will be executing my applications or services. Okay, so first of all, if you want to create the swarm, you have to run the command docker swarm init --advertise-addr followed by the IP address. The IP address of this manager machine is 192.168.1.100. Okay, so great — my swarm is initialized, and as it says, if you want to add a worker to this swarm, then you have to run this token command. Okay, now this is the token; let me copy it and run it on the nodes. So I'm going to go to worker-1 and paste this token, and when I hit enter, it says this node has joined the swarm as a worker. Now let me verify that: if I go back here and issue the command docker node ls,

then it says that I have one manager, which is myself — "myself" being indicated by this asterisk, referring to this own system — which is also the leader, so it says manager status "Leader", correct? The state is ready, the availability is active, and since I recently added the worker node, it says even that is available. Now let me go to the third VM and enter the token there, and it says this node has also joined as a worker. Now if I go back to the manager and run the same node list command, you can see that worker-2 has also come in. Okay, that's because I issued the join command on that node. So I'm going to clear the screen, and now we can start creating our services. First of all, if you want to create a service, the command is docker service create followed by the --name flag. So you specify the name of the service that you want to give — let's say I want to call it angular-app. I will say this, and after this we should specify the name of the image. So the image name is demoapp1. Okay, and along with this I also want to specify the port number on which I want to do the binding, because the Angular application is being hosted on one particular port number inside my container, and that has to be mapped to my browser port number, right, if I want to access it on my web browser — that is, this Firefox. For that reason I will use the -p flag, and I'm going to say port 4200 of the browser should be mapped to port 4200 of my container. So this is the command. Okay, now this command simply creates one instance of this service angular-app, which will be built from this image demoapp1, and it would expose port 4200 of the container on port 4200 for my browser. So let me hit enter and let's see what happens. We've got to give it a few seconds, because it's a big application, right?
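The full command assembled in that walkthrough would look something like this, followed by the checks used in the demo (the service name angular-app and image name demoapp1 are as spoken in the demo and may differ in your setup):

```shell
# Create a single-replica service from the demo image, publishing
# container port 4200 on port 4200 of the swarm.
docker service create --name angular-app -p 4200:4200 demoapp1

# Verify: the service exists with 1/1 replicas, and find which node
# its single task was scheduled onto.
docker service ls
docker service ps angular-app
```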
Yeah. So now let's do a docker service ls. Okay, you can see that one angular-app service is created. Now, this is just a warning — you can ignore it, because this is the confirmation that your service has been started. You can ignore such warnings; what you need to look for is this: if you get this service ID, then it means that your service has been created. This is the service ID, basically. So as you can see, right now the mode is replicated; there is just one single instance, and the same thing you can see here — it says replicas is 1 — along with the name I specified, the image it used, and the port number where it's active right now. Okay, now let me do a docker ps command from the manager and check if this application is running inside this node. So yes, it says that this application is running over here. Now, in parallel, let me go to my worker-1 — this is also connected to the same cluster — and do a docker ps over here. You can see that I have got no output. This means that there is no container started inside this node. Okay, this is worker-1. Similarly, let me go to worker-2 and say docker ps. Again, there is no output — it says no containers started. Now if I go back to the manager, I can verify in which node this application is started, and the command for that is docker service ps followed by the name of the application, that is angular-app. So when I hit enter, you can see the name of the application, the ID and the image that was there, and the node where it's running. So it's hosted on manager-1 — it's hosted on my primary system itself. The result says the desired state is running and the current state is running, about a minute ago. Okay, now let me go to my browser and access localhost:4200. As you can see, this is the Angular application which I've hosted. Okay, now I've explained what this application is about
in one of my previous sessions; I would request you to go to that video to get more details about this application. Okay, I'm going to quickly get back to my session here with respect to Swarm. So since I have started my application, I can access it over here. Now, as I explained earlier, all the nodes in your cluster

can see the application that you've started, right? I explained that earlier. Right, now let's verify that by going to the other nodes. So in spite of the container not being hosted on this particular node, I can get the same Angular application over here, because the port number is exposed internally between the different nodes in that cluster. Same thing with my worker-2, right? So — well, I've already done a docker ps, and you can see there is no container here. So let me just quickly go here and open localhost:4200. Yeah, so you can see the application is hosted even on this particular node. Now this is good news. This means that your application is successfully hosted on the cluster and it's accessible on all the nodes. Right now I'm going to do a docker node ls — and yeah, we have three different nodes, and the application executed over here. If you want to verify that, you can also do docker service ls — okay, it's just the one application — and if I do a ps with the angular-app name, it says it's running here. Great. This is one of the scenarios which I wanted to show you. Okay, but I want to show you another scenario, where the application can be hosted on multiple nodes at the same time from the manager. Okay, and the command is not going to be too lengthy either. So last time, what I did is I basically executed the container here, right — so it was executed only over here. But before I go to the next scenario, let me remove this service. Okay, so the command to remove the service is docker service
remove angular-app. So when you get this output, it basically means our application has stopped — the deployment has been removed. So if I try and refresh this 4200 port, it says it's unable to find anything there, and similarly you won't be able to find it on any of the other nodes either, because the cluster itself no longer has this particular Angular application. But now let me go back to what I was talking about: the second scenario, where I can start the same service on all three nodes. Okay, the same docker service which I created — I'm going to issue that with a slight modification. So after my port options, after this flag, I'm going to use the mode flag: I'm going to say --mode global. Now with the help of this flag, the application which I am deploying — which I am, you know, hosting — will basically be deployed onto all three of my nodes. Okay, so I can show you that by first hitting enter, and let's see what status comes back. Okay, so it did take a few seconds, because it's being deployed to multiple nodes, right — that's the only thing. So yeah, again the service has been created; this is the service ID. Now let's do a docker ps and check — and there's one instance of this application running on this same manager, okay, like before it's running over here. And let me verify if it's running over here this time by running the same docker ps command — yes, as you can see, a few seconds ago this application was created. And similarly, if I go to the third VM and run the same docker ps command, it's up over here also. This means the application this time was deployed to all three nodes in parallel. Okay, we can also verify that by going to the 4200 ports of each of these machines. So this time the app is back up, right?
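The redeploy step just described can be sketched as (names as spoken in the demo):

```shell
# Remove the single-instance service, then redeploy in global mode so
# exactly one task runs on every node in the swarm (manager included).
docker service rm angular-app
docker service create --name angular-app -p 4200:4200 --mode global demoapp1
```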
It's running again. Same thing here — I can refresh this and I will see the application coming up; it's just connecting. And similarly over here also, I will have the same success scenario. Okay, it's connected. Great. This is the movie rating system that has been deployed across all the three VMs. Now let me give you the way you can confirm this: it's by running the docker service commands. First you do the docker service ls command. Okay, with this you see the mode is global, and it says replicas 3/3 — that's because there are three different nodes connected, and since it's deployed to all three, it says replicas 3/3, correct? The only difference is that last time it was 1/1; this time it's 3/3. And to drill down further into details as to whether it's running on each

of these nodes, we can use the command docker service ps followed by the application name, that is angular-app. So when I hit enter, as you can see, it says there's one instance running on worker-1, one instance running on manager-1, and the third instance running on worker-2. Great. So this is the real fun part with Docker, right — with one command you can do all these things. So we just verified this. Right, now comes the concept of high availability: if any of my nodes goes down, then what happens? Do I still get access to my application over there? Right, that question needs to be answered. So let's see if that is going to happen. For that, let's say I turn off the internet of my worker-1 — this is my worker-1, right? Let's say my node is down, and to get my node down I'm just going to do a disconnect. Okay, so right now it's not connected to the internet. And if I go here and do a docker node ls command, which lists down all the different nodes in my system, you can see that the status of worker-1 is Down. Okay, all this time we were getting Ready — that's because the server was up. But since I turned off the internet on my worker-1, it's showing the status as Down. But in spite of that, I won't have any problems accessing the application, okay? So even when I refresh, you can see that on this port number I could access the application. That's because, even though this node is down, it is connected to the cluster, so I can access this, right? And the very reason I can do that is because all the machines — all the nodes in the cluster — will have the port number opened, right? It is exposed between all the other nodes; the same concept I explained during my slides, right? So in spite of my node being down, I could do this. Now this solves one part of high availability, right?
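The check performed there looks roughly like this; the table in the comment is only an illustration of the kind of output docker node ls produces (IDs omitted, columns abbreviated), not a literal capture:

```shell
# After disconnecting worker-1, the manager still lists it, now as Down:
docker node ls
# HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
# manager1    Ready    Active         Leader
# worker1     Down     Active
# worker2     Ready    Active
```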
So in this case, even if I have multiple nodes going down, then some of the nodes which are, you know, healthy — those can serve all my services for a temporary period of time. But of course, I'd have to bring up my nodes again, right? So this is how high availability can be achieved; that's one thing. So let me just go back to my worker-1 and enable the internet again, okay, so that I can continue with my demonstration. Okay, connected now. So if I do a docker node ls again — let me just refresh this — yes, now it says the status is Ready. Great, I'm going to just clear the screen. Now, since I ran the last command in global mode, I had an instance running on each of the nodes, right? So this time, let's say I don't want to do that: I have three different nodes, but I want to host the application on only two nodes. Well, I can do that also — I can set the number of replicas of my service in the command where I'm starting my service. So let me go back to that start command and modify it as per our needs. I'm going to remove this --mode global. Once you remove this flag, you can add --replicas and set the number of services you want. Okay, but before this I would have to remove the service, right?
Sorry, my bad — I just forgot to do that. So let's say docker service remove angular-app. I have removed it now, and I'm going to restart the service. Okay, so now let me start modifying this start command: I'm going to remove the global mode, and I'm going to say --replicas and set the replicas to 2. Now, this would indicate that I will have two running instances of this service between the three nodes. Okay, it will load balance between the three nodes, and the manager will choose on its own — it will deploy the application onto two of the best-performing nodes. Let's verify if that's happening. So yeah, it's successful. We can verify that by doing a docker node ps. Okay, this would basically list down whether the container is present in this node. Yeah, there is one container, or one service, running over here. But to get more details, let's run the docker service ps command, okay?
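The replica-count variant described here can be sketched as (names as spoken in the demo):

```shell
# Redeploy with exactly two replicas; the swarm manager schedules them
# onto whichever two nodes it judges best.
docker service rm angular-app
docker service create --name angular-app -p 4200:4200 --replicas 2 demoapp1

# Expect two tasks spread across the three nodes.
docker service ps angular-app
```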

Let me just clear the screen and run the command again for you. So when I do this it says that two instances have been created, right? One has been started on my worker 1 and the other has been started on my manager 1, right? Two instances between the three nodes. Let me also do this just for you: let me do a docker service ls to confirm the replicas, right? It says the mode is replicated and it's two out of two, correct? So no hassle anywhere here, right? So if I refresh it, I would still have the Angular application hosted. This is worker 1; it's anyway hosted over here, so I don't need to verify anything. But to give you a confirmation I can do that also by running the docker ps command. So docker ps would list down all the containers and services running in this particular node. So when I hit enter I have one entry here, okay, for the service that got started. However, in this node, the worker 2, I do not have the application, right? So let me verify that by running the command docker ps. Okay, it's not running here; there's no service. But in spite of that, the application would be accessible here. So that's the concept of a Docker cluster, wherein all the nodes will get access to what's there in the cluster. So that's the fact. And now comes the concept of scaling up and scaling down, right?
This is one thing a lot of people have doubts about, because it's not always understood right: in spite of having a cluster with only three nodes, we can scale up to any number of services that we want. So right now I have two different services, right? If I do a docker service ls you can see that there are two services running. Now if I want to scale up to, let's say, five services, I can do that too. The simple command to do that is docker service scale; we have to choose the application and set the number we want to scale it up to. Let's say I want to scale it up to 5; in this case three more services would be added onto this cluster. Okay, I'm going to go ahead and hit enter. And yeah, it says the application has been scaled to 5 now. Let me run the same docker service ls command, and when I do that it says right now three replicas have already been started; let's give it a few minutes so that it can start on all the other nodes. Okay, in the meanwhile I'll clear the screen first and I will do a docker service ps angular-app; this would tell me on which nodes my applications are going to get deployed. So it says, out of the five, two of those services will be running on worker 1. Okay, as you can see, here is service number one and this is service number two, right?
This is running on worker 1, and again on worker 2 there will be two services running; you can see the two services over here on worker 2, and then on manager 1 there is one service running. This is because I scaled it up from two to five. Now let me do a docker service ls command to check if all my replicas are up. Yes, so we've given sufficient time and by now all the services are up and running. We can check it over here, but we don't need to, because we know for sure that it's going to be hosted anyway, so this is good news, correct? So yeah, this is how we can easily scale up, we can easily scale down, and we can achieve a lot of comfort by using Docker, correct? So yeah guys, this brings an end to my hands-on session. I have also shown you how to scale up and scale down, I've shown you the concept of high availability, and the whole concept of load balancing happened here as well. But still, there is one more thing which I also want to add from my side, okay, and that is: why will services be executed on the managers, right? A manager ideally is not supposed to do any work, right? That's what the workers are for; the manager just manages. So this is a question that you can come up with, and it's a very valid question.
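The scaling step above, with the demo's service name angular-app standing in for whatever your service is called, boils down to:

```shell
# Scale the service from 2 tasks to 5; the manager schedules
# the 3 new tasks across the available nodes
docker service scale angular-app=5

# Watch the replica count converge to 5/5
docker service ls

# See exactly which node each of the 5 tasks landed on
docker service ps angular-app
```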

So if I want to do that, then I can again run one command and enable that functionality also. Okay, and the command to do that is docker node update. I can use the --availability flag here, okay, say drain, and choose which node I want to drain. Drain basically stops allocating services to the particular node which is specified. So over here, if I specify manager 1 and hit enter, then from now on the service which was allocated over here would shut down, and a new service would be created on either worker 1 or worker 2. Or instead of draining my manager I can also drain one of the workers; I can drain either worker 1 or worker 2. But let's say in our case we want to drain the manager, so I can do that by simply hitting enter over here. And yes, we've got this as the return value, that's great. So now if I do docker service ps angular-app, which is the same command, you can see that the task on manager 1 has been shut down, okay, and an additional service has started on worker 1. So right now there are tasks one, two and three running on worker 1, and these two are running on worker 2, right? So I'm going to clear the screen here, execute the same command, and also show you what happens now when I do a docker node ls; okay, docker node ls will basically list down all the nodes connected inside the cluster, right?
So I'm going to do a docker node ls, and over here this time you can see that for my manager 1, which is this ID, the state is ready; however, the availability is not active, it is drained. Okay, even though it is the leader, it is drained. So from now on, if I scale up the services, or whatever I do, even in case of high availability, no services will be allocated to my manager unless and until I remove the drain. Okay, I can remove the drain by again specifying the same command with active. So let me run that command and show you: here, instead of saying drain, if I change the availability to active, then I can start allocating services to my manager also. So if I hit enter, it says I've got manager 1 as the return value. Again, if I run the same docker node ls command, the availability is active, and from now on, if I scale up, only at that point of time will my manager start getting services. The existing ones will not get allocated to my manager; but in case there's any downtime, if any of my nodes goes down, then at that time manager 1 will get tasks, right? And yeah, that would happen. So this is the simple demonstration which I wanted to show you. It sounds simple, but this solves a lot of industry issues, correct?
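Assuming the manager node is named manager1, as in this demo, the drain-and-reactivate cycle looks like this:

```shell
# Stop scheduling tasks on the manager; its running tasks
# get rescheduled onto the worker nodes
docker node update --availability drain manager1

# Confirm: AVAILABILITY now shows "Drain" for manager1
docker node ls

# Allow the manager to receive tasks again
docker node update --availability active manager1
```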
It's one of the best tools I have worked on; Docker, and Docker Swarm, is one amazing technology that I have witnessed. So I hope you've also understood what I'm talking about over here, correct? So yeah, that brings an end to my demonstration here. So before I deep dive into what exactly Docker networking is, let me show you the workflow of Docker. All right guys, so this is the general workflow of Docker. A developer writes code that defines all the application requirements and dependencies in an easy-to-write Dockerfile, and this Dockerfile produces Docker images. So whatever dependencies are required for a particular application are present inside this image. And then when we run this Docker image, it creates an instance, and that is nothing but the Docker container. This particular image is then uploaded onto Docker Hub. From these repositories you can pull images as well as upload your images onto Docker Hub; then from Docker Hub, various teams, such as the quality assurance team or the production team, will pull the images and prepare their own containers, as you can see from the diagram. Now these individual containers will communicate with each other through a network to perform all the actions required, and this is nothing but Docker networking. So what exactly is Docker networking? When containers are created, these isolated containers have to communicate between each other, right? The communication channel between all the containers introduces the concept of Docker networking. All right, so now what would be the goals of Docker networking?
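The Dockerfile-to-Hub workflow described above can be sketched with a few commands; the image name myteam/myapp and its tag are placeholders, not from the video:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myteam/myapp:1.0 .

# Running the image creates an instance, i.e. a container
docker run -d --name myapp -p 8080:80 myteam/myapp:1.0

# Share the image through Docker Hub so other teams can pull it
docker push myteam/myapp:1.0
docker pull myteam/myapp:1.0
```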

So Docker is flexible, in other words pluggable. By flexibility I mean that you can bring in any kind of applications, operating systems, or features into Docker and deploy them. Next, Docker can be easily used cross-platform; by cross-platform I mean you can have n number of containers running on different operating systems, like one container running on an Ubuntu host and another container running on a Windows host, and so on. So you can have all these containers work together with the help of swarm clusters. After that we have Docker offering scalability: as Docker is a fully distributed network, it makes the applications grow and scale individually. Then we have Docker using a decentralized network; this enables the capability to have applications spread out and highly available. So in the event that a container or a host suddenly goes missing from your pool of resources, we can automate the process of either bringing up additional resources or passing over to the services that are still available. Apart from offering the decentralized network, we have Docker being really user-friendly: Docker makes it really easy to automate the deployment of your services or containers, which makes things easy for you in your day-to-day life. And finally, we have Docker offering out-of-the-box support: the ability to use Docker Enterprise Edition and get all the functionality very easily and straightforwardly makes the Docker platform very easy to use. So those were the goals of Docker networking. Now, to enable these capabilities we have container network management, and for that we have libnetwork. What is this libnetwork? libnetwork is open source, which means you can read through the source code and automate on top of it. libnetwork is basically a Docker library that implements all of the key concepts that make up the CNM model. Now, what exactly is this Container Network Model?
Well, the Container Network Model formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. So CNM requires a distributed key-value store, like Consul, to store the network configurations. The Container Network Model has interfaces for IPAM plugins and network plugins. The IPAM plugin APIs are used to create or delete address pools and allocate or deallocate container IP addresses, whereas the network plugin APIs are used to create or delete networks and add or remove containers from networks. The Container Network Model is basically built on three main components: the sandbox, endpoints, and the network object itself. A sandbox contains the configuration of a container's network stack; this basically includes the management of the container's interfaces, routing tables, and DNS settings. Now, a sandbox may contain many endpoints from multiple networks, right? An endpoint is something which joins a sandbox to a network; an endpoint can belong to only one network and only one sandbox. And finally, as I was saying, a network is a group of endpoints that have the ability to communicate with each other directly. Now that you know a brief about the Container Network Model, let me tell you the various objects involved in this model. The Container Network Model comprises five main objects: the network controller, drivers, network, endpoint, and sandbox. Starting with the network controller: the NetworkController object provides the entry point into libnetwork that exposes simple APIs for users, such as the Docker Engine, to allocate and manage networks. Since libnetwork supports multiple active drivers, both built-in and remote, the network controller allows users to bind a particular driver to a given network. Next comes the driver. A driver is not a user-visible object, but drivers provide the actual implementation that makes the network work. A driver can be both built-in and remote, to
satisfy various use cases and deployment scenarios. The driver owns the network and is responsible for managing it, which can be further extended by having multiple drivers participating in handling various network management functionalities. After the driver object, we have the third object, the network. The network object is an implementation of the Container Network Model; as I said, network controllers provide APIs to create and manage the network object. Whenever a network is created or updated, the corresponding driver will be notified of the event. libnetwork treats the network object at an abstract level to provide connectivity between a group of endpoints that belong to the same network, while simultaneously isolating them from the rest. The driver performs

the actual work of providing the required connectivity and isolation. The connectivity can be within the same host or across multiple hosts. After that, the next object that we have is the endpoint. As I discussed before, an endpoint mainly represents a service endpoint. It provides the connectivity for services exposed by a container in a network with other services provided by other containers in the network. The network object provides the APIs to create and manage endpoints, and an endpoint can be attached to only one network. Since an endpoint represents a service, and not necessarily a particular container, an endpoint has a global scope within the cluster as well. And finally, we have the sandbox. The sandbox represents a container's network configuration, such as the IP address, MAC address, routes, and DNS entries. A sandbox object is created when the user requests to create an endpoint on a network; the driver that handles the network is responsible for allocating the required network resources, such as the IP address, and passing the info, called sandbox info, back to libnetwork. libnetwork will then make use of OS-specific constructs to populate the network configuration into the container that is represented by the sandbox. So a sandbox can have multiple endpoints attached to different networks. All right guys, so that was a brief about the various network model objects. Now, let me tell you the various network drivers that are involved in Docker networking. Docker networking mainly has five network drivers involved with it: the bridge, host, none, overlay, and Macvlan networks. Starting with the bridge: the bridge network is the default network driver, so if you do not specify a driver, this is the type of network you're creating. The bridge network is a private internal network created by Docker on the host. All the containers are attached to this network by default, and the containers can access each other using this internal IP. If it is required to access any of these containers
from the outside world, then port forwarding of these containers is performed to map the port onto the Docker host. Bridge networks are usually used when your applications run in standalone containers that need to communicate. Another type of network is the host network. This removes the network isolation between the Docker host and the Docker containers, and the containers use the host's networking directly. So if you were to run a web server on port 5000 in a web app container attached to the host network, it is automatically accessible on the same port externally without requiring you to publish the port, as the web container uses the host network. This also means that, unlike before, you will now not be able to run multiple web containers on the same host on the same port, as the ports are now common to all the containers in the host network. The third option is the none network: the containers are not attached to any network and do not have any access to the external network or the other containers. This is usually used in conjunction with a custom network driver and is not available for swarm services. The next network that we have in the list is the overlay network. To understand this network, let's consider a scenario: let's say we have multiple Docker hosts running containers. Each Docker host has its own internal private bridge network in the 172.17 series, allowing the containers running on each host to communicate with each other. However, containers across the hosts have no way of communicating with each other unless you publish ports on those containers and set up some kind of routing yourself. This is where the overlay network comes into play. With Docker Swarm, you can create a new network of type overlay, which will create an internal private network that spans across all the nodes participating in the swarm cluster. We could then attach the containers or services to this network using the network option while creating a service, and then we could get them communicating
with each other through this overlay network. So you can see that you can use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. And finally we have the last network, that is the Macvlan network. Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon then routes traffic to the containers by their MAC addresses. The Macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network rather than routed through the Docker host's network stack. So guys, that was about the various network drivers. Now, let me brief you a little bit about Docker Swarm and tell you the significance of Docker Swarm in Docker networking, in simple words.
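The five drivers described above map onto the --driver flag of docker network create. A few illustrative commands; the network names here are placeholders, and the macvlan subnet/interface values must match your own physical network:

```shell
# bridge is the default driver, so these two commands are equivalent
docker network create my-bridge
docker network create --driver bridge my-bridge2

# host and none are pre-created networks you attach containers to
docker run -d --network host nginx
docker run -d --network none alpine sleep 3600

# overlay networks require an initialized swarm
docker network create --driver overlay my-overlay

# macvlan needs the parent interface and subnet of the physical network
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 my-macvlan
```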

If we have to define Docker Swarm: Docker Swarm is a cluster of machines running Docker. This provides a scalable and reliable platform to run many containers, which enables IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. So as we know, Docker Swarm is a technique to create and maintain a cluster of Docker engines. What exactly is this cluster of Docker engines? Let me tell you: in a cluster of Docker engines, there will be many Docker engines connected to each other, forming a network. This network of Docker engines is what is called a Docker Swarm cluster, as you can see from the diagram on the screen, and this is also the architecture of a Docker Swarm cluster. There will always be one Docker manager; in fact, it is the Docker manager which basically initializes the whole swarm, and along with the manager there will be many other nodes on which the services will be executing. There will be times when the services also execute on the manager, but the manager's role is to make sure that these services, or the applications, are running perfectly on the Docker nodes. Now, whatever applications or services are specified or requested, they will be divided and then executed on different nodes, as you can see in the diagram here. These different nodes are nothing but the workers. All right guys, that's all you need to know about Docker networking. Now let's move on to the hands-on part. In the hands-on part, first I'm going to show you how to create a simple network and how to deploy a service over that network; after that we'll create a swarm cluster, then we'll connect two services, and we will scale a single service. All right, so let's get started with our hands-on. First we're going to deploy an application named apache2 by creating a Docker service in the default network, that is the bridge network. Apart from that, we'll also initialize the swarm cluster, as we want it
to work on two different nodes, that is the manager node and the worker node. So for that, let me open my terminal and type in the command sudo docker swarm init --advertise-addr and then mention the IP address, right? So I'll mention the IP address of the manager node and then I'll hit enter. Once you hit enter you'll be asked for the password, so type in the password, and then you can see that the swarm has been initialized. Now, to connect the slave node to this particular manager, you have to copy this join command, go to the slave node, open the terminal, and paste it there. So you can see that this node has joined the swarm as a worker. So that is the manager and this is the slave. Now let's go back to the manager node, and over here we're going to deploy an application named apache2 by creating a Docker service. For that you have to type in the command docker service create --name, give the name of the application, --mode, that is which mode we want it to work in (we want it to work in the global mode), -d, -p, that is port forwarding, and then we'll mention the port where it's going to work; it's going to work on 8003. Then mention the account name from which the Docker image will be pulled. Once you hit enter, you can see that your Docker service has been created. Now, to check whether your Docker service has been created or not, you can use the command docker service ls. This will list all your running services; at present we just have one service, that is apache2. Now, to check whether it is running or not, you have to go to the slave node, open the web browser, and go to localhost on port 8003. So you can see a message that it works; that means our application has been deployed onto a container and it is also connected to the swarm cluster, so the worker also has this particular application. Now, if you want to deploy a multi-tier application in a swarm cluster,
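Pieced together, the commands narrated above look roughly like this; the IP address, the container port, and the image account name are stand-ins for whatever your setup uses:

```shell
# On the manager: initialize the swarm, advertising the manager's IP
sudo docker swarm init --advertise-addr 192.168.1.10

# On the slave node: paste the join command printed by swarm init,
# something of the form:
#   docker swarm join --token <token> 192.168.1.10:2377

# Back on the manager: run apache2 in global mode (one task per node),
# publishing it on port 8003
docker service create --name apache2 --mode global -d \
  -p 8003:80 <account-name>/apache2

# List running services; then browse http://localhost:8003 on any node
docker service ls
```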
how will you do that? So let me show you. Before I do that, let me tell you what we are going to connect. Basically we have two applications, that is the web application and the MySQL database. The web application has two parameters, that is the course name and the course ID, and once you mention the details and click on submit query, they will be automatically stored in the MySQL database. This multi-tier application is connected together through the overlay network. So let's start doing it. First, let's create the overlay network. For that you have to type in the command docker network create -d overlay myoverlay1. So myoverlay1 is basically the name of the network that I am giving; you can give any other name that you want. After that, let's create a service for the web application. For that I'll again type

in the command docker service create --name, name of the application as webapp1, -d, --network, and we'll connect it to the myoverlay1 network, then -p for port forwarding, and then we'll mention the port on which it is going to run, and then the account details from which this Docker image will be pulled. After that you can just hit enter, and you can see that your Docker service has been created. Let's check it once again; for that you'll type in the command docker service ls, and you can see that the webapp1 service has been created. Now we'll create another service for the MySQL application. For that you'll type in the command docker service create --name mysql, that is the name of the application, -d, --network, the network to which we want to get connected is myoverlay1, -p for port forwarding, let's mention the port, and then the account details. So you can see that your Docker service has been created. Let's check again; for that we'll type in the command docker service ls, and you can see that the MySQL service has also been created. Now, what you have to do is go to the web application and make some changes inside it. To go inside the webapp service, you need to know the container ID; to know the container ID, you have to type in the command docker ps. This will list all the container IDs that are present on this node, so you can see webapp1 and apache2 are present. We need the container ID for webapp1, right?
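The overlay setup narrated above can be summarized as follows; the network and service names come from the demo, while the ports and image names are placeholders:

```shell
# Create an overlay network spanning the swarm nodes
docker network create -d overlay myoverlay1

# Attach both tiers of the application to the same overlay network
docker service create --name webapp1 -d --network myoverlay1 \
  -p 8001:80 <account-name>/webapp
docker service create --name mysql -d --network myoverlay1 \
  -p 3306:3306 <account-name>/mysql

# Confirm both services are up, then find webapp1's container ID
docker service ls
docker ps
```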
So to go inside this container, you'll type in the command docker exec -it, then copy this container ID and paste it here, and end the command with bash. So you can see that you've gone inside this container. Now, you have to go to the file index.php and make some changes. For that we'll type in the command nano and mention the directory, and the PHP file opens up. Now you have to change the server name to mysql, since we want to get connected to the MySQL server, then change the password to edureka, and let's say we keep the database name as handson. After that, use the keyboard shortcut Ctrl+X, press Y, and save the file. Once you're done with this, you have to exit the container; for that, type in exit. Now, you must have observed here that only the webapp and apache2 services can be seen on this node, whereas we are not able to find the MySQL one. That is because it is present on the slave node. So let me show you there: let me go to the terminal, type in docker ps, and you can see that the MySQL service is over here. So why is that?
That's because swarm is performing load balancing: it is dividing its containers between the two different nodes so that it can balance the load properly. So now what you have to do is go inside the MySQL container; for that you'll type in the command docker exec -it, mention the container ID of this particular container, and end the command with bash. So you'll go inside this particular container. Now, once you're inside, you need access to use MySQL commands; for that you'll type in the command mysql -u root -p with the password edureka. Once you type in this command, you can see that you have got access to the MySQL commands. So -u is basically for the user and -p is for the password, and if you have a question about why we were using -it here, that's because we are opening the container in interactive mode. Once you get access to MySQL, you have to create a database and then create a table inside it. For that you'll use MySQL commands such as create database handson, and then you have to use the database; for that you'll type in the command use handson, and you can see a message that the database has been changed. Now you have to create a table: type in the command create table, let's say courses is the name of the table, and then mention the two parameters. We have the course name, which is of varchar type, let's say allowing a length of 15 characters, and let's say we have the course ID of varchar type with 12 characters; after that, close the bracket, and this creates a table for you. After that you have to exit your MySQL connection, so for that you'll type in exit
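The database setup inside the MySQL container, as narrated, corresponds roughly to the following; the container ID is whatever docker ps reported, and the password edureka and the names handson/courses are from the demo (the column names are my own guesses):

```shell
# Open a shell inside the MySQL container (interactive mode, hence -it)
docker exec -it <mysql-container-id> bash

# Inside the container: log in as root with the password edureka
mysql -u root -pedureka

# Then, at the mysql> prompt:
#   CREATE DATABASE handson;
#   USE handson;
#   CREATE TABLE courses (coursename VARCHAR(15), courseid VARCHAR(12));
#   EXIT;
```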

and then you have to again come out of this container, so type in the command exit once more and you'll be out of this container. Now, what you have to do is go to the slave node and then open this index.php file. For that, let me open localhost; since the web application service was running on 8001, I'll go to port 8001 and the index.php file. All right, so you can see that our Docker sample app has opened up. Now, before we enter the details, let me go back to my webapp container and show you what has changed in the file. You have to mention the server name to be mysql, your username to be root, the password to be your password, and then the database that I mentioned there, that is handson, and the name and ID would be the parameters that we give, so it will be course name and course ID. And then you have a basic PHP file in which you write the SQL command that inserts the course name and the course ID, which will take the details that we fill into the application and store them directly into this table. Now, let's go back to the slave node and enter some details. Let's say we mention the course to be blockchain and the course ID to be some random number, and then we'll submit the query. Once you submit the query, you can see that a new record has been created successfully. So let me just create a few records. All right, so I've typed in a few records. Now let's go back to the MySQL container, get into the database, and see if the table has all the records stored or not. Let's go back to the terminal. Now let me type in the command docker ps, and then I'll type in docker exec -it and mention the container ID. Now I'll type in the command mysql -u root -p with the password edureka, and this will give me my MySQL connection. Now I'll type in the command use handson, so the database has been changed. Now I'll type in the command show tables; it will show me the tables that I have included in the database, so I have this table included. Now let me type in
the command select * from courses. This will basically list all the details that are stored inside this table, so you can see that we have entered so many details with the help of this web app, and they were stored directly into the MySQL database. So guys, that's how you can connect multi-tier applications over the overlay network. Now, if you want to scale any particular service, you can just scale it by using a simple command. For that you have to go back to your manager node and use the command docker ps to list all the containers, right? So we have two containers. Let's suppose we want to scale this webapp service to around five instances. You can do that with a simple command, that is docker service scale webapp1=5, and you can see that the webapp1 service has been scaled to five instances. Now, if you want to check whether it is working or not, you just have to type in the command docker service ps webapp1, and you can see five instances of the same service. So guys, that's how you can deploy a simple application over the default network, connect multi-tier applications with an overlay network, and finally scale any particular service. So that was a short demo on Docker networking. The project that I'm going to show you is that of an Angular application which was created by my team, and what I'll be doing is deploying this Angular application by implementing the DevOps strategy. The first topic that I'm going to talk about today is what is Angular, and after I talk about Angular and give you an introduction, I'm going to talk about what is DevOps, right?
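The final scaling step, using the service name webapp1 from this demo, is simply:

```shell
# Scale the web tier to 5 replicas across the swarm
docker service scale webapp1=5

# Verify that all 5 tasks of the service are running
docker service ps webapp1
```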
So this is going to be very brief; I'll quickly talk about these two things and then go to the third topic, which is the different DevOps tools and techniques to achieve continuous deployment, because this is the highlight of today's session. I will be spending a lot of time on that slide and on the final slide, which is containerizing an Angular application the DevOps way; and the DevOps way basically includes a combination of these three tools: Git, Jenkins, and Docker. All right guys, so enough talk, let me get started with my first topic, and that is: what is Angular? Angular is an open-source, client-side framework for deploying single-page web applications, right, and the keyword that you need to note here is single-page applications, which has the acronym SPA. There are quite a few technologies for developing single-page applications: Angular is a very popular one, React.js is another popular technology, similarly Vue.js, and we have a couple more. And well, the thing

that you're going to ask me here is: why single-page applications, right? You might ask why I am doing a demo on single-page applications. Well, the answer is that single-page applications are the way forward; they are more effective, they are easier to build, and they come with a lot of other benefits that I of course cannot cover in today's session because they get too detailed, but I do have a couple of the benefits mentioned on the slide, and you can see them on your screen now. And the biggest reason, and the most important factor, is that single-page applications, which are created by technologies like Angular and the other JavaScript technologies, are really fast. They are fast because while accessing any web page which is developed in Angular or such technologies, your browser will fully render the entire DOM in one single go, and later on it only modifies the view, or the content displayed to you, when you interact with that web page. Great. And even these modifications will be done by the JavaScript which runs in the background. And yeah, you can see an example of an SPA architecture over here, right? So basically any web application which is developed with the help of Angular, right?
So they'll be called single page applications, and they'll have different components. In my example it's three components, but in general they'll have various components. The components that you can expect are a navigation bar, where you can switch from one tab to another tab; then you will have a sidebar, where you can filter down to the different options that you want displayed; and then of course you'll have a content bar. So similar to how we have a sidebar, we'll have another component called the content, which will be the actual display: whatever you're actually viewing on the web page will be displayed over here. And what is displayed here can be controlled by clicking on the different information, or by switching or clicking on a different option in the navigation bar or in the sidebar. So you can switch the view like that, and when you do it this way, your browser will not take too much time to fetch the information from the server, because the entire DOM will be fetched in one go. So that's the big benefit with single page applications, and Angular especially is used for developing single page applications. That's why it's the way forward, it's really popular, and the technology is really coming up. So my team has developed a single page application using Angular, and that's what I'm going to deploy today, right? Now let me quickly go to the next slide and talk about what DevOps is here, right?
I'm sure everyone here knows what DevOps is: it's a software development approach which involves continuous development, continuous testing, continuous integration, continuous deployment and continuous monitoring of the software throughout its development lifecycle. Well, I've mentioned this numerous times in my video sessions and I expect you to know this. Okay, but what you might not know is which of these tools are used for continuous deployment. So on a higher level, I can say that Docker is the most important tool for achieving continuous deployment. Okay, but as you can see on the screen, I will also be showing the act of continuous development and continuous integration in today's session. So continuous development is achieved with the implementation of Git and GitHub, continuous integration is achieved with the implementation of Jenkins, and continuous deployment is achieved with the implementation of Docker. Using GitHub you can pull the code from the repository, then using Jenkins we can deploy that code to the production environment, to the servers or virtual machines, whichever suits you, and finally we can make use of Docker to containerize those deployments. So that's how the different DevOps tools that you see here, Git, Jenkins and Docker, can be orchestrated to achieve automation in software development. That's how things go, and for my Angular application I'm going to use these three DevOps tools. Now moving on to the next slide, which is all about deploying an Angular application. This is the most interesting slide in today's session, and you can ask me why. The reason it's interesting is because we are using Docker majorly, right?
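As a rough sketch, the three stages map onto commands like this; the repository URL, image name and container name are hypothetical placeholders, not the actual project's, and this assumes Git and Docker are installed:

```shell
# Continuous development: pull the application code from GitHub
git clone https://github.com/example/top-movies.git

# Continuous integration: Jenkins runs a build step that bakes a Docker image
docker build -t demoapp1 top-movies/

# Continuous deployment: spin up a container from that image on port 4200
docker run --rm -p 4200:4200 --name topmovies1 demoapp1
```

In the demo that follows, the last two commands are not typed by hand; they sit inside a Jenkins job, which runs them every time it pulls fresh code.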
We are pulling all the code from Git, then we are using Jenkins integrated with Docker, and we are creating multiple containers by using Docker. Basically, Docker containerizes the application along with all its dependencies, and when we say containerize, it means that we are packaging the code of the application along with all the required packages and dependencies in a lightweight manner, one which is not too heavy on the server on which we are deploying the application. And the best part with these Docker containers is that they can be run on any operating system, irrespective of the one they were built on.

Well, what that means is I can containerize any application; in my case I have the Angular application, right? What kind of dependencies will my Angular application have? The dependencies that your Angular application would primarily need are Node.js, the node package manager, whose acronym is npm, and of course the package.json file. So Node.js is basically going to be the back end for your Angular application, npm is going to install the Angular application and maintain all its dependencies and the versions of those dependencies, and the package.json file is again the most important file, because it's going to contain details about your project: what dependencies are needed and what versions of those dependencies are needed. All these things will be present in your package.json file. So basically these three will be the dependencies, and in my case, what I can do with the help of a container is have a container, install all these dependencies, and place them all together. Without shipping the operating system that's actually powering it, I can package all these things into this particular container and simply share it with other people, and what the other people have to do is just run the container, and they can boot it on top of any operating system, right?
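Concretely, that sharing workflow would look something like the following command sequence; the Docker Hub account name and image tag are made up for illustration, and this assumes Docker is installed on both machines:

```shell
# On the machine where the app was containerized (say, a Mac):
docker build -t exampleuser/top-movies:1.0 .    # bake the image from the Dockerfile
docker push exampleuser/top-movies:1.0          # upload it to Docker Hub

# On any other machine, regardless of its operating system:
docker pull exampleuser/top-movies:1.0          # download the image
docker run -p 4200:4200 exampleuser/top-movies:1.0   # spin up a container from it
```

The point is that nothing about the second machine's operating system matters; the container carries the application and its dependencies with it.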
So that's the benefit with containers: developers can containerize any app, say one created on a Mac operating system, upload that container image to Docker Hub, and someone in a remote location can download that Docker image and spin a container out of it, even though the person who's remotely located is on a different operating system. The guy who built the Docker image could have done it using one operating system, and the guy who's actually running the container at a remote location can have a different operating system. So that's the big benefit, and I hope you guys are seeing the advantage that lies here with the help of Docker: containerizing all your applications and dependencies, minus the actual operating system. Okay, so anyway, I think it's time to move on, and now I've reached the demonstration part, right? Now that I've told you exactly what I'm going to do, and what the architecture of deploying my Angular application will be, I'm going to start with my demonstration. I'm going to do it with the help of continuous deployment using Jenkins and Docker, and we will also use GitHub, from where we will be pulling the code. So let me first of all open my machine for that. To achieve this continuous deployment, like I told you, Jenkins is the broker, right? Jenkins is the one that pulls the code from the repository, and that is what is going to help us build Docker images and spin containers out of those images. So what I have to do first of all is open my web browser and launch my Jenkins dashboard, right?
Jenkins is by default hosted on port number 8080, so let me just launch that particular port on my localhost. Sorry for the delay guys, it's lagging a little bit. Right, so this is the port number where Jenkins is hosted. Now in the meanwhile, let me just quickly go to my terminal and show you my project folder, where my actual application is present. So here's my terminal, and my project is present in this folder: I've created a demo folder, inside which there's a top-movies folder, right? So top-movies is the project folder which I created, and what you see here are the different files and folders present inside this project folder. Let me also open it in my file explorer here and explain what these different packages, files and folders are for. So this is the project that I created, and as you can see there are a number of files here, and the number one file that I want to talk about is the Dockerfile. Now the Dockerfile is basically used to build your Docker images and spin containers out of those Docker images, and to build your Docker images you specify the commands inside the Dockerfile. Then you run the Docker image which was built, by issuing the docker run command, and at that time your Docker container would be spun up, your container would be ready, your application would be hosted on a particular port number, and it would be mapped to a particular port on your localhost. So all these functionalities are done with the help of the Dockerfile. Okay, now, that is only with respect to Docker,

and the other things here, the other folders and files that you see, are with respect to my Angular application. So we have different files like the package.json file, we have node_modules, we have the SRC folder. First of all, let me talk about the package.json file. Now this package.json file is a very important file which contains all the details about my project: the name of my project, which dependencies my project needs, and what versions of those dependencies my project needs to implement. All these details will be present inside my package.json file, so without the package.json file your application cannot get hosted. For those of you who know what metadata is, you can consider this to be like the metadata; package.json plays a similar role. But here comes the question: how will the package.json file be initiated? How do you execute the package.json file, what's the first step? And that's where this whole node_modules folder comes into the picture. So you have a command called npm install, right? npm is nothing but the node package manager,
So it installs all the dependencies that your project needs, and when you run the command npm install through your terminal, at that time it will look for the package.json file in that particular directory. So I have to execute npm install from the directory where my package.json file is present; if I execute that command from there, it would first of all read the package.json file, and whatever dependencies are listed over here for my project, for my code, all those would be downloaded and installed from the node repository. In my case it would be from the node repository, but otherwise, if you've already downloaded them from the internet, you would have them already. All those dependencies will be placed inside this folder called node_modules, and this node_modules folder is going to be a very heavy folder; there's going to be a lot of content here, so it's ideal that you don't place it in GitHub if you want to share your project with someone else. In a real-world environment, what happens is you just share the package.json file, and when they do the npm install, at that time they would automatically get all the dependencies installed as per whatever is specified in the package.json file. So that's what it does, and then you have the other files here, the configuration files: the protractor configuration file, the TypeScript configuration file, the TypeScript lint configuration file, and the other files here. So guys, all these configuration files are the configuration for your Angular application, be it the TypeScript configuration, the linting configuration or the protractor. Basically, these are the boilerplates that come with the actual Angular application, so these are dependencies, right?
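To make that concrete, here is a minimal sketch of a package.json and the directory npm install would act on. The project name, start script and dependency versions below are illustrative guesses, not the actual top-movies file:

```shell
# Create a scratch project directory with a minimal, hypothetical package.json
mkdir -p /tmp/top-movies-sketch
cd /tmp/top-movies-sketch
cat > package.json <<'EOF'
{
  "name": "top-movies",
  "version": "1.0.0",
  "scripts": {
    "start": "ng serve --host 0.0.0.0 --port 4200"
  },
  "dependencies": {
    "@angular/core": "^6.1.0",
    "@angular/cli": "^6.1.0"
  }
}
EOF
# `npm install`, run from this directory, would locate package.json,
# resolve every entry under "dependencies", and download them into
# ./node_modules -- which is exactly why node_modules stays out of Git.
grep -c '"@angular' package.json
```

Anyone you share this file with can reproduce the full node_modules folder with a single npm install, which is the whole point of committing package.json but not node_modules.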
So you need them with your project, and then there's the folder SRC; this is where your actual project would be present, so whatever code you've written for your Angular application will be present here. So yeah, these are basically the contents of my repository, and these are what is needed for containerizing my Angular application. And maybe you would say that I have not explained this folder, e2e; this one is basically used for the end-to-end testing, so whatever is needed for that is present in this package. But yeah, on a high level, this is what you need to know; these are the packages. And the first thing that I've got to do, to containerize this application, is pull the code from our GitHub repository, and I will do that with the help of Jenkins. Even though I have it locally, in a real-world environment developers or engineers would pull this code from GitHub, right? So I will show you how that happens by first of all going through my Jenkins dashboard. Here, this is my dashboard; I already have a job called demo, so this is the one that I want to show you a demonstration of. I've already pre-built the environment so that I don't waste much time downloading and installing everything, because downloading everything and installing everything would take a lot of time; if I have the environment ready, I can just show you straight away. So I have it over here, and if I go to Configure, I can show you the elements that I have defined already. Let's just wait for this to come up for a minute. First of all, we have to go under Source Code Management; this is where you need to first of all enter the GitHub repository from where you want to pull your code. Now let me just open this repository and show you what I'm going to pull; it's basically the same content that is there on my localhost, in my own system, right?
So whatever you saw here in the file explorer, most of the contents over here are there in my GitHub repository, except for the node_modules, because this gets installed automatically when you run npm install against the package.json file. Yeah, so you guys see this, right?

So we have the same e2e folder, we have the SRC folder, and then we have various other files, like the angular-cli.json, we have the .dockerignore, and we have the Dockerfile. The Dockerfile is present inside the GitHub repository, and the reason I have the Dockerfile inside the GitHub repository is because wherever my code is present, that's where my execution should ideally happen. If I have my Dockerfile present in the same directory, then I can use my Dockerfile to build my Docker image inside that repository, and it will also look for the dependencies and the Angular code for my application, all these things, in the same repository. So that's why I have the Dockerfile in the same repository, right? That's what the Dockerfile is used for, and then similarly we have the other dependencies, like the package.json and the other configuration files which I spoke about, which were over here. So the same things we have in our GitHub repository. Getting back to our Jenkins: we first of all specify that we want to pull the code from here, and what we do next is go down to the Build option. So under Build we have our shell here, right?
So whatever commands you specify here, they would be executed on your shell. Since I am using a Linux system, I have chosen to execute these commands on my shell; in case you are executing this at your end on a Windows system, you might want to choose the Windows batch command option instead, and then specify the commands that you want to run in your Windows CLI. That's the only difference, but yeah, whatever commands I specify here will be run on my shell. The first command I'm running is docker build, building the image called demoapp1. So I'm using docker build with the -t flag to build a new image called demoapp1, and it would build this image based on the Dockerfile which is present in this folder. This is the folder where my Dockerfile is present, so in case I do a cd into this folder, I would have moved to that particular directory, and then I could simply replace this path with a dot; that's an alternative. But otherwise you can specify the entire absolute path here, so I've done that, and this command is basically creating a new image based on the Dockerfile and the instructions present inside the Dockerfile. And then the second command is the docker run command. So the image that was created here, demoapp1, that image is basically being run; you spin an image into a container by running this command, docker run, and you specify other options while doing this: we specify the --rm flag, we specify the -p flag, and then we specify the port numbers. The -p flag is used for mapping your Docker container's port to your host machine's port, so over here, the 4200 that you see is the port number on my host machine which maps to the equivalent port on my Docker container, right?
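Put together, the two build-step commands would appear in the Execute shell box roughly like this; the image name and container name are the ones used in this demo, but the absolute path is a placeholder for wherever the project folder actually lives:

```shell
# Build an image called demoapp1 from the Dockerfile in the project folder
docker build -t demoapp1 /home/user/demo/top-movies

# Spin up a container named topmovies1 from that image, mapping port 4200
# of the container to port 4200 of the host; --rm removes it on exit
docker run --rm -p 4200:4200 --name topmovies1 demoapp1
```

Replacing the absolute path with a dot works only if an earlier build step has already cd'd into the project directory.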
So whatever is present inside my Docker container, whatever is hosted on that port number, would be visible on the 4200 port of my host system. Okay, so it's an Angular application, a web application, so you have to host it on one of the ports, and see, by default Angular applications are hosted on port number 4200, and I have also specified the same in the package.json file; that's where you specify the port number. What I'm saying here is: whatever is running inside my Docker container on port number 4200 should also be visible, or available, on port number 4200 of my host machine. So that's what the -p flag is for. And then we have the name I'm giving this container which I'm building: I'm giving it the name topmovies1. And yeah, this is basically the same image that we first of all built with the help of the Dockerfile. So these are the two commands that I'll be running, and at this point of time, if there are any of you who are new to Jenkins or new to Docker, and you execute these same commands from your Execute shell, you might have a problem. So can any of you guess what that problem might be? I can give you a hint: the problem would lie over here, right at the beginning of this command. Well, no problem; see, the thing is, any Docker command that has to be run has to be run with sudo access. Only the root can execute any Docker command, especially the build

and the run command. Okay, there are a few commands which can be executed without sudo, but these two commands especially need that access. If you're executing the same two commands from your terminal, then you can just simply prefix the whole command with sudo, the shell would prompt you for the password, and you can enter the password. But what would you do in the case of Jenkins? This is Jenkins; you cannot put sudo here, because Jenkins cannot manually enter the password for root access, right? Jenkins does not have the root credentials. So in this case, what you need to do is give the sudo credentials, or the root credentials, to Jenkins itself. Jenkins is actually a user; if you guys haven't noticed, let me just tell you this: Jenkins is a separate user, because it's a web server, and any commands that you execute through Jenkins would be executed as the user jenkins. So what you have to do is similar to how you execute Docker commands without sudo: you create a new docker group, and you add your user, the user from which you're executing, to that group. Similar to that, you have to add your jenkins user to the docker group, and you should give the docker group root access; the docker group would basically be on par with root in terms of the access that it has over the system. So that's the important step that we need to do, because otherwise, if you don't enable this access, then your commands are not going to get executed; it would say failure, permission denied. So that's the thing. And yeah, if you have these two commands ready, then it's pretty much ready: your Dockerfile would be used to build the image, and then that image would be used to get the container out. So I'm just going to save this and quickly show you how to build this application. Okay, so to build the application we can simply go to Build Now,
right? You can see the build history; these are the previous times I ran the same commands, and if I do it again, Build Now, a build is scheduled, and you see a new build pop up over here, build number 212, right? So if I click on this and go to Console Output over here, you get to see the status of this build. So let me just go here. Yeah, if you go to Console Output, you will get to see what's happening; similar to the output that you get on your terminal, that's something you'll get over here. Okay, we're here already, so let me just quickly go up. As you can see, the first set of commands have started executing via Jenkins, and of course the first one was to pull the code from Git, from the Git repository, so whatever was there is being fetched over here. And then the first command that we're executing on the shell can be differentiated by the plus symbol, which basically indicates that it's a command being executed on the shell. So docker build -t demoapp1 is the command that's being run, and when you build it you can see that there are various steps being performed; for each line in your Dockerfile, there will be a step that is performed. Now let me quickly go to the Dockerfile and explain the different steps that are going to be performed. Okay, so at this point of time I'm going to go back here, let me open the Dockerfile and explain the different steps. Because if we want to host an Angular application, we have to first of all pull a node image, right?
Your Angular application would be hosted only when there's a node application running at the back end. So the first command, FROM, is going to pull the node image which has the tag 6, so version number 6 of node; this is what is going to get pulled with the help of FROM node:6. And when it pulls that, you have to use the RUN command to make a directory inside this particular image: you use the -p flag to specify the path that you want to create, /usr/src/app. So you're creating this particular path inside the Docker image which you pulled, and then you're changing the working directory to the path that you created, by using this command, WORKDIR. And the first thing that you need to notice here is the package.json file, which is present in my local system, being moved to the path which I created inside my Docker image. That is because this is the file that contains all the dependencies that are needed to download all the node modules; whatever dependencies are there inside node_modules, they will be downloaded with the help of package.json. Right now it's present in my local system, and I'm telling Docker to copy this file into that path. And then, once you've done that, the next instruction runs npm cache clean. Now,

if you are running npm install for the first time, or if you're using npm for the first time, you might not need it, but since I've run this command earlier, I'm using it because I want to avoid any version conflicts between the different dependencies. Dependencies can be of different versions, the Angular 2 version or the Angular 4 version and so on, so I'm just using this to keep myself safe there. And then I'm using RUN npm install; this is the most important command, which would basically start everything. npm is the node package manager, and the moment I issue this command, my package.json file would be searched for, and once it's located, the dependencies which are listed inside it would be created inside the node_modules folder; inside that folder, everything would be created. So that's what this command does. And the next command is all about copying every single file or folder which is present inside my host directory into the image. So the other files, those configuration files that I spoke about earlier, the TypeScript configuration file, the TypeScript lint files, all those will also be copied to this path inside my Docker image. And then I'm saying EXPOSE 4200, because this is the port on which my Angular application would be hosted, and then I'm finishing it off by specifying the command npm start. So you do the npm install here, and at this point of time your dependencies are ready, everything is ready, your application is ready to be deployed and hosted, and npm start is what is going to actually do the hosting on this port number, 4200. So those are the Dockerfile instructions, and the same instructions have been running on my Jenkins. So it says step 1 of 9; it's
pulling from here, it's creating a new directory, it's moving the working directory to this, and then it's copying the package. So each and every step is being executed one after the other; if any of them fail, then you would have a notification saying this step failed, check your command, all those details. But anyway, since ours is successfully built, this is the ID of my image which was generated, and this is the tag that was added to it. The next command that's being run from the shell is the docker run command, with the --rm flag and the -p flag, the same command which I explained earlier. And when this command is executed, it says that your ng serve is being hosted; this is the localhost port which I mentioned, on which I wanted it to run, and you can see the status here: it says zero percent, ten percent, here you again have ten percent, and it basically keeps increasing, so we have eleven percent. It's a big process, there are a lot of dependencies that get downloaded, and in the meanwhile, just as we saw when I was explaining the Dockerfile, we have all our packages downloaded and installed, and the application is actually hosted: it says webpack compiled successfully, right?
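Assembled from the steps just walked through, the Dockerfile would look roughly like this; it's a sketch reconstructed from the explanation (nine steps, node 6 base image, port 4200), not a verbatim copy of the demo's file:

```dockerfile
FROM node:6                      # step 1: pull the node base image, tag 6
RUN mkdir -p /usr/src/app        # step 2: create the app path inside the image
WORKDIR /usr/src/app             # step 3: make it the working directory
COPY package.json /usr/src/app   # step 4: bring in the dependency manifest first
RUN npm cache clean              # step 5: avoid version conflicts from old runs
RUN npm install                  # step 6: populate node_modules from package.json
COPY . /usr/src/app              # step 7: copy the rest of the project files
EXPOSE 4200                      # step 8: port the Angular app is served on
CMD ["npm", "start"]             # step 9: launch ng serve via the start script
```

Copying package.json and running npm install before copying the rest of the code is a common layering choice: as long as package.json doesn't change, Docker can reuse the cached npm install layer on rebuilds.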
So this is the success message, and if I now open localhost 4200, you would see that my Angular application is up and running. Right, so you can see that the application name is Movie Rating System, and this is something I can tell you: this was the application which my team created for me, and this project is all about the top 250 movies that you have to watch before you die, some of the biggest blockbuster hits of Hollywood. So all those are present here, and you will get the Angular feel over here by looking at the different components. The Logout option that you see here, this is a different component; if I log out, then I will not get to see the list of movies, but if you log in, only then will you get to see that. And then you have the navigation bar, where you can switch to different tabs: you can go to the Edureka home tab, you can go to the About tab, where again we have further details, and then if you log in successfully, you will get to see the movie list that we have. So let's just wait for it to log in, and I can show you the movie list that we have. Yeah, so you have the movie list here, right? So in the navigation bar, let me just click on this movie list, and you can see the 250 movies that we chose, which have been the best ever Hollywood movies. The number one movie that you have to watch is The Shawshank Redemption, and then we have movies like The Godfather,

the part 2 of The Godfather, The Dark Knight, 12 Angry Men, which again is my favorite movie, and we have Schindler's List; we have a number of movies here which are favorites, of course. We've created an application this way, and this is a simple web application, a single page web application that we created, and you can create all these things if you know how to work with Node.js and how to work with Angular. Similarly, if I go to the Edureka tab, we have details about Edureka over here; we believe in "tech up your skills, rediscover learning", so we have live classes and expert instruction. So this is the interface that we built, that we created in our application, and that's what I wanted to show you. And in the movies list, of course, we have the list of movies, and if you click on any of the movies, you can look at the details of that movie: when it was released, what the genre of the movie is, who the director and writer were, who the actors in the movie are, and what ratings it has got. So whatever data we have here with respect to ratings and stars, these were basically got from IMDb, right?
So it was those IMDb ratings that we are using as the dataset.

In today's session, we are going to discuss the two most popular DevOps tools, which are Jenkins and Docker, and we are going to see how these tools can be integrated to achieve a better software delivery workflow. So first off, let me run you through today's agenda. First, we are going to see what exactly Jenkins is and how it works. Next, we're going to see how Docker solves the problem of inconsistent environments by containerizing the application. Once we're done with that, we'll briefly have a discussion on microservices, because in the hands-on part I'm going to deploy a microservice-based application by using Jenkins and Docker. After you've got a brief idea about microservices, we're going to look at a use case and see how to solve a problem statement by using Jenkins and Docker. And finally, we're going to move on to the hands-on part, where we will deploy a microservice-based application by using Docker and Jenkins. So guys, I hope you find the session interesting and informative; let's get started with our first topic. Now, before listing down a few features of Jenkins, let me tell you some fun facts about Jenkins. Currently, there are over 24,000 companies which use Jenkins; to name some of them, that is Google, Tesla, Facebook and Netflix. Now there has to be a reason why such reputed and successful companies make use of Jenkins, so let's discuss a few key features and see why Jenkins is so important. All right, now the first feature is that it is an open-source, freely available tool which is very easy to use. It has various features like the Build Pipeline plugin, which lets you graphically visualize the output, and apart from that, there is also a feature known as user input, which lets you interact with Jenkins. Now, one major feature of Jenkins is that it implements continuous integration. What is continuous integration? Every time a developer commits into a source control management system,
system, that commit is continuously pulled, built and tested using Jenkins. Now, how does Jenkins do all of this? Jenkins has over 2,000 plugins which allow it to integrate with other tools like Docker, Git, Selenium, etc. So by integrating with other tools it makes sure that the software development process is fully automated. All right, so it is also an automation server which makes sure that the software delivery cycle is fully automated. Now, let’s see how Jenkins works. So here you can see there is a group of developers committing code into the source code repository. Now every time a developer makes a commit, it is stored in the source code repository. Now what Jenkins does is, every time a commit is made into the source code repository, Jenkins will pull that commit, build it, test it and deploy it by using plugins and other tools. All right, now not only is it used for continuous integration, it can also be used for continuous delivery and continuous deployment with the help of plugins. So by integrating with other tools, the application can be deployed to a testing environment, where user acceptance testing and load testing are performed to check that the application is production-ready, and this process is basically continuous delivery. Now, it can also make use of plugins to continuously deploy the application to a live server. So here we saw how Jenkins can be used for continuous integration, continuous delivery and continuous deployment by integrating it with other tools. All right, now let’s move on to what Docker is. Now before we discuss Docker, let’s compare virtualization and containerization. Now the goal of virtualization and containerization is to solve the problem of “the code works on my machine, but it does not work on production”. Now this problem happens because somewhere along the line you might be on a different operating system. Now, let’s say your machine is a Windows machine

and you’re pushing the code to a Linux server. Now, this will usually result in an error, because Windows and Linux support different libraries and packages, and that’s why your code works on the development server and not on the production server. All right, now when it comes to virtualization, every application is run on a virtual machine. Now the virtual machine will basically let you import a guest operating system on top of your host operating system. Now this way you can run different applications on the same machine. All right, now you’re wondering, what is the problem with virtualization? Now one major drawback of virtualization is that running multiple virtual machines on the same host operating system will degrade the performance of the system. Now this is because each guest operating system running on top of your host operating system will have its own kernel and its own set of libraries and dependencies, which take up a lot of resources like hard disk, processor and RAM. And another drawback is that it takes time to boot up, which is very critical when it comes to a real-time application. All right, to get rid of these drawbacks, containerization was introduced. Now in containerization there is no guest operating system; instead, the application will utilize the host operating system itself. So basically every container is going to share the host operating system, and each container will have its own application and application-specific libraries and packages. All right, so within a container there is going to be an application and the application-specific dependencies. I hope this is clear, guys. Now that we’ve discussed containerization, let’s see how Docker uses containerization. Now Docker is basically a containerization platform which runs applications within different Docker containers. So over here you can see that there is a host operating system, on top of which there is a Docker engine. Now this Docker engine will basically run container number one and container number two
Now within these two containers are different applications along with their dependencies. Alright, so basically within a container the application is going to have its own dependencies installed, so it does not have to bother any other container. Okay, so basically there is process-level isolation that happens here. All right, now there are three important terminologies to remember when it comes to Docker. Now the first is the Dockerfile. Now the Dockerfile basically contains the code which defines the application dependencies and requirements. All right, and from the Dockerfile you’re going to produce the Docker image, which contains all the dependencies of the application, such as the libraries and the packages. Next is the Docker container. Now every time a Docker image is executed, it runs as a Docker container. So basically a Docker container is a runtime instance of a Docker image. So now let’s look at a Docker use case. Now over here you can see that I’ve created a Dockerfile. Now within the Dockerfile are basically defined the dependencies of the application. Now out of this Dockerfile a Docker image is created. So basically the libraries and the packages that the application needs are installed within the Docker image. Now every time the Docker image is run, it runs as a Docker container. Now, these Docker images are pushed into a repository known as Docker Hub. Now this repository is very similar to a Git repository: where in Git you’re committing code into the Git repository, in this case you’re going to commit Docker images into the Docker Hub repository. All right, now you can either have a private or a public repository depending on your requirements. Now after the image is published to Docker Hub, the production team or the testing team can pull the Docker images onto their respective servers and then build as many containers as they want. All right, now this ensures that a consistent environment is used throughout the software development cycle. Now, let’s look at what microservices are
Now guys, I’m going to explain what microservices are, because we need to deploy a microservice-based application in our demo; just to take it up a notch, I’ve implemented microservices. Now first, let’s look at the monolithic architecture. Now over here, let me explain this with an example. Now on the screen you can see that there is an online shopping application which has three services: customer service, product service and cart service. Now these services are defined within the application as a single instance. So when I say single instance, it means that these three services will share the same resources and databases, which makes them dependent on each other. Now if they share resources, obviously they’re dependent on each other, right? Now, you must be wondering, what’s wrong with this architecture? Now, let’s say that the product service stops working because of some problem. Now because the services are dependent on each other, the customer and the cart service will also stop functioning. So basically if one service goes down, the entire application is going to go down. All right, now when it comes to a microservice application, the structure of the application is defined in such a way that it forms a collection of smaller services or microservices, and each service has

its own database and resources. All right, so basically the customer microservice, product microservice and cart microservice will have their own databases and their own resources, and therefore they’re not going to be dependent on each other. All right, so they are basically independent, autonomous microservices. Alright, now let’s look at a few advantages of microservices. Now, the first advantage is independent development. Now when it comes to a monolithic application, developing the application takes time because each feature has to be built one after the other. So in the case of the online shopping example, only after developing the customer service can the product service be started. So if the customer service takes two weeks to build, then you have to wait until the customer service is completed, and only then can you start building the product service. All right, but when it comes to a microservice architecture, each service is developed independently, and so you can develop the customer service, cart service and product service in parallel, which will save a lot of time. Alright, now the next advantage is independent deployment. Now, similar to independent development, each service in a microservice application can be deployed irrespective of whether the service before it was deployed. So each service can basically be deployed individually. Now, fault isolation: when it comes to a monolithic application, if one of the services stopped working, then the entire application would shut down, but when it comes to a microservice architecture, the services are isolated from each other, so in case any one service shuts down, there will be no effect on any other service. Now, the next advantage is a mixed technology stack. Now each microservice can be developed on a different technology. Now, for example, the customer service can be built on Java and the product service can be built on Python, and so on. Alright, so basically you’re allowed to use mixed technologies to build your microservices. The next is granular
scaling. Now, granular scaling means that every service within an application can be scaled independently. Basically, the services are not dependent on each other; they can be developed and deployed at any point of time, irrespective of whether the previous service has been deployed or not. So guys, I hope you are clear on the advantages. Now over here we’re going to compare how microservices can be deployed by using virtual machines and Docker containers. All right, now when it comes to virtual machines: let’s say that we have a microservice application which has five services. Now in order to deploy these five services on virtual machines, we will need five virtual machines. All right, now each virtual machine will be for one microservice. Now, for example, if I allocate 2 GB RAM for each virtual machine, then five of these virtual machines will take up 10 GB RAM, and the microservices may not even require so many resources, so we just end up wasting these resources, and at the same time you’re occupying too much disk space, which will degrade the system’s performance. Now, let’s see how Docker containers deploy microservices. So instead of running five virtual machines, we can just run five Docker containers on one virtual machine. Now by doing this we’re saving a lot of resources, because when it comes to a Docker container, you don’t have to preallocate any RAM; the Docker container will just utilize the resources that are needed. And another point to remember here is that Docker containers are lightweight: they do not require an additional guest operating system, instead they just share the host operating system. All right, so this makes them very lightweight when compared to a virtual machine. Now let’s move on to the use case. Now basically we’re going to try and understand the problem with the help of an analogy. Now over here you can see that in the favorable environment the soil is fertile and the tree is watered on a regular basis, and as a result of this the tree grows
properly, but when the tree grows in an unfit environment, where the required dependencies for growing a tree are not present, then the tree will die. All right, now similarly, when an application runs on an inconsistent environment, which does not have all the application’s dependencies, then the application will fail. All right guys, now let’s look at the problem statement with a small example. Now, let’s say that a developer is building an application using the LAMP stack. Now after the application is developed, it is sent for testing. Now this application runs properly on the testing server, but when it is deployed to production, a feature or the entire application fails. Now this may happen because the Apache version of the LAMP stack is outdated on the production server, so due to the difference in the software versions on the production and development servers, the application fails. Now in order to get rid of the inconsistent environment problem, we’re going to deploy an application using Docker. Now Docker will make sure that the environment throughout the development cycle is consistent. Now, deploying a monolithic application can cause many problems; for example, if one of the features of the application stops working, then the entire application will shut down

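To make the consistent-environment idea concrete, here is a minimal sketch of the kind of Dockerfile one of these Java microservices might use; the base image tag, jar name and port are assumptions for illustration, not values taken from the demo:

```dockerfile
# Base image pins the Java runtime, so development, testing and
# production all run the exact same environment
FROM openjdk:8-jre-alpine
WORKDIR /app
# Copy the jar produced by `mvn clean install` (name is hypothetical)
COPY target/account-service.jar app.jar
# Port the service listens on (assumed for illustration)
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
```

Because every image built from this file carries the same runtime and dependencies, the “works on my machine” problem described above goes away wherever the image is run.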
So for this reason we are going to create a microservice-based application, build it on the Jenkins server, and finally use Docker to maintain a consistent environment throughout the cycle. So over here you can see that there are four microservices, and for each microservice I’ve built a Dockerfile. All right, so first let me discuss what each of these microservices does. Now the account service and the customer service are the main microservices, whereas the discovery and gateway services are supporting microservices. Now the account service will basically hold the account details of a customer, and similarly the customer service will have a list of customer details. Now the discovery service, which is a supporting service, will hold details of the services that are running in the application, and apart from that it will also register every service in the application. Now what the gateway service does is, on receiving a client request, it will route the client request to the destination service; to be more specific, it will provide the IP address of the destination service. Okay, so now that you know how these microservices work, let’s move on to the next part. Now basically these microservices are coded and their dependencies are put into a Dockerfile. Now for each of these Dockerfiles, a Docker image is created by packaging the Docker image with the jar files. Now, how do you create a jar file?
A jar file is created by running the mvn clean install command, which basically cleans up anything that was created by the previous build, and it will run the pom.xml file, which will download the needed dependencies. So whatever dependencies are needed are stored in this jar file. All right, now once the Dockerfile is packaged with the jar file, then a Docker image is created for each of these microservices. So here we are going to use Jenkins to automate all of these processes. So Jenkins is basically automatically going to build and then push these Docker images to Docker Hub. Now after the images are pushed to Docker Hub, the quality assurance or the production team can pull these images and build as many containers as they see fit. All right, so basically over here we’re going to create Dockerfiles for each of these microservices, and then we’re going to package these Dockerfiles along with the jar files and create a Docker image for each of these microservices. All right, now after creating the Docker images, we’re going to push these images to Docker Hub, after which the quality assurance or the production team will pull these Docker images onto their respective servers and build as many containers as they want. I hope this is clear. So now let’s practically implement all of this. Alright guys, so I’ve logged into Jenkins and I’ve created four different jobs, one for each microservice. All right, now let me just show you the configuration of one of these jobs. Now, let’s go to account service; let’s go to configure. So guys, make sure that you enter your GitHub repository here: go to source code management, click on Git, and then enter your repository URL over here. Now, let me just show you what we’re doing here. Now within the build step I’ve selected “execute shell”. Now let me just show you how that’s done. It’s simple: just go to “add build step” and click on “execute shell”. So when you click on “execute shell”, a command prompt will open like this, and you can type this code in
there. Now what I am doing here is: first I’m changing the directory to account service, because I’m running the account service within this job; after that I’m performing an mvn clean install, which I explained earlier; now once we’re done with that, we’re going to build a Docker image and then we’re going to push it to your Docker Hub account. Now, these are the credentials of my Docker Hub account, where edureka is the username of my Docker Hub account and edureka-demo is a repository that I’ve created in my Docker Hub account; then click on apply and save. Apart from account service, I’ve built jobs for the other services as well. Now let me just show you customer service also. Let’s go to configure. Now within the source code management, like I said earlier, enter your repository URL; after that, go to the build steps. Over here you can see that I’m changing the directory to customer service, then I’ll perform an mvn clean install, and next I’m going to build the Docker image. Now over here, “customer” is basically the tag of the image, so whenever this image gets pushed to my Docker Hub, the tag is going to be “customer”. All right, now similarly for the account service the tag was “account”. Now, I’ve done the same thing for all the other jobs. All right, click on apply and save. Now guys, in order to run these four jobs as one workflow, I’ve created a build pipeline. Now this pipeline will basically execute these four jobs in one workflow. Now, if you want to know how to create a build pipeline, please refer to the video in my description box; I’m going to leave a link where you can see how to create a build pipeline. All right, now let me just show you my GitHub account. Now over here, you can see that I have account service, customer service, discovery service, gateway service, and also there’s the Zipkin service. Now guys, this service basically keeps a track of all the other services, so it’s going to keep a track of where the requests are going

and how they are getting sent from the account service to the customer service. Now within account service you can see that I have a Dockerfile, a Jenkinsfile and a pom.xml file. All right guys, now let’s start building the application; just click on run. Now here you can see that account service is getting executed. Now, let’s individually go to account service first; let’s click on account service. All right, here you can see that it’s building this job. So basically here what we’re going to do is: we’re going to change the directory to account service; after that, we’re going to perform an mvn clean install, and then we’re going to build an image and push that image to Docker Hub. All right, so guys, remember to provide your Docker Hub credentials. Now this job has successfully executed; now after the job has executed, it’s going to trigger the next job, which is customer service. Now, let’s look at the build pipeline. Now over here, this has turned green because account service has completed execution; now customer service is currently running, so let’s look at the build in customer service. So the account service has completed execution; guys, you can also check the output of account service from here: scroll down, and it says success. Now after account service has finished, let’s trigger the customer service. So customer service starts executing now. Now let’s individually look at customer service. Now over here, you can see that customer service is building. Now in this job you’re basically going to change the directory to customer service; after that we’re going to perform an mvn clean install command, and then we’re going to build and push a Docker image to Docker Hub. All right, so once this is completed, the next job in the pipeline will get executed. Okay?
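For reference, the “execute shell” build step described for these jobs boils down to something like the following sketch; the Docker Hub username, repository name and password handling are placeholders, since the exact values aren’t fully legible in the recording:

```shell
# Jenkins "Execute shell" build step for the account-service job (names are placeholders)
cd account-service
mvn clean install                                  # clean previous build output, resolve pom.xml deps, package the jar
docker build -t <dockerhub-user>/<repo>:account .  # bake the jar into an image via the service's Dockerfile
docker login -u <dockerhub-user> -p <password>     # Docker Hub credentials configured in the job
docker push <dockerhub-user>/<repo>:account        # publish so QA/production can pull and run containers
```

The customer, discovery and gateway jobs are identical except for the directory they `cd` into and the image tag they push.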
Alright guys, so this has successfully executed. Now you can see that customer service has also turned green, which means that it has successfully finished building. Alright guys, so you can see that customer service has completed execution. Now, let’s trigger the build of the discovery service. Now, let’s look at the output of the discovery service. So guys, you can see the output from here itself; let’s click on console. All right, so within the discovery service, again we’re going to change the directory to discovery service; after that we’re going to perform an mvn clean install, and once we’re done with that, we’re going to build an image and then we’re going to push it to Docker Hub. Guys, make sure you have entered your Docker Hub credentials. Alright guys, so this has completed execution; you can see that it says success. Now it’s triggering the next build, which is the gateway service. So here you can see that the gateway service has started execution; let’s look at the console output. All right, so similarly in the gateway service, first you’re going to change the directory to gateway service; after that you’re going to perform an mvn clean install command, and once you’re done with that, you’re going to build an image and then push it to Docker Hub. So guys, the gateway service has successfully completed execution. Now the build pipeline has fully turned green, which means that the entire workflow has completed execution. Now, let’s go to our Docker Hub account and see if all these images got pushed to Docker Hub. All right, so I’m going to my Docker Hub; now let’s go to the edureka-demo repository. Now over here, you can see that the account, customer, discovery and gateway services, all four of these images, are pushed to Docker Hub. All right, so with this we have come to the end of the demo. Now after pushing these images to Docker Hub, any quality assurance team or any testing team can pull these images onto their respective servers, so they can easily deploy this to production. Next up is the Docker and Node.js
tutorial. So why use Node.js with Docker? It speeds up the application deployment process; deployment becomes easier, as there are a lot of things which you don’t need to look at while deploying: if it runs locally on your Docker container, surely it will run on any other machine or any other server in any Docker container. Application portability increases: you’re developing on Windows, deploying on Linux, and you don’t need to care about all that; if it works in one container, it’ll work in another container. It simplifies the version control process, promotes component reuse, has a very light footprint and puts less overhead on the app. Now, let’s start with a simple Node app. I’ll do npm init to create an empty project. So we have an empty package

.json file. We’ll go ahead and install Express, and we’ll try to create a very simple hello-world application using Express. So it will be a web application; Express is a very popular web framework created on Node. So now if we open the package.json, we can see the name of the application, and mainly, in the dependencies, we have express listed over here. Okay, we’ll create our app.js file; that will be our app. Let’s write our application over here. First we’ll import the express module. We’ll use the express module to create our app object. We’ll use the app object to listen; this is where we actually start our HTTP server and ask it to listen on a particular port. We can use any port number over here, 3000; the port numbers that are commonly used are 3000, 8080 or 8888, basically any number, any port that is open on your system. And finally we’ll just create one route where the app will give a response when that route is hit in the browser; we’ll just send “Hello from Docker”. Okay, I think that’s all we need, it’s a simple demo, we’ll try and run this. Okay, it’s listening on 3000, no errors; we’ll open this. Okay, “Hello from Docker”, we have it, great. So I’ll close this now. Now let’s dockerize this application. So as you remember, the three basics of Docker: Dockerfile, Docker image, Docker container. We need a Dockerfile over here, so we’ll write “Dockerfile”, no need to give any extension; okay, it’s still a text file; we’ll open this. Now this is where we will tell Docker what to do. Before moving on with the commands, this is the Docker Hub website, hub.docker.com. I suggest you please create an account on this website; or I guess when you go to download Docker, I think it will tell you to create an account or Docker ID. So once you have an account on hub.docker.com, you go to explore, and this is where you see all the popular images of Docker. So any time you’re working with any language or platform, for example, let’s say
Node.js in our case, or PHP or .NET, or you’re using a database, let’s say, like over here, Postgres or Couchbase, all these standard technologies have their own official Docker images already on the Docker Hub. You don’t need to create images for these from scratch, because these are all readily available to use. So in our case right now, we need to use Node on Docker, so you just search for node. Okay, and yes, you can see we have Node as an official image. Official image means this particular Docker image is created by the people who create Node; okay, like Mongo Express: that Docker image is created by the people who actually create the Mongo Express library. Okay, and then obviously there are verified publishers, like for example Microsoft, and anybody can upload a Docker image over here in the Hub. Okay, and you can also filter them by categories, the kind of image that you want, the supported OS, supported architectures, etc. So for us, all we need is this node Docker image. So this is the image upon which our image will be built and the container will be working. Okay, so coming down, how to use this image: you need to go to this guide; we’ll open the setup page of Node. Okay, so over here: create a Dockerfile in your Node.js app project, specify the node base image with your desired version. This will be the first line of our Dockerfile. So this is the official Docker image that we want to use, and this is the version of that Docker image, not the version of Node. You can have a look at all the supported Docker image versions of Node; I think 10 is good for us, I think it is the latest one. We don’t actually need to go and dig into which version supports what, but this is good for us for now. Okay, moving forward, now what do we want to tell Docker to do? First we’ll create a working directory for Docker. So this is where we are telling Docker to create an app directory for itself, where whatever work or things

Docker needs to do for our application, Docker can put inside the app folder. We’ll tell Docker to copy our package.json, and if you remember, the Dockerfile is inside our app folder, okay, so this is inside our app folder, so we don’t need to give any paths for any files that we are referencing; I can directly say copy package.json inside the app. Okay, run npm install. So these are very easily understandable commands: copy package.json inside the app folder and then run npm install. So this command will be run inside this folder of the Docker container, and what npm install will do is, whatever packages are listed inside package.json — if you remember, we have express — those packages will be installed. Okay, then we tell Docker to copy; the dot over here means the current directory the file is in, so this is our current directory, and we’re telling Docker to copy everything from the current directory to the app folder, and then run the command node. So this is the command which even we used to run the Node app. Okay, so this is what Docker will run inside its container. And lastly, if you remember, we had a port number which we were using, for us it’s 3000, so we need to expose that port number as well, okay. Yeah, so I think this is it, we are done. Now we’ll try and run this inside Docker. Okay, so first we need to build the Docker image; we only have a Dockerfile right now, so the next step is to build a Docker image from the Dockerfile. We need to give a name to the Docker image; I’ll name it hello-docker. Okay, the mistake that I made is I forgot to put a dot at the end, which means the current directory we are in, okay. So as you can see, these are the commands that are running one by one; it’s going through all the steps: FROM node:10, WORKDIR, COPY package.json, RUN npm install, and this one over here might take a while, because it needs some time to download the packages listed in package.json. Okay, and then it went through everything fine. Okay, now we have the image created; now we need to run
the Docker app. So the command for that is docker run. Also, again, we do have the Docker documentation online, which you can refer to any time. Okay, so for any language, any platform or technology you learn, always try and refer to the official documentation; if you can refer from it, study from it, that is the best, always. Another thing: even in the terminal, in the console, you can always look at help; it will list out all the options that you can give to the command. So these are the options for the run command, not the docker command, the run command. Okay, all the options for the docker command would be under docker help; so these are all the subcommands and options that you can use with Docker. So moving on, we’re going to run the image: docker run -it means we want Docker to run in an interactive shell. So if you look at the options, that’s not required, but it’s good. Okay, so -i, --interactive keeps standard input open even if not attached, and -t allocates a pseudo-TTY; TTY means a terminal. So, docker run: we want to tell Docker which port we’re exposing and which port we’re using inside the application. So again, if you look at help, there is this -p option: “--publish list, publish a container’s port to the host”. So we’ll be creating a Docker container which is running on the host; the host is our Windows operating system. We need to tell which port Docker is exposing and which port is being used by the app inside; this port 3000 over here and this one can be different. Okay, it’s not compulsory that they should be the same all the time; we can have 8000 over here and 3000 over here, and all we need to do is map it in the command, but over here it’s the same. So, docker run: what the -it would basically do is, I’ll show you when the command runs. So this first 3000 is the port number we listed in the Dockerfile, the port exposed by Docker, and the second one is what our app is using. Okay, now our image name was hello-docker; let’s run this. Okay, we have some error, it says it’s already allocated

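The “port is already allocated” error seen here can be diagnosed and fixed with a couple of standard Docker CLI commands; the container ID below is illustrative, and these commands assume a running Docker daemon:

```shell
docker ps                                 # list running containers and their host:container port mappings
docker stop <container-id>                # stop the container holding the host port, or...
docker run -it -p 8888:3000 hello-docker  # ...publish a different host port (8888) mapped to the app's 3000
```

Either freeing the old host port or remapping to a new one resolves the conflict, which is exactly what switching the exposed port to 8888 does in the demo below.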
Okay, we’ll try another port; this may be because I was already doing some tests and running this. Now, once I have changed the Dockerfile, I need to rebuild my image; we’ll run the build command again. Okay, now I will run the image: docker run, interactive, 8888 and 3000 inside the application, hello-docker, and we have it running now. Let’s try and run it again. Okay, “the site can’t be reached”. Now if you remember, we have this Docker Toolbox thing open and it has its own machine IP, 192.168.99.100, and port number 3000. Okay, so I’m sorry, but I had another instance running already on that port; that is why it did not allow me to run that Docker image on that particular port. I changed my Dockerfile port to 8888, and that is the one that is being exposed. So for us over here the port number should be 8888, and that is where I see the message “Hello from Docker”, which is what we actually have used: “Hello from Docker”. The reason why 3000 is working is because there was another Docker instance running in the background, which I hadn’t closed, and it says “hello world”. So there’s actually another Docker container already running on that port, and that is why it did not allow me to run the Docker container on this port. So as you can see, I already had one running which I forgot about; sorry about that. So our app is running right now on the port 8888, and that is the port exposed by Docker to my operating system, which is Windows, but the port on which the app is running inside the Docker container is 3000. So the app is still running on 3000, but Docker is exposing our app on port 8888, and that is what we have mapped over here; and -it means it’s interactive: right now this message which you’re seeing is actually from a console inside the Docker container, this is not from our own CMD or command line. Okay, so now I can press Ctrl+C to end this. Okay, yeah, so as you can see it has ended. Okay, so I was able
to give it the command to end it and that came and went to the console of the container So I hope you enjoyed it the small demo Okay And yeah, this is what we actually did basically create the know Jesus I have created dockerfile build the image and then execute it Let’s look into the topics for today’s session So we’ll start today’s session by understanding What is a virtual machine and then I’ll tell you the benefits of virtual machine after understanding that I’ll tell you what our talk of containers and then I’ll tell you the benefits of Docker containers after an introduction of virtual machine and talk of containers I’ll tell you the difference between Docker containers and virtual machine and then the uses of them So now let’s get started with the first topic for today’s session that is what is virtual machine a word Your machine is an emulation of a computer system in simple terms It makes it possible to run what appears to be on many separate computers on Hardware that is actually one computer the operating systems and their applications share Hardware resources from a single host server or from a pool of host servers Each virtual machine requires, its own underlying operating system And then the hardware is virtualized not only this but a hypervisor or a virtual machine monitor is a software We’re firmware or a hardware that creates and runs virtual machines It sits between the hardware and the virtual machine and is necessary to virtualize the server since the Advent of affordable virtualization technology It departments have embraced virtual machines as the way to lower costs and increase efficiencies now with the note of this let me tell you the benefits of virtual machines So the benefits of virtual machines are mainly all the operating system resources Sis are available to all the applications They have established management and security tools and not only this but they’re better known for security controls Now who are the popular virtual machine 
providers while the popular virtual machine providers are VMware K VM virtualbox Zen and hyper-v So now that you’ve understood what is the virtual machine? Let me tell you what Docker containers are

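For reference, the demo from earlier can be sketched roughly as follows. This is a minimal sketch, not the presenter's exact files: the base image, file names and image tag are assumptions, since the original Dockerfile and app aren't shown in full, and the commands need a running Docker daemon.

```shell
# Dockerfile (assumed contents; the app is a Node.js server listening on 3000):
#   FROM node:alpine
#   WORKDIR /app
#   COPY . .
#   EXPOSE 3000
#   CMD ["node", "app.js"]

# Rebuild the image after editing the Dockerfile
docker build -t hello-docker .

# Run interactively (-it), mapping host port 8888 to port 3000 inside the container
docker run -it -p 8888:3000 hello-docker
```

Note the `-p host:container` order: the app still listens on 3000 inside the container, but on the host it is reachable on 8888. With Docker Toolbox on Windows, the Toolbox VM's IP (the 192.168.x.x address in the video) replaces localhost.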
So, as we all know, Docker is the company driving the container movement, and the only container platform provider to address every application across the hybrid cloud. With containers, instead of virtualizing the underlying computer like a virtual machine, only the operating system is virtualized. Containers sit on top of a physical server, and each container shares the host operating system kernel and, usually, the binaries and libraries too. Sharing the operating system resources, such as libraries, significantly reduces the need to reproduce operating system code and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light: they are only megabytes in size and take just a few seconds to start. In contrast, virtual machines take minutes to boot and are an order of magnitude larger than the equivalent container. All that a container requires is enough of an operating system, supporting programs and libraries, and the system resources to run a specific program. What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with virtual machines. In addition, with containers you can create portable, consistent operating environments for development, testing and deployment.

So now that I've told you about containers, let me tell you the types of containers. Mainly there are two different types: Linux containers and Docker containers. A Linux container is a Linux operating-system-level virtualization method for running multiple isolated Linux systems on a single host, whereas Docker started as a project to build single-application Linux containers, introducing several changes that make containers more portable and flexible to use. At a high level, we can say that Docker is a Linux utility that can efficiently create, ship and run containers. So now that I've told you the different types of containers, let me tell you the benefits of containers. Containers require reduced IT management resources; they reduce the size of snapshots; they allow quicker spinning up of apps; they make security updates smaller and simpler; and they need less code to transfer, migrate and upload workloads. Now, who are the popular container providers? Well, the popular container providers are Linux Containers, Docker and Windows Server. So now that I've told you individually what a container is, what a virtual machine is, and how these two work, let me show you the major differences between Docker containers and virtual machines. The major differences come in operating system support, security, portability and performance, so let's discuss each of these one by one.

Let's start with operating system support. The basic architectures of Docker containers and virtual machines differ in their operating system support. Containers are hosted on a single physical server with a host operating system that is shared among them; virtual machines, on the other hand, have a host operating system and an individual guest operating system inside each virtual machine. Irrespective of the host operating system, the guest operating system can be anything: Linux, Windows or any other operating system. Docker containers are suited for situations where you want to run multiple applications over a single operating system kernel, but if you have applications or servers that need to run on different operating system flavors, then virtual machines are required. Sharing the host operating system between the containers makes them very light and helps them boot up in just a few seconds, so the overhead of managing the container system is very low compared to that of virtual machines. Now let's move on to the second difference, which is security. In Docker, since the host kernel is shared among the containers, the container technology has access to the kernel subsystems; as a result, a single vulnerable application can compromise the entire host server. Giving root access to applications and running them with superuser privileges is therefore not recommended in Docker containers, because of these security issues. Virtual machines, on the other hand, are unique instances with their own kernel and security features, so they can run applications that need more privileges and security. Now, moving on to the third difference, which is portability: Docker containers are self-contained packages that can run the required application.

Since they do not have a separate guest operating system, they can be easily ported across different platforms, and containers can be started and stopped in a matter of seconds compared to VMs, thanks to their lightweight architecture. This makes it easy to deploy Docker containers quickly on servers. Virtual machines, on the other hand, are isolated server instances with their own operating system, and they cannot be ported across multiple platforms without incurring compatibility issues. For development purposes, where applications have to be developed and tested on different platforms, Docker containers are the ideal choice. Now let's move on to the final difference, which is performance. Docker and virtual machines are intended for different purposes, so it's not fair to measure their performance equally, but the lightweight architecture does make Docker containers less resource-intensive than virtual machines. As a result, containers can start up very fast compared to virtual machines, and resource usage also differs between the two: in containers, resource usage such as CPU, memory and I/O varies with the load of traffic, unlike the case of virtual machines, and there is no need to allocate resources permanently to containers. Scaling up and duplicating containers is also an easy task compared to virtual machines, as there is no need to install an operating system in them.

So now that I've told you the differences between Docker containers and virtual machines, let me show you a real-life case study of how Docker containers and virtual machines can complement each other. All of us know PayPal, right? PayPal provides online payment solutions through account balances, bank accounts, credit cards or promotional financing, without sharing the financial information. Today PayPal is leveraging OpenStack for its private cloud and runs more than 100,000 (1 lakh) virtual machines. Now, one of the biggest desires of PayPal's business was to modernize its data center infrastructure: making it more on-demand, improving its security, meeting compliance regulations, and making everything cost-efficient. They wanted to refactor their existing Java and C++ legacy applications by dockerizing them and deploying them as containers. This called for a technology that provides a distributed application deployment architecture and can manage workloads, but that can also be deployed in both private and public cloud environments. So PayPal uses Docker's commercial solutions, which give gains not only to the developers, in terms of productivity and agility, but also to the infrastructure teams, in the form of cost efficiency and enterprise-grade security. The tools being used in production today include Docker's commercially supported engine, Docker Trusted Registry and Docker Compose. The company believes that containers and virtual machines can co-exist, and so they combined the two technologies: leveraging Docker containers and virtual machines together gave PayPal the ability to run more applications while reducing the total number of virtual machines and optimizing their infrastructure. This also allowed PayPal to spin up new applications much more quickly, on an as-needed basis, since containers are more lightweight and instantiate in a fraction of a second, while virtual machines take minutes. They can roll out a new application instance quickly, patch an existing application, or even add capacity to compensate for peak times within the year. So this helped PayPal drive innovation and outpace the competition. And that, guys, is how the company gained the ability to scale quickly and deploy faster, with the help of Docker containers and virtual machines.

So now let me summarize the complete session in a minute for you. Docker is a containerization tool that isolates applications at the software level. If a virtual machine is a house, then a Docker container is a hotel room: if you do not like the setup, you can always change the hotel room, which is much easier than changing a house, isn't it? Similarly, just as a hotel has multiple rooms sharing the same underlying infrastructure, Docker offers the ability to run multiple applications on the same host operating system, sharing the underlying resources. Now, it is often observed that some people believe Docker is better than a virtual machine, but we need to understand that, while it has a lot of functionality and is more efficient at running applications, Docker cannot replace virtual machines. Both containers and virtual machines have their own benefits and drawbacks, and the ultimate decision will depend on your specific needs.

But let me also give you some general rules of thumb. Virtual machines are a better choice for running applications that require all of the operating system's resources and functionality, when you need to run multiple applications on servers, or when you have a wide variety of operating systems to manage. Containers are a better choice when your biggest priority is to maximize the number of applications running on a minimal number of servers. But in many situations the ideal setup is likely to include both: with the current state of virtualization technology, the flexibility of virtual machines and the minimal resource requirements of containers work together to provide environments with maximum functionality.

These will be the parameters I'll be comparing these two tools against: installation and cluster configuration, GUI, scalability, auto-scaling, load balancing, updates and rollbacks, data volumes, and finally logging and monitoring. Okay, now before I get started with the differences, let me just go back a little and give you a brief about Kubernetes and Docker Swarm. First of all, Kubernetes and Docker Swarm are both container orchestration tools. Orchestration is basically needed when you have multiple containers in production and you have to manage each of these containers; that's why you need these tools. Kubernetes was created by Google, and then they donated the whole project to the Cloud Native Computing Foundation, so now it's part of the CNCF's open-source projects. And since Kubernetes was Google's brainchild, it has a huge developer community and a lot of people contributing to Kubernetes. So if you hit any error at any point while you're working with Kubernetes, you can straight away post that error on GitHub or Stack Overflow, and you will definitely find solutions. So that's the thing about Kubernetes, and we consider Kubernetes to be preferable for a complex architecture, because that's when the whole power of Kubernetes comes out. Kubernetes is really strong. If you're going to use a very simple architecture, maybe an application which has very few services and needs very few containers, then you're not really going to see the power of Kubernetes; when you have hundreds or thousands of containers in prod, that's when Kubernetes is actually beneficial, and that's where you see the difference between Kubernetes and Docker Swarm, right? Docker Swarm, on the other hand, is not that good when you have to deal with hundreds of containers. Functionality-wise they are pretty much head-to-head with each other: with both you can set up your cluster, but Docker Swarm is a little easier, and it's preferable when you have a smaller number of containers. But whatever it is, if you are dealing with a prod environment, then Kubernetes is your solution, because Kubernetes will ensure your cluster's strength in prod a little more, at least when you compare it to Docker Swarm. And yeah, the Docker Swarm community is unfortunately not as big as the Kubernetes one, because Google is basically bigger than Docker, and Docker Swarm is owned and maintained by Docker Inc. So that is the deal with Kubernetes and Docker Swarm. All right, but never mind all that: the base containers used by both of these are again Docker containers, so at the end of the day Docker is definitely going to be a part of Kubernetes and a part of Docker Swarm. It's what you do after your containers, the container management part, that matters. So anyway, I have given you a good brief about these two tools. Now let's get down to the functional differences between them.

Let's start with installation and cluster configuration. Setting up your cluster with Kubernetes is going to be really challenging in the beginning, because you have to set up multiple things: you first have to bring up your cluster, then you have to set up the storage volumes for your cluster, then you have to set up your environment, then you have to bring up your dashboard and your pod network, and when you bring up your dashboard you have to do the cluster role binding, and all these things. And then, finally, you can get your node to join your cluster. But with Docker Swarm it's pretty simple and straightforward: you need to run one command to bring up the cluster and one command at the node end for it to join the cluster. Two simple commands and your cluster is running; you can straight away get started with deploying. So this is where Kubernetes falls short: it's a little more complicated, but it's worth the effort, because the cluster that you get with Kubernetes is much stronger than Docker Swarm's. When it comes to failure mechanisms and recovery, Kubernetes is a little faster, and in fact Kubernetes will give you more safety compared to Docker Swarm, because your containers are more likely to fail with Swarm than with Kubernetes. It's not that I'm saying your containers will definitely fail, but if at all they fail, there are more chances of your containers failing with Swarm than with Kubernetes. So that's about cluster strength, and if your prod environment really matters to you, and if you have a business which is basically running on these containers, then I would say your preference should be Kubernetes, because at the end of the day your business and your containers running in prod are more important, so the cluster is more important, and that's why Kubernetes. Now, moving on to the next parameter, we have GUI. Kubernetes wins over here also

because Kubernetes provides a dashboard through which we can basically control our cluster. And not just control it: we can also figure out the status of the cluster, how many pods are running in it, how many deployments there are, how many containers and services are running, and which nodes are yours. You get all these details in a very simple fashion. It's not that you don't get all these things with Docker Swarm; you get them with Docker Swarm also, but you don't have a GUI there, a single dashboard where you can visually see everything. You can use the CLI with Docker Swarm and you can use the CLI with Kubernetes as well; it's just that Kubernetes also provides a dashboard, which is a little better and, to our eyes, a little easier to understand. When you see graphs, when you see your deployments, when it says all your deployments are a hundred percent healthy, you relate to it a lot more and you like it a lot more. So that's the additional functionality which you get with Kubernetes, and that's why Kubernetes wins over here. And I also want to add another point: with your Kubernetes dashboard you can easily scale up your containers, and you can also control your deployments and make new deployments in a very simple fashion. So even non-technical people can use Kubernetes. But I mean, if you are a non-technical person, then what are you doing with containers, right?
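Dashboard-style scaling also has a one-line CLI equivalent. A small sketch, assuming a deployment named `my-app` already exists in the cluster (the name is made up for illustration):

```shell
# Scale an existing deployment to 5 replicas (deployment name is hypothetical)
kubectl scale deployment my-app --replicas=5

# Verify desired vs. ready replica counts
kubectl get deployment my-app
```

The dashboard's scale button does essentially the same thing: it patches the deployment's replica count and lets the controller converge on it.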
So that's what veterans would say; seasoned developers would say that if you're not technical enough to deal with containers, then you don't deserve to be here. So that is one point which can defend Docker Swarm, but it does not change the fact that Kubernetes makes our life easier. Now, moving on to the third one, which is scalability: both Kubernetes and Docker Swarm are very good at scaling up. That is the whole point of these tools; when we say orchestration, this is the biggest benefit. Kubernetes can scale up very fast, and Swarm can also scale up very fast, but there's a saying that Swarm is five times faster than Kubernetes when it comes to scaling up, so I think Swarm just nudges past Kubernetes over here to victory, right? But whatever it is, it's scaling up that matters, and since both can do it, well and good. The next point is auto-scaling. Now, if I were a salesman, I would use this whole point of auto-scaling as my sales pitch, because auto-scaling is all about intelligence, right? With Kubernetes, your cluster will always be analyzing your server traffic, and whenever there's a certain increase in your traffic, Kubernetes will automatically scale up your number of containers; then, when the traffic reduces, the number of containers will automatically be scaled down again. So there's no manual intervention whatsoever; I don't need to barge in. If there's a weekend coming up and I'm pretty sure that my website is going to get a lot of traffic over the weekend, over the Saturday and Sunday, then I don't have to manually configure my deployments for the weekend: Kubernetes will automatically do that for me. With Docker Swarm that is a major drawback, because you cannot do auto-scaling; you have to do it manually. You can do scaling, and it's not that scaling is a big deal, but during emergency situations it's really important. Kubernetes will automatically figure out that, okay, you're getting a lot of traffic today, and it will automatically scale up for you. But Swarm is a little different, and if there's an emergency and your containers are running out of the number of requests they can serve, then they cannot do anything; worst-case scenario, they will just die out. So this is where Kubernetes wins during these emergency situations, because auto-scaling is not possible with Docker Swarm.

Now, moving on to my next point, which is load balancing. With Kubernetes, at times you will have to manually configure the load-balancing options; with Docker Swarm you don't need to do that, because it's done automatically. Now, the reason you may have to do it with Kubernetes is that in Kubernetes you have multiple nodes, and inside each node you have multiple pods, and inside these pods you have many containers. If your service spans multiple containers running in different pods, then there's this load balancing which you have to manually configure, because pods let all the containers inside them talk to each other, but when it comes to managing your load between these pods, that's where the challenge comes, especially when the pods are on different nodes. So you will face times when you have to manually configure this load balancing, and you will have small issues. It's not going to be major; you can still deal with it. But Swarm wins here, because there is no notion of pods: you have a swarm cluster in which there are containers, so these containers can easily be discovered by the others. They use IP addresses and can simply discover each other, and you're all good. So that's the point.

And now coming to the sixth point, which is rolling updates and rollbacks. I'd say these are very important aspects, and some of the best features of these two tools. Rolling updates are needed for any application: whether a software application is using Kubernetes or Docker Swarm or not, any application needs updates, right? Any software application, any web application, definitely needs updates to its functionality. Now, if your application is containerized, then at any point of time,

you don't need to bring down your containers to make those updates: the different containers in these pods can be given the updates progressively. In Kubernetes we have the concept of pods, and inside the pods we have multiple containers, right? So in Kubernetes what happens is that the rolling updates are gradually sent to each of these pods as a whole: all the containers inside a pod will gradually be given the rolling update. With Docker Swarm you have the same thing, but it's a little different, because you don't have pods; the rolling updates are gradually sent to all the containers, one after the other. That's the only difference: rolling updates are gradually sent to the different containers in both Kubernetes and Docker Swarm, but in Kubernetes they go, one after the other, to the containers within the same pod. That's the point. And when it comes to rollbacks, again, both provide the same thing: you can roll back your changes. If your master at any point of time figures out that your rolling update is going to fail, then you have the option of a rollback; you have that functionality in both Kubernetes and Docker Swarm. But the point is, there's no automatic rollback in the case of Docker Swarm: if your Kubernetes master figures out that the update is going to fail, it will automatically roll back to the previously stable condition, but with Swarm, the swarm manager will not do it automatically. I mean, it provides the option to roll back, but it's not automatic. That is the only difference between these two, so I think over here also Kubernetes slightly beats Docker Swarm; it just nudges ahead of it.

And now coming to the seventh point, which is nothing but data volumes. Data volumes are a very key concept, because you can have a shared volume space for different containers. The conceptual difference between the two is that in Kubernetes you have multiple pods, and only the containers inside one particular pod can have a shared volume, whereas with Docker Swarm, since there are no pods, pretty much any container can share the shared space with other containers. So that is the only difference; I don't think I would go ahead and rate these two on it, it's just a functional and conceptual difference between the two tools.

Now, moving on to the last point: logging and monitoring. With Kubernetes you have inbuilt tools which do the logging for you, and the monitoring also happens. There is a particular directory where you can go and read your logs, and you can find out where your errors are, why your deployment failed, why something happened; you can get all those details, because it does the logging automatically. And the monitoring part is used by your master to analyze what your cluster state is at all times: what is the status of all your nodes, what is the status of the different pods on the nodes, are all the containers up and running, are the containers responsive? Kubernetes uses monitoring for all these purposes. But with Docker Swarm there is no inbuilt tool, and you have to use third-party tools, something like ELK, right?
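As a rough sketch of that inbuilt logging and monitoring, these are the standard kubectl commands you would reach for (the pod name is a made-up example; a running cluster is assumed):

```shell
# Read a pod's logs; -f follows them live
kubectl logs -f my-app-pod

# Show a pod's events, restart counts and failure reasons
kubectl describe pod my-app-pod

# The kind of cluster-wide state the master keeps track of
kubectl get nodes
kubectl get pods --all-namespaces
```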
So I've done that before: I've set up ELK to work with my Docker Swarm, and it pretty much does the same thing. With ELK you can collect all the logs, you can figure out where the error is, and even monitoring can be done, but it's just that ELK is, again, not a very easy tool to set up and use, and that's an extra step which you have to do with Docker Swarm. Okay, so I think that's pretty much the end of the functional and conceptual differences between these two tools, Kubernetes and Docker Swarm. Now that the theory part is over, I want to open up Docker Swarm and Kubernetes, show you a demonstration, and give you a feel for how they work. For that, let me open up my VMs where these two are installed. Let me start the demo with Docker Swarm first. So, can everybody see my VM over here? I have two virtual machines: a master and a slave, but when it comes to Docker Swarm they're not called master and slave; rather, they are called manager and worker. My manager is the manager of the cluster, and my worker is the one that will be executing the services. All right, so like I said, with Docker Swarm it's very easy to bring up the cluster, and the command to do that is very simple: you just run docker swarm init with the --advertise-addr flag and specify the IP address of your master. In my case, my master's, or rather my manager's, IP address is this one. So if I just hit Enter, then everything is up and ready: I get the joining token, and if I execute this command at my node end, then my cluster would be ready and the node would have joined it. So I'm going to copy this, go over to my worker, and paste it here. If I hit Enter, it says that this node has joined the swarm as a worker. Brilliant, right?
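Those two steps can be sketched like this (the IP address is an example, and the token is a placeholder that the init command prints for you):

```shell
# On the manager: bring up the swarm, advertising the manager's IP
docker swarm init --advertise-addr 192.168.56.101

# On each worker: run the join command that init printed, e.g.
docker swarm join --token <worker-token> 192.168.56.101:2377
```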
So that's as quick as it is. And on your master you get this message saying that to add a manager to the swarm you should use this other command, but that's only if you want another node to join as a master, or rather a manager; otherwise this command is good enough. So your cluster has been initialized in just a few seconds, right? It's as simple as that. If you want to deploy an application, you can go ahead and do that straight away. Let me show you a simple application

Let me run a hello-world and show you. I can use the command docker service create, give the name as helloworld, and the image to be used as the hello-world image. So I will basically get a container with the name helloworld, and it could be running on both the manager and my node. If I want one replica of it running on both my manager and my node, then there's another flag I can use to set that, and that is the mode flag: I can say --mode global. With this I will have one container running at my manager's end and one container on my node. And this is one place where it differs from Kubernetes, because in Kubernetes only your nodes will run the services; your manager, or rather your master, will not execute any service, it will only manage the whole deployment. Okay, there was a spelling mistake in my command, so let me just go here and fix "service". So let's just wait a few seconds until my container is up and running. Great, as you can see my service has been created, and this is its ID. If I want to verify my cluster state and check whether my services are ready, I can run these commands. I can do a docker node ps; this way I'll know how many containers came up and on which nodes they were executed. It says that on my manager four of these started, and of course it's a hello-world container, so it shuts down immediately. And I can do a docker node ls to see how many members there are in my cluster. With this you can see that there are IDs for two different nodes: one is my worker and the other is my manager, right? And that's how simple it is with Docker; with Kubernetes it's a little different. So I think by now you've got a good idea of how simple Docker Swarm is, and besides, I can also check the service status by running a few commands, right?
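The commands used in this part of the demo, collected as a sketch (the service name matches the demo; global mode runs one task on every node, manager included, and a running swarm is assumed):

```shell
# Create the service in global mode: one task per node
docker service create --name helloworld --mode global hello-world

# Inspect the cluster and the service
docker node ls                 # members of the swarm
docker node ps                 # tasks running on the current node
docker service ls              # all services
docker service ps helloworld   # task history for this service across nodes
```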
So I have a docker As service lso this would basically list down the containers which are there and the services Okay So it says I have a Hello World container which is running and it’s in replicated mode and there is one replica of this particular container, right so I can also check more I can do a Docker service PS and I can say the name I can say hello world and I have videos about my container Right? So initially there was one container with start off in my worker and then it got shut down and then one more on my And then a couple of them on my manager So these are the details and this is how you drill down Right? So you specify that I need one particular service That is the Hello World Service and I need this many replicas wherein I want them running on my manager and my Walker and then the container has been deployed and it’s running and that’s what you can see here Right? So that is about Docker swamp and let me bring the VM down and bring up my communities bm’s to show you a demonstration of Cuban at ease Okay Well, I hope the concept was clear you but in case if you have any doubt, then I would request you to go to enter a girl’s YouTube channel and watch my video on Docker swarm where I’ve shown load balancing between the different nodes and I’ve also shown how to ensure High availability Okay So in the meanwhile, let me just shut this down to bring down the cluster I can basically get my notes to leave the cluster first and then shut down the classroom itself So if I want to bring down my cluster, I can do it with a simple command But before I do that, let me stop the service That created Okay So Docker service RM and hello world was a name So I want to stop the service and now there are not going to be any replicas of this service or this container Okay Now, let me go back to my node And here let me leave the cluster the Swarm cluster Okay, and the simple command for that is darker swarm leave So if I hit the command at the node and then this node will 
leave the swarm Okay So if I run the docker node ls command over here then the entry for my node will be gone It would have only one entry that is my manager Okay, but anyways, if I want to bring the swarm to an end, then I can get even my manager to leave the cluster and I can execute the same command for that I can say docker swarm leave and everything is brought down So yeah, let me use --force Okay, now it has left So that's it about Docker swarm Okay Let me just close this terminal and go to my next demo So now let me open up my VMs which have Kubernetes running Okay So this is my Kubernetes master and this is my Kubernetes node Okay Now the thing is that I'm not showing you the entire setup over here because it's really complicated All right, so I don't want to go ahead and execute all the 10-15 commands and show you the entire setup process because it's going to take a lot of time to do that rather If you want to know

how to do the cluster setup with kubernetes, then you can go and see the blog which I have written And also there's another video on how to set up the kubernetes cluster All right So these two would be of help to you the YouTube video and the blog and the links to both of them are below in the description Okay So, let me just straight away get started and tell you what I have already done Once I had set up the cluster Then there were a lot of things which I had to do Okay starting from here So basically I have started my init command over here I've specified which pod network I'm using I'm going to use the Calico pod network so I've specified that and specified the address where the other nodes are to subscribe Okay, and then there are various commands which I have run with respect to setting up the environment Okay these and then I basically set up the Calico pod over here and then I have made room for setting up the dashboard and then I brought my proxy up, okay Okay, and then from the second terminal I've done a few more things I have brought up my dashboard account So I have created a service account for my dashboard And then I have done the cluster role binding by saying that for this dashboard I am the admin and give me the admin privileges So I've done that here and then basically obtained the key which is basically the authentication token for accessing the dashboard So these are the many other commands which need to be executed from the master's end before your node joins And then after that I basically went to my node And then I executed the one command which I was asked to execute Okay So this was the join command generated at my master So I took that and I pasted it here and then my node has successfully joined the cluster So this was the entire process and you can go to my blog and to my videos and go through this whole process Okay, so I have also brought up the dashboard, right?
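The master-side setup summarized above can be sketched roughly as the following command sequence (a hedged sketch only: the exact flags and the Calico manifest URL vary between Kubernetes and Calico releases, and the service-account and binding names here are illustrative, not necessarily the ones used in this demo):

```shell
# On the master: initialize the cluster, reserving a CIDR for the Calico pod network
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Point kubectl at the new cluster (kubeadm init prints these steps)
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Calico pod network (manifest URL is version-dependent)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Dashboard access: service account, cluster role binding, and auth token
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin --serviceaccount=default:dashboard
kubectl describe secret $(kubectl get secret | grep dashboard | awk '{print $1}')

# Bring the proxy up so the dashboard is reachable from the browser
kubectl proxy &

# On the node: run the join command printed by kubeadm init, e.g.
# sudo kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

The join command with its token and CA-cert hash is generated at the master, which is why it is copied from there and pasted on the node.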
So let me just straight away go to the dashboard and show you how simple it is how easy it is to make any deployment because that is the whole advantage with the dashboard right Docker swarm may have looked easy That's what I showed you but with Kubernetes it's even better And this is the Kubernetes dashboard that comes up it comes up on this port number All right, and if you want to start your deployment, it's very very simple Okay, you can just go to this create button over here click on Create and then you have options You can either write your JSON script or you can upload the JSON file, which I have already written or you can click on this create an app option over here It's basically a click functionality and here you can just put your app name So let's say I want to deploy the same hello world app So I'll just give the name over here hello-world and then the base image which I want to use for this is going to be the hello-world image which is present in my Docker Hub registry or the Google registry So hello-world is the image and let's say I want three pods initially Okay and let me just straight away click deploy Okay, and with that your application is deployed and similarly if there's anything else you want to containerize it's as simple if it's going to be an NGINX server or a Tomcat or an Apache server which you want to deploy you can just choose the base image and hit the deploy button and yes, it's straight away deployed and you'll get something like this which would show what is the overview and what is the status of your cluster? Okay, as you can see my deployments my pods and replica sets Everything is healthy a hundred percent, right?
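For comparison, the same deployment made through the dashboard form above can be done from the command line with kubectl; a minimal sketch using the names from this demo:

```shell
# Create a deployment from the hello-world image, then scale it to three pods
kubectl create deployment hello-world --image=hello-world
kubectl scale deployment hello-world --replicas=3

# Check the same overview that the dashboard visualizes
kubectl get deployments
kubectl get replicasets
kubectl get pods
```

Whether it comes from the dashboard's create form, an uploaded JSON/YAML spec, or kubectl, the result is the same Deployment object with a replica set keeping three pods running.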
So this is my deployments So it says that two out of three pods are running So let's just give it a few minutes and then all the three pods will be up and running Okay, you've got to give it a few seconds because it's just 19 seconds old and yeah, these are the three pods The third one is coming up Okay, because yeah, it says terminated because the hello world container is a self-destructing container, right? It prints hello world and exits That's what happens here Same thing with replica sets I have mentioned three pods, which means that at all times there will be three pods running and my replica set and replication controller are the ones that will control these pods Okay So yeah, that is pretty much it and that's how easy and simple it is to work with Kubernetes right So you can have your own opinion You can choose whether you want to use Kubernetes or you can choose if you want to use Docker swarm Okay, so my take on this is if you have a very simple application, then you would rather be better off with Docker swarm Okay, and also if you have very few clusters which you are dealing with, but if you're dealing with a real prod environment, then I would say Kubernetes is a better option And also when the containers are many in number when you have a lot of containers, then it's easier to work with Kubernetes You can just specify the configurations You can say that I need this many containers running at all times I need this many nodes connected to my cluster and I will have these many pods running on these nodes and whatever you say will be followed by Kubernetes So that is why Kubernetes is better in my opinion and you can have your own opinion So whatever your choice is between the two I would like to listen to your opinion You can please put in your opinion in the comment box And if you have any doubts you can let me know Alright So before I end this video I would like to talk about the market share between Kubernetes and
Docker swarm when it comes to new articles or blogs written on these two tools

Then Kubernetes beats Docker swarm 9 to 1 for every nine blogs written on Kubernetes there is one written on Docker swarm So that is the differential 90 percent to 10 percent Same thing with web searches, right? So Kubernetes has you know way more searches 90% of the searches as compared to Docker swarm's 10% and the same thing you can say for GitHub stars and for GitHub commits Okay So Kubernetes pretty much wins on every term here and it's way more popular and it's way more used and it's probably more comfortable And if you have any problem at any point of time, you have a huge community which will help you out with replies to whatever your errors are So if you want simplicity, I would say go for Docker swarm, but if you want a production-grade cluster and if you want to ensure high availability especially in your prod then Kubernetes is your tool to go for but however, both are equally good And they are pretty much neck and neck on all grounds and this was a statistic which I picked up from Platform9, which is a very famous tech company, right? So they write about that So I think on that note, I would like to conclude today's session and I'd like to thank you for watching the video to the very end Do let us know what topics you want us to make more videos on, and it would be a pleasure for us to do the same and with that I'd like to take my leave Thank you and happy learning I hope you have enjoyed listening to this video Please be kind enough to like it and you can comment any of your doubts and queries and we will reply to them at the earliest do look out for more videos in our playlist And subscribe to the Edureka channel to learn more Happy learning


What is Jenkins | Jenkins Tutorial for Beginners | Jenkins Continuous Integration Tutorial | Edureka

hello everyone this is Saurabh from Edureka in today's session we'll focus on what is Jenkins so without any further ado let us move forward and have a look at the agenda for today first we'll see why we need continuous integration what are the problems our industries were facing before continuous integration was introduced after that we'll understand what exactly is continuous integration and we'll see various types of continuous integration tools among those continuous integration tools we'll focus on Jenkins and we'll also look at Jenkins distributed architecture finally in our hands-on part we'll prepare a build pipeline using Jenkins and I'll also tell you how to add Jenkins slaves so I hope we all are clear with the agenda kindly give me a quick confirmation by writing down in the chat box Bennet says yes so does Quinn Anisha says looks great all right cool thanks for the confirmation guys now I will move forward and we'll see why we need continuous integration so this is the process before continuous integration over here as you can see that there is a group of developers who are making changes to the source code that is present in the source code repository this repository can be a Git repository a Subversion repository etc and when the entire source code of the application is written it will be built by tools like Ant, Maven etc and after that the built application will be deployed onto the test server for testing if there is any bug in the code developers are notified with the help of the feedback loop as you can see it on your screen and if there are no bugs then the application is deployed onto the production server for release I know you must be thinking what is the problem with this process this process looks fine as you first write the code then you build it then you test it and finally you deploy it but let us look at the flaws that were there in this process one by one so this is the first problem guys as you can see that there is a
developer who is waiting for a long time in order to get the test results as first the entire source code of the application will be built and then only it will be deployed onto the test server for testing it takes a lot of time so developers have to wait for a long time in order to get the test results the second problem is since the entire source code of the application is first built and then it is tested so if there is any bug in the code developers have to go through the entire source code of the application as you can see that there is a frustrated developer because he has written code for an application which was built successfully but in testing there were certain bugs in it so he has to check the entire source code of the application in order to remove that bug which takes a lot of time so basically locating and fixing of bugs was very time-consuming so I hope you are clear with the two problems that we have just discussed kindly give me a quick confirmation so that I can move forward or if you have any doubts please write it down in your chat box I'll be happy to help you any doubts guys so shall I move forward alright thanks for the confirmation now we'll move forward and we'll see two more problems that were there before continuous integration so the third problem was the software delivery process was slow developers were actually wasting a lot of time in locating and fixing of bugs instead of building new applications as we just saw that locating and fixing of bugs was a very time-consuming task due to which developers were not able to focus on building new applications you can relate that to the diagram which is present in front of your screen just as we waste a lot of time in watching TV and doing social media similarly developers were also wasting a lot of time in fixing bugs alright so let us have a look at the fourth problem that is continuous feedback continuous feedback related to things like build failures test status etc was not present due to which the
developers were unaware of how their application is doing so we have a question guys it is from Anisha she is asking in the process that you showed before continuous integration there was a feedback loop present very good question Anisha so what I'll do I'll go back to that particular diagram and I'll try to explain it to you from there so Anisha the feedback loop that you are talking about is here when the entire source code of the application is built and tested then only the developers are notified about the bugs in the code alright when we talk about continuous feedback suppose this developer that I'm highlighting makes any commit to the source code that is present in the source code repository at that time the code should be pulled and it should be built and the moment it is built the developer should be notified about the build status and then once it is built successfully it is deployed onto the test server for testing at that time whatever the test results say the developer should be notified about them similarly if this developer makes any commits to the source code at that time the code should be pulled it should be built and the build status should be notified to the developers after that it should be deployed onto the test server for testing and the test results should also be given to the developers I suppose you have got the difference between continuous feedback and feedback yeah I'll summarize it once again so what happens when we talk about feedback feedback is present as you can see first the entire source code of the application will be written it will be built and it will be tested and then only the developers will be notified about the bugs if there are any when we talk about continuous feedback suppose the developer that I'm highlighting with my cursor makes any commit to the source code that is present in the source code repository all right so at

that time the code should be pulled it should be built the developer should be notified about the build results similarly it should be deployed onto the test server for testing and the developer should also be notified about the test results similarly if the second developer makes any commit to the source code at that time the code should be pulled it should be built developers should be notified about the build results and after that the built application should be deployed onto the test server for testing and the developers should be notified about the test results as well so I hope you all are clear what is the difference between continuous feedback and feedback so in continuous feedback you're getting the feedback on the run all right thanks for the confirmation Anisha so we'll move forward and we'll see how exactly continuous integration addresses these problems let us see how exactly continuous integration resolves the issues that we have discussed so what happens here there are multiple developers so if any one of them makes any commit to the source code that is present in the source code repository the code will be pulled it will be built tested and deployed so what advantage do we get here so first of all any commit that is made to the source code is built and tested due to which if there is any bug in the code developers actually know where the bug is present or which commit has caused that error so they don't need to go through the entire source code of the application they just need to check that particular commit which has introduced the bug all right so in that way locating and fixing of bugs becomes very easy apart from that the first problem that we saw was developers have to wait for a long time in order to get the test results here every commit made to the source code is tested so they don't need to wait for a long time in order to get the test results so when we talk about the third problem that was the software delivery process was slow it is completely
removed in this process developers are not actually focusing on locating and fixing of bugs because that won't take a lot of time as we just discussed instead of that they're focusing on building new applications now the fourth problem was continuous feedback was not present but over here as you can see on the run developers are getting the feedback about the build status test results etc developers are continuously notified about how their application is doing so I hope we are clear with this any questions any doubts please write it down in the chat box guys I wanted to add this thing let us make this session interactive all right you won't even enjoy it if it's a one-way conversation so whatever doubts whatever questions you have please write them down in your chat box and I'll be very happy to help you and if you have no questions just give me a confirmation so that I can move forward okay so Quinn says cool Anisha says no doubts Anusha says no doubts Jessica says please move forward all right all right thanks for your confirmation guys so I'll move forward now I'll compare the two scenarios that is before continuous integration and after continuous integration now over here what you can see is before continuous integration as we just saw first the entire source code of the application will be built then only it will be tested but when we talk about after continuous integration every commit whatever change you make in the source code the moment you commit it to the source code at that time only the code will be pulled it will be built and then it will be tested developers used to wait for a long time in order to get the test results as we just saw because the entire source code would first be built and then deployed onto the test server but when we talk about continuous integration the test result of every commit will be given to the developers and when we talk about feedback there was no feedback that was present
earlier but in continuous integration feedback is present for every commit you make to the source code you will be provided with the relevant result all right so now let us move forward and we'll see what exactly is continuous integration now in the continuous integration process developers are required to make frequent commits to the source code they have to frequently make changes in the source code and because of that any change made in the source code will be pulled by the continuous integration server and then that code will be built or you can say compiled all right now depending on the continuous integration tool that you are using or depending on the need of your organization it will also be deployed onto the test server for testing and once testing is done it will also be deployed onto the production server for release and developers are continuously getting the feedback about their application on the run so I hope I am clear with this particular process kindly give me a quick confirmation so that I can move forward any questions any queries guys please write them down in the chat box okay so Anisha says no questions Bennet says no questions Quinn no doubts alright thanks for the confirmation guys so we'll see the importance of continuous integration with the help of a case study of Nokia so Nokia adopted a process called nightly build nightly build can be considered as a predecessor to continuous integration let me tell you how alright so over here as you can see that there are developers who are committing changes to the source code that is present in a shared repository alright and then what happens in the night there is a build server so this build server will poll the shared repository for changes and then it will pull that code

and prepare a build all right so in that way whatever commits are made throughout the day are compiled in the night so obviously this process is better than writing the entire source code of the application and then compiling it but again if there is any bug in the code developers have to check all the commits that have been made throughout the day so it is not the ideal way of doing things because you are again wasting a lot of time in locating and fixing of bugs all right so I want answers from you all guys what can be the solution to this problem how can Nokia address this particular problem since we have seen what exactly continuous integration is and why we need it you can answer this question guys come on all right so Jessica says the build should be triggered for every commit that is absolutely correct Bennet says continuous integration so does Anisha alright let me tell you guys you all are correct now without wasting any time I will move forward and I will show you how Nokia solved this problem so Nokia adopted continuous integration as a solution in which what happens developers commit changes to the source code in a shared repository all right and then what happens is there is a continuous integration server this continuous integration server polls the repository for changes and if it finds that there is any change in the source code it will pull the code and compile it so what is happening the moment you commit a change to the source code the continuous integration server will pull it and prepare a build so if there is any bug in the code developers know which commit is causing that error all right so they can just go through that particular commit in order to fix the bug so in this way locating and fixing of bugs was very easy but we saw that in nightly builds if there is any bug they have to check all the commits that have been made throughout the day so with the help of continuous integration they know which commit is causing that error so locating and fixing
of bugs doesn't take a lot of time all right so any questions any doubts till here guys any questions all right so we have no questions so shall I move forward okay before I move forward let me give you a quick recap of what we have discussed till now first we saw why we need continuous integration what were the problems that industries were facing before continuous integration was introduced after that we saw how continuous integration addresses those problems and we understood what exactly continuous integration is and then in order to understand the importance of continuous integration we saw the case study of Nokia in which they shifted from nightly build to continuous integration all right so shall I move forward kindly give me a quick confirmation Jessica says yes Bennet says yes Anisha says cool all right guys so we'll move forward and we'll see various continuous integration tools available in the market these are the four most widely used continuous integration tools first is Jenkins on which we'll focus in today's session then Buildbot Travis and Bamboo all right and let us move forward and see what exactly Jenkins is so Jenkins is a continuous integration tool it is an open-source tool and it is written in Java how does it achieve continuous integration it does that with the help of plugins Jenkins has well over a thousand plugins and that is a major reason why we are focusing on Jenkins let me tell you guys it is the most widely accepted tool for continuous integration because of its flexibility and the amount of plugins it supports so as you can see from the diagram itself that it supports various development deployment and testing technologies for example Git, Maven, Selenium, Puppet, Ansible, Nagios all right so if you want to integrate a particular tool you need to make sure that the plugin for that tool is installed in your Jenkins now for better understanding of Jenkins let me show you the Jenkins dashboard I've installed Jenkins in my Ubuntu box so if you want to learn
how to install Jenkins you can refer to the Jenkins installation video so this is the Jenkins dashboard guys as you can see that there are currently no jobs because of that this section is empty otherwise it will give you the status of all your build jobs over here now when you click on new item you can actually start a new project from scratch alright so any questions till here guys any queries you have regarding Jenkins alright so we have a question from Roger he's asking what is the difference between Hudson and Jenkins so let me tell you this thing there is no difference between Hudson and Jenkins Hudson was only the earlier name of Jenkins all right so there's really no difference between Hudson and Jenkins okay so we have one more question it is from Quinn Quinn is asking you've talked about plugins so do we need to install those plugins or will they come automatically with Jenkins so what happens Quinn when you are installing Jenkins it will give you two options first is install suggested plugins in which there is a certain set of plugins which will be installed and on the right hand side there is an option called select plugins to install where you can go and select the plugins that you want to install and once you have installed Jenkins then also if you need a plugin you can actually install it I'll tell you how to do that later in the session I hope this answers your question all right thank you Quinn for your confirmation now let us go back to our slides let us move forward and see what are the various categories of plugins as I've told you earlier Jenkins achieves continuous integration with the help of plugins alright and

Jenkins supports well over a thousand plugins and that is a major reason why Jenkins is so popular nowadays so the plugin categorization is there on your screen well there are certain plugins for testing like JUnit Selenium etc when we talk about reports we have multiple plugins for example HTML Publisher for notification also we have many plugins and I've written one of them that is the Jenkins build notification plugin and when we talk about deployment we have plugins like the Deploy plugin and when we talk about compile we have plugins like Maven etc alright so let us move forward and see how to actually install a plugin on the same Ubuntu box where my Jenkins is installed so over here in order to install a plugin what you need to do is you need to click on Manage Jenkins and over here as you can see there is an option called Manage Plugins just click over there as you can see that it has certain updates for the existing plugins which I have already installed right then there is an option called Installed where you'll get the list of plugins that are there in your system all right and at the same time there's an option called Available it will give you all the plugins that are available with Jenkins alright so now what I'll do I'll go ahead and install a plugin that is called HTML Publisher so it's very easy what you need to do is just type the name of the plugin here it is HTML Publisher plugin just click over there and install without restart so it is now installing that plugin we need to wait for some time so it has now successfully installed now let us go back to our Jenkins dashboard so we have understood what exactly Jenkins is and we have seen various Jenkins plugins as well so now is the time to understand Jenkins with an example we'll see a general workflow of how Jenkins can be used alright so let us go back to the slides so now as I've told you earlier we will see a Jenkins example so let us move forward so earlier what is happening
developers are committing changes to the source code and that source code is present in a shared repository it can be a Git repository a Subversion repository or any other repository alright now let us move forward and see what happens now now over here what is happening there is a Jenkins server it is actually polling the source code repository at regular intervals to see if any developer has made any commit to the source code if there is a change in the source code it will pull the code and it will prepare a build and at the same time developers will be notified about the build results now let us execute this practically alright so I will again go back to my Jenkins dashboard which is there in my Ubuntu box over here what I'm going to do is I'm going to create a new item alright basically a new project now over here I'll give a suitable name to my project you can use any name that you want I'll just write compile and now I click on freestyle project the reason for doing that is freestyle project is the most configurable and flexible option it is easier to set up as well and at the same time many of the options that we configure here are present in other build jobs as well we'll move forward with freestyle project and I'll click on OK now over here what I will do I'll go to the source code management tab and it will ask you what type of source code management you want I'll click on Git and over here you need to type your repository URL in my case it is https://github.com/ your username slash the name of your repository and finally .git all right now in the build option you have multiple options alright so what I'll do I'll click on invoke top-level Maven targets now over here let me tell you guys that Maven has a build lifecycle and that build lifecycle is made up of multiple build phases typically the sequence of build phases will be first you validate the code then you compile it then you test it you perform unit tests by using a suitable unit testing framework
then you package your code in a distributable format like a JAR then you verify it and you can actually install any package that you want with the help of the install build phase and then you can deploy it in the production environment for release so I hope you have understood the Maven build lifecycle so in the goals tab what I need to do is I need to compile the code that is present in the GitHub account so for that in the goals tab I need to write compile so this will trigger the compile build phase of Maven now that's it guys that's it just click on apply and save now on the left hand side there is an option called build now to trigger the build just click over there and you will be able to see that the build is starting in order to see the console output you can click on that build and you'll see the console output so it has validated the GitHub account and it is now starting to compile the code which is there in the GitHub account so we have successfully compiled the code that was present in the GitHub account now let us go back to the Jenkins dashboard now in the Jenkins dashboard you can see that my project is displayed over here and as you can see the blue color of the ball indicates that it has been successfully

executed all right now let us go back to the slides now let's move forward and see what happens once you have compiled your code now the code that you have compiled you need to test it all right so what Jenkins will do is it will deploy the code onto the test server for testing and at the same time developers will be notified about the test results as well so let us again execute this practically I will go back to my Ubuntu box again so in the GitHub repository the test cases are already defined all right so we are going to analyze those test cases with the help of Maven so let me tell you how to do it we'll again go and click on new item and over here we'll give any suitable name to the project I'll just type test and I will again use freestyle project for the reason that I have told you earlier click on OK and in the source code management tab now before applying unit testing on the code that was compiled I need to first review it with the help of the PMD plugin so for that I will again click on new item and over here I need to type the name of the project so I'll just type it as code_review freestyle project click OK now in the source code management tab I will again choose Git and give my repository URL https://github.com/ username slash name of the repository .git all right now scroll down now to the build tab I'm going to click over there and again I will click on invoke top-level Maven targets now in order to review the code I'm going to use the metrics profile of Maven so how to do that let me tell you you need to type here -P metrics pmd:pmd alright and this will actually produce a PMD report that contains all the warnings and errors now in the post build action tab I click publish PMD analysis results that's all click on apply and save now finally click on build now and let us see the console output so it has now pulled the code from the GitHub account and it is performing the code review so it has successfully reviewed the code now let us go
back to the project. Over here you can see an option called "PMD Warnings"; just click over there and it will display all the warnings that are present in your code. So this is the PMD analysis report: over here, as you can see, there are a total of 11 warnings, and you can find the details here, like the package, then the categories, then the types of warnings which are there, for example empty catch blocks and empty finally blocks. Now you have one more tab called Warnings; over there you can find where each warning is present, the file name, the package, alright, and then you can find all the details in the Details tab; it will actually tell you where the warning is present in your code. Alright, now let us go back to the Jenkins dashboard, and now we will perform unit tests on the code that we have compiled. For that, again I will click on New Item and give a name to this project; I will just type "test" and click on freestyle project, then OK. Now in the Source Code Management tab I'll click on Git, and over here I'll type the repository URL: https://github.com/<username>/<repository>.git. In the Build option I again click on "Invoke top-level Maven targets". Now, as I told you earlier as well, the Maven build lifecycle has multiple build phases: first it will validate the code, then compile it, test it, and package it, then it will verify it, then it will install the package into the local repository, and then finally it will deploy it. Alright, so one of those phases is testing, which performs unit testing using a suitable unit-testing framework. The test cases are already defined in my GitHub account, so to run those test cases, in the Goals section I need to write "test", and it will invoke the test phase of the Maven build lifecycle. Alright, so just click on Apply and Save, finally click on Build Now, and to see the console output, click here. Now in the Source Code Management tab I'll select Git, and right over here again I need to type my
repository URL, that is https://github.com/<username>/<repository>.git, and now in the Build tab I'll select "Invoke top-level Maven targets". Over here, as I have told you earlier as well, the Maven build lifecycle has multiple phases, alright, and one of those phases is the unit test. So in order to invoke that unit test, what I need to do is: in the Goals

section, I need to write "test", and it will invoke the test build phase of the Maven build lifecycle. Alright, so the moment I write "test" here and build it, it will actually run the test cases that are present in the GitHub account. So let us write "test", click Apply and Save, and finally click Build Now; in order to see the console output, click here. So it has pulled the code from the GitHub account and now it is performing the unit tests, so we have successfully performed testing on the code. Now I'll go back to my Jenkins dashboard, and as you can see, all three build jobs that I've executed are successful, which is indicated with the help of the blue colored ball. Alright, now let us go back to our slides. So we have successfully performed the unit tests on the test cases that were in the GitHub account; now we will move forward and see what happens after that. Now, finally, you can deploy the built application onto the production environment for release. But when you have one single Jenkins server there are multiple disadvantages, so let us discuss them one by one; we'll move forward and we'll see the disadvantages of using one single Jenkins server. But first, what I'll do is go back to my Jenkins dashboard and I'll show you how to create a build pipeline. Alright, so for that I'll move to my Ubuntu box once again. Now in here you can see that there is an option with a plus sign, okay, just click over there, and over here click on Build Pipeline View. Whatever name you want, you can give; I'll just give it as edureka_pipeline and click on OK. Now over here you can give a certain description of your build pipeline, alright, and there are multiple options that you can just have a look at. Over here there is an option called "Select initial job": I want compile to be my first job. And there are display options over here, like the number of displayed builds that you want (I'll just keep it at five), then the row headers that you want, and column headers, so you can just have a
look at all these options and you can play around with them; just for this introductory example, let us keep it this way. Now finally click on Apply and OK. Currently you can see that there is only one job in the pipeline, which is compile, so what I'll do is add more jobs to this pipeline. For that I go back to my Jenkins dashboard, and over here I'll add code review as well. So for that I'll go to Configure, and in the Build Triggers tab, what I'll do is click on "Build after other projects are built". Whatever project you want to execute before code review, just type that; I want compile, so click on compile. And over here you can see that there are multiple options, like "Trigger only if build is stable", "Trigger even if the build is unstable", and "Trigger even if the build fails"; so I just click on "Trigger even if the build fails". Alright, finally click on Apply and Save. Similarly, if I want to add my test job to the pipeline as well, I can click on Configure, and again in the Build Triggers tab I'll click on "Build after other projects are built". Type the project that you want to execute before this particular project; in our case it is code review, so let us click over there, choose "Trigger even if the build fails", then Apply and Save. Now let us go back to the dashboard and see how our pipeline looks. So this is our pipeline, okay. So when we click on Run, let us see what happens: first it will compile the code from the GitHub account, that is, it will pull the code and it will compile it. So now the compile is done, alright; now it will review the code, so the code review has started. In order to see the log you can click on Console and it will give you the console output. Now, once code review is done, it will start testing; it will perform the unit tests, alright. So the code has been successfully reviewed, and as you can see the color has become green; now the testing has started, and it will perform unit tests on the test cases that are in the GitHub account. So we have successfully executed the three build
jobs: that is, compiled the code, then reviewed it, and then performed testing. Alright, and this is the build pipeline, guys. So let us go back to the Jenkins dashboard, and we'll go back to our slides now. So now we have successfully performed unit tests on the test cases that were present in the GitHub account, alright. Now let us move forward and see what else you can do with Jenkins. Now, the application that we have tested can also be deployed onto the production server for release as well, alright. So now let us move forward and see what the disadvantages of this one single Jenkins server are. So there are two major disadvantages of using one single Jenkins server. First, you might require different environments for your build and test jobs, alright, so at that time one single Jenkins server cannot serve our purpose. And the second major disadvantage is, suppose you have heavier projects to build on a regular basis; so at that time one single Jenkins server simply cannot handle the load. Let us understand this with an example

Suppose you need to run web tests using Internet Explorer; at that time you need a Windows machine, but your other build job might require a Linux box, so you can't use one single Jenkins server, alright. So let us move forward and see what the solution to this problem actually is. The solution to this problem is the Jenkins distributed architecture. The Jenkins distributed architecture consists of a Jenkins master and multiple Jenkins slaves. This Jenkins master is actually used for scheduling build jobs; it also dispatches builds to the slaves for actual execution, alright. It also monitors the slaves, that is, possibly taking them online and offline as required, and it also records and presents the build results. And you can directly execute a build job on the master instance as well. Now, when we talk about Jenkins slaves, these slaves are nothing but Java executables that are present on remote machines, alright. So these slaves basically hear the requests of the Jenkins master, or you can say they perform the jobs as told by the Jenkins master. They operate on a variety of operating systems, so you can configure Jenkins to execute a particular type of build job on a particular Jenkins slave, or on a particular type of Jenkins slave, or you can actually let Jenkins pick the next available Jenkins slave, alright. Now I'll go back again to my Ubuntu box and I'll show you practically how to add a Jenkins slave. Now, over here, as you can see, there is an option called Manage Jenkins; just click over there, and when you scroll down you'll see an option called Manage Nodes. On the left-hand side there is an option called New Node; just click over there, click on Permanent Agent, and give a name to your slave (I'll just give it as slave_one), then click on OK. Over here you need to write the remote root directory, so I'll keep it as /home/edureka, and labels are not mandatory; still, if you want, you can use them. For the launch method, I want it to launch slave agents via SSH.
Right over here you need to give the IP address of the host, so let me show you the IP address of my host; this is my Jenkins slave, the machine that I will be using as the Jenkins slave. In order to check the IP address I'll type ifconfig, and this is the IP address of that machine; just copy it. Now I'll go back to my Jenkins master, and in the Host field I'll just paste that IP address. And over here you can add the credentials: to do that, just click on Add, and over here you can give the username (I'll give it as root) and the password; that's all, just click on Add, and over here select it. Now finally, save it. Now it is currently adding the slave; in order to see the logs you can click on that slave. Now it has successfully added that particular slave, so what I will do is show you the logs for that: click on the slave, and on the left-hand side you will notice an option called Log; just click over there and it will give you the output. So as you can see, the agent has successfully connected and it is online right now. Now what I will do is go to my Jenkins slave and I'll show you in /home/edureka that it has been added. Let me first clear my terminal; now I'll show you the contents of /home/edureka, and as you can see, we have successfully added slave.jar, which means we have successfully added the Jenkins slave to our Jenkins master. Thank you for attending today's session; if you have any questions or any doubts, please write them down in your chat box. Any questions, guys? Okay, so we have a question from Bennett; he's asking what the difference between Puppet and Jenkins is. Bennett, the basic difference between Puppet and Jenkins is that Jenkins is a continuous integration tool, but when we talk about Puppet, it is a configuration management tool. Let us understand this with an example: suppose you have code in your GitHub repository; the Jenkins server will pull that code and build it, and that built application will be
deployed onto the test server for testing. So that test server will require a certain configuration, or you can say environment, in order to execute the tests. That can be a LAMP stack, that is Linux, Apache, MySQL, and PHP, or it can be a WAMP stack, that is Windows, Apache, MySQL, and PHP, or it can even be just Apache Tomcat, anything, alright. So in order to provide that environment we use configuration management tools like Puppet, Chef, etc. The same goes for your production environment as well: if you want to configure your production servers, what do you need? You need tools

like Puppet, Chef, Ansible, etcetera to configure those servers, alright. So I hope you're clear, Bennett, on the answer to your question. Any questions? Any other questions? Okay, I see no other questions; everyone says everything's clear, and so does Jessica. Alright, thank you guys. So this video will be uploaded into your LMS; you can go through it, and if you have any doubts you can ask our 24/7 support team, or you can also bring your doubts to the next classes. Kindly provide us with your important feedback, as that will help us to improve the quality of the education that we provide. Thank you, and have a great day. I hope you enjoyed listening to this video; please be kind enough to like it, and you can comment with any of your doubts and queries and we will reply to them at the earliest. Do look out for more videos in our playlist, and subscribe to our Edureka channel to learn more. Happy learning!
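The three build jobs walked through above boil down to three Maven invocations run in pipeline order. As a rough sketch only (the `metrics` profile name and the job names are assumptions taken from the demo, and `run_pipeline` is a made-up helper, not a Jenkins API):

```python
# Hypothetical sketch of the three chained Jenkins build jobs from the demo,
# expressed as the Maven command each job invokes. The 'metrics' profile and
# the job names are assumptions; adapt them to your own pom.xml and jobs.
import subprocess

PIPELINE = [
    ("compile",     ["mvn", "compile"]),               # build job 1: compile the code
    ("code_review", ["mvn", "-Pmetrics", "pmd:pmd"]),  # build job 2: PMD analysis report
    ("test",        ["mvn", "test"]),                  # build job 3: Maven test phase
]

def run_pipeline(dry_run=True):
    """Run the jobs in pipeline order; with dry_run=True, just report the
    commands instead of executing Maven."""
    executed = []
    for name, cmd in PIPELINE:
        executed.append(" ".join(cmd))
        if not dry_run:
            # check=True stops the chain on failure, like an upstream job
            # failing without "trigger even if the build fails" enabled.
            subprocess.run(cmd, check=True)
    return executed
```

Calling `run_pipeline()` with `dry_run` left on simply lists the three commands in order; against a real Maven project you would flip `dry_run` off.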


GoPro MAX vs HERO 8 vs Insta360 ONE X!

Hey guys, I'm Ben from Authentech, and this is my comparison of the GoPro Max vs the GoPro Hero 8 Black vs the Insta360 One X. Now, these are some of the very best action cameras out right now, and at the end of this video you should have a better idea of the pros and cons of each camera, and, if you're thinking about getting one, which one might be the best fit for you. I'm a little sick right now, so forgive me for that. No, this video is not sponsored by anyone, and as always, all product links will be down below; find me on Instagram for all my behind-the-scenes, and let's jump right in. Here are the key questions and categories I'll cover today: pricing, design, footage, workflow, and most bang for the buck, which one is for you. And it sort of goes without saying that two of these action cameras shoot in 360 while the Hero 8 shoots flat, so some of these tests are a little apples-to-oranges, but there's of course lots of overlap as well. Today's video is all about producing a final flat video, not shooting 360 for VR; these 360 cameras are all about over-capturing the scene and then reframing afterwards. So, a quick pricing breakdown: the Max is currently $500, the Hero 8 is $400, and the Insta360 One X is $400 as well. On to design, durability, and user interface. Now, I prefer the form factor of the One X best: it's slim, it slips easily into a pocket, and it's the most comfortable to hold single-handed. The Max is a little boxy and less ergonomic, and trying to keep your fingers hidden from the lenses can be difficult. Plus, the One X has a 1/4-20 tripod mount built right into the bottom, which is the best; however, both the Max and the Hero 8 have these little folding fingers on the bottom, which is better than nothing, just not as universal or convenient as the tripod mount, in my opinion. I really like the screen on the Max over Insta360's LCD screen, which can be difficult to see in bright sunlight. The Max is easier and faster for changing settings and shooting modes while on the fly, plus we can use it as a
viewfinder when in Hero Mode, plus it's great to play back your photos and video right there, with no need to connect to your phone. Another big deal is that the waterproofing on the Max and the Hero 8 is built right in; it's a huge win for GoPro over the Insta360, which needs a case to protect it. Another bonus point to the Max: they include these two clear lens caps to protect both lenses when shooting more intense action shots. Now, I say this is a big deal because I've personally scratched the lens on my One X in the past, and that's not easily replaceable; the only way to protect it is the Venture Case, which adds weight and bulk. Instead, these clear lens caps stick on tight, and it's a dead-simple solution to this issue. Plus, GoPro also included these two lens caps for travel; nice little touches I appreciate. I shot a quick footage comparison test with and without the lens caps on, and here are the results: basically, it looks like it softens the image a little bit, and maybe some specks of dust can be visible, but it's not terrible and a nice option to have. Now, on to comparing footage, both photos and video. Hands down, if you need the highest resolution, the best quality 4K footage, well, you'll need the Hero 8. It looks crazy sharp and crisp compared to the 360 cameras, and of course it has more shooting modes, resolutions, and frame-rate options over the 360 cams. For both the Max and the One X, even though they combine two different lenses into this massively large resolution canvas, after reframing and exporting, your video downscales to 1080p maximum. For 1080p they both look really good and are of course widely usable for most people. My wish, though, is that a 360 camera, let's say the Max, when shooting flat like in Hero Mode, could export 4K video, but sadly it cannot. So what about video quality between the Max and the One X? Well, the GoPro footage looks much more vivid and punchy, with saturated colors and contrast. Now, this will definitely come down to personal preference: some people like the flatter image on
the One X; it's maybe more natural-looking and easier to color grade after the fact. Others will prefer the vivid colors on the GoPros, with no editing needed, and I sort of lean this way as of now. Now, I heard Insta360 is releasing some upcoming updates to enhance visuals, probably for this exact reason, but they're not out at the time of this video. Stabilization is huge for these cameras, and all three look pretty rock-steady smooth; just look at the wobble in the selfie stick to see how much EIS is taking place, and it's pretty incredible. When I start jogging they separate out just a little bit, and I think my favorite is the GoPro Max, with the One X still looking really good, though if we analyze the edges we can notice a little bit of camera shake and jittering. And the Hero 8 looks smooth as well, but it's so crazy to look at the

360 cameras as I flip the selfie stick over and under, and they keep the ground perfectly level, while a flat action camera like the Hero can't do this yet without editing after the fact. Now, GoPro has a horizon lock feature for the Hero 8, but it's not built into the camera yet, which is kind of stupid, and currently it must be used through their GoPro app. One thing to mention: the crushed blacks on the Max are a little too much for me, we're losing too much detail, and they need to back that off just a tad. But again, as for sharpness and clarity between the Max and the One X, well, I think the Max wins it here; the One X is just too soft and a little mushy-looking. This is an audio test on the GoPro Max. This is audio on the GoPro Hero 8 Black. And this is audio on the Insta360 One X. How does the audio sound? Audio test, 1, 2, 3, 4, let's live authentic. How does the audio sound? Audio test, 1, 2, 3, 4, let's live authentic. How does the audio sound? Audio test, 1, 2, 3, 4, let's live authentic. This is a far-away audio test on the GoPro Max; how does the audio sound? This is an audio test on the Insta360 One X; it's super windy down here at the lake, how does the audio sound? It's super windy down here at the lake, how does the audio sound? It's super windy down here at the lake, how does the audio sound? A very interesting result on those three different audio tests, and I'm actually surprised that the One X sounded the loudest, but it also was boosting everything, so the wind noise was a little bit more apparent. The Hero 8 sounded good, and even though the Max sounded pretty good as well, it has six built-in microphones, so it could probably win for the best directional stereo audio. Extended far away on that selfie stick, if I boost the audio levels in editing: this is a far-away audio test on the GoPro Max, how does the audio sound? This is an audio test on the Insta360 One X. The Max sounds crazy impressive at that far distance, almost like I'm wearing a wireless lav, and then on the One X we can hear the background noise
is boosted, and it's not terrible, just not as good for my taste. There is no slow motion on the Max, and this was a major bummer to a lot of people. Now, of course, we can shoot up to 240 fps at 1080p on the Hero 8, which wins in this test for slow-mo, but the One X can also shoot up to 100 frames per second in 3K with a 360 field of view, and you're able to capture some insanely cool shots like this when you're shooting in every direction, so those are extra points going to the One X. This was sort of an eye-opener to me, and maybe it'll help some of you out there as well: I originally thought that the Hero 8 was like here, and the Max was everything of the Hero 8 plus so much more, way up here, but that's far from the truth. Each lens on the Max is recording much less quality than the Hero 8. For example, the Max can shoot in Hero Mode, so it's using only one lens and you just point and aim like any action camera, but that's not in 4K resolution: it maxes out at 1080p, or 1440 in 4:3 ratio, at 60 fps, nothing higher. My dream would be to have a Max that has all the same features of the Hero 8, but then, for maybe a hundred bucks more, add that second lens to unlock a lot more fun, unique shooting angles: over-capture the world and reframe your shots later. But that's not here yet; I'm guessing the cost might be a little bit too high on a product like that, I'm not sure. One big feature where the One X wins for me is the invisible selfie stick. Now, I shot a comparison of this as well using the same thin pole, and while the GoPro Max looks okay, the One X can make it look just like it's hovering in midair, which produces these incredible drone or crane-jib shots of sweeping landscapes or fast action scenes. With this test I also wanted to see which 360 camera did a better job of stitching the two lenses together, and same again: I think the One X wins for stitching, and this is a pretty major deal. Hiding the stitch line is massively important for maintaining the magic of that hovering impossible shot. Big points to Insta360
here. I have to step back and appreciate this technology overall. On the Max, for example, each fisheye lens is recording 194 degrees, I believe; times 2, that's 388 degrees, which is 28 degrees more than a 360-degree sphere requires, so 28 divided by 2 is about 14

degrees of extra field of view on each lens for overlapping and blending together, all of this in real time; it's very impressive technology that I appreciate. The GoPro Max has this cool PowerPano mode: basically, one shot captures this ultra-wide panoramic image, no stitching needed, and this is a nice feature to have. All three cameras can shoot hyperlapses, which are very cool and fun to experiment with. The One X goes about it differently by recommending you shoot in normal video mode, then in their app scrub the timeline to select and speed up fast-moving shots, then create sections of slowed-down normal speed to highlight key scenes. Now, I wish they also had an auto hyperlapse mode like the others; let me show you that. The Max has the hyperlapse mode built right in (GoPro calls it TimeWarp), and the Max's export resolution is 1080p, with multiple field-of-view options and multiple speed multipliers, or that auto mode, which is my favorite: that's where GoPro automatically sets the speed based on a few variables. Plus there's this cool real-time button you can press on the screen mid-recording, so if there's a section you want to record at normal speed, press it, then press it again to ramp back up into auto speed mode; very cool results. The Hero 8, I've covered that in the past: it can record these hyperlapses up to 4K resolution, and it also has multiple fields of view and speeds, or auto speed mode, which I like. Clearly the Hero 8 again wins for quality of the image, but the other two open up that world of changing your viewer's perspective to a new angle, and sometimes when you're recording you just don't know what you'll capture, so I love the idea of over-capturing the world and reframing later. I shot a quick underwater test, and I used the GoPro Max's naked body because its waterproofing is built in, and even though it's not really meant to be used underwater without a case, the results were pretty impressive. Now, of course, we can see the stitch line, but it produces some neat and usable shots. Next I tried the One X
underwater with its Venture Case, and oddly enough it does not look as cool or as good as I thought it would; that transition line is way too distracting. I'm not sure if their underwater dive case would have greatly improved it; I'm guessing it would have, but keep all this in mind: don't use the Venture Case for underwater shots. Honestly, I wish the One X had waterproofing built right in like the GoPro Max; hopefully we can see this in their next model. I snapped a few photos, and I'd say the Hero 8 wins again for sharpness and clarity, but the Max looks like a good second place, and of course it can shoot a much wider field of view than the Hero 8; the One X just looks flat and natural, but a bit too soft. The same goes again for low light: all these sensors are just so tiny, so unless you're shooting on a tripod with long exposures, they won't yield great results in low-light video. And again, the 360 cameras will allow you these crazy wide-angle shots that can't be captured any other way, and it's sort of fun and addicting to over-capture your scene and reframe later; however, there's going to be a quality loss, and here in the low light, well, I guess all three look pretty grainy, noisy, and soft. The GoPro Max might win in terms of clarity, but here's what you can expect from all three. Now, just as important, or possibly more important, than the footage itself is the workflow of editing your shots. At the end of the day, with standard flat footage like on the Hero 8 Black, I can simply copy those files right over to my computer, drop them into Final Cut for editing, and go right from there. As for the 360 cams, well, they add a major middle step to that workflow, and that's basically reframing your shots: converting that 360 footage into flat footage like 16:9, and that middle step can make or break the whole process. A lot of it comes down to the mobile app, as this is probably the use case for 80% of people out there; it's simply faster and more convenient to simply edit straight from your camera over to
your phone wherever you're located, and this is honestly how I edit most of my 360 shots; their mobile apps are much more efficient than their desktop apps, and I'll show you that in a minute. So which mobile app is better, GoPro or Insta360? Well, I think Insta360 wins it here; let me show you. Firstly, the GoPro app is not terrible and actually has some sweet features I wish Insta360 adopted: for example, when scrubbing through your footage, the timeline scrubbing is much smoother than Insta360's, and GoPro has a nice keyframing implementation, even with transitions like ease-in/out points or jump cuts, and we can easily pinch, zoom, and rotate. However, I really need a horizon lock feature; I couldn't find this anywhere, and this was an issue I ran into constantly, needing to level the horizon while editing, and it got really tedious and annoying. With the Insta360, they auto-lock the horizon by default, so it saves me a lot of time

Speaking of editing efficiency, even though keyframe editing can be nice, it's much slower, and I prefer the viewfinder, or over-capture, mode: I simply move my phone to record the angle I want in real time. Again, Insta360's app wins on this one, as in the GoPro's over-capture mode there's no easy way of pausing mid-clip to reposition the camera; instead, the only visible button on screen is Stop, and if I press that, well, it auto-exports the clip, with no way of going back to continue. Unlike Insta360's, which does it much better: I can start and stop again and again on the same clip, plus it saves those little sections on the timeline, so we can mix and match keyframing, smart tracking, and viewfinder mode all in one clip. Speaking of which, Insta360 has object tracking, so I can just tap and hold and it will motion-track for you; this is a great feature to have. Another glitch I ran into on the Max was Wi-Fi and firmware-updating issues. Now, I'm using the latest iPhone, and I was constantly running into trouble trying to connect the Max to my phone; I tried everything I could think of, and it was totally broken. I then had to manually download the firmware file from GoPro's website to my computer, copy it to the SD card, and pop it into the camera, and then the Max was able to successfully update the firmware; this has fixed the Wi-Fi issues so far. Another quirk I found in the GoPro app: after editing and exporting a photo or video, I have to save it again to get those files into my camera roll, which is just super inefficient. Plus, GoPro is running banner ads in their app trying to sell me on their cloud storage; I don't love that. And for a quick overview of their desktop apps: I again like the GoPro one for smooth timeline scrubbing and video playback, with the same keyframing options here, but man, it's so annoying constantly needing to level the horizon. I did find this menu setting for mouse control, "traditional if horizon leveling is disabled", yet when I'm shooting 360 on the Max it won't let me turn on horizon
leveling in the settings, and the horizon keeps getting tilted; I'm not sure if I'm missing something here, but it honestly shouldn't be this complicated, so an extra ten points thrown to Insta360 for just simply working as it should out of the box. Editing in the GoPro Player app honestly takes me forever for just one clip, so I personally will probably never use it. The Insta360 Studio desktop app is a little bit more powerful with customization; it's not as pretty in design, and the timeline scrubbing is non-existent, which is very bothersome, but there's a lot more utility under the hood, and I can honestly reframe, keyframe, rinse and repeat over and over much faster in here with keyboard shortcuts than in GoPro's desktop app. So which one is best? Well, honestly, if you're thinking you just need a simple wide-angle, high-quality action camera with lots of shooting modes, resolutions, and frame rates, well, then the Hero 8 might be your best bet; it's simple and straightforward. If, however, you're looking for fresh, unique angles and you love the idea of over-capturing the world and reframing later, well, then 360 will open up a world of fun shooting possibilities for you. As for which one, well, I sort of lean in both directions. I really like the overall photo and video quality of the GoPro Max better, even though its workflow is not as good as the One X's; I really like those protective lenses, its built-in waterproofing, vivid colors and contrast, great stabilization, great stereo audio quality, and a lot more. But then again, you are paying around $100 more for it, which is where, for some people, if you're on a tighter budget and you don't have spare cash just lying around, well, that's when I'd say go with the Insta360 One X. It still has some awesome potential and features, its editing workflow is better and faster, and it still produces incredible footage, with smooth stabilization, better stitching, and a better invisible selfie stick; now that I'm talking about it, it's a sweet deal for what you get and possibly more
bang for the buck, especially if you can find it on sale, and I heard some killer deals are coming up this holiday season; as always, use my links down below to check out those special deals. Also remember the One X has been out for over a year now, so well done to Insta360 for creating a camera that can still stand top-tier with the rest, and, well, I don't know when their next action camera will be coming out, maybe the 2X or whatever, but if you can wait a while, well, then maybe you should, and I'm excited to see how cool their next camera is. Thank you guys so much for watching, and if you're new around here, consider sticking around for a lot more tech videos like this, posted every week

until next time, let's live authentic





I Am CSE: Chandrakana Nandi

My name is Chandrakana Nandi, and I'm a fifth-year graduate student in the Programming Languages and Software Engineering Lab in the Allen School. Computer science itself has been an incredibly successful field, and within computer science, I found programming languages and compilers to be extremely exciting, because everyone who works in computer science, in some way, uses compilers and programming languages. Furthermore, I was really interested in computational fabrication, because there has been this new revolution of democratized fabrication, where a lot of people can now actually afford desktop manufacturing devices, like 3D printers. Manufacturing is not just meant for industrial purposes anymore. To that end, I have built a tool called Reincarnate. In my project, I used programming languages and compiler techniques to make desktop manufacturing devices like 3D printers more reliable and easier for end users to use. Imagine that you want to print a simple candle holder. What you can do is go to these online forums and download preexisting models that are uploaded by other users. Unfortunately, if you actually look at the file that I just downloaded, you will see that it is not in a very human-readable form. Because of this, it is actually not that hard to print this directly, but modifying this design to customize it is really difficult. So what Reincarnate does is automatically infer a higher-level program that represents the same model: instead of that huge unreadable format, the same model is now represented in a six-line program. Let's say that instead of having six sides, you just want to make it three sides: you change the 6 to a 3, you re-render it, and you can see that you have a different shape. After making this change, if you're happy with the design, you can go ahead and print it. The way that would work is you would export it back into a mesh, and then we use a slicer to generate the
G-code for it And once you’ve done that, you can put it in a USB key, and you can go to the Fab Lab and print it I think that making fabrication more accessible and usable can have a huge impact in our society Right now we are printing the model that we modified off the new model that Reincarnate allowed us to modify So here’s an example of what we started with and what we get after making some changes to it My name is Chandrakana Nandi, and I am CSE
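The "change the 6 to a 3" idea, where one parameter in an inferred program controls the whole shape, can be illustrated with a small hypothetical Python sketch. The function name, radius, and Python itself are illustrative assumptions; Reincarnate actually infers CAD programs, not Python:

```python
import math

def candle_holder_profile(sides: int, radius: float = 20.0):
    """Return the 2D outline (vertex list) of a regular-polygon cross-section.

    Changing `sides` re-parameterizes the whole shape, the way editing the
    6 in the inferred six-line program turns a hexagonal candle holder
    into a triangular one.
    """
    return [
        (radius * math.cos(2 * math.pi * k / sides),
         radius * math.sin(2 * math.pi * k / sides))
        for k in range(sides)
    ]

hexagon = candle_holder_profile(6)   # the downloaded design
triangle = candle_holder_profile(3)  # the customized design
```

In a real pipeline this outline would be extruded to a mesh, sliced, and exported as G-code for the printer.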


Can You Sell a Home with Mold? w/mold expert Zach Duffey I Seattle Real Estate Podcast

You’re listening to the Seattle Real Estate Podcast. Hey everybody, I’m Sean Reynolds, the owner of Summit Properties NW and Reynolds & Kline Appraisal, and your host of this episode of the Seattle Real Estate Podcast. My next guest began his career as a home inspector, where he learned a great deal about what goes on behind the walls of houses both old and new. He went on to manage a mold remediation franchise and saw a need for more honest service providers in the industry. After becoming a certified mold professional, he started Mold Mentor, with a mission to combine unbeatable service with sensible and effective solutions to restore air quality and prevent moisture issues in homes and buildings. Welcome, Mr. Zach Duffey, to the Seattle Real Estate Podcast. Thanks so much for coming on. Thanks for having me, and for working through the technical difficulties to get to this point. I am in the field, as you already know. Yeah, you’re a busy guy, you’re out in the field in between inspections. You told me you’re in Columbus, Ohio, correct? That’s right. And you just had some major flooding, and so your services are in need right now. They are, yep. So how many inspections will you do in a day? What kind of stuff do you typically work on after a flood like that? I can actually have multiple jobs overlapping, where I’m drying out a basement with the equipment, and then I have to run over to another location and do an inspection. So when you say how many inspections will I do in a day, I could be doing three to four inspections while I have two separate jobs going on. Right, so it’s a delicate balancing act. Yeah. So you’re hopping right now, in other words. Absolutely. Right, and then you’re hopping at home, because when you go home you’ve got five kids. Yeah.
for work, and I said, hey, it’s a workday. Daddy’s got to go to work, Daddy’s gotta pay the bills. All right, yep, that’s awesome. Okay, so as you and I talked about, I own a real estate brokerage and an appraisal company, so I’m going to jump right into some questions that both real estate brokers and probably appraisers have concerning houses with mold. What do you do when a house you’re listing has mold? As a mold professional, what kind of things would you tell a real estate broker who’s got a house and thinks, oh man, I think it’s got mold, what do I do? Well, that can go a number of directions. This is a pre-listing house you’re talking about? Yeah, maybe a real estate broker knows he’s going to have a listing come up, but he’s pretty sure the house has mold. Should he reach out to a mold professional? Should he try and fix it himself? What are his options and what should he be thinking? I would start by saying I think it’s totally appropriate on the residential disclosure form just to state, you know, we know of some minor moisture intrusions, and some of it’s been accompanied by some discoloration. I don’t think you ever need to disclose on a residential property form that you have mold unless it’s been tested by a laboratory, because only a laboratory can really determine if a microorganism is mold or not; we have to send a sample to them. That being said, we can start there and go one of two ways. If an agent is concerned about a future listing having mold, he can have it tested to confirm or rule out a mold problem. Or you could do nothing and just disclose moisture intrusion, maybe some discoloration. Okay, okay. All right, so tested or not tested. Yeah. So how often do you take the phone call and you’re like, okay, it sounds like you’ve got a major mold problem, or you’ve got a minor mold problem? Can you identify that based on what somebody’s telling you? Or do you have to really rip into things to know what you’re dealing with in a house? It depends on if the customer is paranoid, I guess, for lack of other descriptive words, or if they are of sound mind and health. I can have photos texted to me and say, look, that’s a shower that needs some deep cleaning, some good scrubbing; I don’t think you have an issue, after, of course, I ask a few other questions. And they can be happy with that: thank you so much, you know, I never had to go to the house, it was a minor issue. The EPA recommends that if a mold contamination is less than 10 square feet of surface area, it’s likely the homeowner can deal with it without the help of a professional. So my first question is, how much are we seeing, how much area does this cover? Now, if it could be behind walls, then we can’t see with the naked eye how much surface area could be covered.

So, yeah, I get the call all the time: could this be serious or not? And I try and feel out the scope of possible mold growth. Most times I end up having to go out, but a lot of times I can answer questions over the phone and solve problems. Right, okay. How often does somebody send you a picture, and we’ve all walked into that house where an entire wall is covered in mold. What do you recommend in that situation where there’s obviously some massive mold? Should somebody even be walking through that room? I’ll walk through a room like that without even a mask on sometimes, because I know I’m only going to be walking through for 30 seconds, a couple minutes at most, with a flashlight, and I’m not allergic, so I’m not going to have allergic reactions to that. In order to have toxigenic reactions, or become symptomatic and poisoned from mold, we have to live in that environment for prolonged amounts of time, months or even years. So, yeah, I’m not worried about quick walkthroughs. For the person living there, I would say that what you just described, Sean, sounds like an urgent situation for the inhabitants. But what you’re saying is that for a real estate broker, and I’ve had brokers call me and say, hey, there’s mold in this house, should I even be inside, because I think there’s such a push in the media towards: be very, very afraid of black mold, because it’s going to kill you on sight, you know what I mean? Yeah. It sounds like, from what you’re describing, it’s the time of exposure to mold that you need to be worried about; not necessarily running in as a real estate broker, but you need to be concerned for the inhabitants of the home more than anything, the people who are going to live there. Right, yeah. Just walking through that house, I mean, as a broker, you’ll know if you’re starting to experience reactions. You know your history: what are your reactions? If you start experiencing those, it’s time to get out, right? If you’re not, I would say that your body is handling whatever is in that house just fine, and you’re going to leave without any reactions whatsoever. Right. And so some people have an allergic reaction to mold and other people do not, is that correct? Yeah, absolutely. Like myself, for example: I can go into these moldy basements where there’s slugs in the basement, there’s plant life, there’s algae, and there’s mold everywhere and it stinks, and my gag reflex will catch up to me at some point, but I don’t react to it in any other way. And I do have outdoor allergies, like some sort of grass, seasonal stuff, but mold’s not one of them. Okay, okay. So different people react differently. If you know that you’re somebody who reacts to mold, then it’s probably in your best interest to stay away and let somebody else handle it who is less allergic. Yeah, definitely. So let’s go back to: all right, a real estate broker or seller finds a little bit of mold in their bathroom. It’s under a 10-square-foot area, a pretty small area. What kind of things can they do if they wanted to take care of that themselves? If you have said, hey, based on this photo, this doesn’t appear to be an extreme problem, what should somebody do to get rid of that kind of smaller, contained mold problem? Yeah, well, if it’s a shower, bleach can be used. For one, bleach is not engineered for mold disinfecting, but it can be effective to clean showers and get discoloration off. White vinegar can change the alkalinity of the surface and kill mold spores; white vinegar has been lab tested to be able to kill mold spores in small amounts. So a couple of common things, but really, a good scrub brush, you know?
So showers need routine cleaning, basic cleaning. If you did a quick Google search, you could probably figure out what solution you need, and then just get in there and clean it if it’s a small enough project. Right, absolutely. Yeah, okay, sounds good. I will add, a lot of times in bathrooms, the mold that we’re seeing is related to the moisture. Inside of a shower, maybe we need to start using a squeegee: after we get out of the shower, we squeegee the walls. That’s how we keep glass sliding doors in the shower from getting splotchy, a squeegee. That’s extra work, and most people aren’t going to do that. Now, if it’s in the bathroom but outside the shower, like on the drywall or on the ceiling, that’s probably a ventilation issue. That’s surface mold growth from moisture inside the bathroom; it’s not an indication that you have mold growing behind the walls, or that moisture is getting behind the walls or even into the attic. It’s that you’re filling up that bathroom with steam, and maybe we need to check that bathroom vent fan in the ceiling. Maybe it’s old or tired, or it’s blocked up in the attic and not venting properly. And so we have too much moisture in the bathroom for too long a time, and we start to grow mold. In that case, usually we can hand clean that with soap and water, and away you go, easy fix. Right. So everything you’re saying is: mold needs moisture to live, correct? Exactly right. Yeah, mold needs food and water. Right, so you get rid of the food and water and you’ve kind of got the problem solved. That’s right. And homes are built of food sources for mold, which is dead plant material; in a home, that’s the wood framing and the paper on the drywall and things like that. We can’t control the food; our houses are constructed of food sources for mold. All we can control is the moisture. At that point it’s important to keep the home dry with proper ventilation, you know, air conditioning in the summer, proper insulation, things like that, buttoned up and dry. Right. And that’s one of the things I hear about all the time: the difference between the older-style homes that are more vented out, because they’re not quite as tight airflow-wise, versus the newer homes that are very tight as far as insulation, you know, windows all wrapped up. If you don’t have airflow, you could have a potential mold problem in a new home. So it’s almost counterintuitive to where you would think mold grows. Is that roughly correct?
Yeah, airflow is very important. Airflow is how stagnant, moist air that might have a little bit of humidity gets moved around so that it can be conditioned, whether by the air conditioning system through the furnace, or by a dehumidifier, or both. All basements should have a dehumidifier as a backup. They’re effortless, they’re easy. These days you can buy a dehumidifier for $300 or less. A 70-pint moisture-removal rating is what you want; they’re rated on how many pints of moisture the unit can remove in a day, and I recommend a 70-pint minimum. You set that up in your basement and dial in your set point for relative humidity. A healthy range is 35 to 50% relative humidity, so I set mine in my basement at 45%. The unit samples the air on a regular basis, kicks on as needed, and then shuts itself off. Okay, can you buy it at Home Depot, Walmart, Costco? Yep, okay. And is it the bigger the unit, the better, as far as dehumidifiers go? So the rating is in pints per day of moisture removal, and that doesn’t change the size of the unit. Usually the housing looks the same; it might be a little heavier because the compressor, the condenser, is more sizeable. But you’re not going to go by how big the unit is; you’re going to look at the rating and see how many pints per day it removes, 35, 50, 70, and I say go to 70, spend the money on 70. It’s portable, you can take it with you to your next house. It doesn’t run continuously; it can, but it’s not going to just sit there and run. Think of it like a backup system: if it gets so humid during a summer day that your air conditioner is undersized, or old, or for whatever reason not keeping up with the relative humidity in the basement, which, by the way, 60% relative humidity is the threshold for mold growth. We have an environment conducive to mold growth at 60%, and a dehumidifier as a backup can help keep that in a healthy range when an air conditioner can’t keep up. Okay. So you’re saying it’s a great backup system to have, just a dehumidifier in a basement area where there’s maybe some moisture coming in up through the floor, or in through the walls, something along those lines. Yes, I’m of the school of thought that every basement should have one. I did mold remediation on a new build where there was mold all over the floor joists; it was an unfinished basement. The new build had sat for a year and a half on the Parade of Homes here in Central Ohio, then went on the market, got into contract, and the home inspector found mold. The home builder was in denial; I had to walk through and demonstrate where the mold growth was, and he was scratching his head because it was a new build, never been used.
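The set-point behavior Zach describes, a unit that samples the air, kicks on as needed, and shuts itself off, can be sketched as a simple on/off controller. This is a hypothetical illustration of the logic, not any manufacturer's firmware; the 2% dead band is an assumption added to avoid rapid cycling:

```python
HEALTHY_LOW, HEALTHY_HIGH = 35.0, 50.0  # healthy indoor RH range (%)
MOLD_THRESHOLD = 60.0                   # sustained RH above this supports mold growth

def dehumidifier_state(rh: float, setpoint: float = 45.0,
                       running: bool = False, band: float = 2.0) -> bool:
    """Decide whether the unit should run, given the sampled relative humidity.

    Kick on above the set point, shut off once RH falls a little below it;
    the small dead band keeps the compressor from cycling on every reading.
    """
    if rh > setpoint + band:
        return True          # too humid: start (or keep) drying
    if rh < setpoint - band:
        return False         # dry enough: shut off
    return running           # inside the dead band: keep current state
```

With the 45% set point from the interview, a 55% reading turns the unit on and a 40% reading turns it off, keeping the basement well under the 60% mold threshold.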

There should have been a properly sized dehumidifier dropped into that basement after it was built. Okay, so it had in excess of 60% humidity, didn’t have airflow, and mold was just sitting down there doing its thing. That’s right, exactly. Yeah. So mold grows under humid conditions, and it’s a sneaky kind of mold. It’s not something that shows up as black on the wall; you have to be trained to discover the type of mold that grows under humidity. You have to point your flashlight in the right direction to try and cast shadows from this mold, and, well, I could go on. By contrast, with liquid-water mold growth, you’re going to see it: it shows up in a localized spot, black and slimy, but not the humidity kind. So we have to mind the water source for mold growth, depending on its state, whether it’s a gas or a liquid. Okay. And I saw on your website some pointers on how to keep moisture out of your home, since moisture is such a big thing with mold. We’ve talked about once moisture is already inside the home; let’s talk a little about how to keep moisture out of the home, from the outside perspective. So, we’ll start with the basic framing. Your walls should be insulated. It’s good to have insulation in the walls because you’re putting up a barrier against condensation. If your walls are uninsulated and it’s freezing cold outside during the freezing months, and the inside is being heated and people are breathing and living and showering, you’re filling the warm air inside up with moisture, which is going to be attracted to the cold, uninsulated exterior walls. You’re going to get condensation, little water droplets on the walls, and over time that will damage the paint, it will grow mold, things like that. So insulation is first, insulation for the exterior walls and of course the attic. The next would be ventilation for the attic. The attic needs an intake and an output; the current building ventilation today is soffit vent to ridge vent, which is the most desirable ventilation system for an attic. The intake air is at the lowest point of the roof, which is the soffit, and the output air is at the highest point of the roof, which is the ridge vent. So we have the best passive vent system at that point. Okay. When it comes to a basement, yeah, the dehumidifier is very important. A finished basement can be tricky. We’re finishing basements with studs against what? Well, basement walls breathe moisture, and that’s normal, because there’s wet earth on the other side of that foundation wall, and we can’t control that. In an unfinished basement, as long as there’s air moving down there, any moisture breathing in from those foundation walls is being dealt with by the air conditioner. But once we trap that moisture in by framing that wall, that can be problematic. So if we’re going to finish the basement, we should have a vapor barrier on the foundation wall, and a vapor barrier can be as simple as a piece of six-mil Visqueen plastic that covers that wall before the studs go in place. If you want to take a step up, you can use extruded polystyrene; it looks like a glorified styrofoam board. It comes in big four-foot by eight-foot sheets, one to two inches thick; you can layer your foundation with that stuff, tape the seams, and now you’ve got a vapor barrier and an R-value of insulation included in one job, and then you can frame your studs right up against that. So that’s how we control moisture in a basement or crawlspace. Crawlspaces get neglected very often because they don’t get the concrete slab that a basement gets. Basements get three inches of
concrete, and that concrete you’re walking on in the basement acts as a sort of vapor barrier. Even though it can still breathe moisture like a foundation wall, it acts as a sort of vapor barrier, but a lot of times crawlspaces are finished with just gravel on top of the dirt, and there’s no vapor barrier. So crawlspaces should be looked at; there should be a piece of plastic covering all ground space in a crawlspace. Yeah, the vapor barrier, a Visqueen vapor barrier, right? Yeah. For those dealing with a crawlspace who are curious about more information, just Google crawl space encapsulation, or do-it-yourself crawl space encapsulation. They can be very costly, but they’re very effective at keeping moisture and odor down in a crawlspace to protect the rest of the home. And so those are the main touch points, I think: ventilation, insulation, and then of course the basement and the crawlspace. What about major drainage projects around the home? Because in Seattle, as you probably well know, it rains like nine months out of the year, so our ground is saturated with water, and we see a lot of drainage-type projects go on, because people are trying to keep that water out of the home. Yeah. Are you talking about perimeter drains inside of a basement that lead to a sump pump? That’s the interior, and then the other is the exterior, where you’ve got French drains outside, correct? Yeah, you can do that. It’s not as common here in Central Ohio to have French drains installed on the exterior of the home to protect the basement from water intrusion; we see a lot of perimeter drains and sump pumps being installed in basements. The talk out here in Ohio is digging up the dirt along the foundation on the exterior of the home in order to waterproof the walls with some sort of a membrane. And in that case, I guess they might go an extra measure and do a French drain, but I really don’t hear much talk about that when I talk to other companies and customers who have had that work done. No, the perimeter drains are going in inside the basements out here, in large part. Okay, interesting, because from what I know, and from my experience as an appraiser, I see a lot of exterior-type projects. And I think it’s just
because we get, you know, such extended periods of rain, where it doesn’t come down hard, but the ground is saturated, and we do our best to keep that water from getting into the basement foundation walls. Gotcha, yeah. So tell me, Zach, how did you get into the mold business? I got my start in the mold industry as a home inspector. It was a relationship developed while I was inspecting homes; a friend was a local franchisee of a national company called Green Home Solutions. That’s where I got to go out to Pittsburgh for my formal training, my field training, from the franchise headquarters, and I learned a lot out there. You know, this is kind of funny, but it hit me when I started my business that I had spent seven years making wine and mead in my basement. And I loved it. I had multiple recipes going, and people loved sipping on the honey wine I was making; it’s called mead. But my favorite part about it was the yeast. I read books on yeast, and the yeast has the most impact on the overall flavor; it’s not necessarily what ingredients you’re using, it’s how the yeast reacts with those ingredients. I became very familiar with that microorganism when I researched yeast, and it wasn’t until after I started my business that I realized why it came so naturally to learn about the mold spore and about how mold reacts
in different environments, and about how to control mold. But you know what, once I left my position with that franchisee, I went on to get my real estate license, and I graduated in December of 2017. By January of 2018, I had already gotten enough phone calls from people who had my number, asking for opinions on mold in their home, that I never practiced real estate; I started Mold Mentor at that point. Interesting. So you’re like, hey, I’ve already got all these connections; instead of trying to crank up a new business plan, which is being a real estate agent, you decided, hey, let’s run with this and keep it going, and at that point, that’s when you started Mold Mentor, is that right? That’s right, you got it. Okay. What does a standard day look like, if there ever is one for somebody who’s self-employed? What’s your average day look like? My average day would start with either a job or an investigation. An investigation is really "I think I’ve got a problem, and I’ll come out and take a look," and those appointments range from five minutes to 45 minutes, depending on how much they want to talk. A job would start with me showing up, setting up containment for the job we’re about to do, and bringing equipment. If I try to separate my days from investigations, I like to do Tuesday and Thursday, for example: line up all my appointments for looking at jobs and writing quotes on the same day, and I could have a small car for that instead of this big work van. Like a commuter car, right? Yeah, okay. And being in my third year of business for myself, I’ve got straight A’s, almost 70 reviews on Angie’s List, and the 2018 and 2019 Super Service Awards. I’ve been working really hard at taking care of each customer, and that’s caused my business to grow organically, by word of mouth and whatnot. But the days that get rough are when I’ve got multiple jobs going on and I’m getting phone calls; I’m already in a crawlspace trying to do mold remediation and my phone’s ringing. So I’m at kind of a limbo space where the business isn’t quite ready to support a full-time employee yet. I’ve got several people I call on, and I do the 1099 thing with them for bigger jobs, where it’s a basement demo and we have to rent a dumpster and things like that. So, independent contractors that you can count on if you need some overflow help. Exactly. Okay. So your business is at that point where you probably need somebody close to full time, but you’re just not quite there yet, and so you’re probably having to work a bunch of extra hours so you don’t have to pay somebody full time, because when it gets a little slower, you’re like, what am I going to have that person do? That’s kind of the space you’re at?
That’s the space I’m at, exactly. But I’ve got lots of ideas right now and things in the works to make it happen, so I’m really hoping for an employee this year. It has to happen for next season, next year. Okay. And one of your marketing things is to try and get on as many podcasts as you can, correct? That’s right, yeah, exactly. Because, from what I’m learning, I’m not a digital-space kind of guy, but my buddy who I’ve partnered with is, and what I’m learning is that the more platforms we can be on, like, we have a YouTube channel now, but I don’t think we have anything on there yet, so this might be the first. The more activity, blogging and things like that, the more activity I can leverage and post, the more Google likes my business and wants to put me on the first page of results. So you get the Google love. That’s right, we get shown love by Google. And that was the initial idea for podcasts. To be honest, I thought podcasting sounded like a long shot, but he got three hits for podcasts in, what, a month’s time, so yeah, it works. And the thing with podcasting is that Google is the number one search engine in the world, and the number two search engine is YouTube. So if you put up some stuff on YouTube, that ties right into Google, and then what we do with the podcast is we video it, so we actually make it a vodcast. So you’ve got all this little stuff that all connects. You’ve already got a really good blog on your website; you start putting in some videos there, and I think you’ll see things really take off, because you’ve already got a pretty good footing. Let’s also talk about Instagram: you could do Instagram IGTV with some longer-form video, take that same video you’re putting on YouTube, put it on IGTV, and then on Facebook you can do longer-form video there as well too. Good to know. There’s definitely room for improvement, and he’s loaded with what he’s doing right now, so I think you’ll enjoy watching this too, and we’ll probably have some new ideas after this one. What would be fun to see, and I know I’ve seen building inspectors do this with their Instagram, is to take a bunch of video when you’re out on site of just, you know, kind of wild and crazy stuff that people will find entertaining. That always gets a ton of attention, because you see stuff that the rest of us will never see. Right, all the repairs I’ve seen, where people are using cassette tapes to prop up an I-beam joist in a crawlspace, and it’s like, how did that hold up?

Yeah, exactly. That’s crazy stuff. All right, so we’ve kind of talked about the seller or real estate broker; now we’re going to go to questions that buyers ask, and I’ve taken this right off of your website. So, is it safe to buy a home with mold in it? That would go back to the question of: is the person of sound mind and health? In my opinion, if they are, then yes, they can buy that house, and then they can break down the mold situation afterwards. Because we have to be exposed for a long amount of time to become symptomatic to mold, you can buy a house and move into it safely, even if it has mold. But you certainly want to hone in and start monitoring the areas of concern: where’s the moisture coming from? Is the mold changing? Is it growing? But a lot of times you can deal with it after you move in. So there’s a lot of scare in people around mold. They say home inspections are the number one killer of a real estate deal. Yep. And appraisals are right behind that. I think, and I might be making this up, but it seems like during a home inspection, if I had to pick one item that would be ruining the deal, mold might be at the top of that list, because it’s the fear of the unknown. Yep. What could this do to me? What could this do to my loved ones? Wait, I’m pregnant. I know, we have kids. Granny visits, she has a guest room. It’s the fear of the unknown. A bad roof that’s going to cost $15,000 to replace, there’s nothing unknown about that: you just replace it. Yeah, that can kill a deal, but it kills the deal because of the cost involved. Mold, for some people, is like a scarlet letter. It’s a burden on the home, it’s an embarrassment thing. It’s really wild how mold has worked its way into our fear culture. And as a real estate broker, I’ve had so many deals where my agent has called me and said, we’ve got mold in the home, and I’m like, all right, well, let’s talk about it. How big is it, or how extensive is it? I don’t know, but it’s got mold and my buyers don’t want it; we don’t know what to do. People just freak out when they hear the word mold, because I think they envision massive amounts of mold behind the wall where they can’t see it, you know, the buyer ripping into the home and finding this enormous mold issue. But most of the time, in my experience, it’s something that can be taken care of. Yeah, the first thing I would say for your agents is: don’t ever use the term black mold, because that’s not a scientific type of mold. That’s a media buzzword, right?
It's not professional to say, I think that might not be black mold, you might be okay. Of course it's not black mold; black mold is not a type of mold. So avoid using the term black mold. If your agent is asked by a buyer, is this black mold? You know, black mold doesn't exist in this world. Any kind of mold can be black. The only way to know what kind of mold it is, we would have to take a sample to the lab, and they could tell us what kind. But the kind of mold doesn't change the remediation process. I don't need to see a lab report to determine how I'm going to clean it up; I have my method of removing the mold from the home no matter what kind of mold it is. It doesn't change my process. Interesting, okay. Yeah, that's super good to know. And so that's why, throughout all your stuff, you're like, hey, don't throw out the word black mold. It doesn't really exist as a type of mold. That's a media, if-it-bleeds-it-leads type thing. Exactly, clickbait. The original clickbait. Yeah, like we've kind of experienced throughout the last several months of our lives, correct? Yeah, boy, yeah. It's wild, because it's actually helped my business. People are cooped up in their homes, and now they're saying, honey, it's time we deal with this mold thing. It's gonna affect our immune system, and then when we go back to work, you know, we're gonna get COVID-19. And so I'm getting a lot of calls from concerned people who are now spending most of their time in their home. Okay, okay. So since they're not at the office, and they're having to deal with their house, they're like, let's give that mold guy a call and see what he says, because we don't really want to be in here more than we should if we're exposed. So that's super interesting. So your business has benefited as a small business during this time, and it's probably also because we're thinking about airborne particles, and that's part of the whole mold thing. Exactly. And we've been disinfecting commercial facilities in our area with our disinfection process, and that's for COVID-19. I had my attorney review some waiver forms so that I could feel good about offering a COVID disinfection service and whatnot. And so, yeah, this whole thing has helped the biohazard industry, I guess. Right. But to jump back, I wanted to add also, for your agents, it's important for them to know that if they have a buyer in contract on a home with a finished basement, they should at least educate the buyer and say, hey, the one place where it's highly recommended to do air sampling is a finished basement, because you don't know the history of that home. You don't know how that basement was finished, either; you don't know if there's a vapor barrier installed. And the only way to have X-ray vision into that basement is to pull air samples. Okay. Air sampling is a comparison of the outdoor air to the indoor air. The laboratory analyzes two samples, one taken outside and one taken inside, and whatever is outside should be found inside in a similar amount. Okay, okay. I'd say the outdoor air and the indoor air should be statistically the same. If the basement sample is 10 times higher than the outdoor air, then that would suggest there's mold in that basement that's not naturally occurring from the outdoor air; it's occurring inside the basement. And the one scare story I have of that is, I got called out for a buyer where there was a finished basement. It looked great, there were no signs of odor or mold growth or moisture problems, but on a whim they ordered an air sample of the basement, and it came back extremely elevated. I'm talking 100,000 times the outdoor air. Wow. The seller was in denial, because he had finished it himself, and he called it a faulty test, fake news. I went in
there, and with a little bit of invasive investigation I was able to prove that there was no vapor barrier installed on the foundation wall. In fact, what he had done was frame studs up against the foundation wall, lay fiberglass batt insulation in between the studs, and then staple a vapor barrier onto the studs before putting the drywall up, therefore trapping the moisture. Yeah. And accelerating what was already going to be a problem. He created like the perfect mold lab culture. Yeah. That's crazy. What's really wild to think about is that they had a young child, and they were gonna move into that home, and the selling point for them was the basement. And if they hadn't pulled air samples, they would not have known. They would have had their kid down in that environment just huffing all that mold, and because it wasn't a liquid leak, where it makes an ugly black spot, no one would have known for potentially more years to come. Wow. Okay, how much would one of those air sample tests cost, like in Ohio? What does that run? 300 bucks should kind of be tops, I mean. Okay. Yeah, and we always recommend one for the basement. If you want to get the upstairs or another area in the home tested, additional samples from us are $50. Okay, so kind of market rate, or at the conservative end of the market rate. There's companies charging 350, but I don't know what it is out there. It's probably more expensive, because everything's more expensive here. That's what I hear. Yeah, it's kind of crazy. So, okay, that's really interesting. So what if somebody does have a pretty major project? You talked about your work day being either an inspection day or a project day. What is the cost, on most of the projects that you work on, to the homeowner? Like, what's it take to fix, you know, a mold issue?
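The indoor-versus-outdoor comparison Zack describes can be sketched as a simple ratio check. The spore counts and the 10x threshold below are illustrative only (the 10x figure echoes his example, and the basement figure his story); real interpretation is done by the lab, not by a script:

```python
# Illustrative sketch of the air-sample comparison described above.
# A lab reports spore concentrations for one outdoor (baseline) sample
# and one indoor sample; an indoor count many times the outdoor count
# suggests a mold source inside rather than spores drifting in.

def indoor_elevation(outdoor_count, indoor_count, threshold=10):
    """Return (ratio, flagged): how elevated the indoor sample is,
    and whether it crosses the rule-of-thumb threshold."""
    ratio = indoor_count / outdoor_count
    return ratio, ratio >= threshold

# The basement from the story: roughly 100,000 times the outdoor air.
ratio, flagged = indoor_elevation(500, 50_000_000)
```

With these made-up counts the basement sample is flagged, while a basement statistically similar to the outdoor air would not be.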
Yeah, good question. You know, I'm in a unique position because I get to make up my own costs, whereas larger companies who have a sales guy come out, they've already got a structure and they say, look, it's this much per square foot, that kind of a thing. I take each job as unique. So I'm looking at a couple of different things. First of all, how much source removal of mold am I going to have to do, meaning how much square footage am I looking at? And how difficult is it? Do I have to put on a Tyvek suit and do an army crawl through this attic while carrying equipment and cleaning materials, or can I stand up in this attic? Things like that, crawl spaces. I'm looking at difficulty, so time on the job, before I put together an estimate in my head. And generally my estimates are lumped together, unless they ask me to itemize things; then I break it down. But that being said, if I had to give you a range for an average job, if I were to look at my history, I would say the average job for me costs about $1,000. Huh, okay. Okay. I'll go into homes for as little as $200, $500, $700 for smaller jobs. I don't have a job minimum. That's another thing my competitors do: they either have a job minimum, or they charge to come out and then, if you hire them, they'll apply that toward the cost of the project. I don't charge to come out, and I don't have a job minimum. You know, but yeah, I did do a job for $20,000 once, so that'd be a big one. Yeah, yeah. But when you average everything out, probably $1,000 to $1,200. And I warranty my work. If I'm treating a basement, then that basement is under warranty, and you can transfer that warranty. What that really means is that not only have I removed the mold from the home, but I have identified the moisture source that led to mold growth to begin with, and I have made the correction. Sometimes it's a plumbing leak and they need to call a plumber; as long as they've hired a reputable plumber, my warranty stands. Okay. I'm saying the moisture won't come back, and that's why I warranty my work. Okay, okay. That makes total sense. That seems pretty reasonable. I would add real quick: make sure your agents are asking for warranties from mold remediation companies. They can sneak in a quote where there's a scope of work and a cost, but without any verbiage about a warranty. And if I were to do a retest on some of these companies and send it to the lab, they could fail, proving that the work
was ineffective, and the company might not come back out to do anything about it. They might say, well, the mold hadn't grown back after we were done. Right. Okay. So get a warranty is the bottom line. Yeah, absolutely. So, one of the topics you covered just a little bit earlier that I want to go back to is the disinfecting service. Here in Seattle we are still in phase one; we're not in phase two yet. We're very close, like maybe June 1 or a little bit later. A lot of businesses are going to be opening back up, and so disinfecting services are going to be super high profile. Can you tell us a little bit about what that service looks like, and the cost and scope of work? Absolutely. So the first thing to consider is: are we disinfecting as a preventative measure, or has there been an outbreak? Because if there's been an outbreak, then you're talking a lot of money, big companies, lots of liability waiver forms and things like that. Okay. You know, we're offering to fog-disinfect real estate offices for free right now, as a community service in a way, and I've done probably 15 so far since we opened up. That's amazing. And how big are these offices? I'm kind of curious. Some are one room, 200 square feet, and others are multiple levels in a building; we went to three different floors, and it took us two hours. And so what I want to say is, I have heard of AdvantaClean, which is a franchise. I heard this because we share the same insurance rep.
And he told me that they apparently quoted a building for $14,000, and they got undercut by another company for $3,500. That's quite a margin. That's massive. Yeah. What that means is that it's not necessarily the product, the disinfection product we're using. That's not really the important thing. You can go to the CDC website and find CDC-approved disinfectants for COVID-19, and there's a list of hundreds of things, including white vinegar; they even have Lysol on there. It's not the product, it's the process that makes it effective. And the process is that we use an ultra-low-volume sprayer, which is also commonly called a fogger, a fog machine. Okay. We put our disinfectant (our mold-killing, virus-killing, all-in-one product) into that machine, and we go through, and the fog projects eight feet from the nozzle and expands as it's projecting, so it fills in all the nooks and crannies, places we can't reach to disinfect. We go through and do a light fogging, and it's safe for plants, and it's safe for electronics and computers and things like that. But it can cover surface area so quickly, whereas going in by hand it would take us hours to clean everything; a fog machine can do it in minutes. Right. And that's what we were seeing in the images coming out of China, right as the outbreak hit the US: guys in massive hazmat suits, with the foggers, going along the subway and that kind of thing. That's kind of what you're talking about? Correct, though I don't know why they'd be doing that outside. I saw that video. Were they outside in the streets? Yeah, there was a whole bunch of them, almost like an army walking down the street, you know, hitting cars. I don't know; we don't do things like China, I can say that. The other bizarre video that came out from one of those countries (I can't recall which) my wife showed me: it was kids waiting in line to enter the school building, and there were guys in Tyvek suits with full-face respirators on, with the fog machine, and they made each kid walk through the fog. They were fogging the children, who weren't wearing masks, I guess. I hope they were told to hold their breath. Wow, I can't believe it. I mean, yeah, there's a lot of nonsense. So, it's the fogging process, and then the other thing that we offer alongside it is called touch-point surface disinfection. While one of us is fogging, the other one is going through with a rag with the same disinfectant on it, identifying doorknobs and countertops, desktops, filing cabinet handles, chair armrests. Anywhere there's a point of contact, it sounds like. Yeah, yeah. So that's really it. I don't know that there's going to be a huge market. I was prepared to take on a lot of COVID disinfection jobs, but I have this instinct that says there's not going to be a big market and a lot of money being spent on this. A lot of businesses and buildings are handling it in house. They're smart enough; they looked at the CDC website: we can use our
Lysol, guys, we just need one of you to do it. Setting up systems to handle the disinfection in house. Which leads me back to: if there's been an outbreak in a building, I think that's where you're gonna see these big jobs. Right, okay, that makes sense. Yeah, we've kind of got a system here in our office, and I'm the guy that does it. I'm just that guy, you know what I mean? So I just go through and clean everything, and I've been doing that since this all went down, and it's not that hard. It's just, you got to get in there and do it. You got to get in there and do it, right. Yeah. Yep. Crazy. Zack, I know you are a super busy guy and you've got other stuff to move on to, but before we go, I wanted to find out where people can get ahold of you. If you are in the Columbus, Ohio area and you need some mold help, identifying it or remediation, what are some places people can find you, Zack? My website is moldmentorconsultants.com; that's consultants, plural, it's got an S on the end: mold mentor consultants. Okay, okay. That will get you my phone number, and that will link you to our Facebook page and our Instagram page. That's the best starting point right there. Okay. And if people want some information just on mold in general, they can head to your website, correct? Yeah, for sure. Absolutely, head to the website, look at the blog, shoot me an email. I'm happy to respond and help people. Heck, if you're in Seattle and you want to call me, go for it, and I'll return your phone call. Excellent. Well, thank you so much, Zack, really appreciate having you on the podcast here. I hope your business goes well, and you get to that point where you can have an employee, or maybe two, or five, who knows. I mean, that's how small business goes. And if you're picking up podcasts and you're getting some social media out there, you know, you're going to get exposure, so keep doing what you're doing, and we'll follow you online. Awesome, Sean. Appreciate it. Thank you. You bet. All right, Zack, well, thanks so much. I'm gonna sign off. Thanks again for being on the podcast, appreciated you so much. Once again, I'm Sean Reynolds from Summit Properties NW and Reynolds & Kline Appraisal, today having on Mr. Zack Duffy of the Mold Mentor company. There we go: if you're watching this on YouTube, you got a Mold Mentor shot right here at the end. Thanks again, Zack. We'll catch you in the next one. All right, take care. You too. Bye. Don't forget to subscribe to our channel and hit the notification bell so you'll know when our next video is out


App Engine for Startups


MANDY WAITE: Hi everyone. My name is Mandy Waite, Developer Advocate for the Google Cloud Platform. Welcome to Google Developers Live. FRED SAUER: Welcome indeed. I'm Fred Sauer, also a Developer Advocate on the Cloud Platform. Today we have a number of questions that you've already submitted that we want to answer. But first, Mandy, you have a little presentation for us? Kind of give some of the viewers who are not as familiar with App Engine an overview, and then we'll get right into your questions. MANDY WAITE: Yes. So most people who will be on the live stream are actually people who have signed up for the App Engine startup pack. So welcome to everybody, and thanks for joining the program. Hopefully this will inspire you to use some more of those credits. We're going to look at some features of App Engine in this presentation that you may have overlooked, may not have seen, or may not be aware of. So we're going to dive straight into the presentation. Yeah, so we've done intros. So firstly, the startup value proposition. App Engine is a great product, but what's the real value for startups? One of the key points (and this is already documented at cloud.google.com, but we wanted to make a big point of these value proposition statements here) is that App Engine is very quick to get started with. It allows for very rapid development of applications, and that's important to startups who want to build a minimum viable product in perhaps hours, not days like they used to. They want to get their application out to market as quickly as possible, and to facilitate that, Google App Engine is extremely easy to use. We give you all of the tools you need to build your applications, to test, to launch, and then to update your applications going forward. We also provide a very rich set of APIs and services which you can build your application upon, leveraging things like mail, XMPP, and other task services in the background. Also, autoscale: people keep talking about autoscale when I mention App Engine. The immediate scalability, to most users of App Engine, is almost like there's infinite scale, so you can scale rapidly and as massively as you need to. And one of the final points is that you only pay for what you use. App Engine is free to get started with. You get some benefits from signing up for billing, particularly with a credit card, and you guys are all already signed up for billing, but you only pay for the resources that you use as your application grows. And here are some people that have used App Engine and been successful with it. Khan Academy offer teaching programs online, and they're very pleased with App Engine. We use their statements quite often; these folks feel that App Engine gives them the ability to spend all of their time developing their application and not worrying about infrastructure. Pulse is in a very similar position; there's a quote from Greg Bayer there, that they could spend time not worrying about the things they used to have to worry about, because Google now takes care of those things for them. Just to go over very quickly some of the new features in App Engine: we update App Engine pretty much every month, so we're currently at 1.7.7, and in case you're still lagging behind a little bit, it's important to keep up to date with new releases. To look at some of the features we've introduced recently: in App Engine 1.7.5 we introduced Cloud Endpoints, which we'll talk about in more detail, Cloud Messaging support, and high-memory instances. 1.7.6 was a bit of a lesser release, but we introduced task queue async methods, and if you're developing in Python, we finally got a multi-threaded development AppServer. In 1.7.7, which was
released in April, the last release, we finally provided Java 7 support (it was previously available in the SDK), SSL support for Python, Maven support for Cloud Endpoints, and we also removed the minimal charge of $2.10, so you no longer have to pay anything until you actually hit your quotas. So I'm just going to quickly go over some of those features that you may have missed, that may not be completely apparent. You may have started off with App Engine a while back and not kept up to date with some of the features we've introduced. Cloud Endpoints is a really important feature for us. Basically, what it allows you to do is to develop APIs in a way that's very similar to the way Google develops its own APIs. You can take an existing application or develop a new application, and annotate the classes and methods within the application to expose those classes and methods as an API. Once you've exposed them as an API, you can discover that API; there are various utilities that we provide that allow you to get discovery documents and to build API client libraries for your own API. With those libraries you can then go on to build clients for Chrome, for web applications in general, for Android, and for iOS. So we think Cloud Endpoints is an extremely important feature. If you have an interest in building mobile backends in particular (and I say backends in the broader sense of the term, not the way App Engine uses it), then you should really check out Google Cloud Endpoints. Another thing that people sometimes miss is Appstats. It's not turned on by default; you can add it to your application and use it for profiling your RPC calls. You can basically use it to work out where your application is spending its time. App Engine Sockets: this is a highly sought-after feature. Previously in App Engine you were not able to create outbound sockets of your own. We've introduced that feature. It's enabled for trusted testers at the moment, which means you have to sign up for a program and agree to do some testing and give feedback, but it is available for you to use today. We also provide a Search API. Again, some of these services are experimental, which means they are liable to change (Cloud Endpoints is currently also experimental); they are liable to change from release to release, but they're worth looking at now. The Search API is one such feature, and it allows you to do Google-like searches across your own application data. It supports things like custom scoring, which is the ability to create algorithms that score the results that are returned, and also rich snippeting, which allows you to return HTML snippets of your search results with the search term highlighted in bold. It also supports GeoSearch, so if you've got the appropriate data in your application's database, then you can do GeoSearch across your data. And for those of you in the European Union (and there are several of you, many of you, in fact), we also support App Engine European data centers. This is great for compliance and for locality, reducing ping time and suchlike. You do have to sign up as a paid application for this, but you guys are already signed up for paid, so that's great, you can use that today. Also, we have a fairly extensive, and growing all the time, development stack. We have Maven support: Maven archetypes and plugins that you can use to deploy your applications. Jenkins is also supported in the cloud through CloudBees, there's an online IDE called Codenvy which also supports App Engine, and we have the Google Plugin for Eclipse, which ties into the SDK. Very quickly looking at the storage options: storage is a big thing for application development, and we have many different options. We're not going to go into detail today, but you can look them up online and get more details about them. We have unstructured data support using Google Cloud Storage and Google Drive; structured data with the App Engine Datastore; and relational data using Google Cloud SQL. Very briefly covering those three options (I'm not actually going to talk about Drive today; Drive is a cloud product, but we don't generally talk about it in these particular talks): the NoSQL Datastore, the App Engine Datastore, is schemaless, with atomic transactions and queries. It's really great for internet-scale, denormalized datasets. So really think differently, no joins, when you're using this kind of NoSQL datastore. And Cloud SQL [INAUDIBLE] is really MySQL in the cloud. It's very familiar to anybody who's used MySQL before. It's fully managed
and it's best really for bounded scale; we don't have high levels of scale for Cloud SQL yet. And I believe it's actually out of experimental now. The final offering, last but not least, is Google Cloud Storage, which is really a bucket for storing all types of unstructured data. The five things you really want from a data store (reliability, durability, speed, low cost, and simplicity) are all encapsulated in Google Cloud Storage. So you can check those options out online. And just a quick mention of Google I/O: Google I/O is going to happen next week. If you haven't got a ticket, it's unfortunate, but you can stream the events live, and you can follow up afterwards on YouTube. There are many interesting talks, many talks on the Cloud Platform, including App Engine. If you want to be more involved, we also have I/O Extended, and you can find out more about that from these links here. These links will be available on YouTube afterwards if you need to go back and have a look at them. So now we really are going to open up for questions, and Fred and I are going to try and bounce some questions back and forth to one another. FRED SAUER: This is my favorite part. MANDY WAITE: Your favorite part? FRED SAUER: It is. MANDY WAITE: Is it? But you're an App Engine expert, and I'm kind of new to App Engine. I may struggle. FRED SAUER: Well, listening to the presentation, I think I may dispute that. [LAUGHTER] FRED SAUER: Since you have spent a lot of time talking already, do you maybe want to throw a question at me and start it that way?
MANDY WAITE: OK, yes. So we opened up a Moderator page on the Google Developers Live site, and we have several questions from there; we pulled some. We'll maybe also get a chance to look at some more of them afterwards, but we have like 20-odd questions we can go through. FRED SAUER: OK. MANDY WAITE: So the first one was: I want to develop a RESTful web service using Google App Engine that's consumed by my web application on a different domain. How do I restrict the calls only to my web application? What are the best practices? FRED SAUER: OK. So this is a question of cross-domain, or what the standards like to refer to as cross-origin, requests. If you look on the W3C web page, you'll find something called CORS, C-O-R-S, and it stands for cross-origin... I forget the abbreviation, I use it too much. MANDY WAITE: Just call it CORS, right? FRED SAUER: Cross-Origin Resource Sharing, I believe, is what the abbreviation actually stands for. And basically, this is a mechanism whereby servers and browsers agree on how to communicate the domain of the website that is making a request to the server, and for the server to either grant access to that content or forbid it. What you do is, when you request simple resources, like even images in an image tag, or when you make an XMLHttpRequest, you can specify that the request is a cross-origin request; so it's headed for a domain other than the one that the website's running on. So maybe your website's on mygreatapp.com, and the backend service that you're connecting to, on App Engine, is actually on mygreatbackendservice.com. You can do that, but you have to set this extra flag in your HTML or in your JavaScript to make the cross-origin request. Then on the server, for a GET request, there'll be an extra HTTP header coming in, the Origin header, and this is an indication to the server that the request came from another origin. Right then you can make the decision in your request handler whether or not you want to allow that request, and you do that by sending a header back in the response, Access-Control-Allow-Origin, where you can specify the domains that are allowed. If you make a POST request or a PUT request, the browser will do something even smarter than that: it will send what's called a preflight request; it will send a request in advance and get permission before it actually does the submission. So the short answer to this question is, yes, there are existing standards to do this, and it's very easy to do. Just look up CORS, C-O-R-S, and implement the details there. You can do that in your request handler and in your HTML app, and you should be good to go. MANDY WAITE: Excellent. Well, you've got one for me, right? FRED SAUER: I do, I do. MANDY WAITE: Is it an easy one? FRED SAUER: Hopefully you'll come up with a shorter answer than mine. Maybe we should have started with an easier one. Let's see: how viable or recommended would it be to build a backend service for mobile apps around Google Cloud Endpoints?
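The Origin-header handshake Fred just walked through can be sketched as a small, framework-agnostic helper. The domain name below is a hypothetical example, and this is a sketch of the decision logic rather than a complete CORS implementation:

```python
# Sketch of the server-side CORS decision described above: inspect the
# incoming Origin header and either grant or withhold cross-origin access.
# The allowed origin here is a hypothetical example.

ALLOWED_ORIGINS = {"https://mygreatapp.com"}

def cors_headers(request_headers):
    """Response headers for a simple (e.g. GET) cross-origin request."""
    origin = request_headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # Grant access: echo the specific origin back to the browser.
        return {"Access-Control-Allow-Origin": origin}
    # No CORS header at all: the browser blocks the cross-origin read.
    return {}

def preflight_headers(request_headers):
    """Headers for the OPTIONS preflight sent before a POST or PUT."""
    headers = cors_headers(request_headers)
    if headers:
        headers["Access-Control-Allow-Methods"] = "GET, POST, PUT"
        headers["Access-Control-Allow-Headers"] = "Content-Type"
    return headers
```

In a real App Engine handler these dictionaries would be written onto the response object; the core decision (compare the Origin header, then emit Access-Control-Allow-Origin) is the part described in the answer.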
MANDY WAITE: All right, OK. So I think that's probably the big use case for Cloud Endpoints: building mobile backends. And when we say backends, again, we're talking about the general sense of the word rather than the App Engine sense of Backends. FRED SAUER: Yeah. MANDY WAITE: If you want to build an application that provides resources and services that are consumed by mobile devices, then Cloud Endpoints will do that for you out of the box. I think we covered most of that in the slides, but the documentation is pretty extensive, both for Java and for Python, so if you want to find out more about how to use Cloud Endpoints, you can check that out. And there was also a Google I/O talk last year, wasn't there? FRED SAUER: There was, yes. MANDY WAITE: So you can actually get a lot of information from looking at the Google I/O sessions from previous years. FRED SAUER: Very good. MANDY WAITE: Yeah? FRED SAUER: All right, your turn. MANDY WAITE: My turn? OK. So: last week (maybe it was last week, or it could have been written last week) I had to rename an entity kind name in Google App Engine for Java. I had to write one-time code in my application to duplicate each entity to the new kind name, delete all the old entities, then remove the one-time code and redeploy the application. Is there an easier way of doing this?
FRED SAUER: I think this is a little bit of a longer question, but I think the person who asked it is actually on the right track in how to do this. First, let me contrast this with how you would do things in a relational-data world, where you have a fixed schema for your database. If you have, for example, a table in a relational database that's called customer, and you decide, for whatever reason, you want to restructure it and call it customer2, what you would actually have to do is declare a maintenance window, take the website down, and then perform the database maintenance. You would rename that table, which locks the entire table and all the data in it, you would modify the code in the application server, and you'd bring everything back up. And maybe half an hour later, maybe a few hours later, you'd be up and running. That was maybe acceptable a number of years ago, when websites regularly went down for maintenance and a lot of systems were just 8 to 5. But now we live in a world where everyone is connected to the internet; it's always 5 o'clock somewhere. Everyone needs to get to your site all the time. So what you really want is a system where you can make changes while the system's up and running. And part of the question actually has the answer for how you do this. The specific question was, how do I rename an entity? But you can generalize the question: what if you wanted to add new required properties to a particular entity in the Datastore? Or you wanted to restructure; say you're going from three separate entities and you're combining them into this new data structure, where you put everything in one entity. These are common refactorings that will happen with your application. The way to do this is to think about this live system that's always handling transactions, and it's actually a three-step process. In the first step, what you want to do is change your application so that any time you write out an entity, you're writing it out in the new format: with the new entity name, or with the new properties, the new constraints. But at the same time, every time you do a read, you first look for the new kind of entity, and if you don't find it, you look for the old kind. And you deploy that. As soon as you deploy that, your application will slowly begin migrating data. Any user that logs in and touches their data, those entities, those rows in the Datastore, will begin to get migrated. And now, actually, you're in a perfectly happy state. Your application will continue to function. There's a little bit of extra read overhead for people as they're migrating, but you now have all the time in the world to make the full migration happen. You're still serving live traffic. Now, depending on what you want to do, you can either say, we'll just leave it like that. But oftentimes that extra code is complexity you don't want, and you might want to get rid of it. So what you do then is run what we call a MapReduce, which essentially iterates over all the entities of the old kind in the Datastore, and you just touch each one (you can do that in application logic), and as soon as you touch each one, it gets migrated to the new structure. You can take an hour to do that, a day, weeks or months; really, however long it takes. Once everything's been migrated, then you go back into your application, remove that extra code in the read handler, and your app's migrated. Many developers have many of these migrations in place at once; they might introduce a few migrations incrementally, and
then maybe once a quarter do a clean up and get rid of some of the code It’s not the case that you have to migrate everything in a short window Think about doing things live in real time MANDY WAITE: Wow OK That’s cool. you have one for me? FRED SAUER: Yes All right, here’s a developer who is a little bit conscious about the bill at the end of the day He says, each day I fight to really save every penny on App Engine– MANDY WAITE: We all FRED SAUER: We all do How could you help us better profile our applications, Appstats– it’s something you actually mentioned– he thinks is a little bit hard to use And what’s the best way to tune your application? MANDY WAITE: An interesting question, OK So, yeah, we mentioned Appstats earlier I think we’ve plenty discussed it before We think the Appstats is probably the way to go It’s built into the platform already, no external tool is needed So, it’s there really to go It would only profile your own PCs It won’t tell you where all of the hot spots that need to be cleared up, but it will definitely give you a really good indication about what your application is spending its time doing, particularly in terms of RPC calls– calls of the DataStore that includes as well It’s very easy to get up and running as well So we’re not sure if it’s difficult for you to actually get up and running, and using it, or if it’s difficult for you to actually get anything meaningful from it But generally, it’s fairly easy for both Java and Python to get up and running with Appstats To get access to the console– they provide a custom console you can have access to, which can actually include your basic admin console Once you have that available to you, you can look at your own PCs, you can look at each one of your requests, open up the request and then drill down into the request to see where you’re spending your time And even then for Python– I’m not sure if this is available for Java, but with Python you can actually look at the code that actually 
generated the RPC call FRED SAUER: Yeah, in Python you have stack traces, that’s right MANDY WAITE: Exactly, yeah, so with Python you can actually trace back to where the code is that’s making the longer RPC calls, and actually modify it, redeploy the application, see what result you got So that’s really cool So I think we recommend using Appstats, definitely for RPCs I’m not really familiar with any other profilers for App Engine, although I did hear briefly that the Khan Academy actually produced a profiler, Ben Kamens’ one So hopefully worth looking into FRED SAUER: Yeah, check out Ben Kamens’ blog He does have a Python profiler with some nice UI widgets that tell you about the loading performance of your site To finish up on Appstats, I think, what some of our users sometimes run into is they feel like all the information that Appstats provides is really overwhelming And it’s good maybe to take a step back And one of the things that Appstats provides that I really enjoy looking at is these timeline graphs that give you a breakdown of the RPCs visually

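The read-with-fallback migration pattern Fred described a little earlier can be sketched in plain Python A dict stands in for the DataStore, and the entity kinds ("UserV2"/"UserV1") and property names are illustrative, not from the transcript:

```python
# Sketch of the live, three-step schema migration: write new format only,
# read new-then-old with lazy migration, then sweep the stragglers

datastore = {
    ("UserV1", "alice"): {"fullname": "Alice A"},       # old-format entity
    ("UserV2", "bob"): {"first": "Bob", "last": "B"},   # already migrated
}

def migrate(old):
    # Convert an old-format entity into the new structure
    first, _, last = old["fullname"].partition(" ")
    return {"first": first, "last": last}

def write_user(user_id, entity):
    # Step 1: all writes go out in the new format only
    datastore[("UserV2", user_id)] = entity

def read_user(user_id):
    # Step 2: reads look for the new kind first, then fall back to the
    # old kind, migrating lazily as users touch their data
    entity = datastore.get(("UserV2", user_id))
    if entity is None:
        old = datastore.get(("UserV1", user_id))
        if old is None:
            return None
        entity = migrate(old)
        write_user(user_id, entity)        # migrate on first read
        del datastore[("UserV1", user_id)]
    return entity
```

Step 3, the MapReduce sweep, is then just calling the read path on every remaining old-kind key, after which the fallback branch can be deleted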
And 99% of the time when I’m using Appstats, I only look at the graphs There’s a whole page though of stack traces and debugging information, and data that’s been sent back and forth Almost everything related to performance of your application you can find just by looking at the graph You can see whether you have DataStore reads that are staircased all sequentially behind one another, or whether they’re happening in parallel You can see when you’re making a Memcache call and then a DataStore call, which tells you there’s a Memcache miss, right? MANDY WAITE: Right FRED SAUER: So really just getting comfortable with those graphs, I think, will provide most of the value with very little effort MANDY WAITE: OK Would you recommend it for running on production as well? FRED SAUER: I do, but it depends on the traffic of your site If you only have a few requests per second, then, yeah, sure, run it If you’re doing a lot of traffic, what we recommend is that you deploy another version in production that you hit, so you can say version dot your app ID dot [INAUDIBLE] dot com and access that directly And that way you don’t have all the data from all the requests from all your users piling on top of each other, but you can very carefully just poke at your application, say, I’m going to do this one request here and then I want to see the Appstats profile MANDY WAITE: Yeah FRED SAUER: We’ve even had people with mobile clients They’ll have a couple of users that report performance problems, and they’ll change their application to send down a ping to those users, and say, OK, this mobile client should connect to a different backend name And then those users are isolated on their own version, where you can do diagnostics, maybe you can enable some traces, you can enable Appstats And then, when everything’s all [INAUDIBLE], you tell the clients to go back to the original version MANDY WAITE: OK FRED SAUER: So, yeah, a lot of ways to utilize that tool MANDY WAITE: All right, 
definitely, OK Well, that’s good Hopefully, that answered the question So one for you FRED SAUER: OK MANDY WAITE: Any suggestions on how to simply move data from the live server to the development server, and vice versa? FRED SAUER: Sure In the App Engine admin console there’s a DataStore backup feature, and it allows you to create a snapshot of your database And those snapshots can be placed into your Google Cloud Storage, so it’s just a very large file that you can then download, you can store it offline But you can also restore that backup to another app ID And you can actually select which kinds get backed up So that’s a great way to copy data between instances If you have, like, production and QA environments, and you need to get another snapshot of data– it’s pretty common Or if you have been working really hard for months in the development server, and you have all this configuration data, maybe you want to just take those kinds, grab a backup of that, and restore that– play that back essentially onto the production server MANDY WAITE: Well, sounds easy OK FRED SAUER: Yeah? MANDY WAITE: OK Excellent FRED SAUER: OK, for you let’s see I need a testing strategy for App Engine applications As the SDK doesn’t implement all the services exactly the way that the production environment does, do you have any insight about how to test your applications taking into account some of those differences? 
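For the smaller-scale case, the SDK’s bulk loader can also move data directly between a deployed app and the local development server; a sketch (the app ID and filenames are illustrative, and the remote_api builtin must be enabled in app.yaml):

```shell
# Download all entities from the live app into a local file
appcfg.py download_data \
    --url=http://your-app-id.appspot.com/_ah/remote_api \
    --filename=backup.sql3

# Play the same data back into the local development server
appcfg.py upload_data \
    --url=http://localhost:8080/_ah/remote_api \
    --filename=backup.sql3
```

This is the "little tool for uploading data" route rather than the console backup feature, so it is best suited to modest amounts of data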
MANDY WAITE: Yeah, so I guess with testing, there’s a lot of ways to do testing, there’s a lot of levels you can do testing at– you can do unit testing, functional testing, integration testing Development testing is pretty important on App Engine, you really probably don’t want to deploy an untested version of an application and replace the current version that’s working really well with one that’s not been tested particularly extensively So, as always, when testing in the development environment, it’s probably important to mock services that you can’t actually access directly FRED SAUER: Sure MANDY WAITE: So in this case, if there are any shortfalls between what the development server offers and what App Engine offers itself, then you probably want to mock that FRED SAUER: OK MANDY WAITE: So that would be the same with accessing another API that you don’t want to access from development You may just mock that service up, so you can actually get reasonable results from making calls to that service Then you’d probably do the same with App Engine as well if any App Engine services are missing And beyond that, really there’s lots of different options for testing Once you’ve deployed the application, you may, as Fred’s already mentioned, deploy a different app version, and then you may use traffic splitting So App Engine has this feature of traffic splitting, that allows you to incrementally direct traffic towards new versions of your app So you can actually test it in isolation to a degree– you’re pushing a small amount of traffic to it initially, and then slowly migrating all of the traffic over as you get more confident in the application’s performance And then there’s also that other kind of testing, load testing and suchlike You don’t really want to do load testing on a production

application You would probably need to build a different version of your application, then deploy it, and do your testing on that particular version of your application You’re also likely to need data as well, so you probably need to have a representative set of sample data that you can deploy with your application If you have an empty DataStore, it’s not really going to give you exactly the same kind of results that your production application would do, so you probably want to devise a set of data that you can actually populate into your application initially to do the testing FRED SAUER: Yeah Those are very, very good points The one thing that bites people a lot of times when they’re testing in a different application ID, or even on the production application before they launch, is people will not use representative workloads MANDY WAITE: Right FRED SAUER: So instead of creating say 1,000 unique accounts, which is sometimes tricky to do, they’ll say, OK, we’ll take 10 accounts and we’ll have each one log in 100 times running concurrently And those access patterns actually change the way the data is utilized, and you don’t get very representative results there So you already talked about making sure that you have some good sample data there– making sure you have good representative requests, and users logging in, is just as important In any case, as much as you can take the production environment that you expect and replicate it, the closer you get, the closer your results are going to match, obviously MANDY WAITE: OK, OK So you can use tools like Selenium and JMeter, those kinds of things, for load testing FRED SAUER: Absolutely MANDY WAITE: The same kind of tools you would use for any kind of web application workload to generate tests And they generate representative workloads, believe me FRED SAUER: They do Just watch out that you’re not testing from one machine that just doesn’t have a big enough network connection MANDY WAITE: Yeah, so 
don’t forget about the client side, because if your client can’t handle the load, you’re never going to actually load the server the way you think you are So you need to make sure you can actually generate the load correctly, and that you’re not falling over for lack of client resources when you’re testing FRED SAUER: I guess you can always get a few instances on Compute Engine– MANDY WAITE: Ah FRED SAUER: There’s some capacity there MANDY WAITE: That’s interesting Let’s go there FRED SAUER: Nice big pipe of data Let’s see, where did we leave off? It’s your turn MANDY WAITE: I’ve got one for you, yeah Could you provide us with state-of-the-art database model designs? OK, that’s a big ask Things like a CRM database model, a bookshelf, an eBay kind of application FRED SAUER: OK, it sounds to me like the person asking the question is looking for sample models, sample applications And I hear the question behind the question, which is how do I go about modeling an e-commerce site, a CRM site, in this non-relational, NoSQL world I want to build an application that can handle tremendous scale, but you’ve swapped the tools out from under me I’m used to doing development on a SQL database, relational, I know how that works, but I know that that doesn’t scale So help me switch MANDY WAITE: Right FRED SAUER: I think it’s difficult to come up with samples that fit every single vertical But there are some general things that we can say about the way you do data modeling And really it’s sort of changing some of your habits and some of the things that you were taught maybe from the beginning about working with relational databases So one of the first tenets when you’re building a relational database model is everything needs to be normalized No data duplicated It’s typical in an order entry system to have 5, 6, 7, 10, 12 different tables, so that every time you want to do a query and find out what the orders are, you have to join all these tables at run time and select your results So you might 
have a table that has items in it, and it has item description, item price, and some other information; then you have order lines, which connect to an order header, you have order line details– it goes on and on And this is a really good way of allowing you at run time the flexibility to join any amount of data with any amount of data, which is what relational databases were built for But in a very scalable world, what you want is actually that if you and I both make a request to our website and we’re both interacting with the service, what we want is for your queries and my queries to essentially be able to be handled by different parts of the infrastructure– it’s a distributed platform And the way to do that is to isolate your data from my data And that means what we don’t want is, if I’m pulling up a list of my orders and you’re pulling up a list of your orders, we don’t want to both be going to the same table and

running into each other We also don’t want to take a lot of time at run time collecting the data In a distributed system, what you’d like to do is amortize the cost– spread out the cost– and pay that cost at write time So when we write the data we might do some duplication, what you would call denormalization So it’s quite common in App Engine apps for entities to have properties with string values So rather than have an item key on your item details page, you would actually have the actual item code or the actual description If you’re putting together, let’s say, countries– you have a client that stores addresses and you need countries In App Engine you would actually just put the country code, or even the full country name, in every single entity And you should resist the urge to say, oh, I don’t want the duplication of data, I don’t want to pay for the extra storage What you’re really doing is separating your data and my data, and making it very efficient to handle a very large number of requests There are more best practices, but the first one that you want to think about is that denormalization is not a bad thing That’s actually something you should really just embrace And just pick two users and imagine that they’re both doing the same thing in your application How can you make it so that the entities that you’re touching and the entities that I’m touching are different? Because as soon as they’re separate from each other, then the Google App Engine infrastructure, the DataStore, can put those entities on different physical hardware, different machines And now we can have a third user, a fourth user, a fifth user We can have 100,000 users, it really doesn’t matter how big we scale up, because each user is accessing a different part of the system MANDY WAITE: Wow FRED SAUER: So there’s a lot to grasp if you’re coming from a SQL environment, but once you play around with it for a while, it’s really freeing to say, I really don’t care how many users show up 
on my doorstep tomorrow, I can handle 1,000 users, a million users, a billion users It’s really independent of scale And that’s what the App Engine DataStore provides: performance that’s independent of the amount of data that’s stored MANDY WAITE: Yeah Makes sense It’s really good advice FRED SAUER: All right, so, sorry for the long answer there MANDY WAITE: No, I think it’s the kind of question that deserves a long answer FRED SAUER: Yeah, it’s a common question, so it’s fair to spend a little bit of time– MANDY WAITE: A lot of people are moving from the old traditional relational database to the NoSQL type FRED SAUER: Yeah And we should say, if you do want a relational system– you have a limited number of users, you’re building a corporate application, you know you only need the existing performance– you can use Google Cloud SQL It’s a MySQL database, runs in the Cloud, managed for you, no headaches And you get all the traditional kinds of performance trade offs and benefits It’s still an option MANDY WAITE: Yeah FRED SAUER: All right, next question for you MANDY WAITE: OK FRED SAUER: Any optimization recommendations for an e-commerce website with two million visits per month, running on App Engine Python? MANDY WAITE: Well, OK, how long have we got? Optimization recommendations– I guess what we want to optimize for is probably the important thing here Do we want to optimize for performance, for load, for latency? The cost– cost is an important consideration when it comes to Cloud applications as well Balancing cost and performance is massively important So I think I’m going to step back on this one a little bit and say, the best thing to do is to read the resources that are available in the documentation So we have a couple of really, really good pages on the App Engine documentation site One is called Managing Resources– FRED SAUER: Yes, my favorite one MANDY WAITE: That’s Fred’s favorite one Whenever I ask him a question, he says, have you read Managing Resources? 
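The denormalization advice above can be made concrete with a tiny plain-Python sketch: instead of joining an orders table to an items table at read time, each order entity carries a copy of the item fields it needs All names here are illustrative:

```python
# Normalized, relational style: reading an order means a join at run time
items = {"sku-1": {"description": "Blue widget", "price": 499}}
orders_normalized = [{"user": "alice", "item_key": "sku-1", "qty": 2}]

def read_order_normalized(order):
    item = items[order["item_key"]]   # extra lookup (the "join")
    return {"user": order["user"], "qty": order["qty"],
            "description": item["description"], "price": item["price"]}

# Denormalized, DataStore style: the order entity already carries copies
# of the item fields, so reading it touches only that one entity
orders_denormalized = [{"user": "alice", "qty": 2,
                        "description": "Blue widget", "price": 499}]

def read_order_denormalized(order):
    return order   # no join, no shared hot table between users
```

The duplicated description and price cost a little storage, but two users reading their own orders never touch a shared items row, which is exactly the isolation being described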
So I go off and read Managing Resources, and I’ll find the answer So Managing Resources is great It tells you how to manage your instances, manage bandwidth, manage concurrency and those kinds of things And so that is an excellent resource There’s also a performance-related resource on the same site I think they’re in the same section, probably one leads into the other And it will give you hints on performance Often optimization and performance are kind of related, very much married together in a lot of ways, so you can take those tips as one big set of tips And you’ll find really good information about how to optimize your application If it’s about cost, again, I think Managing Resources also talks about how to look at your quotas and things like where you’re incurring your costs And also, that’s coming back to Appstats So Appstats is another really good, essential tool for looking at the way you’re spending your time in your application So if you’re concerned about latency– taking too long to respond– then you can do some analysis with Appstats to find where you’re spending time, and maybe tune your application to spend less time doing those RPCs FRED SAUER: OK MANDY WAITE: Does that seem good?
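Getting Appstats recording in a Python app is, per the App Engine docs, a small WSGI middleware hook in appengine_config.py; a minimal sketch:

```python
# appengine_config.py -- records RPC timing for every request so the
# Appstats console can render its timeline graphs
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)
```

The Appstats web console itself is then switched on with a builtins entry (`appstats: on`) in app.yaml, which is what exposes the custom console mentioned above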

FRED SAUER: Yeah I love the Managing Resources article If you just do a web search for App Engine managing resources, you’ll probably hit that article in our Docs And think of it as a checklist It doesn’t tell you exactly everything to do, but it’s a checklist– think about this, think about this– and you go through each one, and you know your application best, so you can tune it If you just want to know, what’s the one thing I need to do that I may not be doing that’s going to give me the biggest benefit? I’d say, turn on Memcache And some of you can get that for free If you’re using the NDB API in Python, or in Java, if you’re using Objectify, which is a third party [INAUDIBLE] layer, they both have Memcache built in So any time you write to or read from the DataStore, it’s checking Memcache That can save you a lot of DataStore operations, and it won’t cost you a single line of code MANDY WAITE: Wow, fantastic So use Memcache FRED SAUER: Use Memcache If you’re not using Memcache, use Memcache MANDY WAITE: I was about to suggest you use Memcache OK, so I’ve got one for you Oh, another performance one So what is the performance difference between GAE Python, GAE Java, and GAE Go? 
There are three different run times FRED SAUER: OK Fair question This is actually one that comes up a lot It’s a question that I’ll try first not to answer, and then I’m happy to give an answer But I’d like to start out and say there are subtle differences between the three different runtime environments If you write an application in Python versus Java versus Go, there are some things that work better in one language versus another What’s probably much more important for your application, for the service that you are running, is what your developers know and where they’re going to be productive So if you have a Python shop, everyone knows Python really well, and there’s one or two guys that know a little bit of Java; even if Java were slightly better for your application, you’re going to be much more productive building in Python So you should do that And the same goes the other way around Having said that, if all things were equal, if you hadn’t hired any development staff yet, or you could pick any language, or you just want to learn something new– there are some trade offs to make And maybe you should look at the complexity of the product that you’re building and how long it’s going to last Python is a scripting language, and tends to be a little bit more productive for people prototyping, iterating in small teams, working together; it lets you be very agile– you can do things very quickly If on the other hand, you’re building a product, and you have a very large development team, or a very complex code base, and you’re doing a lot of refactoring and you could use some help from tools that can refactor code, do static analysis; then Java is probably the language that’s more productive And if you wanted to do something that’s really performance sensitive, you want to build with the stuff that Googlers are building on, I would seriously check out the Go runtime There are some really pleasing results there Having said that, that’s an experimental runtime still, so if 
you’re building a production app right now, probably Java or Python And pick the one you’ll be productive in MANDY WAITE: So retraining your Java developers, so you [INAUDIBLE] in Go, is probably not the best way to get performance? FRED SAUER: Unless that’s what motivates them For some people a new language is the thing that wakes them up in the morning, more so than a cup of coffee For other people it’s being able to be productive and hit the ground running MANDY WAITE: Excellent OK Well, good advice FRED SAUER: OK Let’s shoot one back at you here Can I pre-populate the DataStore? Is there a front end to add, remove, or update entries in the DataStore? MANDY WAITE: OK, so now I’m going to punt this one a little bit, because we discussed this one yesterday, and we talked about some things and some articles that have been published about doing this very thing So I’d like to actually talk to you about that I’ll bounce the question back to you and ask what you feel is the best way to pre-populate FRED SAUER: So, I think the answer depends a little bit If you have some data on your client, like on your desktop– maybe there’s some legacy data that you need to import, some configuration data that you want to load programmatically– the development server, or the SDK, does have a little tool for uploading data MANDY WAITE: OK FRED SAUER: That works fine for small scale, and that’s a tool that’s been around for a long time Probably what you want to do if you’re doing anything larger than trivial operations is just create a Cloud Endpoints version of your app It doesn’t mean you have to build your entire service around it, but maybe this is an administrative gateway, or this is a way that mobile clients upload data to your application Cloud Endpoints is a great easy way to create an API and

will automatically generate client libraries for Android, for iOS, for HTML5; and from any of those three clients, you can make your calls to the server side If you’re doing Python, a colleague of ours, Danny Hermes, has a pretty cool project called Cloud Endpoints Proto DataStore MANDY WAITE: OK FRED SAUER: Or maybe it’s Endpoints Proto DataStore But it’s an application where you essentially define your Python DataStore models, and then you just swap out the class name that you’re inheriting from, and then all your classes magically turn into Cloud Endpoints And so you can make calls like insert entities into the DataStore, remove them, do queries All really easy So that’s probably the way that I would go And then there’s maybe one more trick up your sleeve, the SDK has a thing called Remote_API– Remote underscore API MANDY WAITE: OK FRED SAUER: And it’s a way for the development server to essentially proxy its DataStore and Memcache requests to the production environment So what you do is you deploy a special version of your app with just like a one line config flag that says Remote_API is enabled And then you can run your code locally as if you’re connected to the DataStore, but everything’s been proxied to the Cloud for you So that’s another neat trick for migrating data into the Cloud Or it’s a great way to do debugging– you connect to your production or staging environment, and you can interact with it while you’re in a Python console; you have your real data models, and you’re looking at your real data MANDY WAITE: Excellent OK I would not have been able to come up with that answer So I’ll let you ask me another one now, because I think I have more questions for you than you have for me FRED SAUER: OK Let’s see, startup times for App Engine Java are pretty high for my applications, around 30 seconds– yeah, that is actually pretty high, and Java apps in general– even with reserved instances that we feel are underutilized, we’re still getting warm-up requests 
Any advice? MANDY WAITE: Yeah, so I think, again, this is something that is going to be covered quite extensively at Google I/O We have a talk– Matt Stephenson is going to be giving a talk on autoscaling Java So that talk is likely to cover much of this And Matt Stephenson, along with one of our other guys from Cloud Solutions, Wally Yau, wrote an article that was published on, again, the App Engine developer documentation site about optimizing the startup of Spring applications And often we find that it’s Spring applications that really take the longest time to start There’s a lot of libraries involved, there’s a lot of scanning of classes and suchlike that goes on automatically with Spring when it starts up And that can lead to quite long load times So I suspect this is probably likely to be a Spring issue At the same time, there’s also the possibility that you’re loading libraries that you may not need immediately And you may want to actually look at lazy loading those– loading them in the background– as opposed to actually loading them as your instance starts Anything that you can do to minimize the amount of time it takes to start the application is good And really, I think, that’s the key to handling things like startup requests, things that will cause a new instance to start It’s basically just to minimize the time it takes to load your application initially Only load what you need, lazy load everything Don’t load things you don’t need to load And also make sure if you’re using Spring that you follow the advice in the Google [INAUDIBLE] and in that particular document on the website That will minimize your startup times And you won’t be so worried about startup requests then You can actually load a new instance, because it won’t take that long to do FRED SAUER: Yeah All very good advice So, yeah, definitely check out the Google I/O talk All the talks at I/O are going to be recorded, so you can check those out– talks happening next week So it’s a 
great resource MANDY WAITE: That’s on Wednesday, by the way Are they live streamed? FRED SAUER: Some of the talks are MANDY WAITE: Some of the talks are live streamed, yeah FRED SAUER: I’m not sure if Matt’s talk is live streamed, but you’ll be able to catch it pretty soon Maybe something– just to say a little bit more generally about App Engine Java, especially these Spring apps or applications that have these big frameworks and long loading times So one way to think about this is that the way we’ve been building application servers for many years has been in an environment where you would spin up, say, five application servers And these would be big servers that would sit in a rack somewhere And they would take several minutes to boot up, do all the memory checks The machine would load up, it would load the application server software, then would load the specific applications And that whole process could take 10-20 minutes sometimes But then those servers would run for weeks or months,

or however long they needed to run And in that environment, what you actually want to do is do all the expensive work upfront when you’re loading the application, because you know you’re only going to do that one time, and then you’re going to have long-lived servers App Engine tries to really fundamentally deal with applications in a different way When we want to scale up very quickly to handle an increase in traffic to your website, or when we want to scale down, because there’s a little bit less traffic coming in, we really need the ability to quickly spin up new instances of your application, and then spin them back down, at a moment’s notice And so App Engine is optimized around apps that have a very short startup time Ideally, it’s hundreds of milliseconds, maybe it’s a few seconds at the outside When you start getting into a 30-second time frame, that’s really a whole different ballpark Now, if that’s something you want to do and you want to run Spring applications, and you say, I get a lot of benefit out of that, so why can’t I just run it on App Engine? You can I think a year ago we introduced some extra knobs on the admin console, specifically for these users What you’ll need to do is go into the admin console, make sure you enable billing, because that exposes all the knobs that we have And then go into your application settings And there’s a couple of settings around Min/Max Idle Instances, Min/Max Pending Latency And those allow you to tune the cost and performance of your application So if you know you have long loading requests and you just need to keep a few extra instances around to make sure you can handle those spikes, even though you have long instance startup times, you can actually crank up the min idle instances MANDY WAITE: Right? 
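In later SDKs the same knobs the admin console exposes can also be written down as automatic scaling settings in a module’s configuration file; the values here are illustrative, not recommendations:

```yaml
# Illustrative Modules-style app.yaml scaling settings, mirroring the
# Min/Max Idle Instances and Min/Max Pending Latency console knobs
automatic_scaling:
  min_idle_instances: 2          # keep warm instances around for spikes
  max_idle_instances: automatic
  min_pending_latency: 30ms      # how long a request may queue for an instance
  max_pending_latency: automatic
```

Raising min_idle_instances trades a higher bill for fewer user-facing loading requests, which is exactly the cost/performance tuning being described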
OK FRED SAUER: On the other hand, if you’re a fast starting app and you’re very cost sensitive, there’s ways to turn it in the other direction We recommend everyone start out with the automatic settings, but we do have some special knobs for that And I’m sure Matt in his talk will go into a lot more detail than we did right here, specifically around Java applications So that’s definitely something we care about, and we’ll have some advice for you MANDY WAITE: Yeah, that’s great [INAUDIBLE] give excellent advice Oh, you’ve only got one question FRED SAUER: I’ve got one MANDY WAITE: And I’ve got four FRED SAUER: And then we have a few more that came up in the moderator since we looked at these yesterday MANDY WAITE: Well, let me ask you a question FRED SAUER: OK, go for it MANDY WAITE: How popular is Google App Engine for Java? Most examples seem to be reaching for Google App Engine Python FRED SAUER: I hope that’s not the case as far as the sample mismatch It is true that we launched App Engine Python a year before the Java runtime So in 2008 it was Python, and then in 2009 we added the Java runtime, and later we added Go So there may be just a little bit of bias still left in the system, because we originally set out to just have a Python runtime But I think things are fairly evened out If you go to the Google Cloud Platform user on GitHub, there are many samples there There’s Java and Python samples; in the documentation, there should be samples in both languages I don’t recall the exact popularity, but I thought it was fairly even, maybe a 60/40 split or something like that, with some percentage going to Go But you shouldn’t feel like Java is an under appreciated language on App Engine, that’s definitely not the case And if you look at the talks at I/O, you’ll see our focus on Java users and making sure they have just as good an experience as Python users So pick the language that’s good for you MANDY WAITE: Yeah, definitely Yeah, absolutely good advice Do you have one for 
me? FRED SAUER: OK, I do So the App Engine documentation is incomplete; for example, the NDB Docs– NDB is a Python library for the DataStore– remain minimal and don’t really document possible errors Anything you can do about this? Or are there other places I should be looking? MANDY WAITE: Well, I guess one of the standard answers is to go to Stack Overflow You’re going to find documentation there– people talk about things that are undocumented or not so well documented, providing examples and suchlike on Stack Overflow Pretty much everything anybody’s tried to do is going to be documented on Stack Overflow Things that have worked, things that haven’t worked– so you can probably find good answers there If you want to contribute information as well, Stack Overflow is not just for questions It’s also for sharing information So if you actually have some useful information– something you’ve written about, something you’ve documented– you can share that on Stack Overflow as well Definitely a good place to put it And it also stops information from being spread around everywhere Otherwise we could have multiple places where people would go to find different documentation So for that kind of ad hoc documentation, Stack Overflow is probably the best place

FRED SAUER: Yeah, I think of the App Engine core documentation as kind of the reference material, the baseline Sometimes there’s Javadocs or Pydocs for the methods that may help a little bit further, but for some of these, like how do I use this specifically, Stack Overflow is a great place to go And sometimes, if we find that there’s a lot of questions around a particular area on Stack Overflow, we do go in and update the documentation, and maybe add some clarification So that’s definitely a place to contribute MANDY WAITE: Yeah, definitely Excellent, OK, so a question for you FRED SAUER: All right MANDY WAITE: OK, I’m going to read this through first, so I know what I’m asking I’ve developed an application– 12 [INAUDIBLE] But today I had difficulties setting up the IT strategy related to the maintenance of the source code Taking into account the number of releases of the Google App Engine platform, do you have any advice? FRED SAUER: If I remember right, I think you were suggesting that the person who asked this question asked a few more questions And I think you said it looked like they were maybe running Python 2.5 Yeah, so App Engine has a deprecation policy on all of our production services– I believe it’s one year– and that’s our guarantee to you that we’re not just going to change APIs on you, that when we bump up the next version of the platform, your application is going to continue working, the APIs are going to behave the way that you expected Having said that, there are incremental changes that we make Just like the Python language– Python 2.5 is not as common any more, it’s Python 2.7 At some point you need to make a switch It should certainly not be a regular burden I have applications that I deployed more than a year ago, and they are still running just fine That’s the experience of most of our users Usually what we find is that developers are making changes because they want to take advantage of new APIs or new capabilities that we’ve 
launched. There have probably been a couple of cases where we've had a change– not a breaking API change, but a new version of a capability that we've made available– and one of those is Python 2.5 to 2.7. There are some differences, and Python 2.7 provides a lot of great features that just weren't possible in 2.5, like the ability to handle concurrent requests, plus many language improvements. We also had the change from the Master/Slave DataStore to the High Replication DataStore, which has huge advantages. The Master/Slave DataStore had regular maintenance periods; there are still a few users on that, but I believe the majority of users are now on the High Replication DataStore. It's just a much better experience: when you commit your data, it gets committed to multiple data centers synchronously, and when a data center goes down, you don't even notice it. That sort of improvement is sometimes worth going through the effort of making sure that the way you're executing queries is still compatible with the new version, and you have plenty of time to make that change on your own schedule. So what we find developers do is use one version of the SDK, develop locally, and go through a couple of releases of their own; and then at a break, maybe once a quarter or so, they say, you know what, let's catch up to the latest features. Let's see what we can incorporate that we haven't yet. And then they'll bump up the version. And so you should naturally flow along with the App Engine releases. MANDY WAITE: OK. It's important to also stress what we mentioned earlier: experimental features are always likely to change from one release to another. So if you are using an experimental feature, you are probably getting an enormous benefit just from using that feature, but you may have to adjust your code when we release a new version, just to keep up to speed with us. Hopefully that's not a big impact, though– not something that causes huge– FRED SAUER: No, it's generally polish
around the edges while it's in the experimental phase. Or we're still trying to collect feedback from the community and find out new use cases; maybe there are ways we can improve the API a little bit. And so you've seen that, for example, with the Search API: from the very early days to now, there have been a couple of changes, but generally things have just gotten better, and all of our users have been quite happy with that. MANDY WAITE: All right, excellent. OK, so we've got two questions here. You can ask me one if you want. FRED SAUER: OK, well, I have a couple here too that I can ask you. MANDY WAITE: Oh, I see, yeah, we can do it. You're putting me on the spot now. FRED SAUER: I am. Let's see. Please share tips on rate limiting requests without having to hit the DataStore.
A Memcache-based counter could work, but item expiry time is a little bit unpredictable. How would you go about rate limiting individual users? MANDY WAITE: Interesting. I don't know– that's probably the kind of thing that I have not actually run across yet, so I'm going to punt that one to you as well. FRED SAUER: OK, well, I think the person here suggested a really good place to start, which is a Memcache-based counter. Memcache uses an LRU– least recently used– cache. And that means that for the users who are most heavily accessing your application, those Memcache keys are more likely to actually be in the cache; and those that haven't been around for a while are further down the list, and those will expire. So you naturally get the keys you care about most staying in memory. If you're looking at maybe even more aggressive throttling– where maybe you have some abusive users, and you really want to clamp that behavior down– you can take it up a notch and do some throttling within the instance. So you could introduce a data structure, a global static variable in your code, that for each App Engine instance keeps track of the hot users, or users recently seen, and maybe throttles those individually. And then if there's a miss there, you go to Memcache. What you're doing is just adding another tier: at the bottom you have the DataStore, where there are transactions happening, and there's this limit of one transaction per second per entity group that you have to deal with. With Memcache, you can go much faster. And with the instance cache, you can go even faster, because you're just in memory– there's no cross-network call to be made. MANDY WAITE: OK. FRED SAUER: So, yeah, I think Memcache is a great place to get started, and you can use instance caching if you need to as well. Right, you had another one, you said?
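As an aside, the two-tier throttling Fred describes can be sketched roughly as follows. This is a minimal, assumption-laden sketch: plain dicts stand in for both the per-instance cache and Memcache (on real App Engine, the second tier would use `google.appengine.api.memcache`), and the window/limit numbers are made up for illustration.

```python
import time

# Stand-ins for the two cache tiers. INSTANCE_CACHE models a module-level
# global (one copy per App Engine instance); FAKE_MEMCACHE models the shared
# Memcache tier that all instances see.
INSTANCE_CACHE = {}
FAKE_MEMCACHE = {}

WINDOW = 60   # seconds per rate window (illustrative value)
LIMIT = 100   # max requests per user per window (illustrative value)

def _bump(store, key):
    """Increment the per-user counter in `store`, resetting stale windows."""
    count, window_start = store.get(key, (0, time.time()))
    if time.time() - window_start > WINDOW:
        count, window_start = 0, time.time()  # start a fresh window
    store[key] = (count + 1, window_start)
    return store[key][0]

def allow_request(user_id):
    """Return True if this user is still under the rate limit."""
    # Tier 1: in-instance counter -- no network call at all. Hot (abusive)
    # users get rejected here without ever touching Memcache.
    if _bump(INSTANCE_CACHE, user_id) > LIMIT:
        return False
    # Tier 2: the shared counter (Memcache in production).
    return _bump(FAKE_MEMCACHE, user_id) <= LIMIT
```

The design point is the tiering itself: the DataStore (one transaction per second per entity group) is never touched, Memcache absorbs the cross-instance counting, and the in-memory dict absorbs the hottest keys.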
MANDY WAITE: I do, indeed, yeah. Last one– last one from the list; we have a few more over there. So, for GAE Java, there is detailed documentation for the low-level DataStore API; however, there is less, and incomplete, documentation for how to map entity relationships with JDO and JPA. The DataNucleus documentation is detailed; however, there's no easy way to tell if GAE supports a feature. So, there's a question in there somewhere. FRED SAUER: Yeah, so the question is really, I think, about giving the full details and capabilities of JDO and JPA on App Engine. Maybe I should put this a little bit in context. When we launched the Java runtime on App Engine, we looked at the frameworks and abstractions people were using; JDO and JPA were very popular, and so we did a lot of work to make sure that those were properly supported on App Engine. We also felt that users really benefited from a SQL-like language, and so we launched GQL as a way of accessing the DataStore. And what we found over time, actually, is that after a little while developers feel much more productive when they're working much closer to the low-level APIs of App Engine, or to some abstraction that sits on top of those low-level APIs. So in the case of Python, we had the low-level Python DataStore API. We've now iterated on that, and we have NDB, the new database, which is the way we wish we had written it the first time: with Memcache built in, and much better usability. On the Java side, we've actually had the community step up and produce an [INAUDIBLE] framework called Objectify. And there are others out there– there's a developer in Japan who created something called Slim3, another abstraction layer for Java. And they do a really good job of abstracting away the DataStore a little bit, while still staying very close to the native performance characteristics and doing things the right way. Objectify, I know, also has Memcache built in. And it just lets you work with POJOs– plain old Java objects. So you have
a Java object with a bunch of properties– getters and setters– and you can just take that object, push it into the DataStore, and get it right back out. And that feels a lot more natural when working with App Engine than JDO and JPA. And so the way to look at those technologies right now is: if you're a big JDO and JPA fan, or you just use those in your environment and you'd like to continue, please do– they are there for you.

But if you're new to App Engine, or you're at all unfamiliar with JDO and JPA, by all means use either the low-level APIs or something like Objectify in Java, because you'll just have a much better experience. Those were written from the ground up for App Engine, whereas JDO and JPA were written in a world where SQL reigned. And it's a little bit of an unnatural fit– we got that square peg into the round hole, but we had to use the hammer a little bit, and so the corners are a little bit rounded off. MANDY WAITE: OK. I come from a Java EE background, so I used to have a very big hammer. FRED SAUER: I'm just visualizing the big hammer. Let's see, a question here for you. The rate at which you can write to the same entity group is limited to one write per second per entity group, and, this developer writes, that seems really low. Imagine a Facebook application with 200,000 daily active users, which means something like 20,000 concurrent users at peak. So they're kind of contrasting these. They say, well, on the one hand I have 20,000 concurrent users all making requests at the very same moment, and you're telling me one write per second? Where's the disconnect?
MANDY WAITE: I think it depends on what your entity group was actually designed to represent, and how extensively you've modeled it– the way you've modeled your data. If your entity group is really specific to the particular user that's making the call, then you won't really have to worry about it; one write per second will be perfectly adequate. But if you've kind of sprawled it a bit, and the entity group touches multiple users, then you're likely to have some contention in that entity group, because you have multiple users banging away at it at the same time. So really it's best practices for modeling your entities that will avoid that kind of issue. If you have 20,000 concurrent requests, they're likely to be accessing 20,000 different entity groups, so you wouldn't have an issue. FRED SAUER: And that should be fine. If you want to do two million concurrent users on two million different entity groups, that's absolutely fine. MANDY WAITE: Absolutely. FRED SAUER: So think about entity groups as, for the most part, your unit of transactionality. So if you need some data related to a given user– like a user and their achievements, for example– and you have that stored in three or four different entities, you can put them all in one entity group, and then the App Engine DataStore will make sure that you can only have one transaction in flight at a time for that entity group. So generally what we see is that each user becomes an entity group, or each order in an order-entry system, or each customer in a CRM system becomes an entity group. We actually also have– we didn't have this initially, but we introduced it about a year ago– something called cross-entity-group transactions, or XG transactions. That allows you to transact across up to five different entity groups in a single transaction. So it used to be the case that, before we had XG transactions, there was a little bit of a trade-off between: I want to make my entity groups bigger, because I want to do
transactions, but I need to make them smaller to have the right throughput. And that was sometimes a challenge; there were in fact libraries that sprang up to try to work around it. The classic example is: I have two bank accounts, and I want to move $10 from this bank account to that bank account. I need to do that within a transaction, because if I deduct $10 here and then add $10 there, and something goes wrong in the middle, the $10 has disappeared. Or if I add the $10 first, and then remove $10, I've created $10 out of nothing. MANDY WAITE: I like that one. FRED SAUER: Well, let's do that one. With cross-entity-group transactions, that's no longer a problem; you can actually make that change in a single transaction. So really, I think, this is all about structuring your DataStore in such a way that you do no more than one write per second per entity group. Another classic way people run into this is they'll do something like create a site counter: they want to know how many visitors came to the website. MANDY WAITE: Exactly. FRED SAUER: Favorite example, right? And every time a user comes to the website, they increment the counter. This is what you would do in a SQL database– you would increment a particular row– and the problem is that only one person at a time can update that one record, because that one record is on disk somewhere, and there's some server responsible for it, and you can only touch that record one transaction at a time. And that's really limiting for the number of things you can count. So the typical strategy you use with the App Engine DataStore is to create something called a sharded counter, where you partition the counter into multiple counters. So instead of, say, one counter, you split it out
and say, OK, let's make five counters, or 50– some number, and it's configurable. And now every time someone comes to the website, I'm going to pick a number from one to five at random, and then I'm going to update that counter. So let's say it's counter three this time; the next visitor comes in– oh, it's counter three again; oh no, it's counter two. And you say, well, that's weird, because now your total page view count is split up across five different counters. But that's not a big deal, because you can easily select five numbers and add them together– that is a very easy task for a computer to do. But by doing so, you've just increased the throughput of your web counter fivefold. If you need 50 transactions per second, you go 50-fold, plus a little bit of buffer– so maybe you go 60, or something like that. It's very easy to shard your counter out as far as you need to go. MANDY WAITE: There are actually examples on the website, aren't there? FRED SAUER: Yeah, there's an article. So with a little bit of data modeling you can handle as many concurrent users as you like. And, hopefully, you do a lot more than 20,000– but 20,000 is pretty awesome; I wish my website had– MANDY WAITE: 20,000 is pretty good, yeah. Well, I guess the message here really is: avoid shared mutable state. I love "shared mutable state" as a saying. Every time you have shared state that is mutable, you're going to run into problems with concurrency. So just keep it isolated. And– FRED SAUER: If you don't do that, you're going to have a bad time. MANDY WAITE: You're going to have a bad time, absolutely. FRED SAUER: Do you want to ask me these ones? MANDY WAITE: Definitely, yeah. This one, I think, everybody is interested in. This person is asking: what are the different definitions of a frontend versus a backend instance? I'm still not very clear what the difference is between frontends and backends. Is that something you can explain to everybody?
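As an aside, the sharded counter Fred just described can be sketched as below. This is a simulation only: on App Engine the shards would be DataStore entities in separate entity groups (as in the "sharding counters" article he mentions); here a plain dict stands in for them, and the shard count is the illustrative five from the discussion.

```python
import random

# Five shards, each standing in for a DataStore entity in its own entity
# group. Writes are spread across them, so total write throughput is
# NUM_SHARDS times the one-write-per-second-per-entity-group limit.
NUM_SHARDS = 5
shards = {i: 0 for i in range(NUM_SHARDS)}

def increment():
    # Pick one shard at random and bump only that one.
    shards[random.randrange(NUM_SHARDS)] += 1

def total():
    # Reading the counter is just summing the shards -- the "very easy
    # task for a computer" from the discussion.
    return sum(shards.values())
```

Need 50 writes per second instead of 5? Raise `NUM_SHARDS` to 50 (or 60, with a little buffer); the read path is unchanged.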
FRED SAUER: Yeah, so, actually, you mentioned this in the introduction– the two different notions of a backend. You talked about mobile backends for Cloud Endpoints, and then App Engine has a specific definition of what a frontend is and what a backend is. And, frankly, customers have told us that's confusing, and we found it a little bit limiting in what developers want to do with their applications. So if you have been reading through the release notes carefully, you'll notice that we've already started talking about a new way of classifying the components in an application, called servers. What we're going to do is get rid of the two special cases– the frontends, which do things one way, and the backends, which do things slightly differently– and we're just going to say you can have as many different types of servers as you want. You can call them frontends and backends; maybe one of them is your mobile API, one is your admin API, and a third one is something experimental. Maybe you have a component that does text message processing, another component that does real-time interactive stuff, and a third one that does reports. Every application is different, but when you get beyond a trivial application, you often want to logically split your application into pieces, or you may want to deploy the different pieces separately– like your mobile backend that's handling all your mobile requests; maybe you have a new version of that. And with this we'll allow you to see separate traffic graphs for each of your server versions, and your logs will be separated out, so your mobile backend logs and your static image logs can be separate. So all of that's coming– just check out the release notes and you'll see it. And forgive us for calling them frontends and backends; we'll make that go away. You have an application, you split it into logical pieces, and we'll allow you to configure the different pieces the way you want, whether you want different instance
sizes, different logs, et cetera. MANDY WAITE: So things like what a [INAUDIBLE] currently has– autoscaling– that would just be a property of a server instance, of a type of server? FRED SAUER: Exactly. So you say: for my mobile backend, autoscaling is on; for this batch backend, where I'm only going to do certain administrative tasks, I actually only want one instance of the server running, and it's just going to be crawling some database or doing some background work. And you can have variations in between. MANDY WAITE: Excellent, OK, sounds good. FRED SAUER: So we're making it easier and we're simplifying the terminology. MANDY WAITE: Fantastic, thank you. So you did explain it. FRED SAUER: I hope so. All right, question here: what are the best practices for migrating from the Python DB API to the Python NDB API? MANDY WAITE: OK, so I've got a confession to make here. FRED SAUER: OK. MANDY WAITE: I'm a Java girl– I come from a Java background– so I don't really deal a huge amount with NDB. I've actually looked through the documentation, and it looks really cool, but I wouldn't know where to start. FRED SAUER: OK, well, I have been calling myself a Java guy, but in the last year and a half or so, I have to
confess, I've been becoming somewhat of a Python fan, and I've been taking my own projects and migrating them. So I think I can say a few words about this. The first thing is, you don't have to migrate all at once. If you have an application with many different Python classes, and you've got models all over the place, you can actually migrate a class at a time, a model at a time; you can intermix the two APIs. And the good news is that along the way, if you've done things like Memcache caching by hand, there's a lot of code you can delete, because NDB just does those things for you. The syntax between the two is a little bit different, so one of the first things you might do is go hunting for a cheat sheet I've seen that has the DB way and the NDB way side by side. In the beginning that's very good for just mapping– like, oh, I need to change this around and do this. But you'll find that they're very, very similar. And like I said, just do one model at a time, one class at a time; test it, make sure it works, and then keep going. Maybe what you do is start out with any new models that you create– do those in NDB, so you start to get really familiar with it– and then migrate your old stuff. But as with any real-time application, where you're always serving requests, think about ways of doing things incrementally, rather than in one big step. I don't think there are really any other tips for people migrating. MANDY WAITE: What about people getting started? So, when you're starting out developing an App Engine application using Python, is it immediately obvious in the documentation and suchlike when you're developing with DB and when you're developing with NDB? Is it easy to get pushed down one path?
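As an aside, the DB-versus-NDB cheat sheet Fred mentions boils down to mappings like the following. This sketch is from memory of the two App Engine Python APIs and only runs inside the App Engine SDK, so treat the model and property names as illustrative.

```python
# Old ext.db style:
from google.appengine.ext import db

class AccountOld(db.Model):
    email = db.StringProperty()

# query: AccountOld.all().filter('email =', addr).get()

# New ndb style -- the same model, but queries are expressed with class
# attributes instead of strings, and Memcache caching comes for free:
from google.appengine.ext import ndb

class AccountNew(ndb.Model):
    email = ndb.StringProperty()

# query: AccountNew.query(AccountNew.email == addr).get()
```

The property declarations are nearly identical, which is why migrating one model at a time works: most of the effort is in rewriting the query sites and deleting hand-rolled caching code.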
FRED SAUER: You can accidentally flip between the two– both are in the left nav of the documentation– but it's very clear in your code, because in one case you're importing db from google.appengine.ext, and in the other you're importing ndb. So there's no confusion there. MANDY WAITE: OK. FRED SAUER: But, yeah, I do find, when I'm clicking around the documentation, I sometimes end up looking at the wrong one. You'll get through that. MANDY WAITE: So I guess if people are reading books on App Engine development– Python-based– they may be looking at older examples that use DB, and it's probably worth investing some time to actually look at the differences, at how you would do things in NDB, rather than just typing in the samples as DB code. FRED SAUER: Absolutely, yeah. If you're building anything new, you should only be using NDB. The author, who wrote both, basically rewrote it and said: this is how it should have been done the first time. So I can see a future where DB is either deprecated or tucked away in the documentation for very few to find, and NDB is just the way you should be writing applications. And we've actually just gone through a process with our sample applications– so if you look again on GitHub, on the Google Cloud Platform page, hopefully right now all the samples have been updated to NDB. So if you want good samples of best practices and how to do that, definitely go to our GitHub page. MANDY WAITE: Excellent, OK. FRED SAUER: Let's see, we have another question here: what's the best strategy to respond to requests that probably take more than 60 seconds to process? MANDY WAITE: Was that part of the question?
So, make a backend process to process the request– that's the best thing to do. You can do these things out of band, outside of the actual request itself. So the user makes a request, and you need to service that request within 60 seconds; otherwise you'll hit a deadline exceeded exception, in both the Java and Python runtimes. That's the request timeout we always have in place for what we currently call frontend instances. There's no such limitation on backend instances– and, per what Fred said about moving to the server model, if you choose to use a standard instance, one that's not autoscaled, those limits won't apply; it won't automatically hit a request timeout. So the best thing to do, if you need to do heavy lifting while answering a user request, is to find a way to do it asynchronously, out of band from actually responding to the user. Take a long-running task: say a user makes a request that will result in generating 100 e-mails to different people. What you need to do is respond to the user pretty quickly, and offline, in the background, do the e-mail processing– using a backend instance, for example. It doesn't even need to be a backend instance: you could push it onto the task queues, and then you can start using things like Compute Engine and suchlike. Compute Engine has the ability to pull tasks using pull queues, so you can use Compute Engine instances to service those
requests as well. But, yeah, basically: take it away from the frontend instances and push it into the background somewhere, using a backend process or Compute Engine. FRED SAUER: Yeah, task queues are definitely the way to go. The pull queues you mentioned are great for working with external systems– doing off-App-Engine work, like on Compute Engine, though you could really do it on a server that you have sitting in a rack in your office, if you have some special processing that you need to do. And then push queues are a great way of enqueuing work asynchronously. Those requests come back to your application as if they were traffic, but it's internal traffic, so App Engine will scale up and down as you create more requests. And then you have a 10-minute deadline instead of the 60-second one, so you can easily make those long-running URL fetch calls or do more complex processing. MANDY WAITE: OK. FRED SAUER: All right: when a user uploads a photo, or a movie or an audio clip, how do you store it so it can be statically available to your application? So it looks like they'd rather not have a dynamic request handler, where they have to handle the request and figure out what [INAUDIBLE] to serve, what file to serve. How do you make it available statically, so the user can always get to it without involving the App Engine runtime?
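As an aside, the enqueue-and-respond pattern from the previous answer can be sketched as follows. This is a simulation: a plain deque stands in for the App Engine Task Queue API, and the e-mail example and handler names are illustrative, not from any real service.

```python
from collections import deque

# Stand-in for the task queue; in production this would be the App Engine
# Task Queue API (a push queue, or a pull queue drained by Compute Engine).
task_queue = deque()
sent_emails = []

def send_emails(recipients):
    # Worker: in production this runs out of band, under the task queue's
    # 10-minute deadline rather than the frontend's 60-second one.
    for r in recipients:
        sent_emails.append(r)

def handle_request(recipients):
    # Frontend handler: enqueue the slow work instead of doing it inline...
    task_queue.append((send_emails, recipients))
    # ...and respond to the user immediately, well inside the deadline.
    return "202 Accepted: %d e-mails queued" % len(recipients)

def drain_queue():
    # Stand-in for the queue infrastructure invoking the workers.
    while task_queue:
        fn, args = task_queue.popleft()
        fn(args)
```

The key property: `handle_request` returns before any e-mail is sent, so the user-facing request never risks the deadline exceeded exception.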
MANDY WAITE: So, correct me if I'm wrong, but I guess you would probably find a place where you would actually store these photographs and other images and movies– that kind of blob data– and then you would make sure that you declare that folder, that directory, to be a static directory within the declaration of the application. That way, all of it will be pushed out to our edge servers. FRED SAUER: Yeah, you could do that for predefined content. If you're serving dynamically uploaded content, though, you'd probably do that through Google Cloud Storage. MANDY WAITE: Yes– basically, store it in Google Cloud Storage, and then you can provide a URL back to the user that accesses it directly from Cloud Storage, as opposed to going through your App Engine instances. FRED SAUER: Exactly, yeah. MANDY WAITE: Yeah, good point. FRED SAUER: And, yeah, we have a new Google Cloud Storage client library that you can find in the groups. It's basically a new library for accessing Google Cloud Storage very easily and efficiently from App Engine. So I'd probably use that to get the files into Cloud Storage, and then, just like you said, [INAUDIBLE] the URLs directly to that. MANDY WAITE: And it has the additional benefit of not actually hitting your instances, so you won't incur those costs, because you'll be serving that data directly from Google Cloud Storage. FRED SAUER: Yeah, you're just paying for bandwidth, not CPU. Yeah: if I built my start-up on App Engine, will Google help me with publicity?
MANDY WAITE: Well, I guess it depends how successful you are. FRED SAUER: Depends on how cool an idea you have. You've probably seen a number of customers with interesting stories on the App Engine blog. I think if you came to us with a really cool story– just reach out. We love to tell these stories and share them with our users. So that's probably– yeah. MANDY WAITE: Tell us about your application. Tell us about all of the cool and interesting things you're doing, and we can look into it for you. FRED SAUER: And also, you and I both do presentations; we often share partner stories and things like that. So, definitely, tell us about them– we'd love to feature them. MANDY WAITE: Yeah, definitely. FRED SAUER: Well, that was the last question on our list. Looks like we got to pretty much everything; there were a couple that we skipped over due to time. I want to thank all the viewers for visiting today. Send us more questions– if you really liked this, let us know, because we can definitely do it again. I think it was fun answering questions. Thank you, Mandy, for doing the presentation at the beginning; it was very helpful. MANDY WAITE: Thank you, Fred. And don't forget Google I/O next week. Follow it live if you possibly can, if the sessions you are interested in are actually being streamed live; if not, follow them on YouTube afterwards, or on the Google I/O sites. The links, as I say, are in the YouTube description of this recording. And also, if you can, participate in Google I/O Extended– that's really one way of getting involved in Google I/O outside of the event itself. FRED SAUER: Yeah. All right, thank you. MANDY WAITE: Thank you so much. Cheers. Bye-bye. [MUSIC PLAYING]

