So, welcome to the Jenkins User Conference, or user, singular. My name is Andrew Bayer, and this is Seven Habits of Highly Effective Jenkins Users. I'm the build tools architect at Cloudera, a contributor to Jenkins core, and author of and contributor to a number of plugins, since, I think, spring of 2008; a bunch of us actually figured that out when we showed up for Jenkins' tenth anniversary earlier this year. I'm also a member of the Apache Software Foundation, and I volunteer maintaining Apache's Jenkins instance at builds.apache.org, among various other things.

So what's this talk about? These are lessons that I, and other people I know, have learned over the last five to ten years of maintaining large Jenkins instances, often more than one of them, in terms of what the best practices and bad ideas are. At Cloudera, for example, we've got five Jenkins masters, each of which has over a thousand jobs, with dozens running all the time; we run a fairly ridiculous number of jobs at this point. At builds.apache.org we've got a different kind of problem: we've got a lot of jobs, but more importantly we have a lot of different project teams doing things in very different ways, all on the same master.

Before I start I should give my usual warning: your mileage may vary. I believe the habits I'm going to talk about are valuable on every Jenkins instance, but some of them are more relevant at large scale, for more complex jobs, for more production-critical workflows. At the core these are my recommendations, based on my experiences and what I know; there is no one-size-fits-all solution, no one answer. So do what's best for your situation and use these just as a starting point.

The first habit is, I think, the most critical one: make your master stable and restorable. If your master is not stable, if your master is not restorable, then when it goes down your users can't build anything, you can't ship, you can't test, and that's obviously not good. And restorable because machines die, and with Jenkins storing everything on disk, a disk failure means you can end up losing a lot of history. You can also inadvertently `rm -rf /var/lib/jenkins` (I've done it), and you need to be able to recover from those emergencies. Anything that's of production-level importance needs to be production quality in its restorability.

The first part of that is to always use the LTS releases, the long-term support releases for Jenkins. The trains are created about every twelve weeks, roughly every three months, and the active train gets updated three times before the next one starts. If you're familiar with Ubuntu's LTSes, like 12.04 and 14.04, it's a similar kind of model: the LTS releases get fixes backported to them from what's going on on the master branch, but not features, so it's a more stable, more reliable version that isn't going to have the same kind of compatibility-breaking or functionality-breaking changes. More importantly, the LTS releases go through a lot of testing before they go out. This started with a really big testing matrix that was run through at Red Hat, and it's grown since then, so we validate that expected behavior continues to behave the way we expect on the LTS releases before they go out the door. I think that's really important. The normal Jenkins releases: it's great that we release weekly, I believe in continuous delivery, but they're bleeding edge. The only reason you should ever run a brand-new Jenkins release is that there's something in it you specifically need for your use case, and even then you're probably going to get bitten by bugs you did not expect, in completely different areas.

Likewise, be conservative about upgrading plugins. Plugins can change a lot without you actually being able to tell that they've changed. Part of that is that we're not great about updating release notes (there's actually some work going on to try to automate that and base it more on the Git history), but on the wiki you just see "oh, there's a new release," and even when the changes are fairly well explained, what they mean is not necessarily explained well. Backwards compatibility can break, and it can break badly. The example I always cite is the email-ext plugin. I'm willing to bet most of you who have Jenkins instances have the email-ext plugin; it's about as core as a plugin can get without actually being part of the core. It drastically changed how its recipient and trigger settings worked in early 2014, so that all of a sudden our emails started going to way more people than they were supposed to, including a lot of people who didn't work at Cloudera. A fun advantage of open source. So you need to be careful about your plugin updates, because you may pull in things you didn't expect, you may break your existing workflows and existing tools, and new features in plugins can be very unstable and problematic in the wild. Just because somebody wrote a new feature and it works great in their use case and their environment doesn't mean it will keep working in yours. Always be conservative: only upgrade a plugin if you need to. That's my basic opinion. I'll upgrade if there's something very specific I need in there, and even then I will test it a lot myself before I upgrade.

Speaking of that, you should have an upgrade testbed. I'm not as good about this as I should be, because it can be hard to
build a good upgrade testbed. What you want is an environment where you can replicate some set of your jobs' behavior, some coverage of your plugin usage, so that you can verify that your critical workflows will keep working with your upgraded plugins and your upgraded core. If at all possible you really want to do this at scale, with a decent number of jobs and a decent number of builds and slaves and so on. You also need to make sure these changes get a chance to bake: give them a few days to run before you assume everything's working. Just because a build ran once successfully doesn't mean it's going to run successfully under every condition it's going to hit on a regular basis, right? If your build is successful nine times out of ten normally, it could turn out that the only way things go horribly wrong is when the build fails, so you kind of need to make sure the build will fail at some point to see what it'll do.

And you should back up. For me this one was not very intuitive, I just didn't think about it very much, but there are a lot of different options and ways you can back up Jenkins. Since Jenkins is serializing straight to disk, it's both simpler and harder: there's no one right answer for how you should back up. I've looked at a number of backup plugins for Jenkins and I don't love any of them; some of them can be a little intrusive and cause stability problems. The thinBackup plugin is the best I've run into, but it's not my primary backup means. I use the latter two examples I've got here. We have an rsync backup of the full Jenkins tree that we run every night, but that means I need disks as big as my Jenkins master just to back that stuff up, and I don't care about the build artifacts or the build history; I care about the configuration. So the more important thing, I think, is backing up your configuration files. Previous versions of this slide deck are already up on SlideShare, and this one will be up after the talk, and there's a link there to a script I wrote that finds the relevant config files and checks them into Git, just to use that as a source of backup. That way I know I can recreate all the jobs and the configuration just by checking that out and copying it somewhere. I don't keep the build history, but the build history shouldn't be that important.
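His actual script is linked from the slides; as a minimal sketch of the same idea, copying the global and per-job `config.xml` files out of `$JENKINS_HOME` into a directory you keep under Git, it might look something like this (the paths and `find` depths are assumptions, not his script):

```shell
#!/bin/sh
# Sketch: copy Jenkins config files (not builds or artifacts) into a backup
# directory that you keep under git. Paths and depths here are assumptions.
backup_jenkins_config() {
  jenkins_home="$1"; backup_dir="$2"
  mkdir -p "$backup_dir"
  # Top-level config files: main config, plugin configs, and so on.
  for f in "$jenkins_home"/*.xml; do
    [ -f "$f" ] && cp "$f" "$backup_dir/"
  done
  # Per-job config.xml files, preserving the jobs/<name>/ layout.
  find "$jenkins_home/jobs" -maxdepth 2 -name config.xml 2>/dev/null |
  while read -r f; do
    rel="${f#"$jenkins_home"/}"
    mkdir -p "$backup_dir/$(dirname "$rel")"
    cp "$f" "$backup_dir/$rel"
  done
}
```

Running something like `backup_jenkins_config /var/lib/jenkins ~/jenkins-config-backup` nightly, followed by a `git add -A && git commit` in the backup directory, gives you exactly the restorable configuration history he describes, without the disk cost of the full rsync tree.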

shouldn’t be that important don’t use the maven job type I love maven I know that a lot of people hate it they’re wrong but the maven job type in Jenkins seems really elegant I loved it at first I spent a lot of time working on getting it to do better parallelism getting it to do incremental builds do all these nifty things then I realize it’s a giant pain because there’s a number of things that are different about the maven job type than the freestyle or matrix job type that will result in some plugins not working right that will make you vulnerable to issues with lazy loading you end up loading a lot more builds because each individual module counts as a build it just gets a can go exponential really quickly and really hurt you if you’re vulnerable to performance so unless you have a really good reason why you have to use the maven job type and not just maven build steps and a freestyle project don’t use it it seems niftier it’s not worth the cost habit number to break up the bloat if you’ve got a lot of different teams and projects you should have more than one master there’s a number of really good reasons for that but the main ones of that if you’ve got multiple teams they’ve got multiple needs they’re going to need different plugins they’re going to need different plug in upgrades they’re going to you’ll eventually hit some performance problems etc so if you split it out and there’s you can split that by team by function by control there’s a lot of different ways you can do that you’re going to get more stable Jenkins masters and it’s going to be a lot easier when you need to make changes to your Jenkins masters there is a plug in the parameterize remote trigger plugin that theoretically can communicate between Jenkins masters that’s an area that I think really needs to improve so that you can really do orchestration across multiple masters I think cloudbees has some stuff in the works in that direction but I mainly working the the free and open source area 
Break up your jobs, not just your masters. Obviously modularization and reuse are good in programming, and they're just as good in Jenkins. Multi-job builds, a workflow made out of a number of different jobs that run together, allow you to reuse a generic job when you've got a particular chunk of behavior that happens more or less the same way for, say, ten different projects: they all need to build RPM packages, they all have the same entry point for building them, they're just building from different source packages. So have a generic job they can all call that does the same thing. But even if you can't go generic, just breaking things into multiple jobs allows you to restart partway through, so you're not in that wonderful situation where a random EC2 instance falls over nine and a half hours into a ten-hour build and you can't restart it without going all the way through again. If you've got a multi-job build that you set up correctly, that you've designed with this use case in mind, you can restart it partway through and continue.

There are any number of tools for breaking up your jobs. I personally use the Swiss Army knife of the Parameterized Trigger plugin and the Conditional BuildStep plugin a lot. It's lower level, it's not as easy to configure as something like the Workflow plugin that all the CloudBees people will always talk about, but it fits more smoothly into the Jenkins UI, it easily works with all your other plugins, and it is ludicrously powerful. But I think the direction we're going is more the Workflow plugin: the idea of a multi-job, multi-step workflow as something native to Jenkins, that Jenkins actually understands and doesn't just interoperate with. You do have to define your job with a DSL; it's a fairly simple DSL, it's not a problem, but it isn't quite as nice as just going through the UI.

Third habit: automate Jenkins tasks. I'm lazy, and I think most of us should be lazy. If you're not lazy, get lazy. We all work in the computer industry; we don't do things by hand, so why should you be doing things by hand in Jenkins? You can get deep into Jenkins and its own controls and internals.

Using either the script console or the Scriptler plugin, you can use Groovy scripts to really get deep into Jenkins and its object model: make changes to your jobs, find bad patterns in job configuration (I can find when people have misconfigured the email plugins). And you can use the Scriptler plugin to store and share those Groovy scripts, so you can reuse them on that master, reuse them on other masters, and use other people's scripts. It's very handy for sharing and reusing utility scripts that give you greater power over Jenkins.

Some examples I've used from the Scriptler catalog, which is a set of publicly available scripts you can get through the Scriptler plugin: you can disable and enable jobs matching a pattern, so that when a release is done I just disable all of the jobs for that release and they don't randomly start building again. You can clear the build queue when something goes insane and spawns 150 builds that are all waiting, and you don't want to have to click through each of them by hand. You can tweak the log rotation, the discard-old-builds configuration, across all your jobs, so you can dictate a policy, say "you don't get to have more than 15 builds archived," and enforce that across all your users' jobs. You can turn off SCM polling every night, or during the day, or whatever permutation, so that you're polling when it's relevant and not running builds when no one really cares. And, probably my favorite, you can actually run the log rotator and make it discard all the builds that, according to the rules, should be thrown away, without having to run a new build of each of those jobs, which is great when you just need to purge stuff.

Related to that: system Groovy build steps. With the system Groovy plugin you can run these kinds of scripts as part of your jobs, so you can actually have your jobs talk to and control the Jenkins internals.
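To give the flavor of those script console and Scriptler scripts, a sketch of the disable-jobs-matching-a-pattern example might look like this; the naming pattern is hypothetical, and it assumes it runs inside Jenkins (Script Console or Scriptler) with full permissions:

```groovy
import jenkins.model.Jenkins
import hudson.model.AbstractProject

// Hypothetical naming convention for a release line being retired.
def pattern = ~/^release-1\.2-.*/

// Walk every project on this master and disable the ones that match,
// so finished-release jobs can't randomly start building again.
Jenkins.instance.getAllItems(AbstractProject.class).each { job ->
  if (job.name ==~ pattern) {
    job.disable()
    println "Disabled ${job.name}"
  }
}
```

The same shape, a loop over `Jenkins.instance` items with a small action per item, covers most of the other examples too: clearing the queue, forcing log rotation, toggling SCM polling.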
obviously because you’ve got you’re giving your job full access to Jenkins but it’s a great way to pilot a concept for a plugin and see how it would work and without actually having to go through all the process of writing a plugin and all the pain when you have to upgrade or do things that build on to existing plugins or that don’t quite get big enough to be worth their own plugin we use these heavily for tricking the Jenkins cloud provisioner to provision things a little earlier when we know they’re going to be needed or for checking our build history to see why the build failed and automatically retrying it on a when we’re talking about a large multi-node that sprawls across a lot of things looking for patterns in the history and auto retrying what appropriate you can also run script ler steps as build steps so that you can do the same kind of thing there and you can generate jobs from code now we’ve I back when I was always using the rest api or the CLI to basically write the xml or pull down xml for an existing job tweak it post it networks but you can also define your whole job or work flow of multiple jobs in a job dsl two examples of that are my favorite the job dsl plugin which is a full groovy dsl for defining a Jenkins job with the groovy representing the dsl represents the Jenkins object model so it does require you to actually know what the Jenkins job configuration is so it is definitely a power tool but you can use it to do things like okay you can generate your jobs from a build step so we have a fairly complicated script and whenever we branch for a new release we have 30 different components that also need new builds when we do that one script gets run when we do that and it creates and updates all of their appropriate jobs without any direct intervention so that we’re able to propagate changes change our version numbers and all that by just changing things in one place and having a job that pulls against that script and runs it when something’s 
change and there is a great talk by Daniel Spilker letter today about this plugin and you should go see it and I’m not just saying that because i love the plugin maybe i’m saying that because i love the plugin on the other side there’s the dot see i plug in if you’re familiar with Travis you’ve probably seen that Travis yamo where it kind of declarative fairly simple jobs you don’t have a lot of power but you just just declare your job

You don't actually have to go to the UI, you don't have to configure anything; it'll just happen. The DotCi plugin came out of Groupon, and it's very similar: there's a YAML file, you define your job in there, and it will automatically generate the Jenkins jobs for you.

And then there's the Workflow plugin. As you may have seen, it's a way to define multiple complex steps in just one relatively simple DSL. I have not really used it yet, I'll be honest. I was wary of it for a while: it didn't run on LTSes for a while, and it's a new job type, so you can't just use your existing jobs, you have to start over, and when you've got a big enough technical debt, or just inertia, behind your existing jobs, you can't really change everything over. So I'm not going to recommend you don't use it, but I'm not going to necessarily recommend you use it either. I think it's worth investigating, and I'd like to hear what people's experiences with it are, but I don't feel qualified to say anything about it yet.

Habit number four: tend your plugin garden. So, this is a Simpsons reference.
"Dear Mr. Jenkins, there are too many plugins these days. Please eliminate three hundred. P.S. I am not a crackpot." There are over a thousand plugins. That is too many plugins. Plugin discovery is hard: it's not easy to figure out what the right plugin for your use case is, whether that plugin can cause problems, or how plugins will interoperate. So you should be careful about your plugin usage. You should not install a plugin unless you know you're going to use it. Don't just install plugins on your production master because maybe you'll use one or it looks interesting; use a testbed for that, and figure out how it would work for you before you install it. There's a lot of duplication of functionality across plugins, so figure out which one does what you need and install that one. Don't get, like, eighty percent of the functionality from one plugin and then add another plugin that has its slightly different subset of that eighty percent; try to settle on maybe not getting everything you need, but with as few plugins as you can use. Plugins can cause real instability in areas you don't expect, and they can add a lot of load time and run time for jobs. So if you're not going to use a plugin, why take a hit from it, why have risk from it? When you do remove plugins, well, it's easy to uninstall or disable them in the Jenkins UI, and then after you've restarted, go to Manage Jenkins. There may be a note there about old data; you can just clear that out with the button there, which will remove references to the uninstalled plugin from your configuration and your build files, and that will speed up your build and config loading in the future.

So, these are some of my essential plugins, just a subset. The Job Config History plugin: it's not source control for Jenkins, but it works a lot better than the source control integration Jenkins has. It lets you see what changed in jobs and who did it. It's XML diffs, so it can be a little weird sometimes, but it's a good way to see what happened, when, and by whom. I always recommended the Disk Usage plugin until about a year and a half ago, when it just went insane and completely lost the ability to scale across large numbers of builds; it's a good example of getting bitten by a plugin that's gone a little sour as it's evolved. The whole suite of static analysis plugins, and the tooling around them, are fantastic. If you're not generating JUnit-formatted XML from your tests right now, if you're not using something like JUnit or Surefire or Nose, the xUnit plugin has converters from a whole lot of fairly common test output formats into the JUnit format that Jenkins speaks, so you can really take advantage of Jenkins' built-in test reporting even if you're not using JUnit-style tests. Like I mentioned, the Parameterized Trigger and Conditional BuildStep plugins are my Swiss Army knife; they're about as power-tool as you can get in terms of constructing chains of jobs. And the Tool Environment plugin is great: you've got your Maven and JDKs and Ant and Groovy and all those tools configured in Jenkins, and if you're using, say, the Maven build step, great, it ends up in your environment automatically.

But with the Tool Environment plugin, you can just check a box and it will automatically install the tool for you, and then it'll be available in your build step's environment, so you can use Java in a shell step and not worry about having to install Java.

EnvInject is probably the best option these days for exporting environment variables and loading environment variables into your build. I'm wary on this one, because there are differing opinions out there; it's an area where there's been a lot of turnover over the years, and the EnvInject plugin has gotten a bit of feature bloat, so I'm not a hundred percent sure how well it interacts with other plugins.

The Rebuild plugin is an incredibly simple plugin that I cannot live without. We have a ton of parameterized builds in our setups, so how do you rerun a build? You'd have to go re-enter all the parameters again. With the Rebuild plugin, you just click the rebuild button, it gives you the form with the fields already pre-populated with that build's parameters, you click build, and there you go. Simple, wonderful.

The Build Timeout plugin: builds hang, or just don't finish, or take too long, and sometimes you need to shoot them. The Build Timeout plugin does that. It can do absolute timeouts, or elastic timeouts based on whether there's been output. It's a really, really handy plugin.

But like I mentioned at the beginning, don't take my word for it alone. Don't assume you have to be using these plugins, or that these are the only plugins you should use. These are what I consider my essential plugins, from my experience, for my use cases. You may not need any of these plugins; you may need completely different ones. And I didn't even get into things like source control plugins, because those seem fairly self-explanatory. But these are plugins that I think have a lot of versatility and a lot of value, and, with the possible exception of EnvInject, very little risk: they're stable, they're well designed, they fit well into Jenkins, and they don't cause problems.
don’t cause problems particularly when you install a plugin always remember to check the global Jenkins configuration afterwards a decent number of plugins have global configuration that you should take a look at you may not want the defaults you may want to tweak the default behavior of an otherwise great plugin like the job config history treats every individual maven module as a separate job and saves each of the change for each of those anytime you change the parent job itself which is completely redundant creates hundreds of tiny files for no good reason is but you can change that just by going to the global configuration and changing the settings fifth have it integrate with other tools and services so like pretty much anything else anyone’s actually using these days Jenkins plays well with others it integrates well with other tools and vice versa either via Jenkins plugins or the Jenkins REST API it’s really easy for other tools talk to Jenkins and it’s obviously very easy for jenga’s to talk to those other tools you can trigger builds based on get a pull request you can update dear upon successful builds and a lot more I’m only going to touch on a couple of these tools or services because these happen to be ones I’m familiar with and used but there are many many many many many many many many many more on the Jenkins wiki as is why we have over a thousand plugins so yeah source controlled I mean if you’re not using source control you should be using source control and are you a time traveler from 1990 so moving on the garret and github pull requests the Garrett trader plugin by Roberts & L now of cloud-based the github pull request builder plugin jacobs enterprises version of the github pull request but builder plugin which I’d prefer it’s simpler are all really useful again like Travis will run builds for you whenever it is a pull request same model when there’s a change proposed and Garrett or a pull request open on github it’ll run a build it’ll report 
back to the review tool with the results of the build so that you can then make your decision about whether that change is actually good or not not just based on a you know manual code review but based on whether or not it
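Under the hood, that report-back step is a small REST call. For GitHub it's the commit status API; here's a sketch, where the repository, commit SHA, and build URL are hypothetical and `GITHUB_TOKEN` is assumed to be set:

```shell
#!/bin/sh
# Sketch: what "reporting back" boils down to for GitHub, a commit status
# POST. The repo, sha, and target URL used here are hypothetical.
status_payload() {
  # Build the JSON body for a commit status update.
  state="$1"; target="$2"
  printf '{"state":"%s","context":"jenkins","target_url":"%s"}' "$state" "$target"
}

report_status() {
  # POST the status to GitHub; requires a token with repo:status scope.
  repo="$1"; sha="$2"; state="$3"; build_url="$4"
  curl -f -X POST -H "Authorization: token $GITHUB_TOKEN" \
    -d "$(status_payload "$state" "$build_url")" \
    "https://api.github.com/repos/$repo/statuses/$sha"
}
```

The pull request builder plugins do exactly this kind of call for you at the start and end of each build, setting the state to pending, then success or failure.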

It felt revolutionary five years ago when I first worked on a workflow like this; now it just kind of feels like the default, of course you're doing that, right? And with this you can enable a lot more automation, for promotion, for automatic merging, for a more complicated workflow that doesn't require as much human intervention but still gets you through multiple levels of testing, multiple levels of validation of stability.

JIRA. I assume there are other bug trackers, but JIRA's the best of the worst as far as I know. You can update JIRA issues when commits with messages containing those issue keys go into Jenkins. You can also integrate GitHub with JIRA, though that's a little questionable in my mind. You can follow the build fingerprints that Kohsuke mentioned, in the context of the Docker Traceability plugin, so that a commit that went into a far-upstream project, with a JIRA issue on it, can be identified as resulting in the fix for a bug in a downstream project, because the downstream project is consuming the output of the upstream project. And, one of my favorites, you can generate release notes from JIRA as part of the build process. Anything that makes our docs team not have to do incredibly boring and automatable work, like generating release notes out of JIRA, is an inherently good thing.

Artifactory. The Jenkins Artifactory plugin allows you to do things like defining your credentials for deployment, or your configuration for artifact resolution, across all of your jobs in one place; it's kind of like the Maven settings file, except it doesn't just work with Maven, it works with anything that's interacting with Artifactory. You can override Maven's distributionManagement section on a per-job basis; I'm not sure how often you want to do that, but if you're using a staging model it might be handy. And you can restrict where the Maven jobs and builds and tests will look to resolve artifacts, so you can block them from looking outside when you're running your official builds.
you’re running your official build so you can be sure that you actually have all of the things they care about mirrored on to your artifactory server and you can capture build info and the relationships between artifacts and builds and each other in artifactory through the jenkins plugin then of course there’s docker everybody loves dr. these days it’s it’s it’s the new hotness it’s the cool thing it’s totally much cooler than OpenStack was ah I haven’t used any of cloud-based docker plugins yet so I can’t actually say whether they’re any good but I’m assuming they are and it’s definitely there’s a lot of really fascinating areas that they’re if you’re using docker you’ve got to build docker images automation around that that integrates more smoothly with Jenkins is going to be useful running your jobs in docker containers is an area i’m really interested in if you have to support building your software on a whole lot of different platforms we have to build packages for eight different linux distributions for example that’s a lot of different am is that I have to have for a lot of different ec2 instances so that I can build all of our stuff on all those platforms being able to build in containers will make our lives a lot simpler traceability is always a good thing knowing what you built where and how it’s used is an inherently good thing and if you’re going to use workflow integrating doctor and workflow seems like it’s probably a good idea the six habit make your slaves fungible so what does fungible mean the dictionary definition is fungibility is the property of a good or commodity use individual units are capable of mutual substitution in other words a fungible slave is a slave you can easy replace with another slave you don’t have to go manually configure it you don’t have to you know go to the effort of getting IT to provision a new host for you were buying new hardware and that installing from scratch and a it seems self-evident it wasn’t for a long time 
but being able to quickly either replace or increase your slaves is one of the best things about Jenkins the ability to burst dynamically the ability to grow your resources without having to spend a lot of human time recreating your environments is critical the easier it is to add the slaves the easier your

So how do you do that, how do you make your slaves fungible? At the core, you make creating the environments easily repeatable. This is true throughout the entire IT and DevOps world: you always want your environments to be repeatable, not just your builds. You can use config management for this: Puppet, Chef, Ansible, CFEngine (CFEngine still exists, okay), SaltStack, and probably eight more that were invented since I got off the plane. Or you can use pre-baked images, like Docker container images or cloud images, Amazon EC2 AMIs, PXE booting. I use Packer, from HashiCorp, combined with Puppet and some shell scripts, to generate our build environments, to generate our AMIs, which we then save so we can just spin up 30 of them when we need them. I have no opinion on what the right config management tool is, because as far as I can tell there is no one right config management tool. I have some biases and some personal preferences, but it doesn't matter: what works for you works for you. Whatever your standard is in your shop, you can just use that to make your build slaves, and you should. Anything that can set up your environment consistently and reproducibly is good enough.

And it's not just a matter of making it easy to create and reproduce your environment; you also want to make your slaves as general-purpose as you can. You don't want to have this one box that some subset of your builds can only run on, because then you can't scale. Even if it's four boxes and you've got 30 builds, that means you can only run four of them at a time. You want to be able to burst out, to use your resources better, not have ten jobs in queue for four slaves while another ten slaves sit idle. So where you can, make your slaves reusable, make them general, make them interchangeable. I think Docker, running builds in containers, is honestly a really fascinating way to do this: if you need access to a MySQL server that you've got configured a certain way during a test build, fine, run the build in a container that has that MySQL server in it, so you don't have to say "we only have this one MySQL server configured that way, so we can only run one job at a time."

And when you do need specific custom slaves, you should make them on demand, via cloud or Docker or whatever. Don't tie up static resources that are always on, that you're either paying for all the time in the cloud or that are consuming power and cooling in your data center, for things that aren't going to have consistent demand. It's wasteful; it's just not efficient. The edge cases should be dynamic, not static. And obviously going to the cloud is a great way to scale: let Amazon or Microsoft or Google or whoever (or your IT people, whatever cloud you're using, private cloud, public cloud, Docker containers, it doesn't really matter) be the one who has idle resources waiting to be used, so you're only paying for what you actually use. The goal is to avoid having idle resources that can't be used for anything else, to avoid that situation where you've got a set of jobs in queue for one environment and a bunch of idle slaves that can't run that environment. We end up provisioning about 200 instances for our full builds, because we've got so many components on so many platforms; if we had those on all the time it would just get stupidly expensive. As it is, we can be much more efficient in our expenditures and in our run time by going wide and dynamic. And we pre-bake all of our images: we do our best to never have a situation where we have to run configuration at instance creation time, at slave creation time. The slave boots up and it's ready to run. That means we have a faster turnaround and more consistency, because there's less that can go wrong.
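The Packer-driving-Puppet pipeline described above can be sketched as a Packer template; the source AMI, region, instance type, and provisioning script names here are all invented for illustration:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "m3.large",
    "ssh_username": "ec2-user",
    "ami_name": "build-slave-centos6-{{timestamp}}"
  }],
  "provisioners": [
    { "type": "shell", "script": "bootstrap.sh" },
    { "type": "shell", "inline": ["puppet apply /tmp/build-slave.pp"] }
  ]
}
```

One `packer build` run bakes the fully configured AMI, and the Jenkins cloud configuration just launches copies of it, which is what makes the no-configuration-at-boot, slave-is-ready-to-run property possible.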
things that can go wrong and the final habit is join the

community partially that’s self-serving because obviously everybody’s already in the community benefits from anybody else joining but more specifically write plugins or even better than writing a plug-in extend an existing plugin contribute bug fixes open JIRA’s get help on the mailing lists or IRC and well what tends to happen once you spend enough time asking questions in IRC for example you start answering questions to its kind of viral getting involved in the community is not just good for your job experience for your resume etc though it is trust me on this one it’s it’s good for your Jenkins usage you’ll be more familiar with what other people are seeing you’ll be have a better understanding of the internals you’ll be able to do more with Jenkins and take care of your Jenkins master better than you could otherwise and plus then you can speak at conferences like this it’s not actually that glamorous it’s not glamorize alright ah looks like I got within a few seconds of perfect timing so I hope you take you all for coming I hope you got some value out of that like I said the slide deck will be on SlideShare and I will probably be getting drafted into sitting at the ask the experts table at various points over the next two days and you can also find me on Twitter a bear email me whatever I’m always happy to talk about this stuff and always happy to help out any way I can thank you all very much you
