The goal of Continuous Integration is to provide fast feedback, so that if a defect is introduced into the code base, it can be identified and corrected as soon as possible. Continuous integration tools can be used to automate the testing and to build a document trail.
A few advantages of Continuous Integration:
- Risk Mitigation: Continuous Integration avoids the classic "it worked on my local machine" scenario. It allows you to mitigate risk not only with testing, but also by enabling production parity.
- Confidence: With a robust test suite, confidence that you aren't shipping a bug goes way up. By being transparent about the process and educating the rest of your team and your clients, you improve their confidence in the development team as well.
- Team Communication: Communication improves through the continuous feedback produced by the multiple development tools integrated in the CI pipeline.
- Reduced Overhead: Automating large parts of your workflow frees up time for billable work, which is something everyone can appreciate. Automated testing allows you to fail early, rather than fixing defects later when it is more challenging. New developers on the team can get up and running faster, since they won't need to learn all the steps that CI is now responsible for.
- Consistency of Build Process: Having the build and related processes automated on CI means that nobody ever forgets a step in the process.
The primary goal of continuous delivery is to make software deployments painless, low-risk events that can be performed at any time, on demand. Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring that business applications and services function as expected through rigorous automated testing. The practices at the heart of continuous delivery help us achieve several important benefits.
Benefits of Continuous Delivery:
- Deliver software with fewer bugs and lower risk. When you release smaller changes more frequently, you catch errors much earlier in the development process. When you implement automated testing at every stage of development, you don’t pass failed code to the next stage.
- Continuous Delivery reduces waste and makes releases boring. One of the main uses of continuous delivery is to ensure we are building functionality that really delivers the expected customer value. … Second, continuous delivery reduces the risk of release and leads to more reliable and resilient systems.
(Continuous delivery is the ability to deliver software that can be deployed at any time through manual releases; this is in contrast to continuous deployment, which uses automated deployments. According to Martin Fowler, continuous deployment requires continuous delivery.)
Principles of Continuous Delivery and Integration
The process for releasing/deploying software MUST be repeatable and reliable.
- Automate everything! A manual deployment can never be described as repeatable and reliable (not if I'm doing it, anyway!). You have to invest seriously in automating all the tasks you do repeatedly, and this tends to lead to reliability.
- If something's difficult or painful, do it more often. On the surface this sounds silly, but what it means is that doing something painful more often will lead you to improve it, probably automate it, and eventually make it painless and easy. Take, for example, deploying a database schema change. If this is tricky, you tend not to do it very often; you put it off, maybe doing one a month. If you're doing something every day, you're going to be a lot better at it than if you only do it once a month.
- Keep everything in source control. This sounds like a bit of a weird one in this day and age; seriously, who doesn't keep everything in source control? Integral to continuous delivery is making sure you can keep adding versions without affecting existing components and features. This not only backs up your code, but also allows for continuous integration and, thus, continuous delivery.
- Done means "released". This implies ownership of a project right up until it's in the hands of the user and working properly. There's none of this "I've checked in my code so it's done as far as I'm concerned". I have been fortunate enough to work with software teams who eagerly made sure their code changes were working right up to the point when they were in production, and then monitored the live system to make sure their changes were successful. (I will write on "Done" in a new post.)
- Build quality in! Take the time to invest in your quality metrics. A project with good, targeted quality metrics (unit test coverage, code styling, rules violations, complexity measurements, or preferably all of the above) will invariably be better than one without, and easier to maintain in the long run.
- Everybody has responsibility for the release process. A program running on a developer's laptop isn't going to make any money for the company. Similarly, a project with no plan for deployment will never get released, and again will make no money. Companies make money by getting their products released to customers; therefore, this process should be in everybody's interest. Developers should develop projects with a mind for how to deploy them. Project managers should plan projects with attention to deployment. Testers should test for deployment defects with as much care and attention as they do for code defects (and this should be automated and built into the deployment task itself).
- Improve continuously. Don't sit back and wait for your system to become out of date or impossible to maintain. Continuous improvement means your system will always be evolving and therefore easier to change when need be.
- Pull Requests. Using pull requests can speed up the process significantly. When others can review the author's code and suggest modifications before it's integrated into master, there is better communication and fewer bottlenecks.
- Always Be Testing. Obviously, if you want faster and better release cycles, then you need to test constantly. As mentioned, automated testing is key to continuous delivery, and you don't want a bottleneck between QA and development clogging up your process.
- Small Releases. One of the core continuous delivery principles is that smaller, more frequent releases are usually better than one large one. It's more efficient, and even safer, to keep releasing a few changes at a time than to wait and deliver a large batch of features and bug fixes at once. Then, if any revisions need to be made, it will be easier to make them without affecting the other features.
Practices of Continuous Delivery
- Build binaries only once. You'd be staggered by the number of times I've seen people recompile code between one environment and the next. Binaries should be compiled once and once only. The binary should then be stored someplace accessible only to your deployment mechanism, and your deployment mechanism should deploy this same binary to each successive environment.
- Use precisely the same mechanism to deploy to every environment. It sounds obvious, but I've genuinely seen cases where deployments to the various QA environments were automated, only for the production deployments to be manual.
- Smoke test your deployment. Don't leave it to chance that your deployment was a roaring success; write a smoke test and include it in the deployment process. I also like to include a simple diagnostics test: all it does is check that everything is where it's meant to be, comparing a file list of what you expect to see in your deployment against what actually ends up on the server. I call it a diagnostics test because it's a good first port of call when there's a problem.
- If anything fails, stop the line! Throw it away and start the process again; don't patch, don't hack. If a problem arises, no matter where, discard the deployment (i.e. roll back), fix the issue properly, check it in to source control and repeat the deployment process. This may seem impossible, especially if you've got a tiny outage window to deploy things to your live system, or if you do your production changes in the middle of the night while nobody else is around to fix the issue properly. I would say that these arguments rather miss the point. Firstly, if you have only a tiny outage window, hacking your live system should be the last thing you want to do, because this will invalidate all your other environments unless you similarly hack all of them as well. Secondly, the next time you do a deployment, you may reintroduce the original issue. Thirdly, if you're doing your deployments in the middle of the night with nobody around to fix issues, then you're not paying enough attention to the sixth principle: everybody has responsibility for the release process. I don't recommend doing releases when the least amount of support is available.
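The practices above can be sketched in a few lines of Python. This is a minimal illustration, not a real deployment tool: `deploy` and `list_files` are hypothetical callables standing in for whatever mechanism your environments actually use.

```python
import hashlib

def artifact_id(artifact_bytes):
    """Build binaries only once: fingerprint the artifact by content hash
    so every environment receives exactly the same bytes."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def diagnostics_test(expected_files, actual_files):
    """Smoke/diagnostics check: compare the file list we expect to see in
    the deployment against what actually ended up on the server."""
    expected, actual = set(expected_files), set(actual_files)
    return {"missing": expected - actual, "unexpected": actual - expected}

def deploy_pipeline(artifact_bytes, environments, deploy, list_files, expected_files):
    """Deploy the same binary to each successive environment with the same
    mechanism, smoke-testing after each step; stop the line on any failure."""
    fingerprint = artifact_id(artifact_bytes)
    for env in environments:
        deploy(env, artifact_bytes)  # identical mechanism for every environment
        report = diagnostics_test(expected_files, list_files(env))
        if report["missing"] or report["unexpected"]:
            raise RuntimeError(f"stop the line: smoke test failed in {env}: {report}")
    return fingerprint
```

Note that the loop raises rather than patching around a failed environment: the run is discarded, fixed at source, and repeated.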
Practices of Continuous Integration
The continuous integration practices were articulated by Martin Fowler and they have not changed. Here are the practices organizations must follow to truly do CI correctly.
- Commit to the mainline: A developer can set up an automated build and have the build run on every commit. But if the culture is to not commit frequently, it won't matter. If a developer waits three weeks to commit, or branches off for three weeks, they have delayed the integration and broken the principles. If a build breaks, the team has to sort through three weeks of work to figure out where it broke.
- Maintain a single-source repository: In complex applications, developers often branch and maintain changes off of a trunk or main branch. The branching creates complexity and prevents everyone from working from a single source of truth. Teams need to commit/merge to trunk or main at least once per day, or better still, for every change.
- Automate the build: This is a practice most organizations tend to do well. However, some who claim to practice CI are simply doing scheduled builds (i.e. nightly builds) or continuous builds, but they are not actually testing or validating each build. Without validation of the build, you are not doing continuous integration.
- Make builds self-testing: The first step of the validation process is to know that a build with problems actually failed. The next step is to determine if the product of the build is operational and that the build performs as we expect it to. This testing should be included as part of the build process. This consists of fast functional and non-functional testing.
- Build quickly: If it takes too long to build an app, developers will be reluctant to commit changes regularly or there will be larger change-sets. In either case, no one will spot a failure quickly. By building quickly and integrating quickly, you can isolate changes quickly. If it takes hours to run, you might have 20 to 30 more changes during that time and it will be difficult to quickly spot problems.
- Test in a clone: Validation processes verify that software performs as expected in its intended environment. If you test in a different kind of environment, it may give you false results.
- Fix broken builds immediately: It’s critical for development teams to find problems fast and fix them immediately, so they don’t move downstream. Years ago, Toyota instituted a “stop-the-line” approach where workers could pull a rope and stop the manufacturing process if they spotted a problem. CI sets up a process where builds are validated and committed continuously, so if something goes wrong, it’ll be easy to fix.
- Automate Incidents / Incident Response / Feedback loops
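The "make builds self-testing" and "fix broken builds immediately" practices amount to a small gate script. The sketch below is illustrative, not a real build tool; the stage names and commands are placeholders for whatever your toolchain runs.

```python
import subprocess
import sys
import time

def run_stage(name, cmd):
    """Run one validation stage and report pass/fail; keeping each stage
    fast is what makes it possible to spot a breaking change quickly."""
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    print(f"{name}: {'ok' if ok else 'FAILED'} ({time.monotonic() - start:.1f}s)")
    return ok

def self_testing_build(stages):
    """A build is only green when every validation stage passes; all() with
    a generator stops at the first failure, so the team fixes it immediately."""
    return all(run_stage(name, cmd) for name, cmd in stages)

# Hypothetical stages; the real commands depend on your toolchain, e.g.:
# self_testing_build([("compile", ["make", "build"]),
#                     ("unit tests", ["pytest", "-q"])])
```

The design choice here is that a failing stage short-circuits the rest: no later stage can "rescue" a build whose earlier validation already failed.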
Continuous integration is simply a practice that requires developers to integrate code into a shared repository several times a day. CI is the practice of merging all developer working copies early and often. As soon as developers merge their changes with everyone else's, pitfalls like "merge hell" or "integration hell" are reduced. As a good practice, each programmer should do a complete build and run (and pass) all unit, integration, acceptance tests, etc. Normally, acceptance tests and other high-level tests run on a CI server, while developers run only the build and unit tests before pushing changes to the main repository. The fundamental principle of CI is to merge your working code early and often.
Building the Pipeline
Essential ingredients for a healthy CI/CD architecture, with benefits to the business:
- Micro-agility in the form of testing and committing code frequently: The development life cycle begins with a request to development, which takes the request, designs a solution, codes it, tests it and commits it to a code repository. From there, the code is tested by QA and then deployed to production. While each step is important, we begin here because local testing occurs most frequently, and bugs found at this stage of the process help avoid the large expense of bugs found in production. To achieve micro-agility, developers must have a clear understanding of where they are expected to commit their code after changes are made, and they should be expected to test and update their code frequently, many times a day. This means development tests ought to be simple to run locally and should be a subset of the QA suite. The result is that code pushed by development is, by definition, QA-ready.
- Automated QA: In my work I see many organizations whose QA team is one of the largest bottlenecks in the process. It's not for a lack of skill, but too few staff who spend a good portion of their time waiting on the QA test environments they need to do their work. To avoid this situation, the QA process should be fully automated, from QA-ready code coming in the door to one-click promotion on the other side. In between the two, QA should have a continuous feedback loop with development in which communication about, and tracking of, bugs is clear and simple to follow, allowing development to easily fix issues and recommit the code for further testing. Moreover, automated QA in a CI/CD environment should focus on microcycles consisting of small test batches rather than week-long macro tests. Automation and communication are the overriding imperatives here, exhibited by reports to developers that are communicated immediately and code commits that are fail-rejected automatically.
- Production-like environments: To increase efficiency and effectiveness, development and testing environments should mirror production as closely as possible. They should follow what I like to call the PRQ rule: production-like, repeatable and quick. Specifically, it should be fast and easy to set up new environments, to the point where developers can, with the push of a button, deploy new environments for their development and testing needs. Doing so allows developers to fail fast, breaking and re-creating environments without fear, as they know they can easily recreate the same environment every time. I often recommend using tools such as Jenkins to help with this process; if developers and IT operations use the same tool, it makes it that much easier to streamline the CI/CD process and achieve the promise of DevOps.
- One-click promotion: A well-architected CI/CD pipeline provides one-click promotion to production. This ensures changes are seamlessly moved to the overall production environment with the least amount of friction possible between development and operations. To that end, this reduction in friction should occur across all the steps of the process, with efficient code delivery pipelines that are designed to balance security and agility from the outset. Cloud infrastructure coupled with container technology and automation can easily optimize the process across teams.
- Fully automated deployments: The last factor of an efficient CI/CD pipeline is full automation of deployment. Any configuration changes made by development and accepted by QA should automatically be applied. Continuous integration in this way helps avoid system downtime and greatly reduces human error. Deployments should also be continuously monitored so that IT is the first to learn of potential problems. This helps nip issues in the bud, allowing for faster rollback and return of code to development to fix anything that may not be functioning properly. Catching issues before customers do helps ensure customer satisfaction and strength of reputation.
- Automate infrastructure delivery: This is defined by the automation of systems configuration and provisioning, which many people consider to be a high-priority outcome of a DevOps initiative. Automating infrastructure delivery resolves the issue of developer throughput outpacing operations, and therefore the ability to deploy. Automated system configuration makes it possible for ops teams to deliver systems to developers and QA that match the eventual production environment, and deliver them faster. Infrastructure automation certainly addresses a local pain point for IT operations teams, but it goes much further than that: it catalyzes the creation of self-service more broadly throughout the organization in subsequent stages. Self-service for multiple departments ultimately leads to greater efficiency and satisfaction throughout the organization.
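The monitor-and-rollback behavior described under fully automated deployments can be sketched generically. This is a minimal illustration under stated assumptions: `apply_change`, `health_check` and `rollback` are hypothetical callables that a real pipeline would wire to its own deployment and monitoring tooling.

```python
import time

def deploy_with_rollback(apply_change, health_check, rollback,
                         checks=3, interval_s=0.0):
    """Fully automated deployment: apply the change, then monitor the
    service and roll back automatically if it fails a health check,
    so issues are caught before customers notice them."""
    apply_change()
    for _ in range(checks):
        if not health_check():
            rollback()  # automatic, fast rollback on a failed deployment
            return "rolled-back"
        time.sleep(interval_s)  # spacing between monitoring probes
    return "healthy"
```

Returning a status rather than raising lets the pipeline record the outcome and route the change back to development for a proper fix.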
Benefits of the Pipeline
In addition to these benefits, when combined effectively, these ingredients minimize the time and cost of local test, QA and production processes; reduce the time it takes to identify problems and their causes; and accelerate delivery of production-ready code. We can see this throughout the CI/CD pipeline:
- Developers have greater control over the process and can respond more quickly to market demands by quickly bringing up new environments, testing against them and easily starting over, if need be. Moreover, developers can quickly adjust to new environments—an approach that has proven to decrease failures and resolution time when they do occur.
- Operations benefits as reduced friction in the CI/CD pipeline created by automation decreases repetitive, manual work and with it the opportunity for “fat finger” mistakes and the greater opportunity to introduce inefficiency into the process.
- And, less manual, repetitive work provides everyone in the process with more time for strategic work, which provides direct bottom-line value to the organization.
Tools to build CI / CD pipeline
There are several tools you can choose from depending on your needs. A few things to keep in mind when starting to research your choices:
- Open source vs proprietary
- Self-hosted or in the cloud
- Ease of setup
- Integration capabilities of tools
- Automation of feedback loops
- Continuous Monitoring of build status
Check out the comprehensive list of continuous integration tools by Stackify.
- The most widely used tool is Jenkins. Jenkins is an open-source automation tool with a powerful plugin architecture that helps development teams automate their software lifecycle. Jenkins is used to drive many industry-leading companies' continuous integration and continuous delivery pipelines.
- Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. Docker containers ensure consistency across multiple development and release cycles, standardizing your environment. One of the biggest advantages of a Docker-based architecture is standardization: Docker provides repeatable development, build, test, and production environments.
- Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and can coordinate clusters of nodes at scale in production in an efficient manner.
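As a small illustration of driving such tools programmatically, Jenkins exposes build metadata through its JSON remote-access API. The sketch below assumes an unauthenticated server at a hypothetical URL; real installations usually require a user and API token, and the job name here is made up.

```python
import json
from urllib.request import Request, urlopen

def last_build_url(base_url, job):
    """Jenkins publishes build metadata as JSON under .../api/json."""
    return f"{base_url.rstrip('/')}/job/{job}/lastBuild/api/json"

def last_build_result(base_url, job, opener=urlopen):
    """Return the last build's result field: 'SUCCESS', 'FAILURE',
    'UNSTABLE', or None while the build is still running."""
    with opener(Request(last_build_url(base_url, job))) as response:
        return json.load(response).get("result")
```

The injectable `opener` keeps the sketch testable without a live server; in a pipeline script you would simply call `last_build_result("https://ci.example.com", "my-app")`.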
Measure the effectiveness of CI / CD so that it delivers the intended outcome
- Reduction in manual intervention in pipeline
- Deployment Quality trend / Deployment Success rate trend
- How quickly service can be restored after a failed deployment
- Capability of Business Disruption Recovery
- % Reduction in similar feedback
- % Security Automated
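A couple of these KPIs, deployment success rate and time to restore service, can be computed directly from deployment records. The record shape below is an assumption for illustration, not a standard format.

```python
def deployment_kpis(deployments):
    """Each record is (succeeded, minutes_to_restore); minutes_to_restore
    is None for successful deployments or unresolved failures."""
    total = len(deployments)
    failures = [minutes for ok, minutes in deployments if not ok]
    restored = [m for m in failures if m is not None]
    return {
        "success_rate": (total - len(failures)) / total if total else 0.0,
        "mean_time_to_restore_min":
            sum(restored) / len(restored) if restored else 0.0,
    }
```

A dashboard job could recompute these on every deployment and feed the live display mentioned below.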
It is important to automate the KPI dashboard, with a live display for the development and operations teams and also for the leadership team.
CI / CD pipeline improves velocity, productivity, and sustainability of software development teams. Teams that master an efficient CI/CD pipeline deliver faster, higher-quality applications to market. Mixing these ingredients together results in an efficient, streamlined CI/CD pipeline that delivers direct business benefits.
- Integrate your changes every day
- Write the tests to create a safety net
- Run your tests to know what to fix
- Fix or delete the tests you no longer need
- Keep builds green
- Involve the team(s)
The 2018 State of DevOps report also stresses that automating security policy configurations is critical to reaching the highest levels of DevOps evolution. As organizations evolve, security policy becomes part of operations, not just an afterthought when an audit looms. This requires first breaking down boundaries between ops and security teams (which are further from production). This practice evolves from resolving immediate pain to a more strategic focus: in this case, from "keep the auditors off my back" to "keep the business and our customers' data secure." In other words, teams automate security policy configurations initially for their own benefit, and as their understanding evolves, the automation evolves to benefit the entire organization.