Friday, February 24, 2017

Jenkins and Continuous Integration

Jenkins is an open-source tool that comes with built-in plugins for continuous integration. Its primary function is to keep track of a version control system and, whenever changes occur, to initiate and monitor builds. Jenkins oversees the entire process and provides reports and alert notifications.

To use Jenkins we need a source code repository (for example Git, PVCS or SVN) and a working build script (for example an Ant or Maven script) checked in to that repository.

Jenkins captures build failures during the integration stage, automatically generates a build report for every code commit, and notifies developers of build success or failure, thereby supporting continuous integration and test-driven development. With a few simple steps, a Maven release project can be automated. Jenkins ships with built-in plugins for continuous integration, such as the Maven 2 project plugin, Amazon EC2, and others.

Jenkins mainly integrates with two kinds of components: version control repositories such as Git and SVN, and build tools such as Ant and Maven.

Within Jenkins, builds can be triggered by source control commits, by the completion of other builds, on a fixed schedule, or by manual build requests.
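As an illustration, a build can also be triggered remotely through Jenkins's REST API. The sketch below is a minimal Python example; the server URL, job name and credentials are placeholders, and the exact endpoint and authentication depend on how your Jenkins instance is configured.

    # A minimal sketch of triggering a Jenkins job remotely over its REST API.
    # The server URL, job name and credentials below are placeholders.
    import requests

    JENKINS_URL = "http://jenkins.example.com:8080"   # hypothetical server
    JOB_NAME = "my-maven-build"                       # hypothetical job
    AUTH = ("alice", "api-token")                     # user name + API token

    # POST to /job/<name>/build asks Jenkins to queue a new build.
    response = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH)
    print("Build queued" if response.status_code == 201
          else f"Failed: {response.status_code}")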

For Jenkins to produce a clean build, it is advisable that developers first run a successful clean install on their local machine, with all unit tests passing. Only then should the code changes be checked in to the repository.

If a Jenkins build fails, open the console output for the build and check whether any file changes were missed. If the issue is not visible there, clean and update your local workspace to reproduce the problem and resolve it locally.
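The console output itself can be fetched over the same REST API, which is handy for scanning a failed build for the offending step. A minimal sketch, with the same placeholder server and job as above:

    import requests

    JENKINS_URL = "http://jenkins.example.com:8080"   # hypothetical server
    JOB_NAME = "my-maven-build"                       # hypothetical job
    AUTH = ("alice", "api-token")

    # /lastBuild/consoleText returns the raw console log of the most recent build.
    log = requests.get(f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/consoleText",
                       auth=AUTH).text
    # Scan for lines that usually indicate what went wrong.
    for line in log.splitlines():
        if "ERROR" in line or "FAILURE" in line:
            print(line)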

Monday, February 20, 2017

DevOps explained


DevOps is a cultural shift that merges development and operations. Apart from skills in languages such as Python or Java, the ideal DevOps team should have some experience with infrastructure automation tools like Chef or Ansible. Organisations should also consider the interpersonal skills that make DevOps successful.

Instead of one big-bang release of features, companies are now trying to deliver small features in short, regular intervals. This has many advantages: quick feedback from customers, better software quality and higher customer satisfaction. Achieving it requires increasing deployment frequency, reducing the failure rate of releases and shortening the time between fixes.

Some of the popular DevOps tools:
Git - version control
Jenkins - continuous integration
Chef, Ansible - configuration management and deployment
Docker - containerization
Selenium - continuous testing
Nagios - continuous monitoring

Developers write code, and this source code is managed by a version control tool such as Git. Developers check the code in to the Git repository, and any changes made to the code are committed there. Jenkins pulls the code from the Git repository using the Git plugin and builds it with a tool such as Ant. Configuration management tools like Chef or Puppet deploy and provision the test environment; Jenkins then releases the code to that environment, where it is tested with tools such as Selenium. Once the code passes testing, Jenkins deploys it to the production server, where it is continuously monitored by tools such as Nagios.
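To make the flow concrete, here is a deliberately simplified Python sketch that chains those stages as shell commands. The repository URL, build commands and target host are all invented for illustration; in a real setup Jenkins, Chef/Puppet, Selenium and Nagios each own their stage.

    # A toy pipeline: each stage is a shell command, and a failure stops the run.
    import subprocess

    def stage(name, cmd):
        print(f"--- {name} ---")
        subprocess.run(cmd, shell=True, check=True)   # abort the pipeline on failure

    stage("checkout", "git clone https://github.com/example/app.git app"
                      " || git -C app pull")               # placeholder repository
    stage("build",    "cd app && ant build")               # or: mvn package
    stage("test",     "cd app && python -m pytest tests")  # Selenium tests would run here
    stage("deploy",   "scp -r app/dist deploy@test-host:/opt/app")  # placeholder deploy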

Docker containers provide the testing environment in which the built features are tested.

Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems traditionally consist of a central shared repository to which team members commit changes. The major advantage Git has over other VCS tools like SVN is that it is a distributed version control system (DVCS): it does not rely on a central server to store all the versions of a project's files. Instead, every developer "clones" a copy of the repository and has the full history of the project on their own hard drive. Git tracks changes to files and lets you revert to any particular change, and a central hosted repository can still be used so that developers can commit changes and share them with the rest of the team.
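A quick way to see the distributed model in action: a clone carries the project's full history, and any commit can be inspected or reverted without contacting a central server. A minimal Python sketch (the repository URL is an example):

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], capture_output=True,
                              text=True, check=True).stdout

    git("clone", "https://github.com/example/app.git", "app")  # full history comes down
    print(git("-C", "app", "log", "--oneline"))                # browse it entirely offline
    # Reverting a specific commit is equally local:
    # git("-C", "app", "revert", "<commit-sha>")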


Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Developers check out code into their local workspaces and, once done with their changes, commit them to the version control repository. The CI server monitors the repository, checks out changes as they occur, builds the system, and runs the unit and integration tests. It informs the team when a build succeeds; if the build or the tests fail, it alerts the team so the problem can be fixed.

Success factors for continuous integration include: maintaining a shared code repository, automating the build, making the build self-testing, having everyone commit to the baseline, building every commit to the baseline, testing in a production-like environment, making the latest build results visible to all team members, and automating deployment.

Usage of Jenkins for CI: Jenkins keeps each job's configuration in a job directory on disk, so you can move a job from one Jenkins installation to another simply by copying the corresponding job directory, or copy an existing job by cloning its directory under a different name (a minimal sketch follows below).

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build. In this way each build is tested continuously, giving development teams fast feedback so that problems do not progress to the next stage of the software delivery life cycle. Automation testing automates the manual process of testing the application or system under test: it uses dedicated testing tools to create test scripts that can be executed repeatedly without manual intervention.
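Coming back to the job-directory point above, cloning a job could look like the sketch below. The paths are hypothetical, and Jenkins needs a "Reload Configuration from Disk" (or a restart) from Manage Jenkins to pick up the new job.

    # A minimal sketch of cloning a Jenkins job by copying its job directory.
    import shutil

    src = "/var/lib/jenkins/jobs/my-maven-build"       # existing job (placeholder path)
    dst = "/var/lib/jenkins/jobs/my-maven-build-copy"  # new job name
    shutil.copytree(src, dst)                          # copies config.xml and history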

Selenium supports two main types of testing: 1. Regression testing: re-testing code around an area where a defect was fixed. 2. Functional testing: testing software features individually.
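A minimal functional test using Selenium's Python bindings might look like this; the URL and element locators are placeholders for your own application.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                       # needs a Chrome driver installed
    try:
        driver.get("http://test-host/login")          # hypothetical page under test
        driver.find_element(By.NAME, "username").send_keys("alice")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source        # the feature behaves as expected
    finally:
        driver.quit()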

Infrastructure as Code (IaC) is the practice by which operations teams manage and provision IT infrastructure automatically through code, rather than through manual processes.
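As a toy illustration of the idea (not how Puppet or Chef work internally): the desired state is declared as data, and code converges the machine to that state idempotently, so it is safe to run repeatedly.

    # Desired infrastructure state as data; the loop converges towards it.
    # Paths and contents are invented for illustration.
    import os

    desired_state = {
        "/opt/app/releases": "directory",
        "/opt/app/app.conf": "port=8080\n",
    }

    for path, spec in desired_state.items():
        if spec == "directory":
            os.makedirs(path, exist_ok=True)          # idempotent: no-op if present
        elif not os.path.exists(path) or open(path).read() != spec:
            with open(path, "w") as f:                # only write when state has drifted
                f.write(spec)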

Puppet is a configuration management tool used to automate administration tasks.
Puppet has a master-slave architecture: the slave first sends a certificate signing request to the master, and the master signs that certificate to establish a secure connection between the Puppet master and the Puppet slave. The slave then requests its configuration from the master, and the master pushes the configuration to the slave.

Chef is an automation platform that transforms infrastructure into code: you write scripts that automate processes. The Chef server is the central store of your infrastructure's configuration data; it holds the data necessary to configure your nodes and provides search. A Chef node is any host that is configured using chef-client, which runs on your nodes and contacts the Chef server for the information necessary to configure the node. Since a node is a machine that runs the chef-client software, nodes are sometimes referred to as “clients”. A Chef workstation is the host you use to modify your cookbooks and other configuration data.

Containerization: containers are used to provide a consistent computing environment from a developer's laptop, to a test environment, to staging, and into production.
A container consists of an entire runtime environment: an application plus all of its dependencies, libraries and other binaries, and the configuration files needed to run it, bundled into one package. Containerizing the application platform and its dependencies removes the differences between OS distributions and underlying infrastructure.

Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry; because they can become quite large, they are designed to be composed of layers of other images, so that only a minimal amount of data needs to be sent when transferring an image over the network. Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. A container can be created either by building your own Docker image and running it, or by running one of the images available on Docker Hub; a Docker container is essentially a runtime instance of a Docker image.
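A minimal sketch using the Docker SDK for Python (pip install docker) shows the image/container relationship: pull an image, then run it as a container.

    import docker

    client = docker.from_env()                        # connects to the local Docker daemon
    client.images.pull("alpine:latest")               # image: layered template from a registry
    output = client.containers.run(                   # container: runtime instance of the image
        "alpine:latest", "echo hello from a container", remove=True)
    print(output.decode())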

Docker Hub is a cloud-based registry service that lets you link to code repositories, build and test your images, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
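Pushing a locally built image to Docker Hub can likewise be scripted; the repository name and credentials below are placeholders, and in a pipeline Jenkins would typically perform this step.

    import docker

    client = docker.from_env()
    client.login(username="alice", password="secret")     # placeholder Docker Hub account
    image, _ = client.images.build(path=".",              # expects a Dockerfile in cwd
                                   tag="alice/myapp:1.0")
    for line in client.images.push("alice/myapp", tag="1.0",
                                   stream=True, decode=True):
        print(line)                                        # upload progress per layer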

Docker Swarm is native clustering for Docker: it turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
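A sketch of initialising a swarm and scheduling a replicated service through the same Python SDK; the advertise address is a placeholder, and on a real cluster worker nodes would join before scaling out.

    import docker
    from docker.types import ServiceMode

    client = docker.from_env()
    client.swarm.init(advertise_addr="192.168.1.10")  # this host becomes a swarm manager
    # Services are scheduled across the swarm through the ordinary Docker API:
    client.services.create(
        "nginx:latest",
        name="web",
        mode=ServiceMode("replicated", replicas=3),   # run three container replicas
    )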

Friday, February 17, 2017

DevOps - Adopting Continuous Deployment




Continuous Delivery and Continuous Deployment are related but different. When adopting DevOps, Continuous Delivery is a core capability to adopt; Continuous Deployment to production is an optional capability that you may or may not adopt, based on your needs and constraints.

So what exactly does one ‘deliver’ when adopting Continuous Delivery or Deployment? We shall look at it from the perspective of People, Process and Technology.

As mentioned in my previous blog, DevOps is a cultural movement, and the people aspect of DevOps is where it all begins. When adopting Continuous Delivery, the most important part is a culture of continuous delivery and continuous collaboration between all the stakeholders who need to be on board to enable and consume it. This includes not only Dev and Ops but stakeholders across the SDLC: Business, Analysts, Product Owners, Architects and Design, Information Security, Quality Assurance and Management. It is important to create a culture where all these stakeholders contribute to continuous delivery and also accept the feedback that comes from it at every stage. Dev teams use the feedback to decide what to work on in the next Sprint; Quality Assurance or testing teams use it to test and validate functionality, integrations and performance; the Ops team determines how the environment performed and where it needs enhancement or correction; the project management team manages its project and release plans; and so on.

The change in culture is one of continuous collaboration, communication, trust and working towards common business goals.

When continuously delivering software, one is not only validating the functionality and performance of the software being delivered and the environments it is being delivered to, but also the process of deploying the software. A deployment involves code deployment, file transfers, and configuration changes to operating systems, databases and middleware (SOA / OSB, ODI etc.). It also involves an orchestration of steps: middleware processes may need to be restarted after configuration changes, and services may need to be stopped before file transfers and then restarted. Continuous delivery allows these processes to be tested and refined, so that the final deployment to production is not the first time the team executes them; by then they are tested and proven to work in production-like environments.
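As a sketch of such an orchestration (host, service names and paths are all placeholders), the ordering constraints translate naturally into a script that aborts on the first failure:

    import subprocess

    HOST = "deploy@app-host"                                    # hypothetical target

    def remote(cmd):
        subprocess.run(["ssh", HOST, cmd], check=True)          # abort on the first failure

    remote("sudo systemctl stop app-service")                   # stop before file transfer
    subprocess.run(["scp", "-r", "build/dist", f"{HOST}:/opt/app"], check=True)
    remote("sudo cp /opt/app/dist/app.conf /etc/app/app.conf")  # apply configuration change
    remote("sudo systemctl restart middleware-service")         # middleware restart after config
    remote("sudo systemctl start app-service")                  # bring the application back up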

Deployment requires a set of tools that can automate the deployments and ensure continuous delivery of all changes from one environment in the SDLC to the next – Dev, testing, performance testing, SIT (system integration testing), UAT (user acceptance testing), pre-production and production.


The key here is to start continuous delivery at the start of the project, from Sprint 0 all the way through. Deployments may be simple in the beginning and grow into much more complex orchestrated deployments later in the project. Continuously delivering changes – application, middleware, configuration, data and environment – in small pieces, using the right automation tools, reduces risk by validating the automation, the deployment processes, the configuration changes, the environments being deployed to and, of course, the application being deployed.
