DevOps: business, continuous integration, Docker, microservices and much more


Author: Jose Otero-Pena, technical manager of the Cloud Computing, Services & Applications Department at Gradiant


What is DevOps?

Although organizations often see DevOps as a role or as a set of technologies or tools, it actually combines a number of principles, values, methodologies, tools and practices. The main goal of DevOps is to enable organizations to react to change as quickly, efficiently and reliably as possible. It is closely related to Agile methodologies [1] and to the automation of development (Dev / Development) and systems (Ops / Operations) tasks, which is where its name comes from. Although much emphasis is placed on the benefits of adapting to new requirements, to changes coming from the business, or to the results and feedback obtained, most of its benefits can also be obtained in projects that follow traditional waterfall schemes.

In a highly competitive environment such as software development, it is becoming increasingly important to use techniques that improve agility and adaptability while preserving stability and reliability. Despite the widespread belief that using continuous integration is the same as adopting DevOps, it must be stressed that continuous integration is just one more ingredient to accelerate the release of new versions. DevOps is much more than that.

Although there is still confusion around DevOps, many of its characteristics are spreading thanks to the clear benefits provided by its practices. In line with the trend towards software development based on agile methodologies, DevOps tries to avoid any unnecessary effort or expense in delivering quality code to customers.

 

Why should a company pay attention to this concept?

The adoption of the DevOps philosophy is a consolidated trend for companies [2]. It is an answer to the changing needs and opportunities of the current stage of technological deployment, and it allows innovation to be applied to development at the pace set by the business. If what is outside an organization changes faster than what is inside, the end is near.

The exponential growth of technology puts significant pressure on companies through the constant introduction of innovations. To deal with this continuously changing business environment, companies need to adapt all the time, and innovation must increasingly become the main process through which a company prevails in its market. This requires fostering an internal culture of innovation that confronts feelings of distrust or resistance.

ICT and the internet democratize the market, so more than ever each customer must be satisfied. Otherwise they will simply choose another provider and, given the social nature of the internet, may drive away other potential clients. This, too, pushes companies towards permanent adaptation to all of their customers.

However, DevOps will not work if non-specialized personnel run isolated experiments that do not lead to a practical approach for the organization. Nor will it work without great perseverance, since achieving a successful transformation requires discipline and attention to detail. In this sense, it can be extremely useful to implement the DevOps culture with Gradiant's support as a technology partner: thanks to its deep knowledge of DevOps, Gradiant can transfer the advantages, train key personnel and ease the transition to suitable tools.

 

How to design a roadmap to DevOps?

Although not all of the steps introduced below are necessary to use DevOps, they are essential for full adoption. For example, a company can apply points 2-6 while keeping a waterfall schedule, but in that case it will not obtain the full advantages of DevOps. Following Gradiant's approach, this is the route an organization should follow towards the DevOps philosophy:

1. Use of agile methodologies: Agile development methodologies seek to satisfy the client through early and continuous delivery of software. Unlike the traditional approach to planning, this allows changing requirements to be accommodated without much of a problem in the long run. Business people and the development team must work together on a daily basis, with frequent and, if possible, face-to-face communication. Currently, the most common agile methods are Extreme Programming (XP) [3], Scrum [4] and Kanban [5].

  • Scrum is the most commonly used framework. It aims to deliver products of the highest possible value to the customer and to handle problems and complex situations, using an iterative and incremental approach with cross-functional teams. Based on empiricism, it grounds knowledge, experience and decision making on what is actually known.
  • XP emerged to help small teams develop software when requirements are not well defined and change frequently. It is considered a lightweight methodology and focuses on cost savings, unit testing, frequent system integration, pair programming, simple design and frequent delivery of working software.
  • Kanban came up as a Lean tool [6] to manage manufacturing operations. Applied to software development, it is considered a visual, adaptable and cost-saving methodology that tries to avoid waste in production processes. Its main rules are to visualize the workflow, to limit work in progress and to measure the time needed to complete tasks.

Other less common methodologies are Dynamic Systems Development Method (DSDM) [7], Adaptive Software Development (ASD) [8], Crystal [9], Feature-Driven Development [10] and Rational Unified Process [11].

2. Use of testing techniques and methodologies: Tests have acquired a major role in software development. Among the practices [12] that help an organization test successfully and improve development efficiency there are testing techniques (white-box and black-box testing), different activities and objectives in how tests are applied (unit, integration, acceptance or system testing), software development methodologies focused on testing (BDD - Behaviour Driven Development, TDD - Test Driven Development, ATDD - Acceptance Test Driven Development, STDD - Story Test Driven Development) and even project methodologies built around testing, such as TMap [13].
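
As a simple illustration of the test-first idea behind TDD (a sketch with an invented function, not taken from any specific project), the test below is written first and drives the implementation of a small pricing function. Python's standard unittest module is used only for brevity; the same idea applies with JUnit, XUnit and similar frameworks.

```python
# Minimal TDD-style sketch: in practice the tests are written before the
# implementation and fail until the function below is completed.
import unittest


def price_with_vat(net_price: float, vat_rate: float = 0.21) -> float:
    """Return the gross price for a given net price and VAT rate."""
    if net_price < 0:
        raise ValueError("net_price must be non-negative")
    return round(net_price * (1 + vat_rate), 2)


class PriceWithVatTest(unittest.TestCase):
    def test_default_vat_rate(self):
        self.assertEqual(price_with_vat(100.0), 121.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            price_with_vat(-1.0)


if __name__ == "__main__":
    unittest.main()
```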

3. Use of microservices in the design of developments: These architectures emerged to solve the problems inherent to monolithic systems. These are some of the advantages they bring (a minimal service sketch follows the list):

  • Small, independent services, which improve the assignment of responsibilities within development teams.
  • Small deployment units, which facilitate encapsulation in Docker containers [14], taking advantage of the well-known benefits of that technology:
    • Portability: packaging your application in a standardized way allows you to bundle its dependencies and run it the same way on any infrastructure.
    • Lightweight: containers share the host operating system, saving the resources that virtual machines spend running a complete operating system each; only the resources that are really needed are consumed.
    • Open: Docker is standards-based and allows containers to run on Linux, macOS and Windows, on any private infrastructure or in the public cloud.
    • Isolation: The containers isolate applications by adding a layer of protection between them.
    • Simplified maintenance and operation: Docker reduces the effort and risk of managing application dependencies, improves the management of updates and, together with its orchestration tools, provides features such as self-healing, load balancing, high availability, rollback, service discovery, etc.
  • Reduced time in development, deployment, testing and operation.
  • Agility in hotfixes (a consequence of the previous points).
  • Multi-technology: different languages and technologies can be combined so that each component is implemented in the most convenient way.
  • Easy horizontal scaling.
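
As a minimal sketch of what such a small deployment unit can look like (an invented example, not a Gradiant development), the following self-contained Python service exposes a single endpoint plus a health check; each service of this kind could be packaged into its own Docker image and scaled horizontally on its own.

```python
# Hypothetical micro-sized service: one small, independently deployable process.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json


class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Health endpoint used by orchestrators for self-healing checks.
            body = json.dumps({"status": "ok"}).encode()
        else:
            body = json.dumps({"message": "hello from the greeting service"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # One small process per service keeps the deployment unit small.
    HTTPServer(("0.0.0.0", 8080), GreetingHandler).serve_forever()
```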

4. Use of automation in infrastructure, deployments and configuration management: this process automation is discussed in the following section, where examples of the necessary tools are mentioned.

5. Use of continuous integration / delivery / deployment methods: Continuous integration is a practice that establishes the order in which every component of a project is built and executed, including test cases. In this way, it facilitates the correct construction of the software each time the integration process runs, making the process transparent to the development team. It also allows the team to visualize the state of the builds during integration and to detect possible incidents, thus avoiding the propagation of these mistakes in the future. On top of continuous integration, continuous delivery adds the storage of the artefacts and binaries produced by the build and the validation of the code pushed to the repository. Continuous deployment goes one step further than continuous integration and continuous delivery: it adds a later phase with the automation needed to deploy the results of the previous phases to the desired environment (production, staging, etc.). When choosing the right tools, especially in large organizations or those with multiple projects and clients, it is important to properly manage user permissions. In many tools a developer or client does not access the code repository with their own permissions, but with those of a project developer (who may have permissions on other repositories) or those of a generic user valid for all repositories, which creates clear security problems.
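
The stage ordering described above can be sketched in a few lines. The snippet below only illustrates the build, test and archive sequence and the fail-fast behaviour of a continuous integration job, written as plain Python rather than in the syntax of any concrete tool (Jenkins, Bamboo, etc.); the commands and paths are invented placeholders that a real project would replace with its own build and test steps.

```python
# Illustrative CI job: run the stages in order and stop at the first failure,
# so that a broken build never reaches the later stages.
import subprocess
import sys

# Placeholder commands; a real pipeline would use the project's own steps.
STAGES = [
    ("build", ["mvn", "package", "-DskipTests"]),               # compile and package
    ("test", ["mvn", "test"]),                                  # run automated tests
    ("archive", ["cp", "target/app.jar", "/tmp/artifacts/"]),   # store the artefact
]


def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: report the broken stage instead of letting the
            # problem propagate to later stages or to production.
            print(f"stage '{name}' failed, aborting")
            return result.returncode
    return 0


if __name__ == "__main__":
    sys.exit(run_pipeline())
```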

6. Use of best practices: these include activities to properly implement DevOps and to iron out any problems that may appear during the organization's adaptation period. Some of these good practices are:

  • Plan, follow up and version the development properly. This is important for sound development and for the continuous improvement of the DevOps process.
  • Record every issue: each issue must be logged in some tool so that it can be dealt with later.
  • Ensure repeatability: each operation should be automated as much as possible, also providing automatic mechanisms to roll a change back to the previous state.
  • Test everything: every change must be tested, if possible automatically, using the continuous integration / delivery / deployment systems already mentioned.
  • Monitor and audit what is needed: tools that monitor the behaviour of applications, as well as the events in their logs, are very important so that the development team can fix problems. In addition, there must always be someone responsible for every change in the system: generic accounts should be avoided and each user must perform operations in an identifiable way. Every access for changes or operations must be recorded so that responsibilities can be determined, and users must be aware of it.
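
As a minimal illustration of that last point (an invented sketch, not a prescribed log format), the snippet below writes one structured, attributable entry per change, tying every operation to a named user rather than a generic account.

```python
# Tiny audit-trail sketch: every change is recorded with who did what, and when.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")


def record_change(user: str, action: str, target: str) -> None:
    """Append one structured, attributable entry to the audit log."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,      # a named account, never a shared/generic one
        "action": action,
        "target": target,
    }))


if __name__ == "__main__":
    record_change(user="jdoe", action="update-config", target="payments-service")
```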

 

What tools should a company use for transitioning to DevOps?

The use of facilitating tools for both the Dev and the Ops side becomes essential in the context of agility through automation. However, there are many alternatives and combinations of tools, and they must be adapted to the organization in order to make the transition to DevOps in the best possible way. It is important to select a small set of tools that ease the work of DevOps; for that purpose we define seven categories, containing tools that may be more or less necessary, although many of them, such as Docker or Jira, are basic. The parentheses list many of the tools Gradiant uses to deploy DevOps for its clients or within its own DevOps strategy.

  • Code: This category includes the tools aimed at improving development performance. They can be split into subcategories such as IDEs for developers (Eclipse, IntelliJ IDEA), version management repositories (git, Gitlab, Bitbucket, FishEye), code quality analysis tools (Sonarqube), code review tools (Crucible, gerrit), issue and bug tracking tools (Jira, Redmine, Project, Mantis), pair programming tools (ScreenHero) and code merge tools (KDiff3, DiffMerge, P4Merge).
  • Build: In this group we find the tools whose purpose is to prepare the code for execution and testing, validating that a development can be promoted to a staging or production environment. Used correctly, they eliminate much of the testing time developers would otherwise have to spend, and they ensure the detection of unexpected problems introduced in parts of the development that worked well before a change. It includes continuous integration tools (Jenkins, Bamboo, Go.CD) and build automation tools (Maven, Make, Gradle, Ant).
  • Test: The objective of test tools is to reduce manual test cases while increasing coverage, saving both the effort of creating and automating tests and the time of executing them. It includes continuous web testing tools (Selenium), mobile continuous testing tools (Appium), iterative testing tools (Lux, Expect), tools for acceptance test criteria (FitNesse, FIT, EasyAccept) and unit testing tools (XUnit, JUnit, Unit.js, ATF).
  • Packaging: This category contains tools that avoid depending on suppliers and external repositories during the build process, keep the company's proprietary libraries under control and allow the developed results to be shared with external clients. It includes artifact repositories (Nexus 3, Artifactory), containers (Docker) and document management (Confluence, Owncloud).
  • Configuration management: This category groups tools that use scripts, recipes, blueprints, playbooks, charms, templates, etc. to simplify automation and orchestration in the organization's execution environments. The aim is to provide a standard, consistent way of configuring services (a minimal sketch of this idea follows the list). It includes configuration and version management tools (Puppet, Salt, Chef, Ansible, Vagrant, Juju).
  • Deployment: This category includes tools and services that allow both the tools needed for DevOps and the developments themselves to run on infrastructure (on-premise, datacenter or public cloud). It includes private infrastructure orchestration tools (Gradiant ITBox, MaaS, Openstack, Kubernetes), service orchestration tools (Docker Compose, Docker Swarm), multicloud orchestration tools (Gradiant ITBox, Cloudify, Apache Brooklyn), virtualization (Docker, KVM) and public cloud (Amazon AWS, Azure, Google CE).
  • Monitoring: One of the walls DevOps tries to break down is the lack of feedback on applications once the operations department has put them into production. Monitoring should be transversal and should reach the development department without filters, so that developers can detect and interpret the day-to-day results of their work. It includes application performance tools (Zabbix, Nagios, Influxdb, Telegraf, Grafana), logging tools (GrayLog, Logstash, Kibana, Elasticsearch) and user experience and web analytics tools (Piwik, Google Web Analytics).
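
Regarding the configuration management category above, the following tiny sketch (with an invented path and contents) illustrates the idempotent, desired-state idea that tools such as Puppet or Ansible are built around: describe the state you want and apply a change only when the system differs from it.

```python
# Idempotent configuration sketch: applying it twice changes nothing the second time.
from pathlib import Path


def ensure_file(path: str, content: str) -> bool:
    """Make sure `path` exists with exactly `content`; return True if a change was made."""
    target = Path(path)
    if target.exists() and target.read_text() == content:
        return False  # already in the desired state, nothing to do
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return True


if __name__ == "__main__":
    changed = ensure_file("/tmp/demo-app/config.ini", "[app]\nport = 8080\n")
    print("changed" if changed else "unchanged")
```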

 
References:

[1] Fowler, M., & Highsmith, J. (2001). The agile manifesto. Software Development, 9(8), 28-35.

[2] Mastering Digital Disruption with DevOps Design to Disrupt Report 4 of 4 https://www.capgemini.com/resource-file-access/resource/pdf/design_to_disrupt_4.pdf

[3] Beck, K. (2000). Extreme programming explained: embrace change. Addison-Wesley Professional.

[4] Schwaber, K., & Beedle, M. (2002). Agile software development with Scrum (Vol. 1). Upper Saddle River: Prentice Hall.

[5] Sugimori, Y., Kusunoki, K., Cho, F., & Uchikawa, S. (1977). Toyota production system and kanban system materialization of just-in-time and respect-for-human system. The International Journal of Production Research, 15(6), 553-564.

[6] Naylor, J. B., Naim, M. M., & Berry, D. (1999). Leagility: integrating the lean and agile manufacturing paradigms in the total supply chain. International Journal of production economics, 62(1), 107-118.

[7] Stapleton, J. (1997). DSDM, dynamic systems development method: the method in practice. Cambridge University Press.

[8] Highsmith, J. (2000). Adaptive software development. Dorset House.

[9] Cockburn, A. (2004). Crystal clear: a human-powered methodology for small teams. Pearson Education.

[10] Palmer, S. R., & Felsing, M. (2001). A practical guide to feature-driven development. Pearson Education.

[11] Kruchten, P. (2004). The rational unified process: an introduction. Addison-Wesley Professional.

[12] Yagüe, A., & Garbajosa, J. (2009). Comparativa práctica de las pruebas en entornos tradicionales y ágiles. REICIS. Revista Española de Innovación, Calidad e Ingeniería del Software, Diciembre-Enero, 19-32.

[13] Koomen, T., van der Aalst, L., Broekman, B., & Vroon, M. (2006). TMap Next, for result-driven testing. UTN Publishers.

[14] Merkel, D. (2014). Docker: lightweight Linux containers for consistent development and deployment. Linux Journal, 2014(239), 2.