Why Cloud Demands DevOps

Let's face it — most legacy applications aren't making the move to the cloud, and not everything can be delivered as a service. That means helping customers with software development projects.

Channel Partners

August 3, 2016


By Michael Biddick

For some, the words “software development” evoke fear, loathing and flashbacks to failed or costly (or both) projects. But the emergence of cloud has spawned a renaissance in how development happens, and channel providers need to understand how to help customers revamp legacy applications for the modern era.

You’ve probably seen the term “DevOps.” It was coined in 2009 as a reaction to decades of failed development projects. At its core, DevOps is a cultural movement, a response to the mistakes commonly made when large agencies develop software that runs over budget, continually misses deadlines and then fails to meet user requirements. DevOps aims to fix that.

DevOps found initial traction among large public cloud service providers, where much of what used to be considered infrastructure is now part of the code. The lessons learned from Google, Amazon, Twitter and Etsy are now directly applied to new software development projects for the federal agencies that I work with. The concepts are proven and ready to move downstream to small and midsize companies in all sectors and verticals.

The DevOps approach emphasizes communication, collaboration and integration between software developers and IT operations. In the past, these groups typically worked as silos. DevOps acknowledges the interdependence of code and infrastructure. It helps an organization produce software and IT services more quickly, with frequent iterations.

What is tough for most agencies is the cultural change DevOps demands. Shared ownership and collaboration are cornerstones. Functional silos must be broken down so applications can adapt more quickly to user needs. This can only happen with close teamwork.

Agencies also struggle with significant talent gaps. The supply of DevOps engineers comes nowhere near meeting current demand. Progressive federal CIOs have been building teams and augmenting their staff with partners; I expect this to become the norm with all businesses.

Let’s look at how to develop a DevOps practice.

First, you need to lose the idea of coding as a mysterious black box and demand that all software be intuitive for users and reliable, provide easy-to-interpret results and visual cues, and run on modern, virtualized infrastructure.

I’ve found that managing the technical know-how, processes and communications to deliver software that checks these boxes and meets user and business needs is an art, but one worth cultivating. In a business world now driven by software, Accenture says you could speed up customer application development cycles by a factor of 30, with 50 percent fewer failures. Return on investment will improve as more information is shared, not only between developers and operations staff, but also with marketing and business teams. As better software helps customers become more productive and competitive — which, at the end of the day, is the goal of DevOps — they spend more.

DevOps isn’t just a way of working. There are also tools used in the following stages of the code development and release cycle:

  • Version control. Code is stored in a way that lets developers work on different modules at the same time. Afterward, changes are merged back into one new master version. Version control applies to applications as well as, for example, configuration code for servers running the application in production.

  • Continuous integration. Developers may work on separate parts of code, but all the parts must also work together. By bringing together new developments more frequently and testing them together, problems are found faster. Continuous integration can be automated and triggered every time a new master version of code is produced.

  • Containers. Applications often need to access other modules, such as runtime libraries, in order to run. Instead of relying on other servers to have the right modules available, the applications and the modules can be put into a container, which is then portable and works across many servers.

  • Code deployment. After code has been developed and integrated (and successfully tested), the next step is to put it into production. This may mean installation across many servers in different locations. Automated deployment allows configurations to be applied in a repeatable and consistent way.

  • Performance and log monitoring. By measuring and monitoring applications in production, DevOps teams can spot problems and opportunities for improvement. 

Successful DevOps also means reconciling developer and IT operations goals. The role of developers is to change software. IT operations staff, on the other hand, want stability. By working together and using the tools described above, both goals can be achieved by making sure software is continually produced in a ready-to-deploy state. The cultural change (the close collaboration) and the automated tools are both crucial for improving the speed and quality of software development and deployment. For the enterprise as a whole, DevOps then means higher productivity, faster time to market, and increased competitiveness.

Channel Benefit: Automation

Software ready for the real world still has a virtual last mile to go. After development, testing, continuous integration, and quality assurance, for example, it must be installed on a production server — or perhaps hundreds or even thousands of production servers. When these servers are largely identical, as they are in the cloud, it makes sense to automate deployment and subsequent maintenance. Not only does this save time and effort, it eliminates many of the potential errors inherent in manual deployment.

Deployment management tools offer automation and orchestration capabilities for speedy and reliable deployment over large, decentralized and cloud-based populations of servers. As incremental software releases become more frequent, these tools become even more important for IT operations staff managing their server bases. By using predefined schemas (also called templates, recipes or playbooks), administrators can “define once and deploy many.” In addition, popular deployment management tools can be linked into a continuous delivery chain. Software releases can then automatically travel from stage to stage for final, automated deployment.
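
Here’s a rough idea, in Python, of what “define once and deploy many” boils down to: one predefined set of steps applied unchanged to every server in an inventory over SSH. The host names, package and service are placeholders, and a real deployment tool adds ordering, error handling and reporting on top of this.

```python
import subprocess

# Hypothetical inventory and deployment "schema" -- in a real tool these live
# in version-controlled templates, recipes or playbooks.
INVENTORY = ["web01.example.com", "web02.example.com", "web03.example.com"]
SCHEMA = [
    "sudo apt-get install -y nginx",                              # ensure the package exists
    "sudo cp /tmp/release/app.conf /etc/nginx/conf.d/app.conf",   # push the configuration
    "sudo systemctl restart nginx",                               # restart the service
]

def deploy(host: str) -> None:
    """Apply the same predefined steps to one server over SSH."""
    for step in SCHEMA:
        subprocess.run(["ssh", host, step], check=True)

# Define once, deploy many: the identical schema runs on every host.
for host in INVENTORY:
    deploy(host)
    print(f"deployed to {host}")
```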

Four products currently dominate the market for these tools, along with a choice of management applications that federate them and others into a DevOps continuous delivery chain. Most have partner programs:

  • Chef. This configuration management tool can automatically provision and configure servers. Chef integrates with cloud platforms including Amazon EC2, Azure, Google Cloud Platform and Rackspace, and is compatible with OpenStack. The tool is written in Ruby and uses a Ruby DSL (domain-specific language) for writing configuration recipes to put resources into declared states.

  • Puppet. System configurations are declared in Puppet using Puppet’s own declarative language. A resource abstraction layer lets administrators define the configuration using high-level terms such as users, services and packages.

  • Ansible. Offering configuration management with multinode deployment, Ansible requires that Python be installed on the servers concerned, then uses SSH (secure shell) to issue instructions. Administrators write reusable descriptions of systems in “human-readable” YAML.

  • Salt. Initially a tool for remote server management, Salt, also known as SaltStack, is a Python-based open source application that now offers infrastructure automation and predictive cloud orchestration.
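
Different syntaxes aside, all four tools share a declarative, desired-state model: you describe how a server should look, and the tool changes it only if reality differs. The Python fragment below is a tool-agnostic sketch of that idempotent “check, then converge” pattern; the file path and service name are invented for illustration, and it assumes a Linux host with systemd.

```python
import os
import subprocess

def ensure_file(path: str, content: str) -> bool:
    """Converge a file to the declared content; do nothing if it already matches."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current == content:
        return False              # already in the desired state, no change made
    with open(path, "w") as f:
        f.write(content)
    return True                   # state changed

def ensure_service_running(name: str) -> bool:
    """Converge a systemd service to 'running' (illustrative; assumes systemd)."""
    if subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0:
        return False
    subprocess.run(["systemctl", "start", name], check=True)
    return True

changed = ensure_file("/etc/motd", "Managed by the ops team\n")
changed = ensure_service_running("nginx") or changed
print("converged" if changed else "already in desired state")
```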

Applications that federate these and other deployment management tools include:

  • Ansible Tower. Provides dashboards, role-based access control, job scheduling, and graphical inventory management. A REST API facilitates integration with other tools in a DevOps continuous delivery chain.

  • Foreman. Often used as an all-purpose GUI for automation solutions, Foreman was built for integration with tools like Chef and Puppet.

  • HashiCorp Atlas. More recently introduced, Atlas offers configuration management, together with visibility into servers, virtual machines, containers and other infrastructure. A closed-source product, Atlas provides dashboard facilities for developing applications, as well as deploying and maintaining them.

Which product should you choose? Larger customers looking for mature, stable products may prefer Chef or Puppet. Those who value speed and simplicity may be drawn to Ansible and Salt. As a federating application, Ansible Tower offers ease of integration through its REST API. Foreman, on the other hand, has the advantage of being open source and freely available, compared with Ansible Tower’s commercial licensing.
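
Integration through a REST API usually amounts to a call like the one sketched below: a step in the delivery chain authenticates and asks the federating tool to launch a predefined job. The host name, endpoint path and response fields here are placeholders rather than Ansible Tower’s documented API; check the product’s API reference for the real routes.

```python
import requests  # third-party library: pip install requests

TOWER_URL = "https://tower.example.com"   # placeholder host name
JOB_TEMPLATE_ID = 42                      # placeholder job template

def launch_job(token: str) -> int:
    """Ask the federating tool to run a predefined deployment job (illustrative endpoint)."""
    resp = requests.post(
        f"{TOWER_URL}/api/job_templates/{JOB_TEMPLATE_ID}/launch/",  # hypothetical path
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]              # assumes the response identifies the new job

job_id = launch_job(token="REPLACE_WITH_API_TOKEN")
print(f"launched job {job_id}")
```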

Pick Your Container

In the beginning, applications ran on one operating system on one physical server, period. Then came hypervisors and virtual machines. A VM offers a full operating system and associated resources, and several VMs can run in the same physical machine. However, VMs must still be started or “spun up” in a server. They can consume a relatively large amount of resources and there are often limits on their portability.

Containers offer a new type of virtualization by packaging an application and its dependencies — runtime, system tools, system libraries — together, while sharing the host’s operating system. As a result, containers use fewer resources, can be moved from one host to another, and are simpler and faster to deploy. They offer a number of advantages for DevOps:

  • Built-in version control. Different versions of application libraries on other machines no longer affect your application, because it has what it needs in its container.

  • Suited to microservices. A highly modular microservices architecture fits well with agile development and therefore with DevOps. Each microservice can be put into a container with the runtime files it needs.

  • Pre-built before deployment. Stable containers can be built earlier in the DevOps cycle and made ready for immediate deployment. By comparison, virtual machines need time to “spin up” within a physical machine.

  • Easy to monitor and manage, compared with VMs.

Does that mean containers will replace virtual machines? Although their resource savings and speed of deployment make containers attractive, they are less secure than VMs, and they must often all use the same Linux operating system. Coexistence is more likely. For this reason, in shared environments like public clouds, containers are often run inside a VM.

Container technology has only recently achieved widespread recognition, starting with the introduction of Docker in 2013. Since then, the number of container technologies has increased:

  • Docker. Originally built on Linux Containers, or LXC, Docker containers now have their own execution environment. The Docker platform is supported by Microsoft Azure, and Docker has a strategic partnership with the IBM Cloud. Google created Kubernetes, a tool specifically for managing Docker containers across server clusters.

  • Rocket. While Docker has become a complex platform, the goal of Rocket is to provide a simple, secure, reusable component (like the original Docker design goal) for deploying applications.

  • Microsoft Drawbridge. Microsoft’s own Drawbridge container technology has so far been used internally and as a sandboxing solution within Azure services. Plans for wider availability have not yet been announced, even though the technology was prototyped in 2011.

  • LXD. Offered by Canonical, LXD is based on top of LXC. LXD provides system containers, where each container runs a copy of a Linux distribution. By comparison, Docker and Rocket are application container managers. A Docker or Rocket container could run inside an LXD container, and benefit from LXD host resource management and security.

Which container technology should you use? Developers may find Docker attractive because it has a large ecosystem. They may opt for LXD for its flexibility in mixing and matching different Linux operating environments. Operations staff may prefer Rocket for reasons of simplicity and security. Microsoft shops may opt for Docker, supported by Microsoft Azure, or perhaps in the future — but don’t hold your breath — Microsoft Drawbridge containers.
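
To get a feel for how lightweight containers are to work with, here is a minimal sketch using the Docker SDK for Python (the docker package). It assumes a local Docker daemon is running and will pull the small Alpine image if it is not already present.

```python
import docker  # Docker SDK for Python: pip install docker

def run_throwaway_container() -> str:
    """Start a tiny container, run one command inside it, and clean it up."""
    client = docker.from_env()            # connect to the local Docker daemon
    output = client.containers.run(
        "alpine:3.7",                     # small base image
        ["echo", "hello from a container"],
        remove=True,                      # delete the container once it exits
    )
    return output.decode().strip()

print(run_throwaway_container())
```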

The closer a team works together on producing and deploying a software application, the better the chances of a timely, high-quality result.

CI: The Road Through Integration Hell

DevOps is the way to achieve this, through tight collaboration between developers and operations staff and automation of the different release steps. To make progress on the code, however, developers also need to work separately on its individual parts.

A risk in working alone, however, is that difficulties arise when all these individual efforts are integrated. In pre-DevOps days, this integration phase was often pushed back until after all the development work had been done. This led to “integration hell,” where teams struggled to make different modules work together. Code then had to be rewritten or even scrapped, wasting time and money.

DevOps reduces this risk radically by using continuous integration (CI). Smaller, but more frequent, code changes are committed to the version control system being used. A CI server then builds the overall application from the latest versions of the modules in the version control system and tests the build. These integration test results shine a spotlight on problems or conflicts, which can then be resolved, giving a solid, tested code base on which the next development can be based. With the higher frequency of integration testing, rework and waste are minimized.
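
Stripped to its essentials, a CI server does little more than the loop sketched below: watch version control for a new revision, build and test it, and report the result. This toy Python version assumes a local Git checkout and a test suite runnable with pytest; real CI products react to commit hooks instead of polling and add queues, workers, notifications and history.

```python
import subprocess
import time

REPO_DIR = "/srv/ci/myapp"   # placeholder path to a Git working copy

def head_revision() -> str:
    """Return the commit currently checked out."""
    out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=REPO_DIR,
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def build_and_test() -> bool:
    """Run the project's test suite; True means the build is ready to deploy."""
    return subprocess.run(["python", "-m", "pytest", "-q"], cwd=REPO_DIR).returncode == 0

last_seen = head_revision()
while True:
    subprocess.run(["git", "pull", "--ff-only"], cwd=REPO_DIR, check=True)
    current = head_revision()
    if current != last_seen:
        ok = build_and_test()
        print(f"{current[:8]}: {'PASS, ready to deploy' if ok else 'FAIL, fix before merging'}")
        last_seen = current
    time.sleep(60)   # poll once a minute; real CI servers react to commit hooks
```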

Continuous integration, which can be performed several times per day, has existed for some time. In DevOps, CI can be extended to a process of continuous delivery and deployment. Code that has passed CI testing is automatically made available, for instance, to a deployment management application for automated installation on a range of production servers. CI products include:

  • Jenkins. This continuous integration tool lets you automate a process to build an application, test it and deliver it to the next release stages (for instance, quality assurance), and to final deployment. Jenkins runs on a server. It offers its own build and test functionality, and can also work with popular build tools like Ant and Maven.

  • Bamboo. Bamboo offers triggers to start builds automatically when code changes are committed and sports native support for Git, SVN and Mercurial version control systems. Automated builds, tests and releases can be combined into a single workflow to extend to continuous delivery and deployment.

  • Solano. You can use Solano as a SaaS solution or run it on a private server. The tool allows code and tests to be written in a variety of programming languages. It works with Git and other version control systems and produces reports on each build and test cycle, with screenshots, logs and metadata.

  • Atlas. Enabling DevOps over a number of popular cloud services, Atlas can run builds and integration tests with Packer, the machine image creation tool from the same company (HashiCorp). Atlas can also work with other build tools. Tested code can be entered directly into the Atlas workflow, or added after the code has been definitively merged in a version control system such as GitHub.

Which one should you use? All are strong choices, but some continuous integration tools also extend to handle much or even all of the overall DevOps workflow. They offer a “pipeline as code” approach, in which a workflow and the handoffs from one stage to another can be automated end to end. If that’s attractive, consider Atlas or Jenkins.

Mind the Versions

In DevOps, the goal is not only to excel in producing applications. It is also to control the IT infrastructure through code, including integration testing, server deployment and configuration, monitoring and reporting. Once the code is in place for any of these items, it can be automatically triggered and executed. This code-based approach is what makes continuous delivery possible in DevOps: Applications flow from their creation through to deployment in the real world of production.

Code version control is therefore vitally important to successful DevOps. Application code branches must be correctly merged for continuous integration testing. Infrastructure configuration code must be available in the latest version to operations staff.

Version control systems can also hold configurations for performance monitoring and log management systems. In addition, clear records of which code changes were made where, when and why are crucial for speedy problem resolution and auditing.

Although today’s popular version control systems have similar goals, they have different strengths and weaknesses:

  • Git. Developed from the start as a distributed system, Git has no centralized base for files. This allows your team to have multiple redundant code repositories and multiple branches of code that can be worked on, online or offline. Because Git operations are mostly local to each repository, network latency is not an issue, and Git is relatively fast.

  • SVN. Apache Subversion, or SVN for short, is single-server-based, rather than distributed like Git. SVN takes the most practical features of an older system called CVS (still in use) and adds more of its own. In particular, it prevents corruption of the database of code revisions by using atomic operations. Atomic means that changes to source code must be applied fully or not at all; partial changes are not allowed. While very popular, SVN is slower than Git, although easier to use.

  • Mercurial. A direct competitor to Git, Mercurial also uses a distributed model. It is written in Python, so it will be familiar territory for your Python developers. It’s easier to learn than Git but offers less powerful branch-merging capabilities.
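
Distributed or centralized, all three systems revolve around the same branch-and-merge flow described above. Purely for illustration, the Python sketch below drives that flow through Git’s command line; the repository path and branch name are made up.

```python
import subprocess

REPO = "/srv/work/myapp"   # placeholder: an existing Git working copy

def git(*args: str) -> None:
    """Run one git command inside the repository (thin illustrative wrapper)."""
    subprocess.run(["git", *args], cwd=REPO, check=True)

# Work on an isolated branch without disturbing the master version...
git("checkout", "-b", "feature/login-form")
git("add", ".")
git("commit", "-m", "Add login form")

# ...then merge it back so continuous integration can test the combined code.
git("checkout", "master")
git("merge", "--no-ff", "feature/login-form")
```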

Which one should you use? If your DevOps team is small, SVN may be the better choice. For larger projects or ones in which several developers will be updating the code at different times, Git may be preferable. If developers balk at Git, perhaps for lack of user friendliness, Mercurial may be a suitable compromise. However, the following recent developments associated with Git may have solved major usability issues:

  • GitHub offers the power and functionality of Git, but with a web-based graphical user interface (Git itself has a command-line interface). It also includes collaboration functions, such as wikis, task management, bug tracking and feature requests.

  • GitLab also includes wikis and issue tracking as well as LDAP integration to directory servers for tracking resources and user privileges.

  • Bitbucket is similar to GitHub, although it is written in Python and can be used as a web hosting service for projects using Git or Mercurial.

Whichever version control system you use, each code change committed to the system can also automatically trigger continuous integration and/or continuous deployment. Speed and reliability of releases can then go up, while issues and incidents go down.
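
That automatic trigger is usually nothing more than a hook script. The sketch below is a hypothetical Git post-receive hook written in Python: whenever a change is pushed, it notifies the CI server by POSTing to a webhook URL. The URL and payload are placeholders; the real trigger mechanism depends on the CI product you pair it with.

```python
#!/usr/bin/env python3
"""Hypothetical Git post-receive hook: tell the CI server a new revision arrived."""
import json
import sys
import urllib.request

CI_WEBHOOK = "https://ci.example.com/hooks/build"   # placeholder webhook URL

def notify(old: str, new: str, ref: str) -> None:
    payload = json.dumps({"old": old, "new": new, "ref": ref}).encode()
    req = urllib.request.Request(CI_WEBHOOK, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

# Git feeds one "old-sha new-sha ref-name" line per updated branch on stdin.
for line in sys.stdin:
    old, new, ref = line.split()
    notify(old, new, ref)
```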

Round Up the Data

Servers and applications generate a wealth of performance feedback and log data. This data can be used at various stages of a DevOps release cycle to help pinpoint the origins of issues and indicate opportunities for improvement. With the right tool for collecting data from different locations, collating it and making sense of it, developers and operations staff can:

  • Debug applications that are in development, including configurations involving multiple modules, systems and networked locations;

  • Monitor applications in production and react to abnormal events;

  • Troubleshoot problems in production systems using precise information on the time issues occurred and comparing faulty and correctly operating systems; and

  • Identify opportunities to improve performance, including the successive elimination of bottlenecks.

Rapid analysis of feedback data, at whatever stage, allows developers to produce and test new versions of software in a timely way. These new versions can then follow the same DevOps path to production release as the original version of the application — for instance, via continuous integration and testing, code version control and deployment management.
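
In its simplest form, that analysis can look like the Python sketch below: scan collected web-server logs for error spikes and slow requests so the team knows where to look first. The log format, file name and thresholds are assumptions chosen for illustration; the tools listed below do this at much larger scale.

```python
import re
from collections import Counter

# Assumed log line format: "<timestamp> <path> <status> <milliseconds>"
LINE = re.compile(r"^(\S+) (\S+) (\d{3}) (\d+)$")

def analyze(lines):
    """Count server errors per path and flag unusually slow requests."""
    errors, slow = Counter(), []
    for raw in lines:
        match = LINE.match(raw.strip())
        if not match:
            continue                      # skip lines that don't fit the assumed format
        ts, path, status, ms = match.groups()
        if status.startswith("5"):
            errors[path] += 1
        if int(ms) > 1000:                # threshold chosen purely for illustration
            slow.append((ts, path, int(ms)))
    return errors, slow

with open("access.log") as f:             # placeholder log file
    errors, slow = analyze(f)
for path, count in errors.most_common(5):
    print(f"{count:4d} server errors  {path}")
for ts, path, ms in slow[:5]:
    print(f"{ms:5d} ms  {path}  at {ts}")
```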

Better still, performance analytics can be moved forward (“feedforward”) in the DevOps cycle to evaluate performance of an application even before it is deployed in production. With customers, your team can define key performance indicators to measure performance when an application goes live. 

Well-known application performance and log management tools include:

  • Splunk. One of the richest tools in terms of features and apps for analyzing different kinds of data, Splunk also offers extensive search and visualization capabilities. It runs as an on-premises application under commercial licensing and may require a dedicated cluster of servers for execution. It has a mature partner program.

  • SolarWinds PaperTrail. Ease of use is a key feature with a single user interface collating log data from multiple machines. While text-based and affordable, PaperTrail does not offer advanced predictive or reporting capabilities.

  • Logstash. This tool is part of an open-source stack: Logstash itself handles the collection and management of log files, ElasticSearch indexes and searches the data, and Kibana provides charting and visualization functions. The three modules use three different technologies (Ruby, JSON and JavaScript, respectively).

  • New Relic. As a SaaS application, New Relic offers application performance baselines and deployment markers to help DevOps teams see the impact of successive versions of an application on performance and stability.

Which one should you use? That depends on your, or your customer’s, budget and willingness to handle complexity. Of the four, PaperTrail is the entry-level solution in terms of simplicity, range of functionality and cost of implementation. New Relic offers pay-as-you-go pricing. An implementation of Logstash is also affordable, although the mixed technologies make implementation more complex. Splunk, on the other hand, can be costly, both in licensing and implementation, positioning it as more of an enterprise-level solution.
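
Whichever tool wins, it helps if applications emit logs the tool can parse. A common, tool-agnostic approach is structured (JSON) logging, sketched below with Python’s standard logging module; the field names are illustrative, and real deployments usually write to a file or agent rather than the console.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line, easy for log tools to ingest."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()      # real collectors usually tail a file instead
handler.setFormatter(JsonFormatter())
log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("deployment finished")
log.warning("response time above threshold")
```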

As CEO of Fusion PPT, Michael Biddick is responsible for the strategic vision, market strategy, project quality and overall performance of the company. His unique blend of technology experience coupled with business acumen has helped the company achieve triple-digit growth. A published author with more than 60 articles on IT topics, Biddick earned bachelor’s degrees in political science and African-American history from the University of Wisconsin-Madison, and a master’s degree in information systems from Johns Hopkins University.

LinkedIn: linkedin.com/in/michaelbiddick
Twitter: @michaelbiddick
