Docker’s rapid build, ship, and run paradigm has realized the principal DevOps ideals of consistency, simplicity, and automation. Virtualization can enhance the DevOps benefits of containerization even further. Running containers on the Nutanix hyperconverged platform has many advantages and brings DevOps-style management to your virtualization infrastructure.
DevOps as a Technology Driver

We’re hearing more and more about containers as many enterprises are transforming their IT development processes to focus on DevOps. Traditional development models that create applications and then hand them over to operations are more difficult to maintain, less reliable, and slower than modern businesses can endure. DevOps is a culture and set of methods around integrating development and operations organizations into more cohesive structures that can rapidly deploy reliable applications.
From a cultural and structural standpoint, DevOps encourages IT to adopt three main concepts, as adapted from Gene Kim:
- Systems thinking: approach development and operations as a giant system and account for everything in the system during build and deployment. This approach incorporates the entire business value stream and avoids separating technology from the business itself.
- Feedback loops: create feedback loops that amplify the good and promptly eliminate the bad.
- Continual experimentation and learning: understanding complete systems with tight feedback loops allows you to take incremental risks, fail quickly, and recover without delay. These short cycles facilitate constant improvement while minimizing overall risk.
DevOps technology focuses on these areas:
- Automation: automating maintenance, provisioning, and orchestration to improve speed, consistency, and fault avoidance.
- Source control: the ability to reproduce the same application repeatedly, over time, and the ability to back out changes and revert to a known good state.
- Monitoring and dashboards: assessing the complete state of an environment: what's deployed, how it's performing, and its security status.
The combination of structure, culture, and technology in DevOps seeks to create reliable environments that are capable of continuous delivery. That is, instead of the large release and deployment cycles in traditional environments, a DevOps environment should be capable of generating small releases at high frequency. In some of the best examples, it's even possible to have dozens of updates a day.
DevOps and Docker Complement Each Other

Docker containers enable the key cultural and technological shifts that characterize DevOps:
- Systems thinking: Containers reduce the number of variables in deploying an environment; this makes the overall delivery and production system less complex and easier to manage.
- Continual experimentation: Containers allow you to roll out incremental changes and, if necessary, quickly revert to a known good state.
- Continuous delivery: The ability to rapidly create, modify, deploy, and destroy containers means that you can implement an application update quickly in a new container, which can start seconds after you shut down the previous one.
- Automation: Docker images and Dockerfiles lend themselves to automation. Images reside in a repository and can be pulled as needed. Dockerfiles are human and machine editable and automate the commands used to create a container image. Docker Machine lets you automatically create and configure Docker hosts and supplies various commands for managing them.
- Source control: Keeping images and Dockerfiles in repositories provides source control and revision history for containers.
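As a minimal illustration of the last two points, a Dockerfile captures a container's build steps in a plain-text, version-controllable file. The sketch below assumes a hypothetical Python web service; the file names and base image are illustrative, not prescriptive:

```dockerfile
# Hypothetical example: a minimal image for a Python web service.
FROM python:3.11-slim

WORKDIR /app

# Installing dependencies in their own layer lets Docker reuse the
# cached layer when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Checked into source control alongside the application, a file like this gives the team both a reproducible build recipe and a revision history for the container itself.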
Docker supports DevOps by ensuring uniformity in OS and application environments across development, QA, and operations. The QA person can easily detect whether an application has an issue, because QA occurs in a container identical to the one used in the development environment. The operations process is streamlined in the same way. A new production container is simply another instance of the same container used by QA and development. This automated consistency across all environments supports a higher level of trust in the organization and can serve as a core building block for a DevOps shop.
Nutanix Provides a Foundation for Both Docker Containers and DevOps

The DevOps and Docker themes are fundamental aspects of the Nutanix platform as well. Virtualizing containers may seem counterintuitive at first, since containers are themselves a logical partition of a running OS. But since the machines running Docker must be installed and maintained like any other equipment in the datacenter, managing them as virtual machines becomes a practical solution.
While the white box approach has been common in the large, internet-scale environments where Docker first took root, many enterprises run application environments at a smaller scale that makes managing single-purpose platforms an inefficient choice. If you think of containers as part of the application stack, then it becomes clear that virtualizing the container host OS produces the same benefits as all server virtualization: VM-level high availability, disaster recovery, resource scheduling, and fast provisioning. The new container paradigm benefits from these foundational technologies provided by the more mature virtualization ecosystem.
A container must also exist on an operating system that you have to manage. If the Docker engine’s host OS exists on a physical server, that OS must be installed in a consistent way on each machine. This requirement necessitates a machine-level configuration and maintenance regime that must be performed against a live operating system on live hardware. Likewise, variance in the hardware itself can affect container performance. When the container is running in a virtual machine, the attributes you set, like the number of vCPUs, or amount of RAM, are hardcoded into the VM and remain consistent across different physical servers.
Similarly, when Docker containers exist inside VM instances, those VMs can be cloned and replicated like any other VM. This also reduces the operations overhead of managing the host OS across potentially hundreds or thousands of machines. The host OS for the container becomes a “golden image,” and you can use a clone of that VM, running on the same hypervisor and the same Nutanix platform, throughout development, QA, and release. Integrating cloned VMs and Docker images ensures absolute consistency at every level of the stack.
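The golden-image workflow described above can be sketched with standard Docker commands. This is an illustrative sequence only; the registry hostname, image name, and version tag are hypothetical, and a real pipeline would typically run these steps from a CI system rather than by hand:

```
# Build once from the version-controlled Dockerfile and tag the result.
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# Dev, QA, and production each pull and run the identical image,
# so every stage tests and ships the same artifact.
docker pull registry.example.com/myapp:1.4.2
docker run -d --name myapp-qa registry.example.com/myapp:1.4.2
```

Because the image tag identifies one immutable artifact, promoting a release from QA to production means running the same image on another cloned host VM, not rebuilding anything.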
At the most basic level, the Nutanix invisible infrastructure takes care of hardware installation and management and provides the high performance benefits of the Nutanix Distributed Storage Fabric (DSF) to every node. Nutanix Prism is a built-in, best-in-class infrastructure management tool that operates across the entire system and makes monitoring the environment and identifying hot VMs straightforward. You can still monitor individual applications at the container level, and Prism tracks performance and health at the infrastructure level, so you get a complete picture without a separate management stack.
Integrating container VMs with your other enterprise VMs on a Nutanix cluster yields higher efficiencies across the entire virtualized environment—there’s no need to keep container systems separate from the rest of the server plant. When the Nutanix system is projected to run out of compute or storage resources, simply add more nodes to scale the cluster seamlessly and redistribute VMs to the new nodes. This kind of consistency at every level of the infrastructure on a Nutanix platform reinforces a DevOps-style environment and makes it part of a larger enterprise infrastructure.
For more on Docker on the Nutanix Invisible Infrastructure, check out our Nutanix Best Practices Guide: a quick-start guide to implementing the Docker container stack on Nutanix AHV.