This post was authored by Michael Haigh and Rahul Dabke, and co-hosted on D2iQ’s blog.
Traditionally, deploying stateful data services on Kubernetes was hard to manage and required manual intervention. Operators are a powerful design pattern that takes Kubernetes to the next level of orchestration: they deploy stateful data services on Kubernetes and help developers with application lifecycle management. Creating a production-grade operator, however, commonly means writing tens of thousands of lines of code, and writing, managing, and maintaining that much code is cumbersome and often unforgiving.
Writing an operator requires a deep understanding of both the underlying service and Kubernetes itself. Although any two data service operators share common ground, each must manage different orchestration concerns, which typically leads to significant implementation differences. What is needed is a set of operators packaged for easy deployment and management on any Kubernetes cluster.
To Helm or not to Helm?
While Helm excels at installing services on Kubernetes, it does not provide full lifecycle management. Helm charts offer only basic functionality for packaging and deploying workloads. In many developers’ experience, charts are pleasant the first time you use them but become difficult to manage at scale, and they are a poor fit for deploying stateful data services. Operators, by contrast, cover the entire application lifecycle: not just deployment and upgrades, but also recovery, scaling, and more.
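To illustrate the difference: an operator lets you describe a stateful service declaratively through a custom resource, and the operator itself carries out deployment, upgrades, scaling, and recovery from that spec. A minimal sketch, assuming a hypothetical `KafkaCluster` custom resource definition (the API group, kind, and fields below are illustrative, not from any specific operator):

```yaml
# Hypothetical custom resource managed by an operator.
# The group, kind, and fields are illustrative only.
apiVersion: example.com/v1alpha1
kind: KafkaCluster
metadata:
  name: my-kafka
spec:
  version: "2.3.0"   # operator performs a rolling upgrade when this changes
  brokers: 3         # operator scales the StatefulSet to match
  storage:
    size: 100Gi
```

With a resource like this, scaling or upgrading is just a `kubectl apply` of the changed spec; the operator handles the procedural steps that would otherwise be manual.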
Building Operators - Who and How?
Building operators became necessary, especially for deploying stateful data services on distributed infrastructure. In the early days (Day 0) of Kubernetes, deployments were mostly stateless applications, and Kubernetes has always been good at managing those. Stateful applications and workloads such as databases or message queues, however, remain challenging and manually intensive to deploy on Kubernetes.
Stateful applications are procedural, require more hand-holding, and make lifecycle management a challenge. Deploying stateful data services and managing their lifecycle efficiently is the single most common use case that calls for operators.
Operators help independent software vendors (ISVs) automate the operation of their products on Kubernetes without requiring deep Kubernetes expertise from their users. Early attempts by some ISVs at writing operators produced little more than scripts: the implementation approaches varied, but they often yielded the same shallow level of integration with Kubernetes.
This is because service providers developing operators must, in addition to writing tens of thousands of lines of code, stay on top of the Kubernetes API: they need to know when API changes occur and whether those changes are breaking. As a developer, you may need to keep up with all of those challenges while also handling integration testing, RBAC configuration, documentation, and more.
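At its core, every operator implements the same control loop: observe the actual state of the service, compare it against the desired state declared in a custom resource, and act to converge the two. A minimal sketch of that reconcile pattern in Python (the field names are illustrative, and the dicts stand in for the Kubernetes API that a real operator would query through a framework):

```python
# Reconcile-loop sketch: compute the actions needed to converge
# observed cluster state toward the desired (declared) state.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to converge observed state to desired state."""
    actions = []
    observed_replicas = observed.get("replicas", 0)
    # Scale up or down to match the declared replica count.
    if observed_replicas < desired["replicas"]:
        actions.append(f"create {desired['replicas'] - observed_replicas} pod(s)")
    elif observed_replicas > desired["replicas"]:
        actions.append(f"delete {observed_replicas - desired['replicas']} pod(s)")
    # Trigger a rolling upgrade if the declared version changed.
    if observed.get("version") != desired["version"]:
        actions.append(f"upgrade to {desired['version']}")
    return actions

if __name__ == "__main__":
    desired = {"replicas": 3, "version": "2.3.0"}   # from the custom resource spec
    observed = {"replicas": 2, "version": "2.2.1"}  # from the live cluster
    for action in reconcile(desired, observed):
        print(action)
```

A production operator runs this loop continuously in response to watch events, and the hard part, which the SDKs below address to different degrees, is encoding the service-specific logic behind each action.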
Operator SDK Landscape
The Kubernetes operator ecosystem is growing rapidly in both developers and users. Some prominent approaches, ranked by the amount of code they require, are:
- D2iQ’s Kubernetes Universal Declarative Operator (KUDO)
- Kubebuilder by Google Cloud Platform
- Operator SDK by CoreOS
With the Operator SDK framework, operators are developed using Ansible, Helm charts, or Go; it demands the deepest understanding of the Kubernetes API and the most code to write an operator. Kubebuilder provides boilerplate templates and solutions for common operator patterns. KUDO operators are written as templated YAML manifests and require the least amount of code to implement.
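As a sense of what "templated YAML manifests" means, a KUDO operator package centers on an `operator.yaml` that declares tasks and plans, alongside a `params.yaml` and a `templates/` directory. A minimal sketch, assuming the KUDO `v1beta1` package schema at the time of writing (consult the KUDO documentation for the current format):

```yaml
# operator.yaml (sketch): a single "deploy" plan that applies templated resources.
apiVersion: kudo.dev/v1beta1
name: "my-service"
operatorVersion: "0.1.0"
tasks:
  - name: deploy
    kind: Apply
    spec:
      resources:
        - statefulset.yaml   # templated manifest under templates/
plans:
  deploy:
    strategy: serial
    phases:
      - name: main
        strategy: parallel
        steps:
          - name: everything
            tasks:
              - deploy
```

Upgrade, backup, or recovery become additional plans in the same file, which is how KUDO keeps the code footprint small compared to a hand-written Go controller.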
Operators alleviate the challenges of deploying stateful data services on Kubernetes: they can stand up a variety of data services and reduce the manual work required to support them. The different operator SDKs require varying amounts of code and hence offer different levels of control.
© 2019 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and the other Nutanix products and features mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).