
Stateful Container Services on Nutanix Part 3: MongoDB Replica Set

by Community Manager, 12-06-2016 (edited 12-09-2016)

 

This blog was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix, and Kate Guillemette, Technical Writer & Editor, Solutions & Performance at Nutanix.

 

In this final post in our stateful container series, we use some of the container tools available to set up a MongoDB replica set within containers and have each instance store its data to a volume on the Nutanix storage layer. We created and provisioned these Dockerized VMs in Part 2; here, we are getting them to run MongoDB.

 

The best way I’ve found to build a repeatable process for setting up the same MongoDB instance every time is Docker Compose. Compose uses a YAML-based configuration file to define multi-container environments, so a single command can stop, start, and rebuild container services.

 

Where Docker Machine and Compose really come into their own is in conjunction with an orchestration and scheduling framework like Docker Swarm. Swarm treats a collection of Docker hosts as a single virtual host. With the scaling that Compose makes possible, you can then deploy services across the hosts according to desired constraints and affinities. I’m not going to cover Swarm or other automation frameworks here, but such technologies will be part of our ongoing work on the orchestration and scheduling of “stateful” applications or services.

 

Here are the next steps to get our MongoDB instances up and running:

  • Connect to each Docker Machine.
  • Install Docker Compose using the instructions available here.
  • On each machine, create a Compose file with the following contents:

[Screenshot: mongo.yaml Compose file]
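In sketch form, the file looks something like this on the first host (version 1 Compose syntax; the driver name nutanix is an assumption, while the volume name dbdata01 and replica set name ntnx follow the conventions described below):

mongodb:
  image: mongo
  command: mongod --replSet ntnx
  # direct host networking, as discussed below
  net: host
  ports:
    - "27017:27017"
  # assumes the Nutanix plugin registers under the driver name "nutanix"
  volume_driver: nutanix
  volumes:
    # named persistent volume mapped to MongoDB's default data directory
    - dbdata01:/data/db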

 

In the above Compose file, we define a mongodb: container service using the default mongo image: from Docker Hub, and we specify the Nutanix volume_driver: for our volumes. We create a single persistent volume, dbdata0n (where n = 1…3), which maps to /data/db within the container runtime.

 

I decided to map the ports: like for like, so here port 27017 (mongod) in the container maps to 27017 on the host. I start the mongod daemon with the replSet option, which takes the replica set’s name as its parameter; here, that name is ntnx. Finally, I am using host networking directly. In our example, I’ve made this choice purely for convenience, but database vendors deploying their applications in containers often call out direct host networking as a best practice.

 

  • Start the container service by running the compose up command on the Compose file above like so:

# docker-compose -f ./mongo.yaml up -d

 

  • Verify that your MongoDB container service is running via one of the commands below:

[Screenshot: verifying the MongoDB container service]
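For example, either of the following confirms that the service is up:

# docker-compose -f ./mongo.yaml ps
# docker ps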

 

In the steps above, we’ve covered the creation of volumes “on the fly” within the Compose workflow. However, you can also use volumes that you created earlier and have ready for reuse; some workflows demand that you create volumes up front. We’ll cover that use case in the following section.

 

Volumes are a first-class citizen within Docker, which means that you can create them as standalone units. So, if you prefer to precreate your volumes using docker volume create, you need the alternate “version 2” Compose file syntax. First, create the Docker volumes using the Nutanix volume driver, then call them out as “external” volumes in the Compose file itself. Here is an example of this procedure:

 

  • Precreate your named volume and ensure that you specify the Nutanix volume driver.

[Screenshot: precreating the named volume with the Nutanix driver]
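For instance, again assuming the plugin registers under the driver name nutanix, the precreation step looks like this:

# docker volume create -d nutanix --name dbdata01
# docker volume ls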

  • Bring up the MongoDB container service using a Compose file with the version 2 syntax.

[Screenshot: version 2 Compose file with external volumes]
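A sketch of the version 2 file, with the service details carried over from the original and the precreated volume called out as external:

version: "2"

services:
  mongodb:
    image: mongo
    command: mongod --replSet ntnx
    network_mode: host
    ports:
      - "27017:27017"
    volumes:
      - dbdata01:/data/db

volumes:
  dbdata01:
    # precreated with docker volume create, so Compose does not manage it
    external: true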

 

When you compare this version 2 block to our original Compose file above, notice that services: and volumes: are now in their own separate YAML stanzas. If we were to use overlay networks, we would do the same with networks. We’ve also changed the network_mode: label in this version.

 

Both mechanisms for running container services with persistent volumes via the Nutanix volume driver create the volumes as iSCSI block devices in a volume group (VG), which you can view in the Prism GUI. See below:

 

[Screenshot: the resulting volume group in the Prism GUI]

 

Now that we have each individual MongoDB instance running in its own container with its own persistent volume, we can finally initialize our replica set. We need to connect to a Docker Machine, attach to its running container, and manipulate the database through the mongo shell.

 

[Screenshot: connecting to the Docker Machine]

 

  • Obtain the container ID and attach a bash shell to that running container.
  • From within the container, run a mongo shell session.

 

[Screenshot: attaching to the container and starting a mongo session]
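In outline, those two steps look like this:

# docker ps
# docker exec -it <container id> bash
# mongo          (run from inside the container)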

 

  • From within the mongo session, create a replica set configuration JSON document and then initialize the replica set.
  • Obtain the Docker Machine IP addresses from either docker-machine ls or docker-machine ip <machine hostname>. I strongly recommend using the form of set initialization below: if you use the default rs.initiate() without a cfg document parameter, your primary MongoDB instance hostname could end up resolving to the loopback interface or localhost, leaving you unable to speak to the other replica set members.

[Screenshot: replica set configuration document and rs.initiate()]
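Something like the following, with placeholder addresses standing in for your actual Docker Machine IPs (the _id must match the ntnx name we passed to --replSet):

> cfg = {
    _id: "ntnx",
    members: [
      { _id: 0, host: "<dbhost01 IP>:27017" },
      { _id: 1, host: "<dbhost02 IP>:27017" },
      { _id: 2, host: "<dbhost03 IP>:27017" }
    ]
  }
> rs.initiate(cfg)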

 

 

One of the issues people frequently encounter with pet containers is the requirement for long-lived DNS names. Unless you want to manage IP addresses locally, you need to assign each replica set member an IP address that resolves via DNS. Without a resolvable address, the other instances might not be able to synchronize data from the primary, forcing you to reconfigure the set membership by hand. Not only is that an unpleasant position to be in at this stage, but it also might prove impossible without starting the whole process again.

 

At this point, you should have a replica set running between your three MongoDB instances:

 

[Screenshot: the initialized replica set across the three MongoDB instances]
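From a mongo session on the primary, rs.status() reports the state of each member; with everything healthy, you should see one PRIMARY and two SECONDARY entries:

> rs.status()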

 

The process we’ve outlined so far has not quite brought the deployment to the point where it’s truly production-ready. However, in a real use case, we could automate much of this labor. Further work toward streamlining deployment could include building out stateful services across a cluster of Docker Machine hosts; this project would allow us to think about affinities and constraints when considering how to locate (or, more importantly, not colocate) the various database services.

 

If you're working on stateful services that are running in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

 

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

Stateful Container Services on Nutanix Part 2: MongoDB

by Community Manager, 11-28-2016 (edited 12-09-2016)

This post was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix, and Kate Guillemette, Technical Writer & Editor, Solutions & Performance at Nutanix.

 

In our last post on stateful containers, we talked about using the Nutanix-Docker integration components to configure and run stateful (“pet”) applications or long-running services in containers. Many organizations have been reluctant to try running their pet workloads in containers, at least in a production setting. Using MongoDB as a test case, I hope to provide some assurance that containers aren’t just for short-lived applications anymore.

 

[Image: tweet on what it really takes to run a database in containers]

Source: https://twitter.com/mfdii/status/697532387240996864 (Used with permission.)

 

Using containers to bring up a cloud-native database application like MongoDB is nontrivial to do right. I don’t plan to cover every aspect of a production deployment at this stage, but future solutions-based work in this area will address it. You can see from the image above that there is far more to deploying containers than the docker pull/docker run sequence.

 

First, let’s consider the virtual host provisioning and data persistence aspects of a stateful deployment.

 

Installation and Setup

 

First off, we need to download the Docker Machine driver for Nutanix along with the CentOS container host image from the Nutanix portal. Upload the container host image to the Acropolis Image Service and copy the Docker Machine driver to a directory on your Docker CLI host. In these examples, the CLI host is a Linux VM running on my desktop, but Nutanix also provides drivers for Mac/OSX and Windows—just adapt the instructions accordingly. I’ve tried to capture all of the required installation steps below, but if you need more details, please download the Acropolis Container Services Guide.

 

[ Related Reading: Stateful Container Services on Nutanix Part 1 ]

 

Ensure that you are running AOS version 4.7 or later and assign a data services IP address to the cluster via either Acropolis Prism or nCLI.

 

[Screenshot: assigning the data services IP address]
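Via nCLI, the assignment looks roughly like the following; the exact parameter name may vary by AOS release, so confirm it against the Acropolis Container Services Guide:

ncli> cluster edit-params external-data-services-ip-address=<data services IP>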

 

  • Download the CentOS container image to your laptop or workstation from the Nutanix support portal, then upload it to the Acropolis Image Service—the example below uploads directly from the portal URL.

[Screenshot: uploading the image to the Acropolis Image Service]
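With acli, for instance, an upload straight from a URL looks something like this (the image name here is illustrative):

acli> image.create centos-docker-host source_url=<portal URL> image_type=kDiskImage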

 

The next step is to download the Docker Machine driver from Nutanix and install it on the Linux CLI VM.

[Screenshot: installing the Docker Machine driver]
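Docker Machine discovers third-party drivers as executables named docker-machine-driver-<name> on the PATH, so the installation amounts to something like:

# chmod +x docker-machine-driver-nutanix
# mv docker-machine-driver-nutanix /usr/local/bin/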

 

Now we’re ready to spin up Docker Machines on the Nutanix platform.

 

Nutanix Docker Machine and Volume Driver

The Nutanix Docker Machine driver allows us to deploy VMs with Docker Engine preinstalled, much like you could do in a public cloud. The difference is that the VMs run on Nutanix AHV, the hypervisor that underpins the Nutanix Enterprise Cloud Platform. Using the create option, you can customize your Docker Machines, not only with VM sizing and base OS configuration, but also with the Nutanix driver itself. This flexibility means that we can generate VMs in various sizes. The following command line help option shows some of the additional parameters you can supply to the Nutanix driver when building out Dockerized machines:

 

[Screenshot: Nutanix options in the docker-machine create help output]
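That output comes from the standard create help, filtered down to the driver’s flags:

# docker-machine create -d nutanix --help | grep nutanix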

 

This command returns Nutanix driver-related options that allow you to create VMs with the desired RAM (--nutanix-vm-mem) and CPU or core count (--nutanix-vm-cpus or --nutanix-vm-cores) using the docker-machine CLI.

 

Now that we’ve seen how to tailor VM specifications, it’s time to create some machines using the Nutanix driver. Let’s start by setting up three Dockerized VMs. Each VM runs a MongoDB instance that will eventually form part of a replica set.

 

[Screenshot: the three Dockerized VM hosts]

The intention is to use the Docker ecosystem components (Machine, Compose, and so on) to make a repeatable, automated build process for our MongoDB instances. The command line syntax for each host follows the same format.

 

[Screenshot: docker-machine create command for dbhost01]
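A sketch of that command follows. The --nutanix-vm-* sizing flags appear in the help output above; the endpoint, credential, image, and network flag names are assumptions that follow the driver’s naming pattern, and the memory value assumes the flag takes MB:

# docker-machine create -d nutanix \
    --nutanix-endpoint <prism IP>:9440 \
    --nutanix-username <user> \
    --nutanix-password <password> \
    --nutanix-vm-image <CentOS container host image> \
    --nutanix-vm-network vlan.68 \
    --nutanix-vm-cpus 1 \
    --nutanix-vm-cores 8 \
    --nutanix-vm-mem 1024 \
    dbhost01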

 

In the Docker Machine command above, we’re creating a Dockerized VM called dbhost01. We specify the Nutanix machine driver, the Nutanix Prism network endpoint, and the user credentials for talking to the Prism API. We also call out the CentOS container host image and the network (vlan.68) where the VM should run. I added in a few extra options that set CPU and RAM resources a little above default, at one CPU, eight cores, and 1 GB RAM—it’s going to run a database, after all!

 

Next, we can choose how we administer Docker on the newly minted VM: either connect to the machine directly via SSH or run the eval command string locally and talk remotely to Docker Engine on the Docker machine host (VM). Let’s do a little of both to show you the options:

 

  • Set up the Docker Machine connection environment.

 

[Screenshot: setting up the Docker Machine environment]
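That setup is the usual docker-machine env step, after which the local client talks to the daemon on dbhost01:

# eval $(docker-machine env dbhost01)
# docker info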

 

  • Now we are talking to the Docker daemon on the new machine. Pull the Nutanix volume plugin from its repository if required; note, however, that the start-volume-plugin.sh script should already be installed in the root home directory on the VM.

[Screenshot: pulling the Nutanix volume plugin]

 

  • Connect to the Docker Machine and configure the Nutanix volume plugin:

[Screenshot: configuring the Nutanix volume plugin]
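In outline (run the script from root’s home directory; any arguments it needs, such as the cluster’s data services IP, are covered in the Acropolis Container Services Guide):

# docker-machine ssh dbhost01
# ./start-volume-plugin.sh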

 

Repeat this process until, in total, you have three Dockerized VMs deployed on the Nutanix platform. Each one has the Docker Engine and the Nutanix volume plugin installed, so we can create persistent volumes for the database instances.

 

You can find these VMs in the Prism GUI under the VM → Table dropdown menu and manage them like any other VMs. The Acropolis DHCP/IPAM facility assigns them an IP address. Even after creation, you can change the number of cores or the memory capacity as shown below.

 

[Screenshot: managing the Docker Machine VMs in the Prism VM table]

 

 

Here we have our Dockerized VMs up and ready to host services. In the next post, we’ll cover the setup of a MongoDB replica set, using the Nutanix volume plugin to create persistent storage on the Distributed Storage Fabric (DSF).

 

If you’re working on stateful services that are running in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

Stateful Container Services on Nutanix Part 1

by Community Manager, 11-21-2016 (edited 11-21-2016)

 

This post was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix, and Kate Guillemette, Technical Writer & Editor, Solutions & Performance at Nutanix.

 

[Image: tweet on pets vs. cattle]
Source: https://twitter.com/jboner/status/736095483559481345 (Used with permission.)

 

This short series of posts will be our first look at using the Nutanix Docker Machine and the Nutanix volume driver plugin to configure “stateful” or “pet” type containers (as contrasted with “stateless” or “cattle” type containers). In the original pets vs. cattle analogy, pets were the two-node failover clusters that ran large databases on big iron systems, while cattle were the more cloud-aware, horizontal scale-out applications designed to handle failure.

 

As the popularity of containers has risen recently—as has the use of Docker to deliver them—these categories have been reassigned. Now, many in the container community believe that only ephemeral or stateless apps are suited to running in containers, while, somewhat ironically, the applications previously defined as cattle have become today’s pet workloads.

 

A lot of the existing work with Docker surrounds short-lived containers hosting stateless applications that only require ephemeral storage. Really, though, today there are no stateless applications. Containers have introduced a step change in the way we deploy applications, and even applications that are cattle-like by today’s definition, such as load balancers and web servers, need to be able to store access and error logs. Otherwise, where’s the audit trail when things go wrong?

Although long-lived pet-like applications, or services, are more complex than cattle when it comes to production-ready deployment, they similarly require persistent storage that can exist both outside and independent of a container runtime.

 

Consider the fact that containers allow us to roll out an application as an immutable image, handling versioning and dependencies implicitly. Why would we not want the same convenient unit of deployment for our longer-running services? These services are still just applications, after all, and have similar dependencies and environment requirements, easily satisfied by running in a container.

 

The major hurdle to overcome when trying to containerize pet applications is figuring out how to put data volumes on backend storage subsystems. Running persistent volumes that containers in virtual machines (VMs) can consume is another step toward production deployment of containerized cloud-native application services.

 

Dockerized VMs with Persistent Volumes

Since Nutanix released both a Docker Machine driver and a persistent volume driver plugin earlier this summer with Acropolis 4.7, I have been keen to look at how we can support more stateful deployments. Rather than look at more readily containerized frontend cattle apps (such as nginx or apache), in this series I want to go over deploying a cloud-native database, like MongoDB, to showcase some of the Docker ecosystem integration work that Nutanix has been doing.

 

MongoDB, a NoSQL database that scales horizontally, can take full advantage of the Nutanix web-scale architecture. The I/O-intensive nature of a database workload can mean that you need to write directly to specialized storage. If that storage is distributed, the service can take advantage of stateful failover between hosts. Stateful failover in turn facilitates faster recovery by shortening the period of potential downtime.

 

For the purposes of this article, pet or stateful containers are simply long-running services that require the data persistence provided by the Nutanix volume driver and the mobility afforded by running the service as a VM on AHV. To provision such VMs, the Nutanix Docker Machine driver preinstalls the Docker Engine and configures the certificates needed for secure remote access.

 

The prospect of spinning up a containerized database has been a bit controversial. For one thing, it’s been considered something that’s hard to do. For another, there seems to be some general reluctance to run a production database in a container, in much the same way that we saw resistance to running databases in VMs lingering until the last few years. Still, vendors and ultimately end users are coming around to the idea.

 

Nutanix Acropolis Container Services

Just so we’re all on the same page, let’s describe the implications of the Docker Machine driver and the persistent volume driver plugin in conjunction with the rest of the Nutanix Enterprise Cloud Platform. Docker Machine spins up VMs running the Docker Engine on local desktop hypervisors and on public and private cloud provider targets, like AWS, DigitalOcean, and so on. Now, with the Docker Machine driver for Nutanix, we can also use the Nutanix platform as an on-premises backend target.

 

This option means that we can provision and remotely manage CentOS-based VMs with a preinstalled Docker Engine. From your laptop or workstation, you can now build out virtual environments to form clusters of Docker hosts, the first step in any orchestration play going forward. In addition, the Nutanix driver takes care of all the security certificates (TLS) needed for remote access.

 

[Screenshot: Docker Machine hosts provisioned on Nutanix]

 

[Diagram: the Nutanix volume driver plugin’s sidekick container pattern]

The Nutanix persistent volume driver plugin uses a sidekick container pattern. This sidekick container calls the Docker Volume API to create a block storage (iSCSI) volume in a Nutanix volume group (VG) within the Nutanix Distributed Storage Fabric (DSF). The resulting iSCSI volume attaches to the sidekick or data-only container.

 

When you create application containers that require persistent storage volumes, they can map the volumes already created on the container host to their own container runtime. Subsequently, when you need to rebuild or destroy the application for any reason, the iSCSI volume in the VG on the Nutanix storage container preserves all data. When you rebuild an application container that needs that data, the volume is available for reuse.
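As a minimal illustration of that reuse with the 2016-era Docker CLI (the volume and image names are arbitrary), a rebuilt container simply mounts the same named volume again:

# docker run -d --volume-driver nutanix -v dbdata01:/data/db mongo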

 

[Diagram: persistent volume reuse across container rebuilds]

 

In the next post, we’ll cover the installation and setup of both the Nutanix driver for Docker Machine and the Nutanix volume plugin.

 

If you’re working on stateful services that run in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.
