Nutanix Connect Blog


Stateful Container Services on Nutanix Part 3: MongoDB Replica Set

 

This blog was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix, and Kate Guillemette, Technical Writer & Editor, Solutions & Performance

 

In this final post in our stateful container series, we use some of the available container tools to set up a MongoDB replica set within containers, with each instance storing its data in a volume on the Nutanix storage layer. We created and provisioned these Dockerized VMs in Part 2; here, we get them running MongoDB.

 

The best way I’ve found to build the same MongoDB instance repeatably is with Docker Compose. Compose uses a YAML-based configuration file to define multicontainer environments, so a single command can stop, start, and rebuild container services.

 

Where Docker Machine and Compose really come into their own is in conjunction with an orchestration and scheduling framework like Docker Swarm. Swarm treats a collection of Docker hosts as a single virtual host. With the scaling that Compose makes possible, you can then deploy services across those hosts according to your desired constraints and affinities. I’m not going to cover Swarm or other automation frameworks here, but such technologies will be part of our ongoing work on the orchestration and scheduling of “stateful” applications and services.

 

Here are the next steps to get our MongoDB instances up and running:

  • Connect to each Docker Machine.
  • Install Docker Compose using the instructions available here.
  • On each machine, create a Compose file with the following contents:

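A minimal sketch of that file, shown for the first machine (use dbdata02 and dbdata03 as the volume name on the other two):

mongodb:
  image: mongo                    # default mongo image from Docker Hub
  volume_driver: nutanix          # Nutanix Docker volume plugin
  volumes:
    - dbdata01:/data/db           # persistent volume; dbdata0n, n = 1…3
  ports:
    - "27017:27017"               # like-for-like mapping (redundant under host networking)
  command: mongod --replSet ntnx  # ntnx is the replica set name
  net: "host"                     # version 1 syntax for host networking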

 

In the above Compose file, we define a mongodb: container service using the default mongo image: from Docker Hub, and we specify the Nutanix volume_driver: for our volumes. We create a single persistent volume on each machine, dbdata0n (where n = 1…3); this volume maps to /data/db within the container runtime.

 

I decided to map the ports: like for like, so here port 27017 (mongod) in the container maps to 27017 on the host. I start the mongod daemon with the replSet option, which takes the name of the replica set as an additional parameter; I’ve named our set ntnx. Finally, I am using host: networking directly. In our example, I’ve made this choice purely for convenience, but database vendors deploying their applications in containers often call out direct host networking as a best practice.

 

  • Start the container service by running the compose up command on the Compose file above like so:

# docker-compose -f ./mongo.yaml up -d

 

  • Verify that your MongoDB container service is running via one of the commands below:

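For example, either of these standard listing commands should show the mongodb container up:

# docker-compose -f ./mongo.yaml ps
# docker ps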

 

In the steps above, we’ve covered creating volumes “on the fly” within the Compose workflow. However, you can also reuse volumes that you’d already created; perhaps your workflows demand that you provision volumes up front. We’ll cover that use case in the following section.

 

Volumes are a first-class citizen within Docker, which means that you can create them as standalone units. If you prefer to precreate your volumes using docker volume create, you need the alternate “version 2” Compose file syntax: first, create the Docker volumes using the Nutanix volume driver, then call them out as “external” volumes in the Compose file itself. Here is an example of this procedure:

 

  • Precreate your named volume and ensure that you specify the Nutanix volume driver.

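For example, assuming the plugin is registered under the driver name nutanix (docker info lists the volume plugins available on the host):

# docker volume create --driver nutanix --name dbdata01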

  • Bring up the MongoDB container service using a Compose file with the version 2 syntax.

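A sketch of the version 2 file, again for the first machine and assuming the dbdata01 volume was precreated as above:

version: "2"

services:
  mongodb:
    image: mongo
    volumes:
      - dbdata01:/data/db
    ports:
      - "27017:27017"
    command: mongod --replSet ntnx
    network_mode: host            # replaces net: "host" from version 1

volumes:
  dbdata01:
    external: true                # precreated with the Nutanix volume driver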

 

When you compare this version 2 block to our original Compose file above, notice that services: and volumes: now sit in their own separate YAML stanzas. If we were to use overlay networks, we would do the same with networks:. We’ve also replaced the net: label with network_mode: in this version.

 

Whichever mechanism you use, persistent volumes created via the Nutanix volume driver appear as iSCSI block devices in a volume group (VG), which you can see in the Prism GUI:

 

[Prism screenshot: the persistent volumes shown as iSCSI disks inside a volume group]

 

Now that we have each individual MongoDB instance running in its own container with its own persistent volume, we can finally initialize our replica set. We need to connect to a Docker Machine, attach to its running container, and manipulate the database via the MongoDB API.

 

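A sketch of connecting to one of the machines, assuming it’s named mongodb-dm01 (substitute the machine names you created in Part 2):

# docker-machine ls
# eval $(docker-machine env mongodb-dm01)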

 

  • Obtain the container ID and attach a bash shell to that running container.
  • From within the container, run a mongo shell session.

 

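Those two steps look something like this (grab the container ID from the docker ps output; the final command runs inside the container):

# docker ps
# docker exec -it <container id> bash
# mongo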

 

  • From within the mongo session, create a replica set configuration JSON document and then initialize the replica set.
  • Obtain the Docker Machine IP addresses from either docker-machine ls or docker-machine ip <machine hostname>. I strongly recommend using the form of set initialization below: if you use the default rs.initiate() without a cfg document parameter, your primary MongoDB instance hostname could end up resolving to the loopback interface or localhost, leaving you unable to reach the other replica set members.

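A sketch of the configuration document and initialization, with placeholder IP addresses (substitute the Docker Machine addresses you just obtained; ntnx matches the replica set name we passed to --replSet):

> cfg = {
    _id: "ntnx",
    members: [
      { _id: 0, host: "10.0.0.101:27017" },
      { _id: 1, host: "10.0.0.102:27017" },
      { _id: 2, host: "10.0.0.103:27017" }
    ]
  }
> rs.initiate(cfg)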

 

 

One of the issues people frequently encounter with pet containers is the need for long-lived DNS names. Unless you want to manage IP addresses locally, you need to assign each replica set member an IP address that resolves via DNS. Without a resolvable address, the other instances might not be able to synchronize data from the primary, forcing you to reconfigure the set membership by hand. That’s not a nice position to be in at this stage, and it might prove difficult to do at all without starting the whole process again.

 

At this point, you should have a replica set running between your three MongoDB instances:

 

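Running rs.status() from the mongo shell confirms it; one member should report itself as PRIMARY and the other two as SECONDARY:

> rs.status()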

 

The process we’ve outlined so far doesn’t quite bring the deployment to the point where it’s truly production-ready; in a real use case, however, we could automate much of this labor. Further work toward streamlining deployment could include building out stateful services across a cluster of Docker Machine hosts, which would let us think about affinities and constraints when deciding where to locate (or, more importantly, not colocate) the various database services.

 

If you're working on stateful services that are running in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

 

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

4 Comments
Adventurer

Interesting! Though I haven’t used MongoDB before, this is worth a try!

Community Manager

Hi @billytiangco -- give it a go and let us know how it works out for you ;)

Adventurer

Looks like I have some experimenting to do!

Community Manager

Have fun, @jharm73, and continue the conversation in our forums!
