Nutanix Connect Blog


Stateful Container Services on Nutanix Part 3: MongoDB Replica Set

by Community Manager, 12-06-2016 (edited 12-09-2016)

 

This blog was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix, and Kate Guillemette, Technical Writer & Editor, Solutions & Performance.

 

In this final post in our stateful container series, we use some of the container tools available to set up a MongoDB replica set within containers and have each instance store its data to a volume on the Nutanix storage layer. We created and provisioned these Dockerized VMs in Part 2; here, we are getting them to run MongoDB.

 

The best way I’ve found to create a repeatable build process, one that sets up the same MongoDB instance every time, is Docker Compose. Compose uses a YAML-based configuration file to set up multiple container environments, so a single command can stop, start, and rebuild container services.

 

Where Docker Machine and Compose really come into their own is in conjunction with an orchestration and scheduling framework like Docker Swarm. Swarm treats a collection of Docker hosts as a single virtual host. With the scaling that Compose makes possible, you can then deploy services across the hosts according to desired constraints and affinities. I’m not going to cover Swarm or other automation frameworks here, but such technologies will be part of our ongoing work on the orchestration and scheduling of “stateful” applications or services.

 

Here are the next steps to get our MongoDB instances up and running:

  • Connect to each Docker Machine.
  • Install Docker Compose using the instructions available here.
  • On each machine, create a Compose file with the following contents:

[Screenshot: mongo.yaml Compose file]

 

In the above Compose file, we define a mongodb: container service using the default mongo image: from Docker Hub, and we use the Nutanix volume_driver: for our volumes. We create a single persistent volume, dbdata0n, where n = 1…3 (one per Docker host). This volume maps to /data/db within the container runtime.

 

I decided to map the ports: like for like, so here port 27017 (mongod) in the container maps to 27017 on the host. I start the mongod daemon with the --replSet option, which takes the name of the replica set as an additional parameter; here, that name is ntnx. Finally, I am using host: networking directly. In our example, I’ve made this choice purely for convenience, but database vendors deploying their applications in containers often call out direct host networking as a best practice.
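Pulling those details together, the mongo.yaml file looks roughly like this. This is a sketch reconstructed from the description above, not the original screenshot: the driver name nutanix is an assumption about how the Nutanix volume plugin registers, and dbdata01 is the n = 1 volume:

mongodb:
  image: mongo                     # default MongoDB image from Docker Hub
  volume_driver: nutanix           # Nutanix volume plugin (driver name assumed)
  volumes:
    - dbdata01:/data/db            # persistent volume mapped into the container
  ports:
    - "27017:27017"                # like-for-like port mapping
  net: host                        # direct host networking (version 1 label)
  command: mongod --replSet ntnx   # join replica set "ntnx"

Note that with net: host the container shares the host’s network stack, so the ports: mapping is effectively redundant; it is kept here to match the description above.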

 

  • Start the container service by running docker-compose up against the Compose file above, like so:

# docker-compose -f ./mongo.yaml up -d

 

  • Verify that your MongoDB container service is running via one of the commands below:

[Screenshot: verifying the MongoDB container service]
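The original screenshot isn’t reproduced here, but either of these standard commands confirms the service is up (a sketch, not necessarily the exact commands from the image):

# docker-compose -f ./mongo.yaml ps
# docker ps

The first lists the state of the services defined in the Compose file; the second lists all running containers on the host.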

 

In the steps above, we’ve covered creating volumes “on the fly” within the Compose workflow. However, you can also reuse volumes that you created earlier; perhaps your workflows demand that volumes be provisioned up front. We cover that use case in the following section.

 

Volumes are first-class citizens within Docker, which means that you can create them as standalone units. So, if you prefer to precreate your volumes using docker volume create, you need the alternate “version 2” Compose file syntax. First, create the Docker volumes using the Nutanix volume driver, then call them out as “external” volumes in the Compose file itself. Here is an example of this procedure:

 

  • Precreate your named volume and ensure that you specify the Nutanix volume driver.

[Screenshot: precreating a named volume with the Nutanix volume driver]
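As a sketch of that step (the driver name nutanix is the same assumption as in the Compose file, and --name was the era-appropriate flag for naming a volume):

# docker volume create --driver nutanix --name dbdata01

You can confirm the volume exists with docker volume ls before referencing it from Compose.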

  • Bring up the MongoDB container service using a Compose file with the version 2 syntax.

[Screenshot: version 2 Compose file]
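Again as a reconstruction from the description rather than a copy of the screenshot, the version 2 Compose file might look like this:

version: "2"
services:
  mongodb:
    image: mongo
    network_mode: host               # replaces the version 1 networking label
    ports:
      - "27017:27017"
    volumes:
      - dbdata01:/data/db
    command: mongod --replSet ntnx
volumes:
  dbdata01:
    external: true                   # precreated with docker volume create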

 

When you compare this version 2 block to our original Compose file above, notice that services: and volumes: now live in their own separate YAML stanzas. If we were to use overlay networks, we would do the same with a networks: stanza. We’ve also changed the networking label to network_mode: in this version.

 

Both mechanisms for running container services with persistent volumes via the Nutanix volume driver create those volumes as iSCSI block devices in a volume group (VG), visible within the Prism GUI. See below:

 

[Screenshot: volume group shown in the Prism GUI]
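From the Docker side, you can cross-check the same volumes with the standard volume commands (the volume name here follows our earlier example):

# docker volume ls
# docker volume inspect dbdata01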

 

Now that we have each individual MongoDB instance running in its own container with its own persistent volume, we can finally initialize our replica set. We need to connect to a Docker Machine, attach to its running container, and manipulate the database via the MongoDB API.

 

[Screenshot: connecting to a Docker Machine]

 

  • Obtain the container ID and attach a bash shell to that running container.
  • From within the container, run a mongo shell session.

 

[Screenshot: attaching to the container and starting a mongo shell]
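The screenshots aren’t reproduced here, but the sequence is roughly as follows; the machine name mongodb01 is hypothetical, and the container ID placeholder stays a placeholder:

# eval $(docker-machine env mongodb01)
# docker ps
# docker exec -it <container ID> bash
root@mongodb01:/# mongo

docker ps supplies the container ID, docker exec -it … bash attaches an interactive bash shell to the running container, and mongo then opens a shell session against the local mongod.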

 

  • From within the mongo session, create a replica set configuration JSON document and then initialize the replica set.
  • Obtain the Docker Machine IP addresses from either docker-machine ls or docker-machine ip <machine hostname>. I strongly recommend using the form of set initialization below. If you use the default rs.initiate() without a cfg document parameter, your primary MongoDB instance hostname could end up resolving to the loopback interface or localhost, leaving you unable to reach the other replica set members.

[Screenshot: replica set initialization from the mongo shell]
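As a sketch of that initialization, with placeholder IP addresses standing in for your actual Docker Machine addresses (the set name ntnx matches the --replSet option we passed to mongod):

cfg = {
  _id: "ntnx",
  members: [
    { _id: 0, host: "10.10.10.101:27017" },
    { _id: 1, host: "10.10.10.102:27017" },
    { _id: 2, host: "10.10.10.103:27017" }
  ]
}
rs.initiate(cfg)

Passing cfg explicitly registers each member under its routable Docker Machine address, avoiding the localhost resolution problem described above.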

 

 

One of the issues people frequently encounter with pet containers is the requirement for long-lived DNS names. Unless you want to manage IP addresses locally, you need to assign each replica set member an IP address that resolves via DNS. Without a resolvable address, the other instances might not be able to synchronize data from the primary, leaving you to reconfigure the set membership by hand. That is an unpleasant position to be in at this stage, and it might prove difficult to do at all without starting the whole process again.

 

At this point, you should have a replica set running between your three MongoDB instances:

 

[Screenshot: replica set status output]
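You can confirm this at any time from a mongo shell on any member with the standard status check (a general MongoDB command, not necessarily the exact one in the screenshot); a healthy three-member set reports one PRIMARY and two SECONDARY members:

rs.status()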

 

The process we’ve outlined so far has not quite brought the deployment to the point where it’s truly production-ready. However, in a real use case, we could automate much of this labor. Further work toward streamlining deployment could include building out stateful services across a cluster of Docker Machine hosts; this project would allow us to think about affinities and constraints when considering how to locate (or, more importantly, not colocate) the various database services.

 

If you're working on stateful services that are running in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

 

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

Data Distribution with Acropolis File Services (AFS)

by Community Manager, 12-05-2016 (edited 12-05-2016)

This blog was authored by Dwayne Lessner, Sr. Technical Marketing Engineer at Nutanix.

 

There are two types of shares that can be created with AFS (Acropolis File Services): the Home share and the General share. The General share is backed by a volume group with 6 vdisks when it’s created. The Home share is backed by 5 volume groups of the same type per File Server VM in the cluster, so in a small (three-VM) AFS deployment there would be 15 volume groups backing the Home share. The Home share is created automatically when you deploy AFS.

 

[Figure 1: Volume Group Used by AFS]

 

Home shares distribute data by dividing the top-level directories across all of the file server VMs that make up the file server. Acropolis File Services maintains the mapping of each directory to its responsible file server VM using an internal scale-out database called InsightDB.

 

 

[Figure 2: Distribution of Home Directory Shares]

 

If a user creates a share called “\\FileServer1\Users,” which contains top-level directories \Bob, \Becky, and \Kevin, \Bob may be on file server VM1, \Becky on file server VM2, \Kevin on file server VM3, and so on. The file server VMs use a string hashing algorithm based on the directory names to distribute the top-level directories.

 

This distribution can accommodate a very large number of users in a single share. The scaling limits of more traditional designs can force administrators to create multiple shares in which, for example, users whose last names begin with A through M run off one controller and users whose names begin with N through Z run off another. This design limitation leads to management headaches and unnecessary Active Directory complexity. For these reasons, AFS expects to have one home directory share for the entire cluster. If there is a reason to have more than one home directory share, you can create it using nCLI.

 

The top-level directories act as reparse points, essentially shortcuts. Consequently, all user folders must be created at the root for optimal load balancing. Because the root appears as a shortcut, we don’t allow user files in the share root; we recommend setting permissions at the share root before deploying user folders.

 

General-purpose shares (non-user directories) do not distribute top-level directories. The files and subfolders for general-purpose shares are always owned by a single file server. The diagram below illustrates two general-purpose shares (for example, accounting and IT) on the same file server.

 

 

[Figure 3: Two General-Purpose Shares on the Same File Server]

 

Unlike home directory shares, with general shares you can store files in the share root.

 

Continue the conversation in our community forums and share your experiences with the community. You can also ask questions on Twitter using the hashtag #AskNutanix.

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.  

 

 

 

 

 

Nutanix Community Top Contributors | November 2016

by Community Manager, 12-04-2016


A community is only as vibrant as its members: these are folks who jump into conversations, offer solutions, and genuinely want to collaborate with others. Each month we highlight community members who have spent their free time sharing and helping others.

 

I want to take a moment to recognize our top community contributors for the month of November. They have all authored posts, provided guidance, and demonstrated their expertise.

 

Congratulations to November's top contributors!

 

Top Topics Posted

 

@nherman
@vivekds
@Flavio77
@Pierreragainass
@billytiangco

 

Top Replies Authored

 

@bezeddin
@tc
@missionleben
@Shawner
@nherman

 

Thanks for all your contributions, folks, and let’s make December 2016 super awesome!
