Nutanix Connect Blog

Displaying articles for: 11-20-2016 - 11-26-2016

Part I: How to set up a three-node NUC Nutanix CE cluster

by Community Manager on 11-23-2016 01:16 PM - last edited on 12-08-2016 11:53 AM by Community Manager (5,913 Views)

 

This blog was authored by @MarcNutanix, Sr. Systems Engineer at Nutanix.

 

The following is my experience in successfully setting up a three-node Nutanix Community Edition (CE) cluster for my home lab. During this process I did a lot of research and reviewed the CE forums [go].

 

Since the information is scattered across many locations, I thought it was a good idea to bring it together in a single post that consolidates the details of getting your first Nutanix CE cluster up and running.

 

To build a single NUC node, you need the following:

 

(1) Intel NUC Kit NUC6i7KYK Mini PC [go]

 

(1) Transcend 32GB JetFlash 710 USB 3.1/3.0 Flash Drive [go]

 

(2) Sandisk X400 Solid State Drive - Internal (SD8SN8U-512G-1122) [go]

 

(1) Crucial 32GB Kit (16GBx2) DDR4 2133 MT/s (PC4-17000) SODIMM 260-Pin Memory - CT2K16G4SFD8213 [go]

 

 A single NUC using the ingredients above will cost you around $1,000.

 

NOTE: Nutanix CE can run in 1-, 3-, or 4-node configurations. I specifically selected a three-node configuration for my home lab since this is also the minimum configuration for Nutanix customers in the real world.

 

The hardware install is very easy: flip the NUC over, loosen the four screws (one at each corner), remove the bottom cover, and insert the memory and the SSD drives.

 

Picture1.png

 

Now that you have your hardware set up, the next step is to register and download the Nutanix CE software.

 

Register to join the Community

  1. Start here http://www.nutanix.com/products/community-edition/
  2. Scroll down the page and click on Get Access

Picture1.png

 

  3. Fill in the info and click on Submit.

 

Picture1.png

 

Now that you have access to the Community

  1. Download the Nutanix CE Image file [go]

 

  2. Next, you have to get the image onto a USB drive, which your node will boot from. I used Rufus on Windows, https://rufus.akeo.ie/ (if you prefer the command line, see the dd sketch after the steps below).

 

  3. Verify you have the following files downloaded.

Picture1.png

 

Create a bootable Nutanix CE image on your USB Flash Drive

  1. Insert your USB Flash Drive.

 

  2. Run Rufus and click on the image icon.

Picture1.png

 

  3. Make sure to change the file type selector shown below to "All files" instead of ISO image, find the Nutanix CE image file, and click Open.

 Picture1.png

 

 

  4. Just click on Start and the Nutanix CE image will be "burned" onto the USB flash drive.

Picture1.png

 

  5. IMPORTANT: Each NUC needs its own USB flash drive to boot up and run Nutanix CE.
  6. Now plug the USB flash drives into each one of your NUCs, power on the NUCs, and install Nutanix CE.
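
If you are on Linux or macOS and would rather skip Rufus, the same image can be written with dd. This is a minimal sketch, assuming the downloaded image has been unpacked to ce.img and the flash drive enumerates as /dev/sdb (both assumptions; macOS device naming and dd options differ slightly):

# Identify your flash drive first -- dd overwrites whatever device
# you point it at, and /dev/sdb here is only an assumption.
lsblk
sudo dd if=ce.img of=/dev/sdb bs=4M status=progress
sync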

 

Planning my (3) Node NUC Nutanix CE Cluster

 

Picture1.png

 

 

I wanted to take some time and explain how my home network setup is configured and how my Nutanix CE home lab will interface with it. The Nutanix CE configuration will be sitting on my bookshelf in my home office, ready for use!

 

So my cable modem and wireless access point (WAP, with four ports) sit on the main floor. All my laptops, tablets, phones, and other devices connect to the main-floor WAP.

 

Since I wanted to have my Nutanix CE home lab sit in the bookshelf of my home office which is upstairs, I did the following:

-       Purchased a gigabit ethernet switch for my lab

-       Purchased a WAP to extend my home wireless

-       Simply plugged each NUC’s ethernet port into the gigabit ethernet switch

-       Since the WAP is extending my home wireless, anything I plug into the bookshelf switch is connected to the internet

 

NOTE: You need internet connectivity when you log into the CVM or cluster IP address via Prism to manage and run your Nutanix CE cluster, because the login is tied to your Community email/password on my.nutanix.com.

 

My configuration

 

Host (Physical NUC) IP Address: 192.168.1.150-152

Host Subnet Mask: 255.255.255.0

Host Gateway: 192.168.1.1

 

CVM (Each Host runs a Nutanix Controller Virtual Machine) IP Address: 192.168.1.160-162

CVM Subnet Mask: 255.255.255.0

CVM Gateway: 192.168.1.1

 

Things to do after Nutanix CE Installation and before you create your Cluster

After the installation, the first thing I did was change the CVM memory, which by default takes up 16GB. The NUC has a total of 32GB of memory, and I wanted to reduce the CVM's default allocation so I could run plenty of virtual machines in my Nutanix CE home lab.

 

I decided to reduce the CVM memory to 8GB.  After a couple weeks of using my Nutanix CE Cluster with plenty of virtual machines, I have not experienced any problems or concerns at all running each CVM with 8GB of memory.

 

To reduce the CVM memory to 8GB, you can either (1) log directly into the Nutanix CE node or (2) SSH into the Nutanix CE node. Either way, make sure to log in as "root" and not "nutanix".

 

NOTE: Remember, anytime you log into the NUC or node you use "root", and anytime you log into the CVM you use "nutanix". The default password is nutanix/4u for either "root" or "nutanix".
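
For example, using the addresses from my configuration above:

ssh root@192.168.1.150      # log into the first NUC node, as root
ssh nutanix@192.168.1.160   # log into the first CVM, as nutanix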

 

To set the CVM memory to 8GB

  1. virsh list --all
  2. virsh shutdown <CVM-Name>
  3. virsh setmem <CVM-Name> 8G --config
  4. virsh setmaxmem <CVM-Name> 8G --config
  5. virsh start <CVM-Name>
  6. virsh list --all, to make sure the CVM is back up and running
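
Putting those steps together, and assuming virsh list shows a CVM named NTNX-CE-CVM (the name on your node will differ), the session looks like this:

virsh list --all                          # note the CVM's exact name
virsh shutdown NTNX-CE-CVM                # shut the CVM down cleanly
virsh setmem NTNX-CE-CVM 8G --config      # set the current memory to 8GB
virsh setmaxmem NTNX-CE-CVM 8G --config   # set the maximum memory to 8GB
virsh start NTNX-CE-CVM                   # boot the CVM back up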

 

Confirm CVM memory is set to 8GB

  1. virsh dominfo <CVM-Name>

Picture1.png

 

 

Creating the Cluster

Now that you have installed each NUC node and configured each CVM with 8GB of memory, it is time for the exciting step of setting up a cluster with your NUC nodes.

 

  1. SSH into one of the CVMs. I opened up Terminal on my Mac and typed "ssh nutanix@192.168.1.160" with the password "nutanix/4u".
  2. Type "cluster -s cvmip,cvmip,cvmip create" (no spaces around the commas).
  3. For example, to create my cluster I used "cluster -s 192.168.1.160,192.168.1.161,192.168.1.162 create".

Picture1.png

  4. After completion, hopefully everything is up and running.
  5. You can type "cluster status" and each CVM will display its status.
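
For a quick summary from any CVM's command line, these standard CVM commands are handy (a sketch; output formats vary by release):

cluster status      # per-CVM service status
ncli cluster info   # cluster name, version, and cluster IP details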

 

Picture1.png

 

Logging into your super cool new Nutanix CE Cluster for the first time

  1. Open your web browser of choice and connect to one of your CVM IP Addresses
  2. Use "admin" for both the username and the password.
  3. You will then be asked to change the password

 

NOTE: Make sure the laptop, tablet, etc. that you are using to access Prism also has internet access, since Prism will ask for the email/password that you used to log into the Community site (my.nutanix.com).

 

Picture1.png 

Congrats!  You are now logged into your Nutanix CE Cluster

 

Confirm that CVMs are set to 8GB of memory

- Click on Home > VM

 

Picture1.png

 

- Click on Table, check the Include Controller VMs box, and verify your CVMs are running with 8GB of memory

 

Picture1.png

 

 

Now let the fun begin!

 

Here is a pic of my Nutanix CE Home Lab Bookshelf Cluster

 

Picture1.png

 

 

Disclaimer: This blog may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

 

Nutanix .NEXT 2016 Europe - Opening Keynote

by Community Manager on 11-23-2016 12:32 PM (3,456 Views)

Thank you for joining us in Vienna for an EPIC .NEXT conference. We had a great sense of community, building and discussing topics such as the Enterprise Cloud. In case you missed the livestream, I have posted the day one keynote session for you to re-live the excitement from Vienna. Enjoy!

 

Watch Nutanix CEO Dheeraj Pandey, along with Nutanix Chief Product & Development Officer Sunil Potti and special guests from Citrix, Puppet, and Docker, to learn about the new enterprise cloud platform that is radically simplifying virtualization, containers, compute, and storage for all workloads.


Recap of .NEXT Announcements

Nutanix Announces the Industry’s Only Hyperconverged Solution for Cisco UCS Blades

Today, we are at the .NEXT On-Tour event in Boston and are bringing more exciting news about continuous innovation. A number of vendors, including Nutanix™, offer hyperconverged solutions running on Cisco® UCS® C-Series rackmount servers.

 

Nutanix Enterprise Cloud Platform Just Got Better

The ambition of the Nutanix Enterprise Cloud Platform is to bring cloud-like operational simplicity to enterprise datacenters. If the enterprise datacenter behaves and operates like a public cloud, then application demands will drive private cloud vs. public cloud (aka "buy vs. rent") decisions, instead of vendor-driven requirements that are imposed on enterprise IT users.

 

Nutanix Unveils Powerful One-Click Networks to Broaden Enterprise Cloud Platform

Built-in Network Orchestration and Microsegmentation will Deliver Seamless Visibility and Control Over the Entire Infrastructure Stack.

 

Continue the conversation on our forums or share any blogs you have on these topics. 

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

 

Ten Things you need to know about Prism Self-Service

by Community Manager on 11-22-2016 07:41 AM (5,383 Views)

 

This post was authored by Shubhika Taneja, Product Marketing Manager at Nutanix and Constantine Kousoulis, Sr. Member of Technical Staff at Nutanix.

 

Continuing with the "Ten things you need to know about" blog series, we will be discussing Prism Self-Service in this blog. Prism Self-Service is a new capability from Nutanix that is scheduled to be available in our 5.0 software release.

 

  1. Prism Self-Service will be a core component of the Nutanix Enterprise Cloud Platform. It empowers end users, such as developers, to get an AWS-like experience through frictionless access to infrastructure resources.
  2. Access to the Prism Self-Service Portal (SSP), which is the UI for Prism Self-Service, will not require installation of a separate management tool, as it is well integrated into Prism. It can be launched in a single click from Prism, and administrators can then set up self-service access for their Nutanix environment in a few clicks from SSP.
  3. End users will have a very simple way to access resources. They will not need access to Prism to consume infrastructure resources: the administrator sends the end user a URL for a web portal, and the end user can log in to it with their corporate credentials to access resources in a self-service manner.
  4. SSP will optimize the end user experience by applying consumer design principles to a powerful cloud platform. Troubleshooting a VM is easy with single-click access to performance metrics and a display console to the VM.
  5. Projects are a grouping construct that brings users, and the policies attached to those users, together in a framework. Projects enable the assignment of resources and privileges to a group of users for consumption. Project quotas provide limits on infrastructure usage by end users, and access to specific networks is under the administrator’s control.
  6. SSP will enhance security with an access control layer that provides fine-grained permissions to entities and operations. Administrators can create roles to control the privileges of a user in the system.
  7. The Catalog will provide access to shared VM templates and images. Administrators control the accessible items, and end users create their VMs by deploying from approved items in the Catalog.
  8. Usage analytics will make the task of administering the cluster resources simple. SSP visualizes resource allocation with point-in-time and time series charts, so it’s easy to spot rogue VMs.
  9. Admins and end users alike will be able to extend their usage of SSP by authoring programmatic clients with help from a rich SDK that includes REST documentation and sample code (see the sketch after this list).
  10. SSP will be initially limited to Windows Active Directory for authentication and AHV for compute. Since it is built to be hypervisor agnostic, we anticipate that it will be supported on other hypervisors as well in a future release.
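
To give a flavor of item 9, here is a minimal sketch of listing a cluster's VMs through the Prism REST API with curl. The endpoint path and port below reflect the existing Prism v2.0 API and are assumptions as far as SSP is concerned; consult the REST documentation in the SDK for the exact calls.

# Sketch: list the cluster's VMs via the Prism v2.0 REST API.
# Path, port, and credentials are assumptions for illustration.
curl -k -u admin:yourpassword \
  https://<cluster-ip>:9440/PrismGateway/services/rest/v2.0/vms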

 

Picture1.png

Stateful Container Services on Nutanix Part 1

by Community Manager on 11-21-2016 10:42 AM - last edited on 11-21-2016 10:58 AM by Community Manager (2,927 Views)

 

This post was authored by Ray Hassan, Sr. Solutions & Performance Engineer at Nutanix and Kate Guillemette, Technical Writer & Editor, Solutions & Performance at Nutanix.

 

Picture2.png
Source: https://twitter.com/jboner/status/736095483559481345 (Used with permission.)

 

This short series of posts will be our first look at using the Nutanix Docker Machine driver and the Nutanix volume driver plugin to configure “stateful” or “pet” type containers (as contrasted with “stateless” or “cattle” type containers). In the original pets vs. cattle analogy, pets were the two-node failover clusters that ran large databases on big iron systems, while cattle were the more cloud-aware, horizontal scale-out applications designed to handle failure.

 

As the popularity of containers has risen recently—as has the use of Docker to deliver them—these categories have been reassigned. Now, many in the container community believe that only ephemeral or stateless apps are suited to running in containers, while, somewhat ironically, the applications previously defined as cattle have become today’s pet workloads.

 

A lot of the existing work with Docker surrounds short-lived containers hosting stateless applications that only require ephemeral storage. Really, though, today there are no stateless applications. Containers have introduced a step change in the way we deploy applications, and even applications that are cattle-like by today’s definition, such as load balancers and web servers, need to be able to store access and error logs.

 

Otherwise, where’s the audit trail when things go wrong? Although long-lived pet-like applications, or services, are more complex than cattle when it comes to production-ready deployment, they similarly require persistent storage that can exist both outside and independent of a container runtime.

 

Consider the fact that containers allow us to roll out an application as an immutable image, handling versioning and dependencies implicitly. Why would we not want the same convenient unit of deployment for our longer-running services? These services are still just applications, after all, and have similar dependencies and environment requirements, easily satisfied by running in a container.

 

The major hurdle to overcome when trying to containerize pet applications is figuring out how to put data volumes on backend storage subsystems. Running persistent volumes that containers in virtual machines (VMs) can consume is another step toward production deployment of containerized cloud-native application services.

 

Dockerized VMs with Persistent Volumes

 Since Nutanix released both a Docker Machine driver and a persistent volume driver plugin earlier this summer with Acropolis 4.7, I have been keen to look at how we can support more stateful deployments. Rather than look at more readily containerized frontend cattle apps (such as nginx or apache), in this series I want to go over deploying a cloud-native database, like MongoDB, to showcase some of the Docker ecosystem integration work that Nutanix has been doing.

 

MongoDB, a NoSQL database that scales horizontally, can take full advantage of the Nutanix web-scale architecture. The I/O intensive nature of a database workload can mean that you need to write directly to specialized storage. If that storage is distributed, the service can take advantage of stateful failover between hosts. Stateful failover in turn facilitates faster recovery by shortening the period of potential downtime.

 

For the purposes of this article, pet or stateful containers are simply long-running services that require the data persistence provided by the Nutanix volume driver and the mobility afforded by running the service as a VM on AHV. To provision such VMs, the Nutanix Docker Machine driver preinstalls the Docker Engine and configures secure certificate access for SSH.

 

The prospect of spinning up a containerized database has been a bit controversial, but vendors and ultimately end users are coming around to the idea. For one thing, it’s been considered something that’s hard to do. For another, there seems to be some general reluctance to run a production database in a container, in much the same way that we saw resistance to running databases in VMs lingering until the last few years.

 

Nutanix Acropolis Container Services

 Just so we’re all on the same page, let’s describe the implications of the Docker Machine driver and the persistent volume driver plugin in conjunction with the rest of the Nutanix Enterprise Cloud Platform. Docker Machine spins up VMs running the Docker Engine on desktop VMs and on public and private cloud provider targets, like AWS, Digital Ocean, and so on. Now, with the Docker Machine driver for Nutanix, we can also use the Nutanix platform as an on-premise backend target.

 

This option means that we can provision and remotely manage CentOS-based VMs with a preinstalled Docker Engine. From your laptop or workstation, you can now build out virtual environments to form clusters of Docker hosts—the first step in any orchestration play going forward. In addition, the Nutanix driver takes care of all security certificates (TLS) for remote SSH access.
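
To make this concrete, creating a Docker host on a Nutanix cluster might look roughly like the sketch below. The driver and flag names are assumptions for illustration only; check the Nutanix Docker Machine driver documentation for the exact options.

# Sketch only: the nutanix driver's flag names below are assumptions --
# consult the driver's README for the real options.
docker-machine create -d nutanix \
  --nutanix-endpoint <cluster-ip>:9440 \
  --nutanix-username admin \
  --nutanix-password <password> \
  docker-host-01

# Point the local Docker client at the new host
eval $(docker-machine env docker-host-01)
docker info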

 

Screen Shot 2016-11-21 at 10.29.02 AM.png

 

Picture1.png

The Nutanix persistent volume driver plugin uses a sidekick container pattern. This sidekick container calls the Docker Volume API to create a block storage (iSCSI) volume in a Nutanix volume group (VG) within the Nutanix Distributed Storage Fabric (DSF). The resulting iSCSI volume attaches to the sidekick or data-only container.

 

When you create application containers that require persistent storage volumes, they can map the volumes already created on the container host to their own container runtime. Subsequently, when you need to rebuild or destroy the application for any reason, the iSCSI volume in the VG on the Nutanix storage container preserves all data. When you rebuild an application container that needs that data, the volume is available for reuse.
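
For instance, once the volume plugin is installed on a Docker host, creating a persistent volume and attaching it to a MongoDB container could look like this sketch (the driver name and options are assumptions; the exact syntax is in the Nutanix volume plugin documentation):

# Sketch: create a named volume backed by the Nutanix volume driver.
# The driver name is an assumption for illustration.
docker volume create --driver nutanix --name mongo-data

# Mount the volume at MongoDB's data directory; if this container is
# destroyed and rebuilt, the data in mongo-data survives in the VG.
docker run -d --name mongodb -v mongo-data:/data/db mongo:3.2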

 

Picture1.png

 

In the next post, we’ll cover the installation and setup of both the Nutanix driver for Docker Machine and the Nutanix volume plugin.

 

If you’re working on stateful services that run in a hybrid cloud environment, we would love to talk more and share experiences. Get in touch with us via our social media channels or the Nutanix Community Forums.

 

Additional Information

The Intersection of Docker, DevOps, and Nutanix

Containers Enter the Acropolis Data Fabric

Nutanix Acropolis 4.7: Container Support

 

Disclaimer: This blog contains links to external websites that are not part of Nutanix.com. Nutanix does not control these sites, and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.
