Nutanix Cloud Clusters (NC2)
Unify all your Private and Public Clouds
- 44 Topics
- 54 Replies
Nutanix Clusters Overview

Nutanix Clusters provides a single platform that can span private and public clouds while operating as a single cloud through Prism Central, enabling a true hybrid cloud architecture. Using the same platform on both clouds, Nutanix Clusters on AWS (NCA) reduces the operational complexity of extending, bursting, or migrating your applications and data between clouds. Because Nutanix Clusters runs Nutanix AOS and AHV with the same CLI, UI, and APIs, existing IT processes and third-party integrations that work on-premises continue to work regardless of where they run. Nutanix Clusters resources are deployed in your own cloud provider account, so you can use your existing cloud provider relationship, credits, commitments, and discounts.

Figure. Overview of the Nutanix Enterprise Cloud Software

Nutanix Clusters places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal instance in Amazon Elastic Compute Cloud (EC2).
Licensing is an important and integral part of Nutanix Clusters, but not all Nutanix software products require a license. Nutanix provides the following products and features without requiring you to do anything license-wise:
- Nutanix AHV
- Karbon (enabled through Prism Central)
- Prism Central (Prism Pro requires a license for its advanced features, but you can manage registered Prism Element clusters with the base Prism Central software)
- Framework and utility software such as Life Cycle Manager (LCM), X-Ray, Move, and Foundation

Viewing License Status: The most current information about your licenses is available from the Prism Element or Prism Central web console, and from the Products link on the Nutanix Support Portal. You can view information about license levels, expiration dates, and any free license inventory (that is, unassigned available licenses).

Displaying License Features and Details
Requirements and Considerations for Licensing
Licensing Guide
Why a grizzly bear? Why not? A while ago I created a video showing how you can easily save a lot of money on EC2 costs by using hibernate. Nutanix Clusters is the only on-prem vendor offering this for customers that want to lift and shift to the public cloud or use the public cloud for DR. I want to share the code snippet and how to create an API user in the Clusters portal. You can first get familiar with hibernate here:

To create an API user to make the call:
- Go to the Clusters portal
- Under Organization, go to Users
- Go to the user option
- Create an API user
- Add an API user

You can limit the scope of the user to a single cluster. Once you have an API user, you can use the script below to make an API call, or add the script to a VM.

#!/usr/local/bin/python
import hashlib
import hmac
import time
import requests
import base64

# Client credentials
# client_id from the user from the Clusters portal
client_id = "*********.img.cloud-internal.nutanix.com"
# client_secret password from the
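The posted script is cut off before the signing logic. As a rough illustration of how HMAC-based request signing is commonly wired up with those imports, here is a minimal sketch; the header names, signing scheme, and credential values are my assumptions for illustration, not the documented Clusters portal API:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical credentials -- substitute the client_id and client_secret
# generated for your API user in the Clusters portal.
CLIENT_ID = "example.img.cloud-internal.nutanix.com"
CLIENT_SECRET = "replace-with-your-secret"


def sign_request(client_id: str, client_secret: str) -> dict:
    """Build HMAC-SHA256 auth headers (all header names are illustrative)."""
    timestamp = str(int(time.time()))
    # Sign the client id plus a timestamp so the signature cannot be replayed.
    message = f"{client_id}{timestamp}".encode()
    digest = hmac.new(client_secret.encode(), message, hashlib.sha256).digest()
    return {
        "X-Client-Id": client_id,                          # assumed header name
        "X-Timestamp": timestamp,                          # assumed header name
        "X-Signature": base64.b64encode(digest).decode(),  # assumed header name
    }
```

The resulting dict would then be passed as the headers of a `requests` call; check the portal's API documentation for the real endpoint, header names, and signing rules.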
BACKUP AND RECOVERY: For Nutanix Clusters on AWS, Nutanix provides the following backup and recovery options: disaster recovery for instance, AZ, and region failure, and integration with third-party backup solutions. Deploying a single cluster in AWS is great for more ephemeral workloads where you want to take advantage of performance improvements and use the same automation pipelines you use on-prem.

Disaster Recovery for Instance, AZ, and Region Failure: Nutanix has native built-in replication capabilities to recover from complete cluster failure. Nutanix supports both near-synchronous (NearSync) and asynchronous replication on an AHV cluster. You can set your Recovery Point Objective (RPO) to be as little as one minute with NearSync, one hour with asynchronous, and instant with synchronous replication. You
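The stated RPO floors for each replication mode can be captured in a small lookup, which is handy when scripting a sanity check of whether a chosen mode meets a required RPO. The numbers come from the text above; the helper function itself is just an illustrative sketch:

```python
# Best-case RPO for each replication mode, as stated above.
RPO_FLOOR_SECONDS = {
    "synchronous": 0,       # instant
    "nearsync": 60,         # as little as one minute
    "asynchronous": 3600,   # one hour
}


def meets_rpo(mode: str, required_rpo_seconds: int) -> bool:
    """Return True if the mode's best-case RPO is within the requirement."""
    return RPO_FLOOR_SECONDS[mode] <= required_rpo_seconds
```

For example, a 5-minute RPO requirement is satisfiable with NearSync (`meets_rpo("nearsync", 300)` is `True`) but not with asynchronous replication.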
I made a short video on auto scaling a Nutanix cluster in AWS using the new Nutanix Playbooks. Playbooks offer a visual way to start automating actions in your hybrid cloud. To keep the video short, I didn't spend a lot of time talking about the options you can set. Prism Central has a lot of alerts that you can use to trigger the action you want to take. If you don't see the right "Alert", you can customize one of the existing ones to meet your needs; it will become a user-defined alert. In my example I used a critical memory capacity alert at over 80% capacity to trigger adding nodes. If your clusters are typically larger than 10 nodes, you might try a higher percentage, since each node represents a smaller share of total cluster memory. Likewise, you could be more aggressive and pick a lower memory percentage. One thing you'll also want to set is "trigger alert if condition persists for". I didn't have it set so I could easily record the demo, but I think it would be
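The node-count reasoning above boils down to simple arithmetic: one node's share of cluster memory is 100/N percent, so a threshold that leaves roughly one node's worth of headroom rises as the cluster grows. This heuristic is my own illustration, not a Nutanix-recommended formula:

```python
def suggested_memory_threshold(node_count: int) -> float:
    """Illustrative heuristic: alert when memory usage leaves less than
    about one node's worth of headroom in the cluster."""
    per_node_share = 100.0 / node_count   # one node's % of total cluster memory
    return round(100.0 - per_node_share, 1)
```

For a 10-node cluster this suggests alerting around 90%, while a 4-node cluster would alert at 75%; the smaller the cluster, the more aggressive (lower) the threshold should be.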
I make no bones about it: bare-metal nodes in AWS are pricey. The question becomes, if we want to save money and have DR with AWS, how do we keep the number of nodes small until we fail over? In AWS all the Nutanix rules still apply. We need a minimum of a three-node cluster to get started. If we have enough storage capacity, we can keep accepting replications from an on-prem cluster and not worry about CPU and memory constraints. You could also use your on-prem AHV-supported backup software to back up directly to S3. Then, when you have an outage, you could quickly build a Nutanix Cluster in AWS and start restoring the data to that cluster.

DR on the skinny

I was asked to automate the process of adding nodes to a small Nutanix Cluster in AWS and then start a failover using Leap (DR runbooks) and X-Play (which automates actions based on events and alerts) in Prism Central (PC). The video below is what I came up with. The code to add a node via API from the Cluster portal is listed here: https://gith
Nutanix’s support for bare-metal is getting close. Please take a look at some of the foundational components for networking in Azure. NCM (Nutanix Clusters on Azure) utilizes Flow Networking to create an overlay network in Azure, easing administration for Nutanix administrators and reducing networking constraints across cloud vendors. Flow Networking abstracts the Azure native network by creating overlay virtual networks. On the one hand this abstracts the underlying network in Azure; at the same time, it allows the network substrate (and its associated features and functionality) to be consistent with the customer’s on-premises Nutanix deployments. You will be able to create new virtual networks (called Virtual Private Clouds, or VPCs) within Nutanix, create subnets in any address range, including those from the RFC 1918 (private) address space, and define DHCP, NAT, routing, and security policy right from the familiar Prism Central interface. Flow Networking can mask or reduce Cl
Hi Nutanix Community and fellow lurkers! If you're just getting started with AWS, we have made some videos to go along with the documentation to make the process even easier. I would appreciate any feedback on what else should be added based on your own experience. In the next week we will add a fourth video showing how to set up your VPN on the AWS side.
The Flow Gateway VM (FGW) for Nutanix Cloud Clusters (NC2) in Azure is the linchpin of having your own supercloud. The FGW is responsible for all VM traffic going north and south from the Nutanix cluster in Azure. This virtual machine allows outside communication from both cloud and on-premises services to the workloads running on the Azure NC2 cluster. The Flow Gateway VM connects Azure and on-prem services to the Nutanix cluster deployed in Azure. Once Prism Central is automatically deployed as part of the cluster creation process, the FGW deploys into the same VNET that Prism Central is using. The FGW is a native Azure VM. The VM has two network interface cards (NICs) attached to it: one for internal traffic and one for external traffic. The external NIC is where the floating IPs are configured for workloads that need a way for outside clients to connect to running workloads on the cluster. The default installation configures 50 floating IPs on the external NIC. These floating I
Flow Security Central (FSC) is a SaaS product that detects and analyzes security vulnerabilities in near real time across multiple cloud environments. FSC supports the following capabilities.

Visibility into Security Compliance: FSC provides businesses with a security heat map and complete visibility into the security posture of their environment using more than 800 automated audit checks based on industry best practices.

Optimization of Security Compliance: FSC provides cloud operators with a one-click feature to easily fix their security issues. FSC also provides out-of-the-box security policies to automate checks for common regulatory compliance frameworks such as HIPAA, PCI-DSS, CIS, and so on.

Control over Security Compliance: FSC helps you set policies that continuously detect security vulnerabilities in real time and automate the actions needed to fix them. You can also create custom audit checks in FSC to meet your business-specific security compliance needs. Fo
So I’ve been going through Google searches to find an answer on whether Cascade Lake and Ice Lake nodes can coexist within the same cluster. Unfortunately, no luck, so I hope this forum can help me answer this question. My concern is that I was made to understand that the Cascade Lake and Ice Lake CPU architectures are different, and that may pose performance problems for VMs even if the nodes can coexist within the cluster. So the question is: can NX-G7 and NX-G8, which run on different CPU architectures, coexist within the same cluster, and will it cause any performance issues if this is possible? It would help if you can point me to a KB article, or any article for that matter, to explain this. Thank you in advance for the help.
Hello all, I'm working with a customer on a new solution design. The customer has a VMware workload and we plan to put it on AHV. I want you to advise me on how to calculate the storage requirement. Let’s say the customer workload is about 170 TB; how do I size the cluster storage?

Also, I have some questions related to storage sizing. If I have a 3-node cluster, each node with 10 disks, and each disk is 4 TB (numbers for example only), that means the total raw storage is 120 TB. We will go with RF2, which means 1 node is out of our usable capacity calculation. So what is the usable capacity here after reserving the 1 node for failure and the RAID used in cluster creation? Also, what else should be deducted from the storage in storage planning?
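The arithmetic in the question can be worked through as a simplified sketch; note it deliberately ignores CVM, filesystem, and metadata overhead, which a real sizing exercise must also subtract:

```python
def usable_capacity_tb(nodes: int, disks_per_node: int, disk_tb: float,
                       rf: int = 2, reserved_nodes: int = 1) -> float:
    """Raw capacity, minus one node reserved for failure (N+1),
    divided by the replication factor (RF2 stores two copies)."""
    raw_tb = nodes * disks_per_node * disk_tb            # e.g. 3 * 10 * 4 = 120 TB
    surviving_tb = raw_tb * (nodes - reserved_nodes) / nodes
    return surviving_tb / rf
```

With the example numbers (3 nodes, 10 x 4 TB disks each, RF2, one node reserved) this yields 40 TB of usable capacity before overheads.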
One of the biggest changes with Nutanix Cloud Clusters (NC2) on Azure compared to its AWS counterpart is the requirement for Flow virtual networking. Flow virtual networking provides an overlay in Azure for secure communication between the multiple tenants you may be hosting on the Nutanix cluster, and it also provides north- and southbound connectivity. North- and southbound connectivity is provided through the Flow gateway virtual machine (FGW). Workloads running on the cluster can either go through a network address translation (NAT) path or a routed path without NAT. Which path your virtual machines take in and out of the NC2 cluster will depend on how other services need to talk to the virtual machines running on NC2. The FGW is a native Azure VM that gets deployed when the first cluster is created. The FGW has both an internal and an external network interface card (NIC). Traffic from your Azure Nutanix cluster is directed toward the FGW’s internal NIC and then eventually rou
Hi, new to Nutanix. Setting up our new Nutanix cluster that will run VMware as the hypervisor. Is it recommended to create one storage container that utilizes all storage capacity for all VMs (150) to reside on? I am used to creating multiple datastores in a traditional 3-tier architecture, but I know a lot of those practices go away with hyperconverged. The main reason I ask is that although we have no plans at this point to use other Nutanix services such as Files or Database, I don’t want to be in a position where I create a storage container for VMware VMs, assign it all of the capacity, and then find out later that I cannot use Nutanix Files because all space was initially given to VMware. Any recommendations or best practices you can share to ensure I don’t shoot myself in the foot? Thanks
Nutanix Clusters delivers a hybrid multi-cloud platform addressing the need for a single platform that can span private, distributed, and public clouds. This helps in managing traditional and modern applications using a consistent cloud platform. Nutanix Clusters has features like:
- Operational Simplicity
- Seamless Application Mobility
- Cost Efficiency
These features help you run applications in private or multiple public clouds and reduce the operational complexity of migrating, extending, or bursting your applications and data between clouds. Nutanix Clusters extends the simplicity and ease of use of Nutanix Hyper-Converged Infrastructure (HCI) software, as well as the full Nutanix stack, to public clouds such as AWS. You can modify, update, display, hibernate, resume, or delete Nutanix clusters running on AWS by using the Nutanix Clusters console.

Release Notes
Known Issues
NUTANIX CLUSTERS REGISTRATION

To register for Nutanix Clusters, you must have a My Nutanix account. A My Nutanix account allows you to access, manage, and use Nutanix Clusters, and it is your first point of access to Nutanix Clusters. After you create a My Nutanix account, you can onboard into Nutanix Clusters by signing up for one of the paid plans and applying Nutanix licenses at the same time.

Pro tip: You can also start a 30-day free trial of Nutanix Clusters.

Nutanix Clusters Hybrid Subscription Model: You can sign up for Nutanix Clusters with either payment plan (Pay As You Go or Cloud Commit) and, at the same time, apply Nutanix software licenses (that you may have already purchased or plan to purchase), such as Prism Pro, Files, and more. If you choose to apply licenses, the licenses are consumed first and then your payment plan falls back to the one you selected. If you do not have licenses or do not plan to purchase licenses, you can simply se
I would like to take full blame for people thinking self-healing solves all problems with Nutanix Cloud Clusters (NC2). The self-healing is pretty awesome for bad NICs, hard drives, and nodes, but there are some instances where the portal can’t take action. Status checks on AWS are performed every minute, returning a pass or fail status in the Cloud Portal. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. There are two types of status checks: system status checks and instance status checks. System status checks monitor the AWS systems on which your instance runs. Instance status checks monitor the software and network configuration of individual instances.

Notification Center in the Clusters Portal

The Clusters portal has a notification service that keeps track of all informational, warning, and critical alerts. Like most cloud-based services, the portal has no support for SNMP, but it does have the ability to send
Flow Virtual Networking Gateway VM High Availability

When you initially deploy your first cluster, you can now create two to four FGW VMs, versus only one previously. The resiliency change is a big deal, but getting traffic in and out of the cluster also became a lot easier for Azure/Nutanix admins. The addition of Azure Route Server along with the deployment of BGP VMs means that user-defined routes don’t need to be created. Anytime manual operations can be removed from a design, it’s a win for the customer. Prior to AOS 6.7, if you deployed only one gateway VM, the NC2 portal redeployed a new FGW VM with an identical configuration when it detected that the original VM was down. Because this process invoked various Azure APIs, it took about 5 minutes before the new FGW VM was ready to forward traffic, which affected the north-south traffic flow. To reduce this downtime, NC2 on Azure uses an active-active configuration. This setup provides a flexible scale-out configuration when you need more traff