Nutanix Cloud Clusters (NC2)
Unify all your Private and Public Clouds
Prism Central in Azure provides the control plane for Flow Virtual Networking. The subnet for Prism Central is delegated to Microsoft.BareMetal/AzureHostedService, so you can use native Azure networking to distribute IP addresses for Prism Central. Once you deploy Prism Central, the Flow Virtual Networking Gateway (FGW) VM deploys into the same VNet that Prism Central uses. The FGW allows communication between the guest VMs using the VPCs and native Azure services. Using the FGW, guest VMs have parity with native Azure VMs for elements such as:

- User-defined routes: You can create custom or user-defined (static) routes in Azure to override Azure's default system routes or to add routes to a subnet's route table. In Azure, you create a route table, then associate it with zero or more virtual network subnets.
- Load balancer deployment: You can balance services offered by guest VMs with the Azure-native load balancer.
- Network security groups: You can write stateful firewall policies…
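As a minimal sketch of the user-defined route workflow described above, here is how a route table might be created and associated with a subnet using the Azure CLI. All names, address prefixes, and the next-hop IP are hypothetical placeholders; adjust them for your own environment:

```shell
# Create a route table in a hypothetical resource group.
az network route-table create --resource-group myRG --name nc2-routes

# Add a user-defined route that sends on-prem-bound traffic to a
# virtual appliance (e.g., the FGW's internal NIC IP -- placeholder value).
az network route-table route create --resource-group myRG \
  --route-table-name nc2-routes --name to-onprem \
  --address-prefix 10.10.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate the route table with a workload subnet.
az network vnet subnet update --resource-group myRG --vnet-name myVnet \
  --name workload-subnet --route-table nc2-routes
```

This is a configuration fragment only; it requires valid Azure credentials and existing VNet resources to run.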
Flow Virtual Networking Gateway VM High Availability

When you initially deploy your first cluster, you can now create two to four FGW VMs instead of only one. The resiliency change is a big deal, but getting traffic in and out of the cluster also became a lot easier for Azure/Nutanix admins. The addition of Azure Route Server along with the deployment of BGP VMs means that user-defined routes don't need to be created by hand. Any time manual operations can be removed from a design, it's a win for the customer.

Prior to AOS 6.7, if you deployed only one gateway VM, the NC2 portal redeployed a new FGW VM with an identical configuration when it detected that the original VM was down. Because this process invoked various Azure APIs, it took about 5 minutes before the new FGW VM was ready to forward traffic, which affected the north-south traffic flow.

To reduce this downtime, NC2 on Azure uses an active-active configuration. This setup provides a flexible scale-out configuration when you need more traffic…
AOS 6.7 added more options for securing your cluster in AWS. We will take a look at the existing options and dive into the new feature in AOS 6.7.

You can use AWS security groups and network access control lists to secure your cluster relative to other AWS or on-premises resources. Nutanix automatically creates three security groups to limit traffic to the cluster:

- Internal management: Allows all internal traffic between all CVMs and all AHV hosts (EC2 bare-metal hosts). Don't edit this group without approval from Nutanix Support.
- User management: Allows users to access Prism Element and some other services running on the CVM.
- UVM: Allows UVMs to talk to each other. By default, all UVMs on all subnets can talk to each other, but you can edit the policy to lock down traffic further. You could alternatively use Flow Network Security to prevent east-west traffic.

With AWS security groups, you can limit access to the AWS CVMs, AHV hosts, and UVMs to only your on-premises management network…
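To illustrate the kind of restriction described above, here is a sketch of adding an ingress rule with the AWS CLI so that Prism Element (TCP 9440) is reachable only from an on-premises management network. The security group ID and CIDR are hypothetical placeholders, not values NC2 creates:

```shell
# Allow Prism Element access (TCP 9440) only from a hypothetical
# on-prem management CIDR, on the user-management security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9440 \
  --cidr 192.168.50.0/24
```

This is a configuration fragment only; it requires AWS credentials and the real security group ID from your deployment. Remember that the internal-management group should not be edited without approval from Nutanix Support.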
We are experiencing issues with the "ncc health_checks run_all" command on the Nutanix CVM, both from PuTTY and the web console. The checks fail or abort after waiting for more than 30 minutes. Your assistance in resolving this matter would be greatly appreciated. Thank you.
Hybrid Cloud: extended network between Azure and a private datacenter

This is just one possible scenario that you could use to create a Layer 2 stretch network in your hybrid cloud environment. The goal is to understand this scenario and to have a place to ask questions. The plan is to introduce additional scenarios to drive understanding. This scenario uses AOS 6.6 and will most likely change with newer AOS releases.

We have an Azure environment at the top of the diagram below and a private datacenter beneath it. We want to understand what happens if the VMs running on Nutanix in Azure fail back or migrate to the private datacenter while using the Layer 2 stretch. After the failover, how will the native Azure VMs in Step 4 reach the VMs that failed over or migrated? The VMs running on the NC2 cluster in Azure use a routed path for access. This means we have the ability to route on-prem and native Azure services to the VMs running on the NC2 cluster.

Step 1 – L2: We have a setup…
Hi,

New to Nutanix. We're setting up our new Nutanix cluster, which will run VMware as the hypervisor. Is it recommended to create one storage container that utilizes all storage capacity for all VMs (150) to reside on? I am used to creating multiple datastores in a traditional three-tier architecture, but I know a lot of those practices go away with hyperconverged.

The main reason I ask is that although we have no plans at this point to use other Nutanix services such as Files or Database, I don't want to be in a position where I create a storage container for VMware VMs, assign it all of the capacity, and then find out later that I cannot use Nutanix Files because all space was initially given to VMware.

Any recommendations or best practices you can share to ensure I don't shoot myself in the foot?

Thanks
Hello everyone,

I am trying to deploy NC2 on Azure, but I need to add a cloud account. Every time I enter all of the correct Azure information (Directory ID, Subscription ID, Application ID, and secret value), it gives the error "Azure credentials are not valid." How can I resolve this? Please help.

Best regards,
Rajesh Kumar
Stop the man-in-the-middle attacks.

Whether you're replicating to the cloud or to a remote branch site, you may not control the networking stack end to end. When you don't control the networking stack end to end, or you're in an environment that simply doesn't have a firewall, you can use Nutanix native DR encryption between your Nutanix clusters. The feature is fully supported for both PD-based and Nutanix DR (PC)-based replication. Changes persist after reboots of the CVMs and upon upgrades.

Requirements:
- AOS needs to be 6.1 or higher.
- DR with encryption uses ports 14119 and 14108 as additional ports that need to be open bidirectionally between all of the CVMs.

Note: you need to run the steps below on each cluster.

To enable this feature:
1. SSH to the CVM.
2. Change to the bin directory, since all of the Python commands need to be run from there: cd bin
3. Run the script. For PD-based replication: python onwire_encryption_tool.py --enable <remote_cluster_vip>
4. For PC/Nutanix DR replication: Ensure your Prism Centrals…
Hello all,

I'm working with a customer on a new solution design. The customer has a VMware workload, and we plan to move to AHV. I'd like your advice on how to calculate the storage requirement. Let's say the customer workload is about 170 TB; how do I size the cluster storage?

I also have some questions related to storage sizing. If I have a 3-node cluster, each node with 10 disks, and each disk is 4 TB (numbers are examples only), that means the total raw storage is 120 TB. We will go with RF2, which means one node is left out of our usable capacity calculation. So what is the usable capacity here after reserving the one node for failure and accounting for the redundancy used at cluster creation? Also, what else should be deducted from the storage in storage planning?
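As a rough back-of-the-envelope check for the numbers in this question, here is a small Python sketch. It reserves one node for failure (N+1), divides by the replication factor, and subtracts an overhead reserve; the 8% overhead figure is an assumed illustrative value, not an official Nutanix number, so use a proper sizer for real designs:

```python
# Rough RF2 usable-capacity estimate for the example cluster above.
# ASSUMPTION: ~8% overhead reserve for metadata/CVM use (illustrative only).

def usable_capacity_tb(nodes: int, disks_per_node: int, disk_tb: float,
                       rf: int = 2, overhead: float = 0.08) -> float:
    # Raw capacity that survives one node failure (N+1 planning).
    survivable_raw = (nodes - 1) * disks_per_node * disk_tb
    # Replication factor halves (RF2) or thirds (RF3) effective capacity.
    usable = survivable_raw / rf
    # Subtract the assumed overhead reserve.
    return usable * (1 - overhead)

# 3 nodes x 10 disks x 4 TB = 120 TB raw; with N+1 and RF2:
print(usable_capacity_tb(3, 10, 4))
```

Under these assumptions the example cluster yields well under half of its 120 TB raw capacity as usable space, which is why a 170 TB workload needs either more or denser nodes.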
The Flow Gateway VM (FGW) for Nutanix Cloud Clusters (NC2) in Azure is the linchpin to having your own supercloud. The FGW is responsible for all VM traffic going north and south from the Nutanix cluster in Azure. This virtual machine allows outside communication from both cloud and on-premises services to the workloads running on the Azure NC2 cluster.

The Flow Gateway VM connecting Azure and on-prem services to the Nutanix cluster deployed in Azure.

Once Prism Central is automatically deployed by the cluster creation process, the FGW deploys into the same VNet that Prism Central is using. The FGW is a native Azure VM with two network interface cards (NICs) attached to it: one for internal traffic and one for external traffic. The external NIC is where the floating IPs are configured for workloads that need outside clients connecting in to them. The default installation configures 50 floating IPs on the external NIC. These floating IPs…
One of the biggest changes with Nutanix Cloud Clusters (NC2) on Azure compared to its AWS counterpart is the requirement for Flow Virtual Networking. Flow Virtual Networking provides an overlay in Azure for secure communication between the multiple tenants you may be hosting on the Nutanix cluster, and it also provides north- and southbound connectivity. That connectivity is provided through the Flow gateway virtual machine (FGW). Workloads running on the cluster can either go through a network address translation (NAT) path or a routed path without NAT. Which path your virtual machines take in and out of the NC2 cluster depends on how other services need to talk to the virtual machines running on NC2.

The FGW is a native Azure VM that gets deployed when the first cluster is created. It has both an internal and an external network interface card (NIC). Traffic from your Azure Nutanix cluster is directed toward the FGW's internal NIC and then eventually routed…
Hello all,

I have added a managed network in our cluster. We have since purchased another cluster and are in the process of configuring VXLAN and the ability to live migrate VMs between clusters. From what I have been seeing, we cannot have a managed network across clusters; is this correct? If not, does anyone know how to configure a managed network to work between clusters?

Thanks,
Scott
I would like to take full blame for people thinking self-healing solves all problems with Nutanix Cloud Clusters (NC2). Self-healing is pretty awesome for bad NICs, hard drives, and nodes, but there are some instances where the portal can't take action.

Status checks on AWS are performed every minute, returning a pass or fail status in the cloud portal. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. There are two types of status checks: system status checks and instance status checks. System status checks monitor the AWS systems on which the instance runs. Instance status checks monitor the software and network configuration of individual instances.

Notification Center in the Clusters portal: the Clusters portal has a notification service that keeps track of all informational, warning, and critical alerts. Like most cloud-based services, there is no support for SNMP in the portal, but it does have the ability to send…
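The two kinds of EC2 status checks described above can be inspected directly with the AWS CLI. This is a sketch against a hypothetical instance ID; substitute the ID of your bare-metal node:

```shell
# Show system and instance status-check results for a hypothetical
# EC2 bare-metal instance (placeholder instance ID).
aws ec2 describe-instance-status \
  --instance-ids i-0123456789abcdef0 \
  --query 'InstanceStatuses[].{System:SystemStatus.Status,Instance:InstanceStatus.Status}'
```

This is a read-only query fragment; it requires AWS credentials and a real instance ID, and both fields return "ok" when the corresponding checks pass.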