A customer would like to run Microsoft Windows Server Datacenter 2022 in an AHV environment, and we are looking at licensing for this. We want Datacenter because of the unlimited VMs. We recently heard that this changed in the past year: retail or perpetual licensing can only run on Hyper-V, and you can no longer run it on AHV or VMware, because the host needs to communicate with some licensing portal. So to run on AHV we have to purchase Open Value with SA, which is considerably more money and has to be paid as a yearly subscription. They gave us this description:

Windows Server 2022 Datacenter 2-Core License + SA (3 Yr)
SKU: 9EA-00643
Open Value with Software Assurance (3 Years)
(3) 24-Core Hosts & Unlimited VMs
Nutanix Hypervisor
Fulfilled with MAK/KMS License
Downgrade Rights to 2019 & 2016

Can anyone confirm this? If true, this is crap and Microsoft is pushing their stupid hypervisor.
I wanted to start a discussion on our new Flow Virtual Networking VPC feature, based on this video that TME Eric Walters created. You can use VPCs to build isolated overlay networks, enable self-service network creation, or provide multi-tenant separation. Have you tried out VPCs in your environment?
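For anyone who wants to poke at the feature programmatically, here is a minimal sketch of creating a VPC through the Prism Central v3 REST API. The endpoint, payload shape, address, credentials, and subnet UUID are all assumptions; verify them against your PC version's API reference before relying on this:

```python
# Minimal sketch: create a Flow VPC through the Prism Central v3 REST API.
# Everything below (endpoint, payload shape, host, credentials, UUID) is a
# placeholder/assumption; check your PC version's API Explorer.
import requests

PC = "https://pc.example.com:9440"      # hypothetical Prism Central address
AUTH = ("admin", "secret")              # placeholder credentials

payload = {
    "metadata": {"kind": "vpc"},
    "spec": {
        "name": "tenant-a-vpc",         # hypothetical tenant VPC name
        "resources": {
            # An external subnet gives the overlay a path out; the UUID
            # is a placeholder for an existing external subnet.
            "external_subnet_list": [
                {"external_subnet_reference": {
                    "kind": "subnet",
                    "uuid": "<external-subnet-uuid>"}}
            ]
        },
    },
}

resp = requests.post(f"{PC}/api/nutanix/v3/vpcs", json=payload,
                     auth=AUTH, verify=False)  # lab only: skips TLS checks
resp.raise_for_status()
# Response shape may vary by version; the entity UUID is typically here.
print(resp.json()["metadata"]["uuid"])
```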
Hello all, I'm working with a customer on a new solution design. The customer has a VMware workload, and we plan to move it onto AHV. I'd like advice on how to calculate the storage requirement: if the customer workload is about 170 TB, how do I size the cluster storage?

I also have some questions related to storage sizing. If I have a 3-node cluster, each node with 10 disks and each disk 4 TB (numbers for example only), that means total raw storage is 120 TB. We will go with RF2, which means one node is taken out of our usable capacity calculation. So what is the usable capacity here, after deducting the one node for failure and the replication overhead applied at cluster creation? And what else should be deducted from the storage during storage planning?
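A back-of-the-envelope version of that math, as a minimal sketch; it applies only the RF2 and N+1 rules of thumb, and a real design should also subtract CVM overhead, metadata, and snapshot/clone space (Nutanix Sizer is the authoritative tool):

```python
# Example cluster from the post: 3 nodes x 10 disks x 4 TB, RF2,
# sized to keep running (and re-protect) after one node failure.
nodes = 3
disks_per_node = 10
disk_tb = 4
rf = 2                                   # RF2 stores two copies of all data

raw_tb = nodes * disks_per_node * disk_tb            # 120 TB raw
raw_after_n1_tb = raw_tb - disks_per_node * disk_tb  # reserve one node: 80 TB
usable_tb = raw_after_n1_tb / rf                     # RF2 halves it: 40 TB

print(f"raw={raw_tb} TB, N+1 raw={raw_after_n1_tb} TB, usable~={usable_tb} TB")
```

Inverting the same arithmetic (usable x RF, plus one node of headroom) gives a first-pass raw figure to cover the 170 TB workload.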
Stop man-in-the-middle attacks. Whether you're replicating to the cloud or to a remote branch site, you may not control the networking stack end to end. When you don't control the networking stack end to end, or you're in an environment that simply doesn't have a firewall, you can use Nutanix native DR encryption between your Nutanix clusters. The feature is fully supported for both PD and Nutanix DR (PC) based replication, and the changes persist across CVM reboots and upgrades. AOS needs to be 6.1 or higher. DR with encryption uses 14119 and 14108 as additional ports, which need to be open bi-directionally between all of the CVMs.

*** Note: you need to run the steps below on each cluster.

To enable this feature:
1. SSH to the CVM.
2. Change to the bin directory; all of the Python commands need to be run from there: cd bin
3. Run the script.
   For PD-based replication: python onwire_encryption_tool.py --enable <remote_cluster_vip>
   For PC/Nutanix DR replication: ensure your Prism Centrals …
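Before enabling it, you can sanity-check that the two extra ports named above (14119 and 14108) are reachable between CVMs. A minimal Python sketch, assuming you run it from one CVM toward a peer; the peer IP is a placeholder, and a successful connect also requires a listener on the far side:

```python
# Check reachability of the DR-encryption ports (14119, 14108, per the
# post above) from this CVM to a peer CVM. peer_cvm is a placeholder;
# repeat from each CVM toward each remote CVM.
import socket

peer_cvm = "10.0.0.50"        # hypothetical remote CVM IP
for port in (14119, 14108):
    try:
        with socket.create_connection((peer_cvm, port), timeout=3):
            print(f"{peer_cvm}:{port} reachable")
    except OSError as exc:
        print(f"{peer_cvm}:{port} blocked, closed, or nothing listening: {exc}")
```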
Hello everybody, I started using Storage VM migration … vm.update_container VMNAME container=test-container … to move VMs from an old to a new container. Big VMs have been moved. I would like to delete the old storage containers, but space usage is still high, although the VMs have been moved to another storage container. I am aware of the directories … .acropolis, .file_repo and .ngt … and those directories are empty now. How can I reclaim space in a storage container when the VMs have been moved and the Recycle Bin has been emptied? No VMs have been deleted, only moved, so the 36-hour rule for space reclamation doesn't seem to apply here. Any thoughts? Regards, Didi7
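As a side note for anyone doing the same migration at scale, the vm.update_container step quoted above can be wrapped in a loop. A minimal sketch, assuming it runs as the nutanix user on a CVM where acli is on the PATH; the VM names and target container are placeholders:

```python
# Batch wrapper around the acli live storage migration command quoted
# in the post; run from a CVM. Names below are placeholders.
import subprocess

vms = ["app-vm-01", "app-vm-02"]        # hypothetical VM names
target = "test-container"

for vm in vms:
    # Same storage migration subcommand as in the post above.
    subprocess.run(
        ["acli", "vm.update_container", vm, f"container={target}"],
        check=True,
    )
```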
Hi guys, I'm currently working on Ansible to deploy K8s clusters through Karbon, and I've hit a fatal issue. Ansible is core 2.12.9, PC is 2022.1, the nutanix.ncp collection has been tested in versions 1.6.0 and 1.7.0, and there is absolutely no traffic denied between Ansible and PC (to sum up, everything is open). I use a playbook based on the example provided in ansible-doc (I also tried the example itself, and I got the same errors in all cases).

### EDIT: I can't add code in this post, I get a "Something gone wrong" banner every time ###

Here are the logs from when the playbook fails (after ~5 min of running; from PC, the ETCD deployment gets stuck at 8%):

### EDIT: I can't add code in this post, I get a "Something gone wrong" banner every time ###

Moreover, K8s cluster deployment through the PC GUI works smoothly. Does anyone have an idea how to fix the "failed to deploy ntnx dvp" error? Thanks a lot :) Gael
Hi, I'm currently working on RBAC rights for Prism Central, but also for entities (VMs, Apps...). From my understanding and from the documentation, you have to be User Admin on PC to get full rights on a K8s cluster. Given that, how does Karbon act on K8s cluster rights while it deploys a new cluster? (For instance, if you are a Viewer on PC you can't connect to the K8s cluster; if you're User Admin, you can do whatever you want.) Best regards, Gael
We're running 2 clusters which are configured pretty much the same: ESXi 7.0.2, AOS 5.20.4 LTS, both running on Nutanix hardware. One cluster is totally fine, no issues, but on the other you cannot create a virtual disk greater than 2 TB! The issue only came to light when we had to move a VM off to another non-Nutanix host for a few days; when we went to vMotion it back, it errored, complaining about the disk being greater than 2 TB. It is: it's nearly 6 TB, as it's a Fortinet Analyzer. I'm pretty sure that when the VM sat on the Nutanix cluster its disk was greater than 2 TB, and it was thin provisioned with a capacity of 7 TB when it was created! So you can't move anything greater than 2 TB to any of the datastores, and you also can't create a new disk greater than 2 TB. Also, if you look at the datastore's details, it says maximum VM disk size 2 TB, while the other cluster's datastore says 62 TB. It's getting to be a bit of a pain, as the Fortinet Analyzer is sitting on a box we want …
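To compare what the two clusters are actually advertising, here is a hedged pyVmomi sketch (pip install pyvmomi) that prints each datastore's maxVirtualDiskCapacity, which should correspond to the "maximum VM disk size" figure in the datastore details. The vCenter address and credentials are placeholders:

```python
# List the per-datastore virtual disk size ceiling reported by vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        cap = ds.info.maxVirtualDiskCapacity  # bytes per the API docs; may be unset
        tb = f"{cap / 1024**4:.1f} TB" if cap else "n/a"
        print(f"{ds.name}: maxVirtualDiskCapacity = {tb}")
    view.Destroy()
finally:
    Disconnect(si)
```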
Hello everybody, I have an issue where the PC web console refuses a login as if the user/password were wrong, but the same user/password works fine over SSH. After troubleshooting, I found the PC time zone is set to PST rather than UTC from the CLI, while in the VM configuration it's UTC. If I change it to UTC from the CLI, the web console accepts the login, but after a while it reverts to the PST timezone. NTP is configured to point at a local NTP server.
When running Foundation from the same network the CVM/AHV host should be in, I'm getting this error: "Phoenix failed to load squashfs.img." I can't ping the gateway, and it looks like during bring-up it's unable to create bond0 with the NIC, which is a 534FLR-SFP+. It's like the link just doesn't come up when booting into the rescue image.

I've tried:
- Disabling the 4-port onboard NIC
- Pinging itself from the iLO remote console (it works)
- Pinging the gateway and the Foundation VM IP from the iLO remote console (it fails)
- Installing ESXi manually on the server to confirm tagging/networking is set up properly, and it is
- Changing the IP of the Foundation VM to another network

Really stuck hard. Any advice, Nutants? It's like the rescue image doesn't …

Error message: Waiting for eth0's link to come up
Hello. If you have any questions about UUIDs, feel free to ask. Here is example acli vm.get output for an original VM and its clone:

acli vm.get <VM Name>

Nutanix-Clone-Study_1: Original
  container_id: 8
  container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
  1. device_uuid: "4c4ae3e4-530d-4167-872a-ef6f2e6a41e9"
  2. naa_id: "naa.6506b8d8a5a92f6208b7e2380facb355"
  3. source_nfs_path: "/default-container-64847153591394/.snapshot/44/4359493875056948270-1663090801228723-26644/.acropolis/vmdisk/175ba8a7-0ca1-402f-9ab0-81ac7bfaa704"
  4. storage_vdisk_uuid: "fffd1e57-0fe4-4ed8-b706-9747e0e99449"
     vmdisk_size: 161061273600
  5. vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b986a6fada6"

------------------------------------------------------------

Nutanix-Clone-Study_2: Clone
  container_id: 8
  container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
  device_uuid: "5f35410f-5bd1-4e06-a789-308276493787"
  naa_id: "naa.6506b8d3e59972c080ee639dae788a25"
  6. source_vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b98…"
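Note how the clone's source_vmdisk_uuid (6) appears to line up with the original's vmdisk_uuid (5); that is how you trace a clone's disk back to its source. A minimal sketch of that matching, assuming each VM's acli vm.get output has been saved to a text file (the file names are placeholders):

```python
# Tie a clone's disk back to its source by matching the clone's
# source_vmdisk_uuid against another VM's vmdisk_uuid values.
import re

def field_values(path, field):
    """Collect all values of one field from saved `acli vm.get` output."""
    text = open(path).read()
    # Lookbehind keeps `vmdisk_uuid` from also matching `source_vmdisk_uuid`.
    return set(re.findall(rf'(?<!\w){field}:\s*"([0-9a-f-]+)"', text))

source_disks = field_values("study_1.txt", "vmdisk_uuid")         # original VM
clone_refs = field_values("study_2.txt", "source_vmdisk_uuid")    # clone

for uuid in clone_refs & source_disks:
    print(f"clone disk traces back to source vmdisk_uuid {uuid}")
```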
Howdy, I am trying to envision what this would look like in a Nutanix environment, for planning purposes. Currently we have two datacenters. We have one large vCenter cluster (Cisco UCS, synchronous Nimbles, a Nexus vPC pair, and an L2 stretch on our own dark fiber with multiple 10 Gb links) stretched across both DCs. This lets us vMotion all VMs from one side to the other and take down half our hosts, or even a whole DC, without any issues. Thinking about what this would look like if we were full Nutanix and AHV, I am pretty sure one large cluster would not work, since we couldn't take down one DC without losing half of the hosts/storage. Correct? If so, I am thinking this would require at least a cluster at each DC, and then you use Metro Availability for the real-time sync. Does that sound about right from a high level? Thanks
The Flow Gateway VM (FGW) for Nutanix Cloud Clusters in Azure (NC2) is the linchpin to having your own supercloud. The FGW is responsible for all VM traffic going north-south from the Nutanix cluster in Azure. This virtual machine allows outside communication from both cloud and on-premises services to the workloads running on the Azure NC2 cluster.

[Figure: the Flow Gateway VM connecting Azure and on-prem services to the Nutanix cluster deployed in Azure.]

Once Prism Central is automatically deployed as part of the cluster creation process, the FGW deploys into the same VNET that Prism Central is using. The FGW is a native Azure VM with two network interface cards (NICs) attached: one for internal traffic and one for external traffic. The external NIC is where the floating IPs are configured for workloads that need a way for outside clients to connect in. The default installation configures 50 floating IPs on the external NIC. These floating IPs …
“Kubernetes deployments are inherently dynamic and challenging to manage at scale,” said Thomas Cornely, SVP, Product Management, Nutanix. “Running Kubernetes container platforms cost-effectively at large scale requires developer-ready infrastructure that seamlessly adapts to changing requirements. Our expertise in simplifying infrastructure management while optimizing resources, both on-premises and in the public cloud, is now being applied to help enterprises adopt Kubernetes more quickly. The Nutanix Cloud Platform now supports a broad choice of Kubernetes container platforms, provides integrated data services for modern applications, and enables developers to provision Infrastructure as Code.”

According to Gartner, by 2027, 25% of all enterprise applications will run in containers, an increase from fewer than 10% in 2021. This is a significant challenge for many, given that most Kubernetes solutions are not meant to support enterprise scale, and even fewer can do so in a manner that is cost-effective …