Nutanix Cloud Infrastructure
The Foundation for Your Hybrid Cloud
Hello everybody, let's assume we have one Nutanix block with 3 nodes. Each node has only 1 CPU socket with 10 cores. A VM can only run on one node. So, in this case, wouldn't it make more sense to configure a VM that needs 4 CPUs with 1 vCPU and 4 cores, instead of 4 vCPUs with 1 core each, since each node has only 1 CPU socket? Or does Nutanix always recommend using only 1 core per vCPU, no matter how many CPU sockets are available in a node? Best regards, Didi7
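On AHV both dimensions of the topology can be set from aCLI, so either layout is easy to test. A minimal sketch, run from a CVM; the VM name is hypothetical and the parameter names are as I recall them from aCLI (verify with `acli vm.update` help on your AOS version):

```shell
# Hypothetical VM "didi7-vm". Option A: 4 sockets x 1 core each
acli vm.update didi7-vm num_vcpus=4 num_cores_per_vcpu=1

# Option B: 1 socket x 4 cores (may matter for per-socket guest OS licensing)
acli vm.update didi7-vm num_vcpus=1 num_cores_per_vcpu=4
```

Either way the guest sees 4 logical CPUs; the split is mostly relevant for guest licensing and for guests that cap the number of sockets they will use.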
I upgraded my ESXi hosts using 1-click upgrade to VMware-VMvisor-Installer-6.7.0.update03-15160138.x86_64-DellEMC_Customized-A04 with no issues. Trying to apply patch ESXi670-202004002.zip using 1-click upgrade, I get the following error: "Upgrade bundle is not compatible with current VIBs installed in hypervisor. [DependencyError] VIB QLC_bootbank_qedi_126.96.36.199-1OEM.6188.8.131.5269922 requires qedentv_ver = X.11.15.0, but the requirement cannot be satisfied within the ImageProfile." I am able to patch successfully using esxcli and Update Manager. I updated the image profile on the hosts to DellEMC-ESXi-6.7U3-15160138-A04, thinking that might resolve the issue, but it did not. Does anyone have suggestions to get 1-click upgrade to function with 6.7U3?
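For reference, the esxcli workaround mentioned above usually looks like this. A sketch, assuming the bundle has been copied to a datastore; the datastore path is a placeholder, and the profile name must come from the list command, not be guessed:

```shell
# From an SSH session on the host: list the image profiles inside the bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi670-202004002.zip

# Apply the patch using a profile name reported by the previous command
esxcli software profile update \
    -d /vmfs/volumes/datastore1/ESXi670-202004002.zip \
    -p <profile-name-from-list>
```

`esxcli software profile update` resolves VIB dependencies against the whole profile, which is why it can succeed where a plain bundle apply trips over the qedi/qedentv dependency.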
Physical Windows Server to Virtual conversion (P2V for AHV)

Depending on the number of physical servers in your environment, you might want to convert them to virtual servers and migrate over to Nutanix AHV. Nutanix Image Service supports VHD, VMDK, vDISK, IMG, and qcow formats for disk images.

Summary:
1) Prepare the Physical Windows Server - install VirtIO drivers from: Nutanix Portal > Downloads > Tools & Firmware > VirtIO
2) Download Disk2VHD from the Microsoft site: (https://docs.microsoft.com/en-us/sysinternals/downloads/disk2vhd)
3) Convert the disk or disks (make sure to select the appropriate disk as bootable)
4) Upload the converted disk to the Nutanix cluster via the Image Service

Uploading VHD to the Cluster via Image Service:
1) You can upload via the browser, or
2) The Image Service also allows uploading files from an HTTP source (an existing or new web server is required)

Once the converted VHD is uploaded to the Image Service, we can now create a new VM.

Create a New VM:
Create the new
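The HTTP upload path in step 2 can also be driven from aCLI instead of the browser. A sketch, assuming the VHD is staged on a reachable web server; the image name, URL, and container name are placeholders, and the parameter names should be checked against `acli image.create` help on your release:

```shell
# From a CVM: register the staged VHD as a disk image in the Image Service
acli image.create fileserver01-boot \
    source_url=http://webserver.example.com/images/fileserver01.vhd \
    container=default-container image_type=kDiskImage
```

Once the image shows as active, it can be selected as "Clone from Image Service" when adding the disk to the new VM.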
Good day, I am very new to Nutanix and recently purchased a cluster. It has only been running for about 30 days now, managing my network, and I am in the process of making some configuration changes to it. I need some assistance with an error message that I am receiving. I am using AHV as the hypervisor on my 3-node cluster. On this cluster I am running 5 Windows-based server VMs (not using Hyper-V or VMware). I followed the instructions from the Administration Guide by enabling VSS shadow copies on the servers, then installing Guest Tools on all the servers and creating a protection domain (Async DR). My configuration is working and snapshots are being created for my Domain Controllers and Application Servers. However, when a snapshot of my File Server is being created, I keep getting the following error: "Warning : VSS snapshot failed for the VM(s) FS-01 protected by the FileServer in the snapshot (169035, 1563300389081879, 960) because Quiescing guest VM(s) failed or timed out. Impact
OK, yesterday one of our many Nutanix clusters was upgraded to 5.15 LTS, and I started to create a new Windows Server 2016 template using a UEFI configuration. Now I am stuck at a display resolution of 1280x1024 and cannot change it, because the setting is grayed out. Nutanix Guest Tools from 5.15 LTS and VirtIO 1.1.5 are installed. Is this a known bug, and is there a workaround? What about 5.16 STS? Is it fixed there? Thanks for any reply. Regards, Didi7
The technical piece below found its way to us through our partner channels. Installation instructions for Red Hat OpenShift on Nutanix are detailed in the documentation below. Enjoy, and as always, feel free to provide us with feedback.

User Provisioned Installation of Red Hat OpenShift 4.3 on Nutanix AHV 5.15
This manual was created during a proof-of-concept exercise using Nutanix AHV 5.15, the KVM-based hypervisor of Nutanix, with OpenShift 4.3 in combination with the Nutanix CSI driver. The Nutanix CSI driver provides scalable, persistent storage for stateful applications using Nutanix Files and Nutanix Volumes. Please note: at the time of writing, Nutanix AHV in combination with OpenShift is supported by Nutanix, but not certified by Red Hat. If certification is required, clients are advised to use any of the other hypervisors supported by Nutanix. The installation steps followed are documented in the IBM Cloud Architecture & Solution Engineering repository guide. The PoC envi
Today we will highlight the flexibility Nutanix offers with Protection Domains. You can access Protection Domains via the “Data Protection” menu option in Prism.

DR Basics in Nutanix:
The foundation of data protection and disaster recovery in a Nutanix cluster is the concept of snapshots. Snapshots work very similarly to VM/vDisk clones, leveraging a "redirect on write" algorithm that marks a snapshotted vDisk as immutable and directs new write operations (block overwrites and new blocks) to a new vDisk. Read operations reference the correct vDisk blocks based on a metadata lookup from the metadata store. (From the Nutanix Bible - Book of Acropolis - Backup & Disaster Recovery)

What is a Protection Domain (PD)?
* Key Role: Macro group of VMs and/or files to protect
* Description: A group of VMs and/or files to be replicated together on a desired schedule. A PD can protect a full container, or you can select individual VMs and/or files.
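Besides the Prism menu, Protection Domains can be created from the command line. A sketch using nCLI from a CVM; the PD and VM names are examples, and the exact sub-command spellings should be confirmed with `ncli protection-domain` help on your AOS version:

```shell
# Create an empty protection domain
ncli protection-domain create name=PD-Finance

# Add VMs to it (they will snapshot and replicate together)
ncli protection-domain protect name=PD-Finance vm-names=sql01,sql02
```

Snapshot and replication schedules can then be attached to the PD in Prism under Data Protection, or via the schedule sub-commands of `ncli protection-domain`.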
I know there’s another topic like this, but it is 3 years old. Is there an easy way to put a single ESXi host into maintenance mode? Not to compare products, but with Cisco HyperFlex you right-click on the host and there’s a “HyperFlex maintenance mode” option. This does what needs to be done to the CVM without having to SSH into anything. I was just wondering if the process has improved in 3 years.
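For anyone landing here, the manual sequence is still roughly the one below. A sketch, assuming a healthy cluster that can tolerate one node down (check data resiliency in Prism first); commands as I recall them from Nutanix host maintenance procedures:

```shell
# 1. SSH to the CVM running on the target host and shut it down gracefully
#    (cvm_shutdown hands off storage I/O before powering off the CVM)
cvm_shutdown -P now

# 2. Then put the ESXi host itself into maintenance mode
#    (from an SSH session on the host, or right-click in vCenter)
esxcli system maintenanceMode set --enable true
```

Reverse the steps afterwards: exit maintenance mode, power the CVM back on, and wait for `cluster status` on another CVM to report all services up before touching the next host.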
Hello mates, We have 2x NX3350 and an Arista switch handling roughly 100 VMs. Workloads are:
- 20x VMs with a heavy disk workload, used mainly for reporting and analysis. The R/W ratio is roughly 50/50. They generated a 20K IOPS workload when they lived on SAN storage with RAID10 and a tiny flash tier, and average latency was always below 5 ms.
- 10x VMs used for application virtualization
- 70x VMs used for desktop virtualization

Now that we have migrated to Nutanix, we see only 4K cluster IOPS and 20 ms latency, which does not seem very good to us. Trying to resolve the issue, we enabled inline compression and increased CVM memory to 20 GB. We also tried changing the tier sequential write priority. Unfortunately, this did not help. ncc, cluster status, and Prism health all claim that everything is OK. Before we migrated our environment, we ran the diagnostics VM and the result was roughly 100K IOPS for read. Here is the current configuration of the cluster:
NOS Version: 4
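When collecting data for a case like this, the usual first pass is the built-in health tooling. A sketch of the standard commands, run from any CVM:

```shell
# Full health-check sweep; attach the output to any support case
ncc health_checks run_all

# Confirm every CVM service is up cluster-wide
cluster status
```

A clean NCC run with poor latency usually points at workload shape (working set larger than the hot tier, or sequential-vs-random mix) rather than a failed component, which is worth stating explicitly when opening a ticket.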
We have a dozen or so VMs/appliances that do not support AHV and thus cannot be migrated over to our new AHV cluster, which runs 95% of our servers. We would like to expose a container/Volume Group from Nutanix to the VMware hosts as NFS datastores to build these VMs/appliances on. Our goal is to then replicate that container/Volume Group to our DR Nutanix cluster and connect our DR VMware cluster so the same VMs/appliances can be used in case of failover. We have gotten conflicting information from all parties and wanted to know if anyone is already doing this and what your results are. These VMs/appliances would have fairly high IO.
I am migrating from ESX 6.0 to AHV 5.1.2. I have successfully mounted the container from VMware and migrated a disk. When trying to start the image from the VMDK file in the Prism console, the boot screen hangs at the following message: "Booting from Hard Disk...". Regards, Onder Avcu
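A common cause of a hang at "Booting from Hard Disk..." after an ESX-to-AHV move is that the guest lacks VirtIO storage drivers, so a disk attached on the (default) SCSI bus is invisible to the guest bootloader. One hedged workaround is to attach the migrated disk on the IDE bus, boot, install the Nutanix VirtIO drivers inside the guest, and then move the disk back to SCSI. A sketch; the VM and image names are placeholders, and the parameters should be checked against `acli vm.disk_create` help on your release:

```shell
# From a CVM: attach the migrated disk on IDE so the guest can boot without VirtIO
acli vm.disk_create onder-vm bus=ide clone_from_image=migrated-vmdk-image
```

IDE is slow, so treat this strictly as a driver-installation bridge, not a permanent configuration.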
Hi, Is it possible to manually adjust the file /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml and apply this update to the Kubernetes cluster? I tried to adjust it and afterwards ran: sudo systemctl daemon-reload && sudo systemctl restart kubelet-master. But when I describe the kube-apiserver pod, I see that the adjustments are not applied. Anibal
What is Karbon?
Nutanix Karbon is a curated turnkey offering that provides simplified provisioning and operations of Kubernetes clusters. Kubernetes is an open source container orchestration system for deploying and managing container-based applications. The Karbon web console simplifies the deployment and management of Kubernetes clusters with a simple GUI and built-in event monitoring tools. There are built-in add-ons such as Kibana and Prometheus for parsing logs and monitoring alert-triggering mechanisms in your cluster.

Using Karbon you can:
- Deploy Kubernetes clusters.
- Use Nutanix Volumes and Nutanix Files storage for your applications.
- Manage Kubernetes cluster resources.
- Upgrade your Kubernetes deployment.
- Ensure high availability.

To set up your Karbon environment, perform the following tasks:
- Enable Karbon through Prism Central; see Enabling Karbon
- Download an image; see OS Images
- Create clusters; see Creating a Cluster
- Download the kubeconfig; see Downloading the Kubeconfig
- Configure
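After the kubeconfig download step, verifying the cluster is the standard kubectl routine. A minimal sketch; the kubeconfig path is an example of wherever your browser saved the file:

```shell
# Point kubectl at the kubeconfig downloaded from the Karbon console
export KUBECONFIG="$HOME/Downloads/mycluster-kubectl.cfg"

# Masters and workers should report Ready
kubectl get nodes

# System add-ons (including the Nutanix CSI pods) should be Running
kubectl get pods -n kube-system
```

Note that Karbon-issued kubeconfigs carry a short-lived token, so this file needs to be re-downloaded periodically.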
Let's say you're running a huge infrastructure of 10 pretty big SQL VMs and have just shifted your infrastructure to Nutanix. You may be bewildered and confused about the best practices for SQL on Nutanix and how you can improve performance. For most SQL Server databases, nothing needs to be done to run successfully on Nutanix. SQL servers with small DBs are normal and fine, but what about larger workloads? Do we have a best practices guide that can help us improve performance? Yes, we have some general recommendations for running SQL workloads. Try giving the following Knowledge Base articles a read to understand the configuration and recommended settings to improve performance of SQL workloads: KB-3532, KB-1833.

Want to know the best practices for different SQL workloads? Try giving the following solution documents a coffee read:
- Microsoft SQL Server
- MySQL on Nutanix
- PostgreSQL
- HammerDB on SQL

You can try giving the following blog a read which s
Trying the Veeam AHV appliance, I had a crash while taking a backup. Now I am left with orphaned snapshots on a PAID ACCOUNT, and Nutanix passes the ball to Veeam and Veeam back to Nutanix, so here I am. The alert reads: "Protection Domain DP-QC-3 has 3 aged third-party backup snapshot(s) and may unnecessarily consume storage space in the cluster." I have a multitude of these and want to manually remove those snapshots. Is there a process for this?
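The usual manual cleanup for stale third-party snapshots goes through nCLI. A sketch from a CVM; the sub-command names are as I recall them from Nutanix KB articles on aged third-party snapshots, so verify them with `ncli pd` help, and confirm with support before deleting anything the backup product might still reference:

```shell
# List the snapshots held by the protection domain named in the alert
ncli pd ls-snaps name=DP-QC-3

# Remove a specific snapshot by the ID shown in the listing (placeholder ID)
ncli pd rm-snap name=DP-QC-3 snap-ids=<snapshot-id>
```

Space is reclaimed by the background curator scans after the removal, so the usage drop is not instant.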
Hey everyone. So, I am relatively new to the whole Nutanix game. I'm curious what everyone's suggestion is for CVM allocation. We had Nutanix do our original provisioning, but the allocation the installer set up seems a bit excessive: 10 cores and 32 GB RAM per CVM. Any thoughts?[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2333i06E549D1B99C0D7A.png[/img]
Hi, We have deployed a couple of guest VMs running Windows Server 2019 Standard Edition on our 6-node cluster. Both guests are configured with 1 vCPU with 2 cores, along with 16 GB and 32 GB of memory respectively. We have procured a standard paper license for the OS. The hypervisors are running AHV version 20170830.337 and AOS 184.108.40.206 LTS. Is there a specific licensing model for Windows Server OS running on AHV? How are the Windows licenses determined? Kindly help.
Does AHV support MSCS? I am aware of the two-node Windows cluster with a witness VM mentioned below: [url=https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v58:wc-cluster-two-node-c.html]https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v58:wc-cluster-two-node-c.html[/url] But I just want to confirm whether it supports MSCS with shared storage, including a quorum disk.
I have a large number of VMs created with the boot firmware set to EFI. As I understand it, AHV cannot convert these "out of the box". The options I have been told about include the following:
[list]
[*]Use VMware Converter to perform a virtual-to-virtual VM conversion to change the boot firmware back to BIOS
[*]Use an import tool to bring the VMDK files over and run a CLI command to accept the UEFI setup
[/list]
Does this sound right? Is there an easier way? Can new VMs under AHV be created with UEFI?
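On the last question: the CLI command alluded to in the second option is the aCLI UEFI flag. A sketch from a CVM; VM names and sizes are examples, and availability of the flag depends on the AOS/AHV release:

```shell
# Create a brand-new VM that boots with UEFI firmware
acli vm.create win2019-uefi memory=8G num_vcpus=4 uefi_boot=true

# Or flip an imported VM to UEFI boot after its disks are attached
acli vm.update myvm uefi_boot=true
```

Note that `uefi_boot=true` only sets the firmware type; the guest disk must already contain an EFI system partition for the VM to actually boot.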