License-Free Virtualization for Your Enterprise
Hello everybody, let's assume we have one Nutanix block with 3 nodes. Each node has only 1 CPU socket with 10 cores. A VM can only run on one node. So, in this case, wouldn't it make more sense to give a VM with 4 CPUs 1 vCPU with 4 cores, instead of 4 vCPUs with 1 core each, since each node has only 1 CPU socket? Or does Nutanix always recommend using only 1 core per vCPU, no matter how many CPU sockets are available in a node? Best regards, Didi7
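For reference, both layouts from the question can be set from a CVM with acli. This is a sketch only: "myvm" is a placeholder VM name, and the flags assume a reasonably recent AOS release.

```shell
# Option A: 4 sockets x 1 core each (4 vCPUs, 1 core per vCPU)
acli vm.update myvm num_vcpus=4 num_cores_per_vcpu=1

# Option B: 1 socket x 4 cores (1 vCPU, 4 cores per vCPU) --
# sometimes preferred when guest software is licensed per socket
acli vm.update myvm num_vcpus=1 num_cores_per_vcpu=4
```

The VM must be powered off before changing the cores-per-vCPU topology.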
I upgraded my ESXi hosts using 1-click upgrade to VMware-VMvisor-Installer-6.7.0.update03-15160138.x86_64-DellEMC_Customized-A04 with no issues. Trying to apply patch ESXi670-202004002.zip using 1-click upgrade, I get the following error: Upgrade bundle is not compatible with current VIBs installed in hypervisor. [DependencyError] VIB QLC_bootbank_qedi_188.8.131.52-1OEM.6184.108.40.20669922 requires qedentv_ver = X.11.15.0, but the requirement cannot be satisfied within the ImageProfile. I am able to successfully patch using esxcli and Update Manager. I updated the image profile to DellEMC-ESXi-6.7U3-15160138-A04 on the hosts thinking that might resolve the issue, but this did not work. Does anyone have any suggestions to get 1-click upgrade to function with 6.7U3?
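One manual workaround others have used for this class of DependencyError is removing the conflicting vendor VIB before applying the bundle. This is a sketch, not a vendor-sanctioned fix: confirm the exact VIB name on your host first, make sure the host is in maintenance mode, and the datastore path is a placeholder.

```shell
# Find the conflicting QLogic VIB (name varies by OEM image)
esxcli software vib list | grep qedi

# Remove it, then apply the patch bundle from a datastore path
esxcli software vib remove -n qedi
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-202004002.zip
```

Removing a driver VIB can affect attached hardware, so only do this if the adapter it serves is unused or a replacement driver ships in the patch bundle.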
Physical Windows Server to Virtual conversion (P2V for AHV)

Depending on the number of physical servers in your environment, you might want to convert them to virtual servers and migrate over to Nutanix AHV. The Nutanix Image Service supports VHD, VMDK, vDisk, IMG, and qcow formats for disk images.

Summary:
1) Prepare the physical Windows server - install the VirtIO drivers from: Nutanix Portal > Downloads > Tools & Firmware > VirtIO
2) Download Disk2VHD from the Microsoft site (https://docs.microsoft.com/en-us/sysinternals/downloads/disk2vhd)
3) Convert the disk or disks (make sure to select "bootable" for the appropriate disk)
4) Upload the converted disk to the Nutanix cluster via the Image Service

Uploading the VHD to the cluster via the Image Service:
1) You can upload via the browser, or
2) The Image Service also allows uploading files from an HTTP source (an existing or new web server is required)

Once the converted VHD is uploaded to the Image Service, we can now create a new VM.
Create a New VM: Create the new
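The HTTP-source upload in step 2 can also be driven from a CVM with acli. A minimal sketch, assuming you host the converted VHD on your own web server - the image name, URL, and container name below are placeholders:

```shell
# Pull the converted VHD into the Image Service as a disk image
acli image.create win2016-p2v \
  source_url=http://webserver.example.com/images/server01.vhd \
  container=default-container image_type=kDiskImage
```

The new VM can then clone its boot disk from this image instead of re-uploading the file per VM.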
Ok, yesterday one of our many Nutanix clusters was upgraded to 5.15 LTS and I started to create a new Windows Server 2016 template using a UEFI configuration. Now I am stuck at a display resolution of 1280x1024 and changing it is not possible, because the option is grayed out. Nutanix Guest Tools from 5.15 LTS and VirtIO 1.1.5 are installed. Is this a known bug, and is there a workaround? What about 5.16 STS, is it fixed there? Thanks for any reply. Regards, Didi7
I know there’s another topic like this, but it is 3 years old. Is there an easy way to put a single ESX host into maintenance mode? Not to compare products, but with Cisco HyperFlex you right-click on the host and there’s a “HyperFlex maintenance mode” option. This does what needs to be done to the CVM without having to SSH into anything. I was just wondering if the process has improved in 3 years.
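For the manual route, the rough sequence on a Nutanix/ESXi node looks like the sketch below. This assumes cluster resiliency is OK and the host's guest VMs have already been vMotioned off or powered down; commands run over SSH on the CVM and the ESXi host respectively.

```shell
# 1) On the node's CVM: shut the CVM down cleanly
cvm_shutdown -P now

# 2) On the ESXi host (or via vCenter): enter maintenance mode
esxcli system maintenanceMode set --enable true
```

Reversing the order matters on the way back out: exit maintenance mode first, then power the CVM on and wait for cluster services before touching the next host.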
We have a dozen or so VMs/appliances that do not support AHV and thus cannot be migrated over to our new AHV cluster, which runs 95% of our servers. We would like to expose a container/Volume Group from Nutanix to the VMware hosts as NFS datastores to build these VMs/appliances on. Our goal is to then replicate that container/Volume Group to our DR Nutanix cluster, and connect our DR VMware cluster to it so the same VMs/appliances can be used in case of failover. We have gotten conflicting information from all parties and wanted to know if anyone is already doing this and what your results are. These VMs/appliances would have fairly high IO.
The technical piece below found its way to us through our partner channels. Installation instructions for Red Hat OpenShift on Nutanix are detailed in the documentation below. Enjoy, and as always feel free to provide us with feedback. User Provisioned Installation of Red Hat OpenShift 4.3 on Nutanix AHV 5.15. This manual was created during a proof-of-concept environment using Nutanix AHV 5.15, the KVM-based hypervisor of Nutanix, with OpenShift 4.3 in combination with the Nutanix CSI driver. The Nutanix CSI driver provides scalable, persistent storage for stateful applications using Nutanix Files and Nutanix Volumes. Please note: at the time of writing, Nutanix AHV in combination with OpenShift is supported by Nutanix, but not certified by Red Hat. If certification is required, clients are advised to use any of the other hypervisors supported by Nutanix. The installation steps followed are documented in the IBM Cloud Architecture & Solution Engineering repository guide. The PoC envi
Hello mates, We have 2x NX3350 and an Arista switch handling roughly 100 VMs. The workloads are:
- 20x VMs with a heavy disk workload, used mainly for reporting and analysis. The R/W ratio is roughly 50/50. They generated a 20K IOPS workload when they lived on SAN storage with RAID10 and a tiny flash tier, and average latency was always below 5 ms.
- 10x VMs used for application virtualization
- 70x VMs used for desktop virtualization
Now that we have migrated to Nutanix, we see only 4K cluster IOPS and 20 ms latency, which does not seem very good to us. Trying to resolve the issue, we enabled inline compression and increased CVM memory up to 20 GB. We also tried to change the tier sequential write priority. Unfortunately, this did not help. ncc, cluster status, and Prism health all claim that everything is ok. Before we migrated our environment, we ran the diagnostics VM and the results were roughly 100K IOPS for read. Here is the current configuration of the cluster: NOS Version: 4
I am migrating from ESX 6.0 to AHV 5.1.2. I have successfully mounted the container from VMware and migrated a disk. When trying to start the image from the vmdk file in the Prism console, I get the following error: "Booting from Hard Disk..." in the boot DOS screen. Regards, Onder Avcu
Hi, We have deployed a couple of guest VMs with the Windows 2019 Standard Edition OS on our 6-node cluster. Both guests are configured with 1 vCPU with 2 cores, along with 16 GB and 32 GB of memory respectively. We have procured a standard paper license for the OS. The hypervisors are running AHV version 20170830.337 and AOS 220.127.116.11 LTS. Is there any specific licensing model for Windows Server OS running on AHV? How are the Windows licenses determined? Kindly help.
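The definitive answer has to come from Microsoft's licensing terms, but the commonly cited core-based rules can be sketched as a back-of-envelope calculation. The assumptions (labeled in the comments) are: every physical core on each host the VMs can run on must be licensed, with minimums of 8 cores per processor and 16 per host, and one full set of Standard licenses covers 2 Windows VMs per host.

```shell
# Assumed inputs -- replace with your real cluster figures
hosts=6            # nodes the Windows VMs can run on
cores_per_host=10  # 1 socket x 10 cores (below the 16-core host minimum)
vms=2              # Windows Server guests

# 16-core-per-host minimum applies even to smaller hosts
licensed_cores_per_host=$(( cores_per_host > 16 ? cores_per_host : 16 ))

# Standard edition: one full license set per host covers 2 VMs
sets=$(( (vms + 1) / 2 ))

total_cores=$(( hosts * licensed_cores_per_host * sets ))
echo "$total_cores"   # 96 core licenses for this example
```

Note vCPU/core topology inside the guest does not change the count; it is the physical cores of the hosts that matter. Host-affinity rules can shrink the set of hosts that must be licensed.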
Let's say you're running a sizeable infrastructure of 10 fairly big SQL VMs and have just shifted your infrastructure to Nutanix. You may be unsure about the best practices for SQL on Nutanix and how you can improve performance. For most SQL Server databases, nothing needs to be done to run successfully on Nutanix. SQL Servers with small DBs are fine as they are, but what about larger workloads? Do we have a best practices guide that can help improve performance? Yes, we have some general recommendations for running SQL workloads. Try giving the following Knowledge Base articles a read to understand the configuration and recommended settings that improve performance of SQL workloads: KB-3532, KB-1833. For best practices for different SQL workloads, try giving the following solution documents a read: Microsoft SQL Server, MySQL on Nutanix, PostgreSQL, HammerDB on SQL. You can also try giving the following blog a read, which s
Hey Everyone. So, I am relatively new to the whole Nutanix game. I'm curious what everyone's suggestion is for CVM allocation. We had Nutanix do our original provisioning, but the sizing the installer set it up with seems a bit excessive: 10 cores and 32 GB RAM per CVM. Any thoughts?[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2333i06E549D1B99C0D7A.png[/img]
Hello All, I am new to Nutanix and AHV; however, I have a large amount of experience with Hyper-V and SCVMM. One of the things I am trying to do is automate the creation of VMs. My end goal is to take an "image" base drive I have uploaded to Images, containing an install of my Windows OS, deploy a new VM from that image, have a static IP assigned to the new VM automatically from a pool of IPs, join my domain, and run a few PowerShell scripts in the VM. I would have done all this with an IP pool and a template in SCVMM, and as much as I have read about templates in AHV, I am not seeing anything like it. Am I missing something here? Do we have a resource on how to configure something like this?
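A rough equivalent of the SCVMM template flow on AHV is cloning a VM from an Image Service image, then handling IP, domain join, and first-boot scripts through Sysprep guest customization (attached via Prism's "Custom Script" option when creating the VM). The acli sketch below assumes placeholder names for the VM, image, and network:

```shell
# Create the VM shell, clone its disk from the uploaded image,
# attach a NIC, and power it on (run on a CVM)
acli vm.create webtest01 num_vcpus=2 memory=4G
acli vm.disk_create webtest01 clone_from_image=win2019-base
acli vm.nic_create webtest01 network=vlan100
acli vm.on webtest01
```

AHV managed networks can also hand out IPs from a pool via its built-in IPAM, which covers the "IP pool" half of the SCVMM workflow; the unattend.xml handles domain join and post-install PowerShell.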
I have a large number of VMs created with the boot firmware set to EFI. As I understand it, AHV cannot convert these "out of the box". The options I have been told about include the following: [list] [*]Use VMware Converter to perform a virtual-to-virtual VM conversion to change the boot firmware back to BIOS [*]Use an import tool to bring the VMDK files over and run a CLI command to accept the UEFI setup[/list] Does this sound right? Is there an easier way? Can new VMs under AHV be created with UEFI?
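On the second option and the final question: AHV can boot UEFI guests via a flag that older Prism UIs simply don't expose, so the "CLI command to accept the UEFI setup" is likely just flipping that flag. A sketch, with placeholder VM names:

```shell
# Keep an imported VM's UEFI firmware (run on a CVM, VM powered off)
acli vm.update myvm uefi_boot=true

# New VMs can be created with UEFI the same way
acli vm.create myuefivm num_vcpus=2 memory=4G uefi_boot=true
```

With this set, the imported VMDKs don't need a BIOS conversion pass at all.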
You might have noticed 100% CVM memory usage in Prism Element, or high CVM memory usage alarms in vCenter. In vSphere 6.5 and later, this issue is seen with virtual machines configured with PCI pass-through devices. Symptoms: in the vSphere Web Client, CVMs trigger the "Virtual machine memory usage" alarm, and the VM's Memory Usage/Active performance metric is continually reported as 100%. In the Prism web console, CVM Memory Usage % is permanently shown as 100%. There is nothing to worry about, as this is just a cosmetic issue and has no effect on CVM performance. The NCC health check report will not include any errors or warnings regarding memory usage. Similarly, no alert will be generated in Prism Element. For a detailed look at why ESXi displays CVM memory usage as 100%, and the solution, see "CVM Memory is 100% in Prism and Web Client with ESXi 6.5 or later".
Does AHV support MSCS? I am aware of the two-node Windows cluster with a witness VM mentioned below: [url=https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v58:wc-cluster-two-node-c.html]https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v58:wc-cluster-two-node-c.html[/url] But I just want to confirm whether it supports MSCS with shared storage, including a quorum disk?
Hi Team, We are running a few file servers (Windows 2016 machines) on Nutanix + VMware 6.5. For example, I have a 5 TB VMDK attached to a VM which holds a lot of files (being a file server). When we check the size of this disk in VMware, we see around 4 TB consumed (similar figures in Prism as well). However, within Windows, the actual space consumed is only 1 TB. The question is: how do we reclaim this space so the numbers seen by the OS match what VMware believes the used space is? (I have seen some posts where users mention using sdelete etc., but that isn't really a scalable solution.) Thoughts? Thanks. Ravi
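For completeness, the two in-guest reclaim approaches usually mentioned look like the sketch below; both run inside the Windows guest, and the drive letter is a placeholder. Whether the freed blocks actually return to the datastore depends on the disk being thin-provisioned and the vSphere/guest versions passing UNMAP through.

```shell
# Zero out free space so the storage layer can reclaim it (Sysinternals SDelete)
sdelete64.exe -z E:

# Or, on Server 2012 R2 and later, retrim the volume to send UNMAPs
powershell -Command "Optimize-Volume -DriveLetter E -ReTrim -Verbose"
```

The ReTrim route is the more scalable of the two, since it can be scheduled across many file servers without the full-disk write that sdelete -z performs.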
I have a problem with a CVM that won’t boot. This is on a semi-retired production cluster (not CE) that has no workloads running on it. I found the console output in /tmp/NTNX.serial.out.0 and I can see it trying to enable RAID devices, scan for a UUID marker and find 2 of them, then abort and unload the mpt3sas kernel module before trying again in 5 seconds. This repeats a few times before the hypervisor resets it and it starts booting again. The most relevant sections of the log (copious kernel taint messages removed) are:

[    9.543553] sd 2:0:3:0: [sdd] Attached SCSI disk
svmboot: === SVMBOOT
mdadm main: failed to get exclusive lock on mapfile
[    9.790075] md: md127 stopped.
mdadm: ignoring /dev/sdb3 as it reports /dev/sda3 as failed
[    9.794087] md/raid1:md127: active with 1 out of 2 mirrors
[    9.796034] md127: detected capacity change from 0 to 42915069952
mdadm: /dev/md/phoenix:2 has been started with 1 drive (out of 2).
[    9.808602] md: md126 stopped.
[    9.813330] md/raid1:md126: