License-Free Virtualization for Your Enterprise
My current Nutanix cluster version is 5.18, and vCenter and ESXi are on 7.0 U1. I would like to upgrade vCenter and ESXi to 7.0 U2. Which Nutanix cluster version supports both vCenter and ESXi at that level? Also, the other component versions are: Prism Central pc.2020.9.0.1, NCC 126.96.36.199, LCM 188.8.131.52. Do Prism and these other components need to be upgraded as well?
Hi there community, I’m hoping that someone can give me some guidance. We recently acquired a facility with some Nutanix clusters deployed, and this is my first time working with Nutanix. I have access to the vSphere hosts but no idea about the Nutanix side of things. Trying to activate the support portal with any and all serial numbers I can find just gives me an invalid serial number message, which means I can’t open a support ticket to ask about the serial numbers. I see each of the host machines running a Nutanix Controller VM, but I’m guessing there is a central management console tied to the hardware (NX?). A spreadsheet from the seller lists some hardware/enclosure serial numbers, but under serial number it just says to refer to Prism. I’m trying to hunt down someone at the seller to provide more details and make sure things get transferred, but is there anything I can do or look at in the meantime to figure this out?
What is the status of Credential Guard on Nutanix VMs? We are running AOS 5.20. I have created a new VM with UEFI, Secure Boot, and Credential Guard enabled, but I can’t get it to work. Credential Guard is enabled via GPO, but it still will not run. When I look at Device Security, it says “Standard hardware security not supported”, and no compatible TPM is shown in tpm.msc. The OS I’m testing on is Microsoft Windows Server 2019.
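For what it’s worth: AOS 5.20 does not expose a virtual TPM to AHV guests, so tpm.msc showing no compatible TPM is expected. Credential Guard on AHV is instead enabled with a per-VM aCLI flag. A hedged sketch, assuming the flag name as I recall it from the AHV admin guide (verify with `acli vm.update help` on your release; `server2019-test` is a placeholder VM name):

```shell
# Hedged sketch -- the VM must be powered off for the change to apply,
# and it still needs UEFI + Secure Boot as already configured.
acli vm.off server2019-test
acli vm.update server2019-test windows_credential_guard=true
acli vm.on server2019-test
```

After the power cycle, re-check `msinfo32` for "Virtualization-based security: Running" before expecting Device Security to change.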
We have a UTM with 6 interfaces that go to separate subnets. Each subnet has a ToR switch, and the UTM handles the routing. We are installing a new 4-node Nutanix block and virtualizing physical servers. Assuming we have 6 physical servers and each server is on a separate subnet, what would our network diagram look like once we include the Nutanix block? Would we need another switch carrying the 6 subnets as VLANs, i.e. one connection from the block, with traffic then sent out through the VLAN port to the ToR switch in the destination subnet?
Hello, when I use the OpenStack integration and try to use VNC via OpenStack, I get a problem in noVNC like the one below.

Command line to enable VNC on OpenStack:
/usr/bin/prism_vnc_proxy --bind_address=0.0.0.0 --bind_port=6080 --prism_hostname=[My-IP] --prism_username=[My-Username] --prism_password=[My-Password] --docroot=/usr/share/nutanix_openstack/vnc/static &

/var/log/prism_vnc_proxy.out:
INFO:nutanix_openstack.vnc.wsgi_prism_websocket_proxy:Authenticating with Prism at [My_Cluster_IP]
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
WARNING:py.warnings:/var/lib/kolla/venv/lib/python2.7/site-packages/urllib3/connectionpool.py:858: Insecu
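Note that the repeated InsecureRequestWarning lines only mean the proxy authenticates to Prism over HTTPS without verifying the certificate; they are noise rather than the fault itself. As a hedged illustration (plain urllib3, not the Nutanix proxy code), this is how such warnings can be silenced once the risk is understood; the proper fix is a trusted certificate on Prism:

```python
import warnings
import urllib3

# Silence urllib3's InsecureRequestWarning (stopgap for lab setups; the
# supported fix is to make certificate verification succeed instead).
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

with warnings.catch_warnings(record=True) as caught:
    # Simulate the warning urllib3 raises on an unverified HTTPS request.
    warnings.warn("Unverified HTTPS request is being made",
                  urllib3.exceptions.InsecureRequestWarning)

print(len(caught))  # 0 -- the warning is filtered out and never logged
```

If the log only ever shows these warnings and noVNC still fails, the real error is likely further down in the truncated output.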
I have a problem with a CVM that won’t boot. This is on a semi-retired production cluster (not CE) that has no workloads running on it. I found the console output in /tmp/NTNX.serial.out.0, and I can see it trying to enable RAID devices, scan for a UUID marker and find 2 of them, then abort and unload the mpt3sas kernel module before trying again in 5 seconds. This repeats a few times before the hypervisor resets it and it starts booting again. The most relevant sections of the log (copious kernel taint messages removed) are:

[    9.543553] sd 2:0:3:0: [sdd] Attached SCSI disk
svmboot: === SVMBOOT
mdadm main: failed to get exclusive lock on mapfile
[    9.790075] md: md127 stopped.
mdadm: ignoring /dev/sdb3 as it reports /dev/sda3 as failed
[    9.794087] md/raid1:md127: active with 1 out of 2 mirrors
[    9.796034] md127: detected capacity change from 0 to 42915069952
mdadm: /dev/md/phoenix:2 has been started with 1 drive (out of 2).
[    9.808602] md: md126 stopped.
[    9.813330] md/raid1:md126:
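The key line is mdadm refusing /dev/sdb3 because it reports its mirror partner as failed, leaving md127 running on one drive. A diagnostic sketch using standard mdadm commands, with device names taken from the log above (on a CVM boot volume, involve Nutanix Support before writing anything):

```shell
# Compare the two members' metadata: event counts and each member's view
# of the other explain why assembly starts with only 1 of 2 drives.
mdadm --examine /dev/sda3 /dev/sdb3

# Show which arrays are currently degraded.
cat /proc/mdstat

# If sdb3 is merely stale (lower event count), re-adding it lets the
# mirror resync instead of being ignored on every boot attempt.
mdadm --manage /dev/md127 --add /dev/sdb3
```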
Hi folks, I want to virtualize 2 SQL servers running on Cisco UCS hardware onto Nutanix. I found references to Xtract for DB, but it seems to be a legacy tool, as I am not able to find much information on it. Nutanix Move seems to be used for VM migration. On Nutanix I have ESXi, so I know we can use VMware Converter, but I would like to know if there is any solution from Nutanix for this. Can someone please guide me on the ways I can achieve this P2V of the SQL servers?
When I searched the documentation, it said that SCSI-3 persistent reservations (SCSI-3 PR) are supported when attaching a volume from AOS 5.17. My questions:
1. In versions prior to 5.17, when allocating a shared volume for a VM redundancy configuration, does it not work if you use direct attach instead of the iSCSI initiator?
2. Considering I/O speed and stability, which do you think is better: mounting a volume through the iSCSI initiator, or connecting it to a VM as a direct attach?
Hi - I’m having an issue with a new installation of Nutanix on AHV (single-node cluster) that I’m hoping someone here can help with. Basically, using Prism, when I attempt to create a new VM the process fails with the error “VM with ID ‘*********’ was not found”. However, if I attempt to add a VM without a new disk, it successfully creates the VM, but then I can’t add a disk to it; that fails with “Operation Failed: InternalException”. I’ve been able to upload my ISOs for installation etc., and of course the CVM is seemingly working fine. Any ideas what could be wrong? Many thanks.
Is there the possibility of using the microVM concept with Nutanix? Are solutions like Firecracker (https://firecracker-microvm.github.io/) possible with Nutanix? As I understand it, nested virtualization is not possible with Nutanix (https://next.nutanix.com/server-virtualization-27/nested-virtualization-33423), so is there any provision for a feature that enables the use of microVMs with Nutanix?
Virtual Machine High Availability (VMHA) ensures that VMs restart on another AHV host in the cluster if a host fails. VMHA considers RAM when calculating available resources throughout the cluster for starting VMs. VMHA respects affinity and anti-affinity rules. For example, with VM-host affinity rules, VMHA does not start a VM pinned to AHV hosts 1 and 2 on another host when those two are down unless the affinity rule specifies an alternate host. There are two VM high availability modes:
- Default: This mode requires no configuration and is included by default when installing an AHV-based Nutanix cluster. When an AHV host becomes unavailable, the VMs that were running on the failed AHV host restart on the remaining hosts, depending on the available resources. If the remaining hosts do not have sufficient resources, some of the failed VMs may not restart.
- Guarantee: This non-default configuration reserves space throughout the AHV hosts in the cluster to guarantee that all VMs can restart.
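The difference between the two modes can be sketched in a few lines (illustrative only, not Nutanix code): Guarantee mode reserves enough capacity that a check like the one below always passes for any single host failure, while Default mode simply attempts restarts and may fail it.

```python
# Illustrative sketch: could the VMs of any single failed host be
# restarted on the remaining hosts' free RAM? This is the property
# Guarantee mode reserves capacity to uphold.
def survives_one_host_failure(hosts):
    """hosts: {name: {"free_gb": int, "vm_ram_gb": [int, ...]}}"""
    for failed in hosts:
        free = {h: s["free_gb"] for h, s in hosts.items() if h != failed}
        # First-fit decreasing placement of the failed host's VMs.
        for vm in sorted(hosts[failed]["vm_ram_gb"], reverse=True):
            target = next((h for h in free if free[h] >= vm), None)
            if target is None:
                return False
            free[target] -= vm
    return True

cluster = {
    "ahv-1": {"free_gb": 64, "vm_ram_gb": [32, 16]},
    "ahv-2": {"free_gb": 64, "vm_ram_gb": [48]},
    "ahv-3": {"free_gb": 64, "vm_ram_gb": [8, 8]},
}
print(survives_one_host_failure(cluster))  # True for this layout
```

Real VMHA placement also has to honor affinity and anti-affinity rules, which this sketch ignores.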
I am looking for a patching product that ties into Nutanix and can take a snapshot before patching; we want to automate this process and get away from manually taking snapshots prior to deploying patches. We do have protection domains set up, but given their timing intervals it is not ideal to rely on those pre-deployment, as they can be hours behind the actual patching deployments. If anyone is using a product that has this capability, please let me know, or tell me if you have a better way to accomplish this within Nutanix.
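One option is to have the patching pipeline call the Prism Element REST API itself just before deploying. A hedged sketch that builds the request body for the v2.0 snapshot call; the endpoint and field names reflect my understanding of the v2 API, so confirm them in your cluster’s REST API Explorer first:

```python
import json

# Hedged sketch: construct the body for Prism Element's v2.0 snapshot
# endpoint so a pre-patch snapshot can be scripted. Field names are my
# reading of the v2 API -- verify in the REST API Explorer.
def build_snapshot_payload(vm_uuid, snapshot_name):
    return {"snapshot_specs": [{"vm_uuid": vm_uuid,
                                "snapshot_name": snapshot_name}]}

payload = build_snapshot_payload("f7a9...-uuid", "pre-patch-2024-05-01")
print(json.dumps(payload))

# The actual call would look something like:
# requests.post("https://<prism>:9440/api/nutanix/v2.0/snapshots",
#               json=payload, auth=(user, password), verify=True)
```

Running this from the patching tool’s pre-deploy hook avoids depending on protection domain schedules entirely.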
So I use the following script to create a VM. I actually have a small batch file that opens PuTTY, logs into a CVM, prompts me for the $name, and then runs the script below. The image is one I made, then sysprepped and uploaded as a disk image. Everything works great, except that when I then open the console it goes through the Windows setup. I have an unattend file, but I am unsure how to use it with my script. If I go through the GUI to create a VM, I just point it to the unattend file in our file share. Is there a way to attach the sysprep file? I thought it might be a parameter of vm.disk_create, but I am not clever enough to figure that part out and thought maybe someone else smarter could help. First-world problems, I know, but it could save me the 30 extra seconds of typing the admin password and setting the keyboard settings, etc.

#creates a vm
acli vm.create $name memory=$mem num_cores_per_vcpu=$core num_vcpus=$vcpu uefi_boot=TRUE &&
#Create C:
acli vm.disk_create $name c
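One approach, as a hedged sketch: as far as I know there is no unattend parameter on vm.disk_create itself, but you can wrap unattend.xml in an ISO, upload it as an image, and attach it as a CD-ROM so Windows Setup finds it at first boot (the same thing the GUI’s custom-script option does behind the scenes). The image names `win2019_sysprepped_image` and `unattend_iso` are placeholders; confirm the arguments with `acli vm.disk_create help` on your AOS version.

```shell
# Hedged sketch: create the VM, attach the sysprepped system disk, then
# attach an ISO containing unattend.xml as a CD-ROM for Windows Setup.
acli vm.create $name memory=$mem num_cores_per_vcpu=$core num_vcpus=$vcpu uefi_boot=true
acli vm.disk_create $name clone_from_image=win2019_sysprepped_image   # system disk
acli vm.disk_create $name cdrom=true clone_from_image=unattend_iso    # unattend.xml ISO
acli vm.on $name
```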
The technical piece below found its way to us through our partner channels. Installation instructions for Red Hat OpenShift on Nutanix are detailed in the documentation below. Enjoy, and as always, feel free to provide us with feedback.

User Provisioned Installation of Red Hat OpenShift 4.3 on Nutanix AHV 5.15

This manual was created during a proof-of-concept engagement using Nutanix AHV 5.15, the KVM-based hypervisor of Nutanix, with OpenShift 4.3 in combination with the Nutanix CSI driver. The Nutanix CSI driver provides scalable, persistent storage for stateful applications using Nutanix Files and Nutanix Volumes. Please note: at the time of writing, Nutanix AHV in combination with OpenShift is supported by Nutanix, but not certified by Red Hat. If certification is required, clients are advised to use any of the other hypervisors supported by Nutanix. The installation steps followed are documented in the IBM Cloud Architecture & Solution Engineering repository guide. The PoC envi