Hi. I’m looking for experiences migrating Windows user VMs from the Acropolis hypervisor (AHV) to VMware ESXi. I tried and followed several KBs and articles, but none were successful. After powering on a Windows VM on ESXi, it immediately hits the famous “blue screen” and doesn’t recognize the disk. Articles I followed:

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA032000000PMcKCAW
https://portal.nutanix.com/page/documents/kbs/details?targetId=kA03200000098T7CAI
https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-Prism-v5_19:mul-vm-export-as-ova-pc-t.html
https://next.nutanix.com/move-application-migration-19/want-to-export-a-vm-from-ahv-here-s-how-37275

Has anyone experienced this? Can you help me? Best regards
Hello, I’m trying to figure out why we are unable to log in to Prism Central; the message below appears when trying to log in (the browser dev tools show “Failed to load resource”). I have checked Apache and it is not running, but I’m not sure whether the issue has anything to do with httpd. Below is the status of httpd on the PCVM:

nutanix@NTNX-1-A-PCVM:~$ sudo service httpd status
Redirecting to /bin/systemctl status httpd.service
● httpd.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)
nutanix@NTNX-A-PCVM:~$ sudo service httpd start
Redirecting to /bin/systemctl start httpd.service
Failed to start httpd.service: Unit is masked.

After that I checked whether any service fails to start or whether there are any FATAL logs:

for i in `svmips`; do echo "CVM: $i"; ssh $i "ls -ltr /home/nutanix/data/logs/*.FATAL"; done

/home/nutanix/data/logs/magneto.FATAL
pollux.ntnx-10-0-22-199-a-pcvm.nutanix.log.FATAL.20220510-022710.119479
/home/nutanix/data/logs/lazan.FATAL
/home/nutanix/data/logs/uhura.FATAL
/home/nutanix/data/logs/catalog.FATAL
/h
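For illustration, the FATAL-log sweep from that one-liner can be sketched locally (single node, no ssh; the directory path is the standard one from the post, everything else is illustrative):

```python
import glob
import os

def find_fatal_logs(logs_dir="/home/nutanix/data/logs"):
    """Return the *.FATAL marker files in logs_dir, oldest first,
    mirroring the `ls -ltr .../*.FATAL` check from the post."""
    paths = glob.glob(os.path.join(logs_dir, "*.FATAL"))
    return sorted(paths, key=os.path.getmtime)

if __name__ == "__main__":
    for path in find_fatal_logs():
        print(path)
```

On a real deployment you would still wrap this (or the original one-liner) in the ssh loop over `svmips` so every CVM/PCVM is covered.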
I am provisioning a Windows clone VM using the API. Can you help with how to inject the sysprep file in vm_customization_config?
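A hedged sketch of how this request body can look against the v2 REST API. The field names follow the v2 VM-clone schema as I understand it, and the idea of passing the unattend.xml content as `userdata` with `fresh_install` false is an assumption — verify both against the REST API Explorer on your AOS version before using:

```python
import json

def build_clone_payload(clone_name, unattend_xml):
    """Assemble a v2 /vms/{uuid}/clone request body that carries a
    sysprep answer file via vm_customization_config (assumed schema)."""
    return {
        "spec_list": [{"name": clone_name}],
        "vm_customization_config": {
            # Assumption: for Windows guests the unattend.xml content is
            # passed as 'userdata'; fresh_install=False keeps the existing
            # installation and just runs sysprep with this answer file.
            "userdata": unattend_xml,
            "fresh_install": False,
        },
    }

payload = build_clone_payload("win-clone-01", "<unattend>...</unattend>")
print(json.dumps(payload, indent=2))
```

You would then POST this JSON to https://&lt;prism&gt;:9440/PrismGateway/services/rest/v2.0/vms/&lt;vm_uuid&gt;/clone with your Prism credentials (endpoint path likewise to be confirmed in the API Explorer).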
As seen in the attached screenshot, Prism Central appears to have lost track of this “running” task. Updates to network security rules normally finish very quickly (I think I might have clicked the Apply button twice after editing a Flow security policy during on-site Flow training). Is this something only Nutanix Support can clear away?
Hi. We have two sites with fast links between them and one Nutanix cluster on each site. All the hosts are running the ESXi 6.7 hypervisor. We use Metro Availability, so we have one ESXi cluster stretched across both sites. Currently Prism Central runs on a VM located on hosts in the primary site, in a Metro Availability-enabled container/datastore. We are looking into scaling out our Prism Central installation from one node to three nodes. Ideally I was thinking we could spread the Prism Central nodes across the two sites, perhaps by having one node on site 1 and two nodes on site 2. However, the Prism Central scale-out manual states the following: “All scale out Prism Central VMs must run on the same cluster. For example, running two VMs in cluster_1 and one VM in cluster_2 is not supported.” Is this referring to one ESXi cluster or one Nutanix cluster? I just want to be sure. Thanks in advance.
On the CVM, the cs (cluster status) and cluster start commands produce the output shown in the next screenshot. Hypervisor: ESXi 6.7U3. AOS: 5.15.4. What's the problem?
Hi all, I’m trying to create additional admin accounts with LDAP authentication on Nutanix. I’ve got clusters up and running, got LDAP working, and the generic RBAC ‘VM Admin’/‘VM Read Only’ etc. accounts that I’m creating are all good. But I thought Nutanix had now added the ability to create a full cluster admin with AD authentication, rather than creating local ones on something like Prism Central? Or am I mistaken? (If someone has a way of doing this, it would be much appreciated.) Kind regards, SiW
I want to ask about replacing the hypervisor boot drive. Here is the machine. Nutanix node: NX-1065-G6. Default hypervisor boot drive: Micron 5100 MMTFDDAV240TBC. New hypervisor boot drive: WD Green 240GB 3D NAND M.2 SATA 2280 SSD. Question: will it be compatible and work properly if the default drive is replaced with the new hypervisor boot drive? Reference: Node Naming (NX-1065-G6) only gives the information “2 x M.2 Device 240GB”.
Hi! We are in the process of migrating our ESXi VMs to Nutanix AHV. Does anyone know whether it is a problem to migrate a Witness VM (currently used for two 2-node clusters) to AHV with Nutanix Move, or would it be better to install a new one on AHV and switch the clusters over to it?
Hello everyone! I just started playing around with the Nutanix Sizer and I am having issues creating a Citrix VDI workload. No matter what setting is selected, I am unable to save the changes. Above the webpage I get this error: “You can not add/edit advanced workload options for basic user.” Am I missing something obvious here?