Hey, we recently upgraded one of our clusters to AOS 5.01, and the process went flawlessly! After the upgrade I noticed some minor bugs which I would like to share:

1) Missing names: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2616i82890AFCEDDB58CB.png[/img] Minor, but worth mentioning.

2) Missing title: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2618i073B8D8311B00E3B.png[/img] Minor, but worth mentioning.

3) Missing export button. This is the main reason I'm posting (to prevent others from wasting their time on it); it took me quite a while to find this "invisible button". [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2619i018A0275429043E4.png[/img] With AOS 5.01, the (invisible) button is on the right side of the graph: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2620iEE0200D1B2B8CE66.png[/img]

If I come across more bugs I will post them here. Seba
Hey all! Just got my 3-node monster Nutanix box up and running and LOVE it! A quick theoretical question, though: I blew through my budget on this box, and would LOVE to have a DR setup somewhere else in the building (to combat fire, flood, earthquake, etc.). Could I take some spare hardware lying around and, as long as it meets the required specs, install the CE version and make that the DR target for my existing paid-for version of Nutanix? I've gotten mixed answers from various people, which is why I ask. Thanks in advance!
Hi, I am facing an issue with the Foundation VM-based process. Installation failed after Phoenix was installed, ending with a fatal error, and Foundation also reported that it failed to configure the IPMI IPs (I had manually configured the iDRAC IPs before starting Foundation). [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2127i3D11472F80496F3D.jpg[/img]
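One workaround sometimes used when Foundation fails at the IPMI step is to set the BMC LAN settings by hand with `ipmitool` from a machine that can reach the node, then re-run Foundation. This is only a sketch; the credentials, channel number (1 is typical for Dell iDRAC), and addresses below are placeholders you would need to adapt:

```shell
# Set the BMC (iDRAC/IPMI) LAN settings to static values on channel 1.
# <current-idrac-ip> and the credentials/addresses are placeholders.
ipmitool -I lanplus -H <current-idrac-ip> -U root -P <password> lan set 1 ipsrc static
ipmitool -I lanplus -H <current-idrac-ip> -U root -P <password> lan set 1 ipaddr 10.0.0.51
ipmitool -I lanplus -H <current-idrac-ip> -U root -P <password> lan set 1 netmask 255.255.255.0
ipmitool -I lanplus -H <current-idrac-ip> -U root -P <password> lan set 1 defgw ipaddr 10.0.0.1

# Verify the settings took effect:
ipmitool -I lanplus -H <current-idrac-ip> -U root -P <password> lan print 1
```

If the IPMI IPs are already correct, you can also tell Foundation to skip IPMI configuration for those nodes so it proceeds straight to imaging.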
Hello folks, the NX-1065 has only two 10G ports and two 1G ports, and I have only one pair of 10G switches and one pair of 1G switches. My preference would be to run the ESXi ports (mgmt, vMotion, NFS vmkernel) and the CVM ports (Prism mgmt, cluster mirror) on the 10G switches, configure the 10G switch uplink to another network from which I can reach ESXi mgmt and Prism, and connect the VM data ports to a separate 1G vSwitch. However, the network team told me the 10G switches can't uplink to the external network. So I'd like to know: can I move the ESXi mgmt port and CVM mgmt (VIP) to the 1G vSwitch, create additional vMotion and NFS vmkernel ports connected to the 10G vSwitch, and leave the CVM cluster mirror interface connected to the 10G vSwitch? Network design is the key thing here.
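The split described above can be sketched with standard-vSwitch `esxcli` commands. This assumes (hypothetically) that vmnic0/vmnic1 are the 1G NICs and vmnic2/vmnic3 the 10G NICs; all vSwitch, portgroup, and address names below are examples, not Nutanix defaults:

```shell
# 1G vSwitch carrying ESXi mgmt (and CVM mgmt portgroup on the same switch):
esxcli network vswitch standard add --vswitch-name=vSwitch-1G
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-1G --uplink-name=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-1G --portgroup-name=Management

# 10G vSwitch for vMotion, NFS vmkernel, and CVM backplane traffic:
esxcli network vswitch standard add --vswitch-name=vSwitch-10G
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-10G --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-10G --portgroup-name=vMotion

# Dedicated vMotion vmkernel interface on the 10G side:
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion
```

Note that moving the CVM's own interfaces between vSwitches affects storage traffic, so it is worth validating the target layout with Nutanix support before changing a production cluster.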
Hi, I am new to Nutanix, so I would appreciate a bit of advice on importing a cluster into System Center Virtual Machine Manager. I have an existing Nutanix Hyper-V cluster which I want to manage via SCVMM. Does anyone have advice on importing this into SCVMM, and anything I should be aware of? Also, has anyone converted standard switches to logical switches once the cluster has been imported, or is that not a good idea? Thanks!
Hi Nutanix team, I have a question regarding a customer's needs. What should I suggest if they want to migrate their existing database to Nutanix Acropolis? Also, if an existing physical email server has two or more NICs (the first for WAN, the second for LAN) and we migrate it to Acropolis and add a second NIC to the VM, where should the WAN/LAN NIC mapping be configured: on the top-of-rack switch or on the vSwitch in Acropolis? Regards, Caplin
Hi guys, two basic questions: 1) Can CVM cluster traffic and ESXi data traffic share the same 10G switches with no performance penalty? (I have only one pair of 10G switches, and the NX-1065 has one dual-port card. As we know, the CVM cluster must use the 10G network, but I'd also like to use 10G for the ESXi data network.) 2) Generally, ESXi mgmt (usually the vmkernel port for NFS access) is on the same 10G link as the CVM cluster (and CVM Prism mgmt). If a user would like a separate uplink or subnet for ESXi mgmt, is that OK? I'm worried about performance problems. Thanks in advance!
Hello, I had a question on remote site best practices. When we create a remote site for PD replication, do you think we should create a new container at the remote site for the replicated VMs to reside in, or just use our existing container that is running the active VMs at the remote site? In the second scenario, the active VMs at the remote site would be on the same container as the snapshots replicating from the PD-active site. Just wondering what other people are doing? Thanks, Erik
Hi everyone, I imaged a Nutanix cluster recently and noticed some strange behavior. I found many error keywords in the Foundation node log, but imaging ultimately showed as successful. I clicked "retry imaging failed nodes with last config" several times after some nodes failed, and it then succeeded. I'd like to know why it failed and then succeeded after just retrying. Also, the Foundation progress tab showed 3 nodes failed at 78%, while the node log showed they finished, and after the retry it was 100%. I'm very worried about whether it was truly successful and whether there are lurking risks. P.S. My environment: Foundation 3.1.1, AOS 4.6.1, ESXi 5.5 U3d, NX-1065-G4. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1437iFB4C37F4160B7ACC.png[/img] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1439iD2B0077C9E4C1B09.png[/img] [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1441i15C9718ECFC6FC6C.png[/img][img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1443i821C11FF15
Hello, I have had no issues building Microsoft Failover clusters using volume groups, vDisks, and MPIO; the procedure is very straightforward. Now I want to clean up a few test configurations and remove unwanted clusters, but I'm not sure of the best approach. How do I: [list] [*]List and remove attachments [*]List and remove vDisks [*]List and remove volume groups[/list]I'm sure it's all in the acli; I'm hoping someone has already written a procedure. Thanks, Scott
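A rough cleanup sequence can be sketched with the `acli vg.*` command family from a CVM. This is a sketch, not an official procedure; check `acli vg.<Tab>` or `acli -h vg` on your AOS version for the exact argument forms before running anything destructive, and note that `<vg-name>`, `<vm-name>`, and `<initiator-iqn>` are placeholders:

```shell
# List all volume groups, then inspect one to see its disks and attachments:
acli vg.list
acli vg.get <vg-name>

# Detach the VG from any VMs and external iSCSI initiators first:
acli vg.detach_from_vm <vg-name> <vm-name>
acli vg.detach_external <vg-name> <initiator-iqn>

# Remove individual vDisks by index (as shown in vg.get), then the VG itself:
acli vg.disk_delete <vg-name> index=0
acli vg.delete <vg-name>
```

Deleting the volume group after all disks and attachments are removed frees the underlying storage; doing it in this order avoids deleting a VG that a clustered guest still has mounted.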
I'm running NOS 4.7.1 with vSphere 6.0 and loving the seamless VM migration to the remote site using Metro (for a planned outage). Is there a way to perform a failback just as seamlessly, with no downtime? After reading through Mike McGhee's v2.0 Best Practices document I wasn't able to determine whether this is possible.
Hi all, we're on AOS 4.7.2 with VMware 6.0 Update 2. I was thinking of adding a secondary vmkernel management interface on all ESXi hosts (to be used for backups). Can I do that purely through the VMware configuration, without touching anything in the Nutanix cluster config? Thanks, a.
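For reference, adding such an interface is a host-side change that can be done per host with `esxcli`. The portgroup name, vSwitch, vmk number, and addresses below are placeholders for illustration, not values from your environment:

```shell
# Create a portgroup for backup traffic on an existing standard vSwitch:
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Backup-Net

# Add a new vmkernel interface on that portgroup and give it a static IP:
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=Backup-Net
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.20.30.11 --netmask=255.255.255.0 --type=static

# Optionally tag it for management traffic, if the backup product requires
# a management-enabled vmknic:
esxcli network ip interface tag add --interface-name=vmk3 --tagname=Management
```

Since this only adds an interface and does not move the existing mgmt vmknic or CVM networking, it should not require changes on the Nutanix cluster side, though it is worth confirming your backup vendor's network requirements.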
Hi, I found that I can't find the ESXi 6.0 U2 and AOS 4.6.1 releases in the Nutanix online compatibility tool, but I can find ESXi 6.0 U2 in the ISO whitelist JSON, so I'm confused about which releases are actually supported by Nutanix. In the end, I installed ESXi 6.0 U2 and AOS 4.6.1. I'm not sure whether the compatibility tool is just best practice or simply not kept up to date. Can I simply always install the newest ESXi and NOS versions?