Hey,

We recently upgraded one of our clusters to AOS 5.0.1. The process went flawlessly! After the upgrade I noticed some minor bugs which I would like to share:

1) Missing names:
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2616i82890AFCEDDB58CB.png[/img]
Minor, but worth mentioning.

2) Missing title:
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2618i073B8D8311B00E3B.png[/img]
Minor, but worth mentioning.

3) Missing export button. This is the main reason I'm posting (to prevent others from wasting their time on this); it took me quite a while to find this "invisible button".
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2619i018A0275429043E4.png[/img]
With AOS 5.0.1, the (invisible) button is on the right side of the graph:
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2620iEE0200D1B2B8CE66.png[/img]

If I come across more bugs I will post them here.

Seba
Hi, we're starting to look at using Spark on our Nutanix cluster. Not in a huge way, but to run some ETL processes in parallel. I'm under pressure to install Hadoop, or at least HDFS, on the cluster, but the entire concept of adding a distributed, resilient "filesystem" (actually I think it's more of an object store) on top of the one Nutanix already provides seems somewhat off. Is there a recommended way of doing this? I know that containers are exported to ESXi via NFS. Would that be usable? Would that be able to leverage Stargate for access from anywhere? All I really need is a globally available volume shared between all my nodes.
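Since containers are exported over NFS, one minimal sketch of the shared-volume approach is to mount the container directly inside each worker VM. This assumes the client subnet has already been added to the cluster's filesystem whitelist in Prism; the IP and container name below are placeholders:

```shell
# On each worker VM: mount the Nutanix container exported over NFS.
# 10.0.0.50 is a placeholder for the cluster's external/virtual IP,
# "spark-data" a placeholder container name.
sudo mkdir -p /mnt/spark-data
sudo mount -t nfs 10.0.0.50:/spark-data /mnt/spark-data
```

Whether this is a good fit for Spark's I/O pattern is a separate question, but it does give every node the same globally visible path without layering HDFS on top.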
As most of you probably know, AOS 5.6 introduced the Volume Group Load Balancing feature, known as VGLB. As far as I know, 5.6 is a short-term-support release. I'm now deploying two Oracle 12c RAC clusters on two 8000-series, 6-node AOS-AHV clusters running AOS 5.5. This involves volume groups with multiple vDisks, network-related configuration, Linux tuning, and so on... Of course, with 5.5 (so far the latest GA version in long-term support), I don't have the VGLB option: every volume group's I/O is managed by a single CVM. On the other hand, with AOS 5.6 I could distribute this load across every CVM and every node's storage in the cluster. Of course, this has a major impact on resiliency, performance, and resource distribution. I have two questions and need some suggestions. 1) Would it be better to upgrade to 5.6 even though it is in short-term support? 2) Is it possible to update the volume group's configuration "on the fly" with "vg.upd
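For reference, on 5.6 the per-VG load-balancing switch is toggled from acli. A sketch, assuming a volume group named oracle-rac-vg (a placeholder); the flag name is from memory of the 5.6 acli and worth verifying with tab-completion on your build:

```shell
# On any CVM: enable load-balanced vDisk attachments for one volume group.
acli vg.update oracle-rac-vg load_balance_vm_attachments=true

# Inspect the VG afterwards to confirm the setting took effect.
acli vg.get oracle-rac-vg
```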
Hi, I am new to Nutanix, so I would appreciate a bit of advice on importing a cluster into System Center Virtual Machine Manager. I have an existing Nutanix Hyper-V cluster which I want to manage via SCVMM. Has anyone got any advice on importing this into SCVMM, and anything I should be aware of? Also, has anyone converted standard switches to logical switches once the cluster has been imported, or is that not a good idea? Thanks!
Hi, last week we installed the first Nutanix cluster in Venezuela, with 3 nodes. It was Foundation-imaged and, as a best practice, the vSwitch br0 was "split" into two vSwitches by creating br1: the 10 GbE interfaces (eth2 and eth3) were assigned to br0 and the 1 GbE interfaces (eth0 and eth1) were assigned to br1. The administrator of the cluster then decided to swap that assignment, i.e. to assign the 1 GbE interfaces to br0 and the 10 GbE interfaces to br1, by executing the following on the CVM command line:

[b][i]allssh manage_ovs --bridge_name br0 --bond_name bond0 --interface 1g update_uplinks[/i][/b]

After the third node responded to the command, the SSH session to the CVM was lost, and now we can't access the CVMs to revert the command, neither over SSH nor via Prism. On the nodes the 1 GbE interface LEDs are orange and red, and we only have access via the console by connecting a monitor and a keyboard to the physical nodes. My question is: does exist a way t
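For anyone who hits the same situation: from the local console (or IPMI), the uplinks can usually be switched back to the 10 GbE pair on each CVM, one node at a time. A sketch, assuming the default br0/bond0 names (verify them first with show_uplinks, as names can differ per deployment):

```shell
# On each CVM's local console, one node at a time:
# show the current bridge/bond layout first.
manage_ovs show_uplinks

# Re-attach the 10 GbE interfaces (eth2/eth3) to br0.
manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
```

Running it per node from the console (rather than allssh over the now-dead 1 GbE path) avoids losing all nodes at once.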
Hi guys, two basic questions: 1) Can the CVM cluster traffic and the ESXi data traffic use the same pair of 10G switches with no performance penalty? (I have only one pair of 10G switches, and each NX-1065 node has one dual-port card. As we know, the CVM cluster must use the 10G network, but I'd also like to use 10G for the ESXi data network.) 2) Generally, ESXi management (usually the vmkernel port for NFS access) is on the same 10G link as the CVM cluster (and CVM Prism management). If the user would like a separate uplink or subnet for ESXi management, is that OK? I'm worried about performance problems. Thanks in advance!
Hello, I have had no issues building Microsoft Failover clusters using volume groups, vDisks, and MPIO; the procedure is very straightforward. Now I want to clean up a few test configurations and remove unwanted clusters, but I'm not sure of the best approach. How do I: [list] [*]List and remove attachments [*]List and remove vDisks [*]List and remove volume groups[/list]I'm sure it's all in the acli; I'm hoping someone has already written a procedure. Thanks, Scott
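In case it helps, here is a sketch of the acli side of that cleanup, in the order attachments → vDisks → volume group. The VG name, VM name, and initiator IQN below are placeholders, and the exact sub-command names are worth double-checking with tab-completion against your AOS version:

```shell
# List volume groups, then inspect one (vg.get shows disks and attachments).
acli vg.list
acli vg.get test-vg

# Remove attachments: either a direct VM attachment or an external
# iSCSI initiator (the IQN your Windows cluster node used for MPIO).
acli vg.detach_from_vm test-vg test-vm
acli vg.detach_external test-vg iqn.1991-05.com.microsoft:node1.example.local

# Delete vDisks by index (indexes are shown by vg.get), then the VG itself.
acli vg.disk_delete test-vg index=0
acli vg.delete test-vg
```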
Hello- I had a question on remote site best practices. When we create a remote site for PD replication, do you think we should create a new container at the remote site for the replicated VMs to reside in, or just use our existing container that is running the active VMs at the remote site? In the second scenario, the active VMs at the remote site would be on the same container as the snapshots replicating from the PD active site. Just wondering what other people are doing? Thanks, Erik
Hi all, we're on AOS 4.7.2 with VMware ESXi 6.0 Update 2. I was thinking of adding a secondary management vmkernel interface on all ESXi hosts (to be used for backups). Can I do that purely in the VMware config, without touching anything in the cluster config? Thanks, a.
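For what it's worth, the vmkernel side of this is a standard ESXi change. A sketch, assuming a port group named "Backup" already exists on the vSwitch and vmk2 is free (the name and addresses are placeholders):

```shell
# On each ESXi host: create a second vmkernel interface for backup traffic.
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Backup

# Give it a static address on the backup subnet.
esxcli network ip interface ipv4 set --interface-name=vmk2 \
  --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
```

Since this only adds a vmkernel port and touches nothing the CVMs use, it should not require any Nutanix cluster reconfiguration, but keep it off the subnets the CVMs rely on.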
While trying to add a new node to my cluster, where I need to re-image it before adding it, the iso_whitelist.json link on the Foundation downloads page appears to be broken; it just redirects back to the support portal. Apologies if this is the wrong place to raise the issue. Regards, Stephen
Hi all, I created a document describing how to upgrade ESXi on the XC platform for a customer. It uses the Dell iDRAC specifically. If anyone has feedback on anything that was missed, or anything that could make things easier, I'd appreciate it.

[list=1]
[*]Move VMs and place the host into maintenance mode[list=1]
[*]From the vCenter Web Client, navigate to Hosts and Clusters
[*]Select the host to be upgraded
[*]Click Related Objects and then Virtual Machines
[*]Select all running VMs [b]except the CVM[/b] and migrate them to another host
[*]Once all VMs have been migrated, right-click the host, select Maintenance Mode, and then Enter Maintenance Mode
[*]SSH to the Nutanix CVM on the host
[*]Run the following at the prompt: cvm_shutdown -P now[/list]
[*]Log in to the iDRAC of the ESXi server[list=1]
[*]Default user: root
[*]Default pass: calvin[/list]
[*]Click Launch in the Virtual Console Preview section
[*]Click Run twice and accept any errors on the Java applet
[*]Click on Virtual Media, then Connect Virtual Media
[*]Click Virtu
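One small suggestion for step 1: the CVM shutdown can be issued remotely rather than from an interactive SSH session. A sketch, assuming the CVM is reachable at 10.0.0.31 (a placeholder address):

```shell
# Gracefully shut down the CVM before the host enters maintenance mode.
# cvm_shutdown stops the Nutanix services cleanly before powering off,
# which a plain "shutdown" would not do.
ssh nutanix@10.0.0.31 "cvm_shutdown -P now"
```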
Hello Nutanix, my demo cluster currently has a problem removing a host from the cluster. The host that I wanted to remove has an SSD showing the status "Marked for removal but not detachable", and it has stayed in that status for a long time with nothing changing, even though the host itself was reported as successfully removed. After that, I tried reinstalling this host, but the status is still showing, and I cannot do anything with the newly reinstalled host, including expanding the cluster with it. By the way, I should note that before the reinstall, this host was running as a cluster member.
I'm running NOS 4.7.1 with vSphere 6.0 and loving the seamless VM migration to the remote site using Metro (for a planned outage). Is there a way to perform a failback just as seamlessly, with no downtime? After reading through Mike McGhee's v2.0 Best Practices document, I wasn't able to find whether this is possible or not.
Dears, I can't find the ESXi 6.0 U2 and AOS 4.6.1 combination in the Nutanix online compatibility tool, but I can find ESXi 6.0 U2 in the iso_whitelist.json, so I am confused: which releases exactly are supported by Nutanix? In the end, I installed ESXi 6.0 U2 and AOS 4.6.1. I'm not sure if the compatibility tool is just a best practice, or simply not updated in a timely way. Can I simply always install the newest ESXi version and NOS?