Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,112 Topics
- 2,983 Replies
I was looking for confirmation of a statement I recently heard that the best practice for the power supply voltage for a block is 220 volts, and that if 110 volts is used, a single power supply is less likely to be able to carry the full load if the other power supply fails. Thank you.
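The reasoning behind that advice usually comes down to simple power arithmetic. The figures below are purely illustrative, not Nutanix specifications: if a fully loaded block draws around 1,100 W, a single surviving power supply must deliver 1,100 W / 220 V ≈ 5 A at high-line voltage, but 1,100 W / 110 V = 10 A at low-line voltage. Many redundant power supplies are also derated on 110-120 V input (a nominal 1,400 W unit might only be rated for roughly 1,000 W on low line), so at 208/220 V the remaining supply keeps its full rating and is more likely to carry the whole block after a failure.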
I currently have 2 x NX-1450, both running NOS 4.0.1 with Hyper-V. Everything is linked up at 10Gb, and I'm using a physical Veeam server outside of Nutanix to back up both clusters; it is connected at 4Gb. We are currently suffering from poor backup performance, roughly a 30-40 MB/s processing rate for one VM on a node that isn't doing anything at all. If I perform the same tests on an NX-3360, I can easily get up to a 160-180 MB/s processing rate, which kinda rules out my network/configuration. Is there anyone out there running an NX-1050 with VMware or Hyper-V and Veeam who can share their configuration/processing rates?
Has anyone experienced this: when updating ESXi 5.5.0 to 5.5.0 Update 1 using Update Manager in vSphere, the host never reconnects. In fact, somewhere during the upgrade process, while the host is "down", software iSCSI gets turned ON, which causes the host to become unavailable to the vCenter server... Helpdesk was able to run some commands over SSH to disable iSCSI and reboot the host, which seems to fix the issue (you have to manually reconnect the host once it is back up...):
# esxcfg-swiscsi -d
# reboot
But I was curious as to the "WHY" of it all... Thanks all! Rick
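For reference, the same change can be made with esxcli on ESXi 5.5; this is just a sketch of what the helpdesk steps amount to, to be verified in your own environment before running:

# Check whether the software iSCSI adapter is currently enabled
esxcli iscsi software get

# Disable the software iSCSI adapter (the esxcli equivalent of esxcfg-swiscsi -d)
esxcli iscsi software set --enabled=false

# Reboot the host so the change takes effect, then reconnect it in vCenter
reboot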
Good afternoon. Can someone please sanity check something for me? One of the BEarena team is running a diagnostics test which is taking 7-8 minutes to deploy each diagnostics VM. What is considered a normal amount of time? At this rate the 8-node cluster will take over 1 hour to run diagnostics. Thanks, Darryl
Dear all, for importing VMs from the existing VMware environment to a Nutanix cluster, I extracted the note below from the Nutanix admin doc:

"Note: Due to a limitation with VMware vSphere, a temporary name and IP address of a Controller VM must be used to mount the target NFS datastore on both the source host and the target host for this procedure."

1. Could you please help me understand what this temporary name and IP mean? The CVM IP address of the Nutanix host is already configured; do I need to modify it, or keep it as it is?
2. For the target host (Nutanix host) it is clear that it contains a CVM, but the current host (not a Nutanix host) has no CVM.
3. This should be done by adding storage on both hosts, right? Mount the target NFS datastore on the source host and on the target host (you can mount NFS datastores in the vSphere client by clicking Add Storage on the Configuration > Storage screen for a host; a command-line sketch of this mount follows below). So instead of using the CVM IP of the target host (Nutanix host) in Se
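A rough command-line sketch of the mount step described in point 3, run from the ESXi shell on both the source and the target host; the IP address and container name below are placeholders, not values from the documentation:

# Mount the Nutanix container as an NFS datastore
# (10.10.10.99 stands in for the temporary CVM IP, "ctr1" for the container name)
esxcli storage nfs add --host=10.10.10.99 --share=/ctr1 --volume-name=NTNX-target

# Verify the datastore is mounted
esxcli storage nfs list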
Dear all, we are installing Hyper-V 2012 R2 with Foundation 2.0. We successfully installed the cluster, but after running the script for joining the hosts to the domain (setup_hyperv.py setup_hosts), even though the hosts are joined to the domain, the cluster status command stops with the error below:

"WARNING genesis_utils.py:325 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort)"

We disabled the 10G interface when the script asked. Please advise. Thank you.
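A minimal set of checks to run from any CVM while troubleshooting that warning, assuming a standard NOS install (output varies by version):

# Check whether Genesis is running on the local CVM
genesis status

# Restart Genesis if it is down, then re-check the cluster
genesis restart
cluster status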
We are trying to install Hyper-V 2012 R2 and Phoenix 2 using Foundation 2.0 in Oracle VirtualBox. The block we have is a Nutanix 3X50. We were successfully able to configure the IP addresses for IPMI, Hyper-V, and the CVM, but while doing the imaging it fails at 1% with the error below:

NameError: global name 'original_wims' is not defined

Can anyone help with this?
I have an NX-1450. I'm using the IPMI to mount a new Phoenix ISO. The IPMI CD-ROM image status message shows "There is a disk mounted." However, when I power cycle the node and hit F11 to bring up the boot device menu, I don't see an IPMI Virtual CDROM in the list at all. What could I be doing wrong? Here are the boot devices I see (screenshot): https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/286iA5EB2A6F94B845B9.png
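One thing that sometimes helps is forcing the next boot to the virtual CD-ROM with ipmitool instead of relying on the F11 menu; this is a generic sketch, with the IPMI address and credentials as placeholders:

# Set the next boot device to CD-ROM (covers the IPMI virtual CD-ROM)
ipmitool -I lanplus -H 10.0.0.50 -U ADMIN -P <password> chassis bootdev cdrom

# Power-cycle the node so it picks up the boot override
ipmitool -I lanplus -H 10.0.0.50 -U ADMIN -P <password> chassis power cycle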
I took the admin course two weeks ago and have been tasked with writing an operations manual. One of the sections relates to backup & recovery, so my question is: "Do we need to back up anything like configs/settings to a USB drive in case of a serious outage?" If not, how does one recover from the most serious of outages? Being new to Nutanix, I'm not even sure what a catastrophic outage would entail, so if someone could articulate that and provide a high-level recovery process, I would be most appreciative.
Hello folks, recently I was running some resiliency testing, powering down a node using IPMI (Power Off Server - Immediate) to ensure VMware High Availability worked as expected for a simulated power outage. When I finished, I powered the node back on via IPMI. I was surprised the CVM did not automatically start. Is it expected that the CVM would not restart when a node is powered back on? Thank you for your help.
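A quick way to check from the ESXi shell whether the CVM is in the host's autostart list, and to bring it up manually in the meantime; the VM ID below is an example:

# List registered VMs and note the CVM's Vmid
vim-cmd vmsvc/getallvms

# Show the host's autostart configuration and sequence
vim-cmd hostsvc/autostartmanager/get_autostartseq

# Power the CVM on manually (replace 5 with the CVM's Vmid)
vim-cmd vmsvc/power.on 5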
Hello folks, what are the advantages and disadvantages of using LACP to improve east/west traffic and prevent unnecessary north/south traffic? Are there any documents on its use for a Nutanix installation? Is the use of LACP very common for Nutanix installations? Are there folks kicking themselves for not implementing LACP? Thank you for your opinions.
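For illustration only, a minimal switch-side sketch of what an LACP bond typically looks like, using Cisco-style syntax with placeholder port numbers; the hypervisor side (for example a vSphere Distributed Switch LAG) has to be configured to match:

! Port-channel carrying a node's two 10G uplinks
interface Port-channel10
 switchport mode trunk
!
interface range TenGigabitEthernet1/0/1 - 2
 switchport mode trunk
 ! "mode active" makes this an LACP bond rather than a static channel
 channel-group 10 mode active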
I have a 3350 cluster. After upgrading the cluster to 4.0.1, Prism shows this critical status:

Resiliency Status: Critical
Rebuild Capacity Available: Yes
Auto Rebuild In Progress: Yes

What do I have to do?

ncli> cluster get-domain-fault-tolerance-status type=node

Domain Type : NODE
Component Type : STATIC_CONFIGURATION
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 06:14:33 PDT 2014

Domain Type : NODE
Component Type : ZOOKEEPER
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 05:50:41 PDT 2014

Domain Type : NODE
Component Type : EXTENT_GROUPS
Current Fault Tolerance : 0
Fault Tolerance Details : Based on placement of extent group replicas the cluster can tolerate a maximum of 0 node failure(s)
Last Update Time : Fri Jul 04 05:55:55 PDT 2014

Domain Type : NODE
Component Type : OPLOG
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 05:55:55 PDT 2014

Domain Type : NODE
Component Ty
Hello, filesystem whitelists do not seem to work in NOS 4.0.1; however, whitelisting via nCLI works just fine. Also, if the whitelists are added via the GUI they don't show up in nCLI, and vice versa. Has anyone encountered this issue? Thank you.
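For anyone comparing the two, the nCLI side looks roughly like this; the subnet is a placeholder and the exact syntax may differ slightly between NOS versions:

# Show the current NFS whitelist
ncli cluster get-nfs-whitelist

# Add a subnet to the whitelist
ncli cluster add-to-nfs-whitelist ip-subnet-masks=10.1.1.0/255.255.255.0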
Hi everyone, we saw a Nutanix presentation at vForum 2014 Mexico. How can I contact a sales representative to talk about hardware costs? Regards, Ing. Ernesto Cardenas, Unique Comm S.A. de C.V., www.unicomm.com.mx