Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,181 Topics
- 3,238 Replies
Hello folks, the NX-1065 has only two 10G ports and two 1G ports, and I have only one pair of 10G switches and one pair of 1G switches. My preference would be to run the ESXi vmkernel ports (mgmt, vMotion, NFS) and the CVM ports (Prism mgmt, cluster mirror) on the 10G switches, uplink the 10G switches to the external network so I can reach ESXi mgmt and Prism, and connect the VM data port groups to a separate 1G vSwitch. But our network team told me the 10G switches can't uplink to the external network. So I'd like to know: can I move the ESXi mgmt port and the CVM mgmt (VIP) to the 1G vSwitch, create additional vMotion/NFS vmkernel ports on the 10G vSwitch, and leave the CVM cluster-mirror interface connected to the 10G vSwitch? Network design is the key thing here.
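For the second layout described above, a reply-style sketch of the per-host ESXi commands might look like this. All vSwitch names, vmnic numbers, and IPs are assumptions for illustration, and the CVM's internal vSwitchNutanix must be left untouched:

```shell
# Sketch only: vSwitch/vmnic names and IPs are assumptions for your NICs.
# Do NOT touch vSwitchNutanix (CVM eth1, 192.168.5.x internal network).

# 1G vSwitch for ESXi mgmt and CVM/Prism mgmt (the uplinked network)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# 10G vSwitch for vMotion, NFS, and CVM cluster-mirror traffic
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Dedicated vmkernel port for vMotion on the 10G side
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.0.1.11 --netmask=255.255.255.0
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

The NFS vmkernel port would follow the same portgroup/interface pattern on vSwitch1.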
I have this error on the Prism console: genesis is down on Controller VM 10.1.82.21. I ran the command ncc health_checks system_checks cluster_status_check --cvm_list=10.1.82.21 and this is the error log: [code]2016-11-10 00:16:00 INFO salt_helper.py:52 Verifying CVM salt states
2016-11-10 00:16:00 ERROR command.py:156 Failed to execute sudo /bin/ls /home/saltstates: [Errno 12] Cannot allocate memory
2016-11-10 00:16:00 CRITICAL decorators.py:46 Traceback (most recent call last):
  File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/util/misc/decorators.py", line 40, in wrapper
  File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/cluster/genesis/node_manager.py", line 2474, in sync_configuration_thr
  File "/home/hudsonb/workspace/workspace/User_builds/builds/build-danube-4.7.1-stable-release/python-tree/bdist.linux-x86_64/egg/cluster/genesis/node_[/code]
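Since `[Errno 12] Cannot allocate memory` on a plain `/bin/ls` usually points at memory exhaustion on the CVM itself rather than a genesis bug, a first-pass triage sketch with standard CVM commands (not an official procedure) would be:

```shell
# Run on the affected CVM (10.1.82.21). "Cannot allocate memory" on a
# trivial command usually means the CVM itself is out of memory.
free -m                        # check available memory and swap
ps aux --sort=-rss | head -15  # find the largest memory consumers

# If memory looks sane (or after freeing some), restart genesis
genesis status
genesis restart

# From any CVM, confirm all services report UP afterwards
cluster status
```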
Hello, I have an existing VMware install using a storage array and a three-node Nutanix cluster that I'll be migrating to. I've read through some of the solutions offered and I have an additional question: when I migrate my VMs from the legacy platform using vMotion, do I have to decide which host on the Nutanix side each VM will reside on, and if so, do I have to think about balancing my VMs across the three hosts? Thanks all for your time, Sky.
We are using Apache CloudStack 4.8.0 with VMware 5.5 to provide IaaS services, and we want to use Nutanix as the underlying infrastructure. Is this implementation supported? Are there any recommendations, best practices for deployment, or a reference architecture? We also plan to have HA for one class of our VMs, whereas others should not be replicated. What mapping between CloudStack/VMware organizational units and Nutanix protection domain strategies should we use to achieve this goal? If none of the above is supported, is Apache CloudStack on the Nutanix roadmap, for VMware or AHV? Thank you in advance.
Hi, we have a setup in the pre-production stage. The cluster has 3 nodes, each with 2 x 800 GB disks and 2 x 6 TB disks. We removed a single disk from one host and reinserted it in the same slot after 10 minutes. Now the cluster shows 11 disks and is not allowing us to format the reinserted disk. How can we add this disk back to the current cluster? Thanks in advance. Vivek
We are spinning up some new ESXi 6 clusters and have encountered strange behavior on one of them. The CVMs periodically contact the hosts to gather various stats, and all logging seems to indicate this process is working properly, but every few hours to few days a host will lose its connection to vCenter and stop responding to all remote management requests (SSH, etc.). It appears that the VMware hostd service simply runs out of resources and stops responding. The only remedy we have found so far is restarting the host completely. In some instances the host console was still responsive, but attempting to restart the management services was unsuccessful. We have ensured all passwords are congruent and have regenerated the certificates used to secure hypervisor-to-CVM communication. Has anyone seen anything similar?
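A triage sketch for the next time a host goes dark, run from the host's console shell while it is still responsive. This assumes hostd resource exhaustion; a full ramdisk (inode exhaustion) is a common cause of exactly this symptom, so check that before restarting anything:

```shell
# Standard ESXi shell commands; run from the host console/DCUI shell
vdf -h                          # ramdisk usage, including inode counts
/etc/init.d/hostd status        # is hostd still running at all?
tail -n 50 /var/log/hostd.log   # look for out-of-memory / fd-exhaustion messages
/etc/init.d/hostd restart       # may fail once resources are fully exhausted
```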
As in the pictures below: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/3484e038-9621-48a0-8791-935bd9b31482.png[/img] When I run the command [code]nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces eth0,eth1 --bond_name bond1 update_uplinks'[/code] in the CLI, it does not work. Please, can anybody help? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/c418c165-ce0a-4e7c-b805-4063a0008f93.png[/img]
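Two common reasons `update_uplinks` fails are that the named interfaces are already uplinks of br0, or that br1 does not yet exist on every host. A hedged checklist sketch (the `ovs-vsctl` step runs against the AHV host at its internal address as seen from its CVM; the eth2/eth3 names in step 3 are an assumption; verify the actual NIC assignments for your hardware first):

```shell
# 1. See which NICs are already uplinks (eth0/eth1 are often in br0)
allssh 'manage_ovs show_uplinks'
allssh 'manage_ovs show_interfaces'

# 2. Make sure br1 exists on every AHV host before updating uplinks
#    (192.168.5.1 is each host's internal address from its own CVM;
#    --may-exist makes the command safe to repeat)
allssh 'ssh root@192.168.5.1 ovs-vsctl -- --may-exist add-br br1'

# 3. Retry with NICs that are NOT already uplinks of br0
#    (eth2/eth3 here are hypothetical; substitute your free pair)
allssh 'manage_ovs --bridge_name br1 --interfaces eth2,eth3 --bond_name bond1 update_uplinks'
```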
I have 3 Lenovo HX nodes running ESXi. As we need to use Acropolis, we decided to do a one-click conversion, but the process failed: the first node was unable to boot into anything and stopped at a phoenix# boot prompt. I decided to reimage the cluster, so I used Foundation 3.5 with AHV; 2 nodes installed successfully, but the same node failed to install and got stuck at 17%. I upgraded the firmware and tried a manual installation using AHV, and later ESXi, together with the Phoenix disc, and the same node still failed while running Phoenix. I've attached a screenshot of the error. I'd appreciate any help. Thanks.[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2044iB40DE4C80E55BBC9.jpg[/img]
Hi, we have 3 Hyper-V hosts whose local admin passwords expired; I changed them (and the accounts are enabled). Now the WSMan Connectivity Check fails in Prism. I have run these commands on a CVM for all three hosts, but it doesn't help: [code]ncli managementserver edit name=hostIP password='newhostpass'
ncli host edit id=cryptical-host-id hypervisor-password='newhostpass'[/code] Any ideas? Our Nutanix version is 188.8.131.52.
Hi, which 10G SFP+ LR (single-mode) modules are supported in Nutanix NX-8035-G5 and NX-6035C-G5 nodes with C-NIC-10G-2-SI 10G network interface cards? Do the SFP+ modules need special branding, as is the case e.g. with Cisco switches? Thanks, Erik
Hi, we have installed a 4-node Nutanix cluster. During installation you have to select the cluster redundancy factor, RF2 or RF3. We tried to select RF3, but the system said we needed at least 5 nodes, so we were forced to create the cluster with RF2. In the future we plan to extend the cluster with additional nodes, possibly storage-only nodes. Will it be possible to upgrade from RF2 to RF3 without having to delete the existing cluster and create a new one? Stefano
Hi, does anyone have a step-by-step guide on creating a Windows cluster on ESX on Nutanix? I've seen various examples online using iSCSI, SMB, and other file shares, but I'm not entirely sure what to do. I've got the cluster as far as the file-share witness working, but how do I then configure the disks that make up the cluster resources? I will eventually be clustering SQL 2012 on this. Cheers, Steve
Trying to set up a Nutanix cluster (non-prod) to automatically email me the NCC results. I have set up the SMTP and alert email configuration sections in Prism Element and verified from a CVM that emails are being sent okay. Next, on a CVM, I ran ncc --set_email_frequency=4 and saw that it took successfully. Finally, running ncc --show_email_config gives the result "Value for the flag --show_email_config has not been set." Obviously I am missing something?
I have a 3-node cluster for VDI running ESXi on all three nodes. At the moment vCenter (v6.0) is running on a single virtual machine (this virtual server is on a physically separate cluster from the VDI/Nutanix one). I'm starting to look into how to go about moving from that vCenter server to the VMware vCenter appliance. What considerations would need to be made to get the 3-node Nutanix cluster to move from a Windows-based vCenter server to the vCenter appliance? Would there be downtime involved?
Hi folks, I'm installing my first AHV (latest version installer-el6.nutanix.20160601.20, with the latest AOS 4.7.1). I'm new to KVM as well as AHV, so I've hit some problems and had a tough time these days. Any help would be highly appreciated; thanks in advance. 4) CentOS 4.7 guest VM: the guide "AMF_Guide-AOS_v4_7" says only CentOS 6.4, 6.5, 6.6, 7.0, 7.1, and 7.2 are supported. Is there a support matrix for all guest VM types? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1561iBAB9B8FBC3F8C6C2.png[/img] 7) KVM VM management via Prism: there is very little functionality for managing the AHV hypervisor and guest VMs through Prism (it seems to offer only basic VM lifecycle management). Can I add a USB device, or make an OVF-style template like in VMware vCenter? I'm not that familiar with the virsh and OVS CLI tools.
Hello! I am looking to delegate the ability to shut down and start back up our Nutanix cluster to our team without providing the root/nutanix passwords. Is it necessary to perform a cluster stop/start via SSH into a CVM, or can we shut everything down without stopping and starting the cluster? Thanks! Ryan
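On the shutdown question itself, the commonly documented flow is a cluster stop followed by clean CVM shutdowns; powering things off without `cluster stop` risks unflushed writes. A sketch of that sequence, run as the `nutanix` user on a CVM (check the shutdown guide for your AOS version before relying on it):

```shell
# 1. Power off or migrate all guest VMs first, then stop cluster services
cluster stop

# 2. Shut down each CVM cleanly (run on every CVM in turn)
cvm_shutdown -P now

# 3. Power off the hosts. On power-up the CVMs boot automatically; then:
cluster start
cluster status    # confirm every service reports UP
```

Since these commands require the `nutanix` user on a CVM, delegation without sharing those credentials would need some wrapper (sudo rules, a jump script, etc.) rather than a built-in Prism role.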
Hi, I apologize in advance for the long post! From the information I've reviewed, the Xpress models (I'm looking at Lenovo's offering) support a maximum of 4 nodes, and there is no protection domain. Let's pretend I'm a cloud provider for multiple customers, and I host their (domain/file/print/email/SQL) servers in my 4-node cluster. I want to use an in-VM backup product such as Acronis or StorageCraft; these products store their backup images on a NAS/share. If I set up a second, storage-heavy 3-node Xpress cluster running Acropolis File Services as the NAS/share destination for my backup images, is there any restriction you can think of that would prevent or limit me from doing this? Right now we use StorageCraft and save to a Windows Storage NAS with RAID 10, and I'm concerned that writing to it will not be able to handle the I/O. I'm guessing that the 3-node Xpress AFS cluster will easily handle intensive I/O writes. On weekends, the full backup runs a
Hi, I am facing an issue with the Foundation VM based process. Installation failed after Phoenix was installed, with a fatal error, and Foundation also threw an error saying it failed to configure the IPMI IPs (I had manually configured the iDRAC IPs before starting Foundation). [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2127i3D11472F80496F3D.jpg[/img]
Hello guys, I have a question regarding scheduled auto-reboots of Nutanix Hyper-V nodes. I understand there is a CAU pre-update PowerShell script that reboots the nodes one by one and ensures the CVM is up and running before rebooting the next node. Is there a similar script we can use to schedule automatic reboots of the Hyper-V nodes one by one, without risking all the nodes/CVMs or the cluster being down at the same time? Thanks. Regards, Tze Siong