Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,181 Topics
- 3,238 Replies
Hi all, This morning I tried to log in to Prism only to find it was unavailable. I tracked down which node was holding the Prism IP (I logged into all the nodes; there must be an easier way!), but I didn't know how to restart it. Anyway, I decided to go for breakfast and it was back up again when I returned, but it was down for at least 10 minutes. Can I restart it? Should it move to another node? Regards, Nick
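For situations like this, the Prism service runs on every CVM with one elected leader, so there is an easier way than logging into each node. A sketch of the commonly cited commands, run from any CVM (the port-2019 leader endpoint and service names are assumptions from community documentation; verify against your NOS version before relying on them):

```shell
# From any CVM: ask the local cluster monitor which CVM is the current Prism leader
curl -s http://localhost:2019/prism/leader && echo

# On the misbehaving node, restart only the Prism service; leadership fails
# over to another CVM rather than taking anything else down
genesis stop prism
cluster start   # restarts any stopped services cluster-wide, including prism
```

Stopping the service on the leader node is also how you force Prism to move to another CVM without rebooting anything.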
Hi All, According to the material I have on hand, Nutanix keeps a copy of the data a VM needs on that VM's local node. So if there are storage-intensive VMs running on one node and that node's local storage is not large enough for all of their data, will some data still need to be retrieved from a remote node? I would like to know the exact behavior. And one more question, about DR: if I use vSphere, do I still need to buy an SRM license, or is SRM optional (e.g. only needed for certain features)? For Hyper-V, since its native DR features are not as rich as VMware's, can we still use the Nutanix "Protection Domain" feature for DR on Hyper-V? Thanks a lot in advance! Best Regards, Teru Lei
Hi all, I have an upcoming appliance installation, and we may need to downgrade the current NOS version. Is there an out-of-the-box approach or method available for downgrading an existing cluster? Or perhaps KB articles that could help me with this issue? Thank you and best regards, Andreas
Hello Nutanix community, Yesterday I tried reimaging a Nutanix block with Foundation to change the hypervisor from Hyper-V to AHV. Unfortunately, 3 of the 4 nodes failed; only one node installed successfully, even though all 4 nodes had previously run with the same configuration. Why did this happen?! After a day of troubleshooting, I finally (and luckily) got the remaining 3 nodes installed by resetting the IPMI on those nodes. I followed the Foundation documentation from Nutanix ("please have a look at the log files at /data/logs/foundation... on each node") to get the details of the problem, but in my case those log files contained no data related to the errors. The files were there, but with no words, no sentences, no data?! So how can I dig deeper into a problem like this? I really need a document that explains in detail how Foundation works, so that I can troubleshoot it more easily than I did yesterday. Can Nutanix publicly post it?
I went about updating a couple of ESXi hosts the old-school way, using zip files and doing vib installs. My first host worked without a hitch. My second victim... I mean host, is having problems. In total I installed 4 zips per host. After bringing the second one back out of maintenance mode and having the CVM start, a few minutes later I keep getting errors about the CVM not connecting. Then I checked the console on the CVM in question, and it keeps rebooting after a kernel panic. Here's the beginning of the output: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/971iE6A6E0FF19ABCDA1.png[/img] Any insight on what I can do?
Hello guys, I need your help because I've run into a strange issue with my Nutanix cluster (NOS 4.6). Nutanix generates a warning as below: "Data Protection > Protection Domain > Container Mount > Containers are not mounted on all nodes". We've got only one container and it's properly mounted on all ESXi 6.0 hosts. NCC (ncc health_checks data_protection_checks protection_domain_checks container_mount_check) returns:

Detailed information for container_mount_check:
Node 172.30.10.13: FAIL: Container ntnx-ds not mounted in : 172.30.10.111 172.30.10.112
Node 172.30.10.12: FAIL: Container ntnx-ds not mounted in : 172.30.10.111 172.30.10.113
Node 172.30.10.11: FAIL: Container ntnx-ds not mounted in : 172.30.10.112 172.30.10.113
Refer to KB 1888 for details on container_mount_check

I've checked all the tips from KB article 1888 but everything seems to be OK. Furthermore, the cluster has been restarted
Hi folks, I'm in the middle of a deployment and have a question. We have a pair of Nexus 3548s acting as 10 G edge switches for our Nutanix clusters, trunked up to a pair of Cisco 4500s. We have 4 nodes in the cluster with a total of 10 NICs, all 10 gig, assigned to the VDS in vSphere. On the VDS we set the MTU to 9000. Is there a way to separate the CVM traffic from the management traffic? I would like to keep the CVM traffic from going up to the 4500s; however, we needed to create an SVI on the 4500s so we could manage them. I'm getting jumbo frame errors on both the Nexus and the 4500s, and when I set the MTU on the VDS back to 1500 the errors went away. My guess is the CVM traffic is what's causing the issues, and I would prefer not to mess with the MTU settings on the 4500s. What is best practice here? What are others doing? Thanks
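Whatever MTU design is chosen, it is worth validating jumbo frames end to end with don't-fragment pings before trusting it, since a single 1500-byte hop (like the SVI on the 4500s) silently breaks 9000-byte paths. A sketch from an ESXi shell (the target IP is a placeholder for a CVM or vmkernel address on the jumbo VLAN):

```shell
# 8972-byte ICMP payload + 20 bytes IP + 8 bytes ICMP header = 9000 bytes on the wire.
# -d sets the don't-fragment bit, so this fails at any hop still configured for MTU 1500.
vmkping -d -s 8972 192.168.5.2

# Sanity check with a standard-size frame, to distinguish an MTU problem
# from a plain reachability problem.
vmkping -d -s 1472 192.168.5.2
```

If the 1472-byte ping succeeds and the 8972-byte one fails, some device in the path (switch port, SVI, or portgroup) is not carrying jumbo frames.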
I got an error message when creating a container: "Failed to modify service settings. The default virtual machine configuration store cannot be changed to '\CLHV.abcd.comiso -VhdPath CLHV.abcd.comiso': The user name or password is incorrect. (0x8007052E) Ensure the path is valid and the directory exists. If the directory is remote, ensure that it is properly configured for sharing, and that the current user and computer accounts have read/write access." What happened? Can you tell me? Thanks
What is the recommended CPU performance setting for AHV? I'm curious whether AHV can interact with Intel throttling, or if it's better to just have it run at full speed all the time. The CPU power options are: Performance Per Watt (DAPC), Performance Per Watt (OS), and Performance. Acropolis Hypervisor 201602173, Dell XC730xd nodes with 2 x Xeon E5-2630 v3 @ 2.40GHz. The Dell Active Power Control (DAPC) mode allows the BIOS to manage the processor power states in order to maximize Performance/Watt at all utilization levels and workload types while still meeting performance requirements. In the OS (Demand Based Power Management, DBPM) mode, the operating system controls the processor's power management. In the Maximum Performance mode, the processor runs at the highest frequency all the time.
Hi, We have recently upgraded our Nutanix cluster from 4.0.3 to 18.104.22.168 and are noticing slower logons to the Prism interface. Before the upgrade this was almost instant while now it might take up to 10 seconds. Has anyone had the same experience?
Hi, has anyone encountered any issues with Firefox running against Foundation 3.1 or 22.214.171.124? I've tried deploying and even upgrading my Foundation from an older version to 3.1, and the same issue still happens in Firefox. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1011i14A517F58C6C9B74.jpg[/img] Regards, Edwin.
Hi, We have a cluster of Dell XC730xd-12 nodes and would like to update them to ESXi U3a (to match the rest of our vSphere environment). I know Dell doesn't have a customised ESXi U3a ISO yet, so is there any advice on how to do the upgrade, such as any specific extra drivers required in the Update Manager baseline? Or will I be able to use the json and the vanilla VMware zip with the one-click cluster upgrade? Regards, Nick
Hi, Can anyone help me find a way to stop the creation of a remote site? I was told it would take around 10-15 minutes, but after 3 hours it is still running at 0%! There are probably some configuration issues on the network, so I would like to fix those first and try again. For the life of me, however, I cannot find how to stop the task. Regards, Nick
Just had the cluster installed. I'm going to patch my 5.5 installs. I put the first node into maintenance mode and it's just waiting for the CVM to power off or move. Should the CVM shut down on its own, or do I need to power it off before I patch and reboot? Thanks, jb
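The CVM is pinned to its host and will not vMotion away, so maintenance mode waits on it indefinitely; it has to be shut down deliberately. A sketch of the usual sequence (commands assumed from standard Nutanix maintenance procedures; confirm data resiliency in Prism before taking any node down):

```shell
# On the CVM of the host about to be patched:
# 1. Confirm the cluster can tolerate this node going away
cluster status          # expect all services UP on every CVM

# 2. Gracefully stop this CVM; the cvm_shutdown script stops Nutanix
#    services cleanly before powering the VM off
cvm_shutdown -P now

# 3. Let the ESXi host finish entering maintenance mode, then patch and reboot.
#    The CVM is configured to start automatically when the host comes back up.
```

Only take one CVM down at a time, and wait for resiliency to return to OK before moving to the next host.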
I was running Foundation to reimage my Nutanix box and it got stuck at 73%. Below is the error I get from the log:

20160330 08:16:02 INFO INFO: Conserving space in bootbank by moving large installation files to the scratch partition.
20160330 08:16:02 INFO INFO: Running cmd [u'mkdir -p /vmfs/volumes/NTNX-local-ds-13SM15340016-C/Nutanix']
20160330 08:16:02 CRITICAL FATAL: Execution of cmd [[u'mkdir -p /vmfs/volumes/NTNX-local-ds-13SM15340016-C/Nutanix']] failed for reason [mkdir: can't create directory '/vmfs/volumes/NTNX-local-ds-13SM15340016-C/': Operation not permitted]
20160330 08:16:02 INFO INFO: Running cmd ['touch /bootbank/Nutanix/firstboot/.firstboot_fail']
20160330 08:16:02 INFO INFO: Changing ESX hostname to 'Failed-Install'
20160330 08:16:02 INFO INFO: Running cmd ['esxcli system hostname set --fqdn Failed-Install']
20160330 08:16:02 ERROR Exception in running Traceback (most recent call last): File "/home/abetiger/main/builds/build-installer-126.96.36.199-release/foundation-python-tr
Hi all, finally, after a long POC I'm a new Nutanix customer :D I'm waiting on two NX-1365S blocks with 6 nodes, and the idea is to create a new vSphere 6 cluster and migrate the existing VMs from the old vSphere 5.5u2. I've just checked the Nutanix support page and I see the latest supported version is 6.0u1a build 3.073.146. This version seems to be affected by the CBT issue; is it possible to install the correction patch ([url=http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2136854)]http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2136854)[/url]) after the installation? Or should I install 6.0u1b directly? Thanks in advance, and have a good weekend!
Hi guys, I recently used Foundation 3.0 to image Nutanix nodes with Hyper-V, but imaging got stuck at 82%. I then imaged the nodes using Foundation 2.x instead. Later I saw in the 3.0.1 release notes that the Hyper-V imaging problem was fixed in version 3.0.1. So if you have an older base version, upgrade to the latest version and check the release notes before using Foundation. Hope this helps. Cheers
Hi, I am using Foundation 3.0.1 to image a 3-node cluster of Dell R730s; however, it fails at 2% progress with: StandardError: Mount failed: NFS path not in 'remoteimage -s'. The system can access and configure the Dell iDRAC IPMI interface no problem, and the virtual CD is enabled on each server. According to the logs it appears unable to mount the required ISO. [i]There is no ISO file in the path mentioned.[/i] Below is the relevant section of the log. Using nutanix_installer_package-danube-188.8.131.52-0771.tar and the ESXi 6.0.0 ISO image. Any help would be greatly appreciated. Andrew

20151118 072259: /opt/dell/srvadmin/sbin/racadm -r 10.50.177.73 -u root -p calvin remoteimage -c -l 10.50.177.203:/home/nutanix/foundation/tmp/phoenix_node_isos/foundation.node_1.iso -u nutanix -p nutanix/4u
20151118 072302: /opt/dell/srvadmin/sbin/racadm -r 10.50.177.73 -u root -p calvin remoteimage -s
20151118 072305: Security Alert: Certificate is invalid - self signed certificate
Continuing execution.
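When a racadm mount fails like this, one common cleanup step is disconnecting any stale virtual-media session on the iDRAC before retrying Foundation, since a leftover attachment can block the new one. A sketch using the same racadm syntax the log itself shows (IP and credentials taken from the log above; adjust for each node):

```shell
# Show what the iDRAC currently has attached as a remote image
/opt/dell/srvadmin/sbin/racadm -r 10.50.177.73 -u root -p calvin remoteimage -s

# Disconnect any stale image so Foundation can attach its phoenix ISO cleanly
/opt/dell/srvadmin/sbin/racadm -r 10.50.177.73 -u root -p calvin remoteimage -d
```

It is also worth confirming from another machine that the NFS export path in the `-l` argument is actually reachable and contains the ISO, since the error says the path never appeared in `remoteimage -s`.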
Hello, We have just recently stood up a brand new cluster and are prepping it for production use. Currently we are using Acropolis version 184.108.40.206 + vSphere 6.0. Curious to hear whether anyone has feedback on upgrades we should consider prior to putting this into production. I know there are some CBT bugs with vSphere 6.0, and I've also heard there are issues with certain combinations of Acropolis + vSphere. If you've had stability with a particular combination, it'd be great to hear your feedback. Thanks in advance.