Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,179 Topics
- 3,234 Replies
I am planning a Nutanix PoC with the Community Edition and, if successful, will move to Nutanix. The goal is a three- or four-node cluster with local storage. How should I best prepare the three nodes? They have built-in hardware RAID controllers (Cisco UCS servers) and can take up to 8 disks, either the same size or different sizes; these would probably be 1-2 TB SSDs. I can create one or more RAID volumes (RAID 1, 5, 10). What would make sense from a performance and redundancy perspective? I have read the CE Getting Started guide, which mentions: Storage Devices (max 4 HDD/SSD), Cold Tier (500 GB or greater, maximum 18 TB (3x 6 TB HDDs)), Hot Tier Flash (single 200 GB SSD or greater), Hypervisor Boot Device (32 GB per node). Which storage needs to be fast, and should my hardware RAID present the disks? Would it make sense to build a RAID 10 on eight equal-sized disks and carve logical drives matching each of the disk types above?
I would like to apply Host-VM affinity/anti-affinity rules. Some of the referenced hosts are currently in maintenance mode. Am I able to apply these rules while the hosts are in maintenance mode? If not, am I correct in thinking that once the hosts are taken out of maintenance mode I can apply the affinity rules, and the system will then enforce them (migrating VMs etc. where it needs to)?
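If this is an AHV cluster, VM-host affinity can also be set per VM from any CVM with acli. A minimal sketch, assuming AHV; the VM and host names are placeholders:

```shell
# Sketch, assuming AHV; "app-vm01" and the host names are illustrative.
# Pin a VM to a set of hosts (VM-host affinity):
acli vm.affinity_set app-vm01 host_list=host-01,host-02

# Verify the rule was recorded on the VM:
acli vm.get app-vm01 | grep -i affinity
```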
Nutanix's recommended connectivity is 10G, but due to port availability we initially configured our Nutanix cluster with 1G uplinks. We are now planning to change the environment's uplinks to 10G. Any advice: will there be any impact, and can you share the procedure for changing the uplinks to 10G?
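On AHV, the uplink set on the default bridge can be inspected and changed from a CVM with manage_ovs. A sketch assuming AHV and the default br0 bridge (migrate VMs off each host before changing it):

```shell
# Sketch, assuming AHV with the default bridge br0.
# Show the current uplinks on every host in the cluster:
allssh "manage_ovs show_uplinks"

# Switch br0 to use only the 10G interfaces; run one host at a time,
# with the host's VMs migrated away first:
manage_ovs --bridge_name br0 --interfaces 10g update_uplinks
```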
Hello, after a firmware upgrade, one host is locked DOWN in maintenance mode:

CVM: 192.168.131.132 Down

I can run a command to exit maintenance mode, but it is not working and the node is "Removed from metadata store":

nutanix@:~$ ncli host edit id=7 enable-maintenance-mode=false
Id : …
Hypervisor Address : 192.168.131.122
Host Status : NORMAL
Oplog Disk Size : 394 GiB (423,054,278,649 bytes) (3.9%)
Under Maintenance Mode : false (ncli_manual)
Metadata store status : Node is removed from metadata store
...

So I tried to recover it, but the script fails:

nutanix@:~$ python /home/nutanix/cluster/bin/lcm/lcm_node_recovery.py 192.168.131.122
Recovering node 192.168.131.122
Checking if the node 192.168.131.122 is in phoenix
Current node status host Node 192.168.131.122 out of phoenix mode
Bringing host None out of maintenance mode
Successfully put host None out of maintenance mode
Bringing CVM 192.168.131.122 out of maintenance mode
Traceback (most recent call last):
 File "/home/nutanix/cluster/bin/lcm/lcm_node_
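For reference, once the underlying problem is fixed, a node showing "Removed from metadata store" is typically re-added with ncli. A sketch, assuming the node is healthy again and reusing host id 7 from the output above:

```shell
# Sketch; assumes the node is healthy and uses host id 7 from the
# ncli output above. Re-add the node to the metadata store:
ncli host enable-metadata-store id=7

# Watch the Cassandra ring until the node shows Up/Normal again:
nodetool -h 0 ring
```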
Hi, I have set up Nutanix AOS 184.108.40.206 with the Hyper-V hypervisor (Server 2016) on 2 nodes with Foundation. I have some questions that I am still unable to figure out. I am able to console into the SVM on both nodes via SSH, but I am unable to access Prism in a web browser until I create the cluster from the SVM. Is this correct? After accessing Prism, most of the Prism Element functions (VM, Storage, etc.) apart from Health were available. Is this because Nutanix's role on Hyper-V is limited, so it only manages the dashboard? I tried to add the second node to the cluster created on the first node via Prism, but was unable to. Does this mean I can only do VM creation, modification, deletion, etc. and join the cluster via Failover Clustering in Hyper-V? Is there a guide for Nutanix with Hyper-V, from setup through managing? I have read the AAPM, which doesn't seem to apply here.
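For context, cluster creation is indeed done from an SVM/CVM before Prism becomes reachable. A sketch of the usual command, with placeholder SVM IPs (the exact options can vary by AOS version and node count):

```shell
# Sketch; the IPs are placeholders for the two SVM/CVM addresses.
# Create the cluster from any one SVM:
cluster -s 10.0.0.11,10.0.0.12 create

# Confirm all services come up on every node:
cluster status
```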
Dears: I followed the LCM dark site guide to configure IIS; the screenshot is shown below. But when I tried to point PC at the local web server, the error shown below appeared. It looks like the builds folder does not exist in the release folder. I am not sure if this is the key point. Hope anyone can help me. Thanks!
Hello, I am looking for guidance on setting up a 3-node and a 2-node AHV cluster on 1G switches. Our workloads are low, but the official recommendation is a 10G switch fabric. If I opt for 2 nodes, can I connect the servers back to back for distributed storage? Can the data network go over 1G? I plan to procure 2 servers with 2x 10G NICs for distributed storage and 2x 1G NICs for data. Thanks, Ajay
We moved an NX3000 to a new environment, but one of the nodes died, so we had to reimage that node. The other two were still up and running. But after a power failure on the rack, the two nodes rebooted and won't come up anymore. Genesis is showing some failures:

2016-08-18 14:38:40 INFO node_manager.py:3492 Svm has configured ip 10.160.35.124 and device eth0 has ip 10.160.35.124
2016-08-18 14:38:43 INFO node_manager.py:3542 Setting up key based SSH access to host hypervisor for the first time...
2016-08-18 14:38:43 INFO hypervisor_ssh.py:32 Trying to access hypervisor with provided key...
2016-08-18 14:38:46 INFO hypervisor_ssh.py:40 Failed.
2016-08-18 14:38:46 INFO hypervisor_ssh.py:44 Trying to access hypervisor with provided password...
2016-08-18 14:38:49 INFO hypervisor_ssh.py:52 Failed
2016-08-18 14:38:49 ERROR node_manager.py:3547 Failed to set up key based SSH access to hypervisor, most likely because we do not have the correct password cached. Please run fix_host_ssh command manually to
The hosts themselves have been changed to the timezone the customer wants, but for Prism to show the correct timestamps on things being done, would that come from the CVM time zone? Where is the setting for switching Prism to use the customer-specified timezone for timestamps?
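The cluster-wide (CVM) timezone is usually what drives those timestamps, and it can be set from ncli. A sketch, with an example timezone value:

```shell
# Sketch; "Europe/Amsterdam" is an example value. Set the cluster-wide
# timezone used by the CVMs (some services may need a restart before
# all timestamps reflect it):
ncli cluster set-timezone timezone=Europe/Amsterdam
```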
Hi folks, for Nutanix products, is it best practice to have the latest software version (e.g. AOS, NCC) installed, whether for a first-time installation or an upgrade? Some vendors advise installing a slightly lower version than the newest. My real use case: we installed NX 2 months ago and plan to put it into production these days, but two new AOS versions have become available on portal.nutanix.com. So, can I update AOS to the newest one?
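Whichever version is chosen, a common pre-upgrade step is to run the full NCC health-check suite from a CVM first. A sketch:

```shell
# Run the full NCC health-check suite before upgrading AOS, and review
# any FAIL/WARN results before proceeding:
ncc health_checks run_all
```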
I have a 3350 cluster. After upgrading the cluster to 4.0.1 I have this error: RESILIENCY STATUS: Critical, REBUILD CAPACITY AVAILABLE: Yes, AUTO REBUILD IN PROGRESS: Yes. What do I have to do?

ncli> cluster get-domain-fault-tolerance-status type=node

Domain Type : NODE
Component Type : STATIC_CONFIGURATION
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 06:14:33 PDT 2014

Domain Type : NODE
Component Type : ZOOKEEPER
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 05:50:41 PDT 2014

Domain Type : NODE
Component Type : EXTENT_GROUPS
Current Fault Tolerance : 0
Fault Tolerance Details : Based on placement of extent group replicas the cluster can tolerate a maximum of 0 node failure(s)
Last Update Time : Fri Jul 04 05:55:55 PDT 2014

Domain Type : NODE
Component Type : OPLOG
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Fri Jul 04 05:55:55 PDT 2014

Domain Type : NODE
Component Ty
Hello everyone, I was wondering if it's possible to have a different VLAN ID/subnet range for each of the traffic types below: - Hypervisor Management (ESXi) - Nutanix Cluster administration - Nutanix Cluster replication / AutoPath. The very best would be to have even replication and AutoPath on different VLANs. The rationale here is to comply with customer internal security policies regarding DMZ virtualization. We are allowed to use VLANs and are not forced to use different physical ports, but the security team (worldwide bank) is concerned about ESXi and Nutanix being on the same VLAN. Sylvain.
I have an NX-1450. I'm using the IPMI to mount a new Phoenix ISO. The IPMI CD-ROM Image status message shows "There is a disk mounted." However, when I power cycle the node and hit F11 to bring up the boot device menu, I don't see an IPMI Virtual CDROM in the list at all. What could I be doing wrong? Here are the boot devices I see. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/286iA5EB2A6F94B845B9.png[/img]
Hi there, I'm facing a problem with Prism displaying a weird chart. As you can see above, the chart shows no value (blank) every 6 minutes. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/685iB57C026A9AF18DE6.png[/img] When I go to the Analysis page, the blanks are still present and there is some strange movement in the Memory line: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/686i56E0C99ADEDE0F4A.png[/img] Do you know where this could come from?
Dears, I configured a 3-node Nutanix block (NX-1000 series) more than a year ago, and now our customer has requested that a new node (NX-3000 series) be added to the existing cluster. Could you please let me know the best way to add this node without impacting the running nodes? Regards,
I have this same error on 3 nodes - I have checked networking / Time and connectivity and it all seems ok. Running ncc health checks on different services gives me a range of things to check. Not sure where to start with this. The Hosts are Hyper-V.
I have encountered a fault when installing the guest tools on Linux. The guest agent fails on start, and attempts to restart it exhibit the same. Linux is CentOS 6.8 and AHV Nutanix 4.6.2.

Process to reproduce the problem:

/media/installer/linux/install_ngt.py
Using Linux Installer for centos linux distribution.
Setting up Nutanix Guest Tools - VM mobility drivers.
Successfully set up Nutanix Guest Tools - VM mobility drivers.
Installing Nutanix Guest Agent Service.
Successfully installed Nutanix Guest Agent Service.
Waiting for Nutanix Guest Agent Service to start.
Nutanix Guest Agent Service failed to start.
Check /usr/local/nutanix/logs/guest_agent_stdout.log for info.

more /usr/local/nutanix/logs/guest_agent_stdout.log
Traceback (most recent call last):
 File "/usr/local/nutanix/bin/guest_agent_service.py", line 239, in
 start()
 File "/usr/local/nutanix/bin/guest_agent_service.py", line 56, in start
 service = NgtGuestAgentService()
 File "/usr/local/nutanix/bin/guest_agent_service.py", line 1
I currently have 2 x NX-1450, both running NOS 4.0.1 with Hyper-V. Everything is linked up at 10Gb, and I'm using a physical Veeam server outside of Nutanix to back up both clusters; it is connected at 4Gb. We are currently suffering from bad backup performance, say around 30-40MB/s processing rate for one VM on a node that isn't doing anything at all. If I perform the same tests on an NX-3360, I can easily get up to 160-180MB/s processing rate, which kind of rules out my network/configuration. Is there anyone out there running NX-1050 with VMware or Hyper-V and Veeam who can share their configuration/processing rates?
Hi folks, something weird happened today; it might be a bug. I went to change the current time zone setting with Get-NTNXCluster | Set-NTNXCluster -Timezone "Australia/Sydney". However, it threw me an error: "The cluster name may contain only English letters, decimal digits (0-9), dots, hyphens and underscores. Set-NTNXCluster : The remote server returned an error: (500) Internal Server Error." Then I thought it might be because I did not include the cluster id, so I ran this command: Get-NTNXContainer -Id xxxxxxx-xxxxxxx-xxxxxx-xxxxxx-xxxxxxxxxxxxxx::4769 | Set-NTNXCluster -Timezone "Australia/Sydney". Apparently, instead of using Get-NTNXCluster, I used Get-NTNXContainer, which I copied from a command I previously ran without double-checking. You would assume that would throw an error, but it did not; it finished with a "Success". Yes, it did change the timezone on the cluster, but at the same time it changed the cluster name to the container name as well
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/160iFF41C41040150583.png[/img] Has anyone tested Nutanix with fio rather than the diagnostics VM? When we test Nutanix, all cores on the host processors are almost 100% loaded. We started testing 220.127.116.11 and got some disks marked offline; we brought them back and upgraded to 18.104.22.168. The first time after install, results were OK (13k IOPS for a 400 GB disk on a 3350), but the next day (all data cold) we got awful results, with zero counters sometimes, for example 0 read / 0 write. Nutanix is in the upper consoles on the slide. Are these normal results? The ncc check says that everything is OK. For the test on the slide we used a 100 GB disk, so we should be on one node.

Fio config:

>sudo fio read_config.ini
root@debian:~# cat test_vmtools.ini
[readtest]
blocksize=4k
filename=/dev/sdb
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32

>sudo fio write_config.ini
root@debian:~# cat test_vmtools.ini
[writetest]
blocksize=4k
filename=/dev/sdb
rw=randwrite
direct=
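For comparison, here is what a complete 4k random-write job file in the same style would look like; this is a sketch that mirrors the readtest parameters above (the write config in the post is cut off), with /dev/sdb as the device under test:

```ini
# Sketch of a complete 4k random-write job, mirroring the readtest above.
[writetest]
blocksize=4k
filename=/dev/sdb
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32
```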