Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,148 Topics
- 3,142 Replies
Upgrade Status of 'Upgrading' hung in Prism Central
I'm running Prism Central 184.108.40.206 and have recently upgraded 6 of our clusters to 220.127.116.11. Of those 6 clusters, 3 are still showing an Upgrade Status of 'Upgrading' in Prism Central. It's been over a month for one cluster, and I've restarted the Prism Central appliance to no avail. Has anyone else seen this?
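A quick first check (a hedged sketch, not an official procedure): the `upgrade_status` command that ships on CVMs reports the per-node upgrade state, which can show whether the cluster itself thinks the upgrade finished while Prism Central's view is simply stale. A minimal Python wrapper, assuming it is run as the nutanix user on a CVM where `upgrade_status` is in the PATH:

```python
# Minimal sketch: run the CVM's upgrade_status tool and surface its output.
# Assumes execution as the 'nutanix' user on a CVM with upgrade_status in PATH.
import subprocess

def check_upgrade_status() -> str:
    """Return the raw output of upgrade_status, raising on a non-zero exit."""
    result = subprocess.run(
        ["upgrade_status"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(check_upgrade_status())
```

If the CVMs all report the upgrade as complete, the mismatch is likely on the Prism Central side rather than in the clusters themselves.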
We are moving from a traditional model with shared storage to Nutanix converged infrastructure, and I am looking at the options for VM migration. I want the best solution for migrating my VMs without downtime.

Our current traditional environment:
- 4-node cluster (2 clusters)
- FC storage array
- ESXi 5.5
- vCenter 5.5
- 10Gbps network
- vDS (Distributed Switch)

Our new Nutanix environment:
- 4-node Nutanix cluster (2 clusters)
- NFS local storage
- ESXi 6.0
- vCenter 6.0
- 10Gbps network
- NSX

Thanks for the suggestions.
What's the behavior when a Nutanix node's local storage is not enough for its local VMs' data?
Hi All, According to the material I have on hand, Nutanix keeps a copy of a VM's data on the local storage of the node where the VM runs. So if some storage-intensive VMs are running on one node and that node's local storage is not enough for all of the local VMs' data, would some data still need to be retrieved from a remote node? I would like to know the exact behavior. And one more question about DR: if I use vSphere, do I still need to buy an SRM license, or is SRM optional (e.g., only needed for some dedicated features)? For Hyper-V, since the DR features of Hyper-V are not as rich as VMware's, can we still use the Nutanix "Protection Domain" feature for DR on Hyper-V? Thanks a lot in advance! Best Regards, Teru Lei
Storage Node - disk utilization
We added a couple of storage nodes to a modest 3-node cluster that was running out of disk space. The alerts about running out of storage, and about not having enough for redundancy, are gone. A few months in, every time I look in Prism under Hardware, I see that the storage nodes' "Total Disk Usage" has not gone up very much. The compute nodes' disk usage is about 5-6 times that of the storage nodes (and the compute nodes actually have more storage onboard than the storage nodes). I understand that Nutanix tries to keep a VM's storage on the same host that provides its compute resources (and that the replica copies are sharded and spread out among other nodes). Is that why I am seeing so little utilisation of the new storage nodes in Prism?
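For anyone who wants to track per-node usage over time rather than eyeballing Prism, here is a hedged sketch against the Prism Element v2 REST API. The /hosts endpoint is real, but the usage_stats key names below are assumptions; verify them against the JSON your cluster actually returns:

```python
# Sketch: pull per-host storage usage from Prism Element's v2 REST API.
# The /hosts endpoint exists; the usage_stats key names are assumptions
# to verify against the actual JSON your cluster returns.
import requests

PRISM = "https://prism-element.example.com:9440"   # hypothetical address
AUTH = ("admin", "password")                        # placeholder credentials

resp = requests.get(
    f"{PRISM}/PrismGateway/services/rest/v2.0/hosts",
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()

for host in resp.json()["entities"]:
    stats = host.get("usage_stats", {})
    used = int(stats.get("storage.usage_bytes", 0))
    capacity = int(stats.get("storage.capacity_bytes", 0))
    print(f"{host['name']}: {used / 1e12:.2f} TB used of {capacity / 1e12:.2f} TB")
```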
Repost: Communication trouble of CVM when building CE on Nested ESXi
# On March 4 I posted the thread below, but I mistakenly marked my own wrong answer as the solution, so the thread's status became "solved". # It actually remains unresolved, and I am posting it again because I want to resolve it by all means. # The problem is that the CVM can only communicate with the AHV host. Dear All, I am trying to build Nutanix CE in a Nested ESXi environment... (the full details are in the original "Communication trouble of CVM when building CE on Nested ESXi" thread below).
Nutanix, Hyper-V, Veeam and FC tape library
Hello guys, I have a new environment to take care of and need some help here. The setup is a 3-node Nutanix cluster running Hyper-V. There is a requirement to use Veeam B&R as the backup solution, and all Veeam components have to be virtualized, i.e., backup server, proxy server, repository, and tape server if required. The backup destination is a tape library that can be connected via FC. The "Veeam on Nutanix with Hyper-V" documents I have read mention a hybrid scenario where a physical repository server is used to connect the tape library; in my case there is no physical server available. What are the possible ways to get this done?
1. Is it mandatory to use a physical server to connect to tape?
2. Can an HBA installed in an NTNX node be used by a VM on Hyper-V (pass-through)?
3. Can I designate the Hyper-V parent OS as the Veeam proxy and tape server and use the HBA directly from the parent OS?
What about compatibility/support for these configurations? If you guys have any other way to get this running, please share.
NX-1175S-G6: new setup unable to pass Foundation stage
Hi everyone, I have an issue with a newly purchased NX-1175S-G6 that needs to be deployed in our EU environment. The system doesn't let me raise a case on the support portal, and the installation just can't seem to get past the Foundation stage. The latest place it gets stuck is shown in the screenshot, and that is only a small part of it. Before this, when I tried to configure a VLAN during setup, it couldn't get through that either. Can anyone point me in the right direction? Thanks
Does a Nutanix cluster support different hypervisors?
Hello All, I’m quite new to nutanix, I have a question: Can we add three different hypervisor nodes in a single Nutanix cluster. ex:- if we have three nodes with below hypervisor Node 1 - Esxi Node 2 - AHV Node 3 - Hyper-v So can we add these three servers in nutanix cluster??
New Nutanix Block AHV
Hi All, We are about to secure our first Nutanix deal (in our region). Our existing customer has a vSphere environment with 35 virtual machines (Exchange / AD / SQL / Oracle DB), and they are willing to go AHV. After setting up the first block, what would be the best way to migrate all virtual machines from the existing vSphere cluster? The existing vSphere cluster uses VMFS datastores on their existing SAN. I will be grateful for your suggestions / insights / best approach towards this. Kind regards
Erasure coding - 4.7 - Where did the option go?
Hey, we're in the process of virtualizing an Exchange server, and according to Nutanix best practice we should create a container with EC-X enabled. When I go to create the container in the Prism GUI, there's no option to enable erasure coding. This contradicts the documentation, which shows the option under "Advanced Options", but on my end it isn't appearing.
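If the checkbox simply isn't rendered in your Prism build (and your license tier includes EC-X), erasure coding can usually also be toggled from the CVM command line with ncli. A hedged Python wrapper below; `ncli container edit` is a real command family, but the exact `erasure-code=on` parameter name is an assumption to confirm via `ncli container edit help` first:

```python
# Sketch: toggle erasure coding on an existing storage container via ncli.
# Run as the 'nutanix' user on a CVM. The erasure-code=on parameter name is
# an assumption; confirm with `ncli container edit help` before relying on it.
import subprocess

def enable_erasure_coding(container_name: str) -> None:
    subprocess.run(
        ["ncli", "container", "edit",
         f"name={container_name}",
         "erasure-code=on"],   # assumed flag name; verify in your AOS release
        check=True,
    )

if __name__ == "__main__":
    enable_erasure_coding("exchange-ctr")   # hypothetical container name
```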
Stargate continues to restart, leaving node down
My Stargate continues to restart, and I cannot find anything in the GUI to indicate what it may be. When I get into the CLI, the cluster says all four nodes are up. When I try to ping 192.168.5.1, the ICMP request never completes; pinging all other IPs works as expected. My ESXi is 6.5 U3, AOS is 5.5.5, NCC is 18.104.22.168, and Foundation is 4.3.
Power requirements for 3-node NX-3360
We are getting ready to deploy our first set of clusters, and in the pre-install brief we hit a little snag regarding power. The specs state that with dual power supplies you need 208 volts per power supply. I have read in other posts that the 3000 series should be OK with 120V if you only have 3 nodes. Can anyone confirm this? My goal is to have redundancy if power is lost to a single power supply. If we get a 4th node we will go with 208V, but currently I just don't have that kind of power, mainly because I don't have a 208/240V UPS.
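The underlying arithmetic (a sketch with assumed numbers, since the exact PSU rating for this model isn't given here): line current is power divided by voltage, so the same load draws roughly 73% more amps at 120V than at 208V, which is usually why higher-wattage supplies mandate 208V circuits. With hypothetical per-PSU loads:

```python
# Worked example: line current for assumed PSU loads at 120V vs 208V.
# The wattage figures are hypothetical, not the NX-3360 spec.
def amps(watts: float, volts: float) -> float:
    return watts / volts

for load_w in (800, 1200, 1600):          # assumed per-PSU loads in watts
    print(f"{load_w}W load: {amps(load_w, 120):.1f}A @120V, "
          f"{amps(load_w, 208):.1f}A @208V")

# A 15A/120V circuit is typically limited to 12A continuous (80% rule),
# so loads much above ~1440W per supply exceed what 120V wiring allows.
```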
Failover Testing Failed
Hello, we have a brand new 4-node cluster and did some failover testing over the weekend before placing the new gear into production. We have two Cisco 4500-X 10Gb switches and have the nodes split between the two switches for redundancy. We simulated a switch failure by pulling the plug on one of them and experienced some very bad results: some CVMs became unresponsive and we lost connectivity to most of our VMs. Has anyone else experienced a similar issue? Or can anyone point me in the right direction for configurations to double-check? Would appreciate any insight!
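One thing worth auditing (a hedged suggestion based on common causes, not a confirmed diagnosis): the NIC teaming and failover settings on each host's vSwitch, since inconsistent "notify switches", failback, or beacon-probing settings can make a single pulled switch look like a full outage. A pyVmomi sketch to dump the relevant policy per host; the vCenter address and credentials are placeholders:

```python
# Sketch: dump vSwitch NIC-teaming/failover policy for every host, to spot
# inconsistent failover settings after a switch-pull test. The vCenter
# address and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vsw in host.configManager.networkSystem.networkInfo.vswitch:
        team = vsw.spec.policy.nicTeaming
        print(f"{host.name} {vsw.name}: policy={team.policy} "
              f"notifySwitches={team.notifySwitches} "
              f"failback={not team.rollingOrder} "
              f"beacon={team.failureCriteria.checkBeacon}")
view.Destroy()
Disconnect(si)
```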
Automating DRS rule creation for Metro Availability
We're in the process of deploying our new VDI environment, using View 6 and Unidesk on top of our Nutanix Metro Availability clusters. We will be balancing the number of production desktops running at both sites to spread the load around (we'll have some containers active in Site A, some active in Site B), and we want to ensure VMs do not get vMotioned to the 'standby' site. Although we have significant bandwidth (10GbE via dark fibre) and low latency (~1-2ms) between sites, we'd like to avoid the additional overhead where possible. Is there a way for us to easily create DRS rules within vSphere to ensure VMs on active containers at Site A always run on those hosts, and vice versa for Site B? I vaguely remember coming across a PowerCLI script somewhere that did this (or something similar), but have been unable to find it again. Of course it would be amazing if this could be natively handled by Nutanix, but for now I'd be happy with a script we can run on a schedule to update the DRS rules.
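In the same spirit as that PowerCLI script, here is a hedged pyVmomi sketch that creates a "should run on hosts in group" VM-host affinity rule. The cluster lookup, group and rule names, and site-membership filters below are all hypothetical placeholders:

```python
# Sketch: create a "should run on hosts in group" DRS rule with pyVmomi.
# All names (cluster, groups, rule) and the vm/host selections are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

def find_cluster(content, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.Destroy()

cluster = find_cluster(si.content, "MetroClusterA")             # placeholder
site_a_hosts = [h for h in cluster.host if "site-a" in h.name]  # assumed naming
site_a_vms = list(cluster.resourcePool.vm)                      # refine as needed

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.VmGroup(name="siteA-vms", vm=site_a_vms)),
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.HostGroup(name="siteA-hosts", host=site_a_hosts)),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name="siteA-vms-on-siteA-hosts",
                enabled=True,
                mandatory=False,   # "should", not "must"
                vmGroupName="siteA-vms",
                affineHostGroupName="siteA-hosts")),
    ],
)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```

Running this on a schedule (plus a mirrored version for Site B) roughly matches the PowerCLI approach. Leaving mandatory=False makes it a "should" rule, so DRS can still move VMs across sites during a genuine Metro Availability failover.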
Automatic Download not working
Recently it appears that our "Automatic Downloads" function is not working anymore. I referenced the following link to force an immediate download of updates: http://aakashjacob.blogspot.com/2015/02/nutanix-techtip-2-enable-automatic_6.html The auto-download log output is in the attached screenshot. This is output from a Nutanix CE cluster, but it is the same error I see on one of our other full-blown Nutanix clusters. What do the errors mean and what could be causing them? Auto-download used to work before the holidays, and a Palo Alto firewall update was done during the holidays. I have worked with the person responsible for the PA, and he has tried everything to allow traffic out from the IPs of the CVM(s), but still no luck. Any input would be appreciated...
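Given the timing of the Palo Alto change, a basic outbound-reachability test from a CVM is a reasonable first step (SSL decryption on the firewall is another common breaker of downloads). A hedged sketch below; the hostnames are assumptions based on commonly documented Nutanix firewall requirements, so confirm them against the official port/endpoint list:

```python
# Sketch: verify outbound HTTPS reachability from a CVM to the download
# endpoints. The hostnames are assumptions based on commonly documented
# Nutanix firewall requirements; confirm against the official list.
import socket

ENDPOINTS = [
    ("release-api.nutanix.com", 443),   # assumed auto-download API endpoint
    ("download.nutanix.com", 443),      # assumed binary download host
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} -> {exc}")
```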
Communication trouble of CVM when building CE on Nested ESXi
Dear All, I am trying to build Nutanix CE in a Nested ESXi environment. I downloaded and prepared the latest version, ce-2018.05.01-stable.img.gz. The ESXi on which the AHV host runs is 6.0. To start, we are working with a single-node cluster and plan to grow to a 3-node cluster if it works. I assigned 192.168.0.27/24 to the AHV host (GW: 192.168.0.254), set up the CVM as 192.168.0.28/24 (GW: 192.168.0.254), and started the installation. After waiting a while, a message stating that the installation was successful is displayed, along with "Nutanix CVM IP: 192.168.0.28". However, only the AHV host can communicate with this CVM; it cannot communicate with other devices on the same subnet. If I check the ARP table of another Linux machine on the same subnet, the correct MAC address of the CVM is registered, but the MAC address of that Linux machine is not registered in the CVM's ARP table. Only the MAC address of the AHV host is registered in the CVM's ARP table. The CVM cannot communicate with anything beyond the AHV host.
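One classic cause in nested setups (a hedged suggestion, not a confirmed diagnosis): the outer ESXi vSwitch drops frames from MAC addresses it did not itself assign, which silently breaks the CVM's virtual NIC inside the nested AHV host and produces exactly this one-way ARP symptom. Enabling promiscuous mode and forged transmits on the port group carrying the nested traffic is the usual fix. A pyVmomi sketch, with host, credentials, and port-group name as placeholders:

```python
# Sketch: enable promiscuous mode + forged transmits on the outer ESXi
# port group carrying nested AHV/CVM traffic. Host, credentials, and the
# port-group name are placeholders for your environment.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="password", sslContext=ctx)

host = si.content.rootFolder.childEntity[0].hostFolder \
         .childEntity[0].host[0]                # first host; adjust lookup
net_sys = host.configManager.networkSystem

for pg in net_sys.networkInfo.portgroup:
    if pg.spec.name == "Nested-Nutanix":        # placeholder port-group name
        spec = pg.spec
        spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
            allowPromiscuous=True,
            forgedTransmits=True,
            macChanges=True,
        )
        net_sys.UpdatePortGroup(pgName=pg.spec.name, portgrp=spec)
        print(f"Updated security policy on {pg.spec.name}")

Disconnect(si)
```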
Reason Why AOS Preupgrade Fails at 5%
I'm attempting to upgrade AOS from 22.214.171.124 to 126.96.36.199, but the pre-upgrade step fails at 5% and the reason for the failure isn't displayed. Is there somewhere I can go to get details about why the pre-upgrade keeps failing at 5%? This is what is displayed after the failure (screenshot attached). Thank you.
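When the UI hides the reason, the pre-upgrade checks usually log it on a CVM. A hedged sketch below; /home/nutanix/data/logs/preupgrade.out is the commonly cited location on the Prism leader CVM, but treat the path as an assumption for your release:

```python
# Sketch: surface recent error lines from the AOS pre-upgrade log. The
# log path is the commonly cited location on the Prism leader CVM; treat
# it as an assumption and adjust if your release logs elsewhere.
from pathlib import Path

LOG = Path("/home/nutanix/data/logs/preupgrade.out")   # assumed location

if LOG.exists():
    lines = LOG.read_text(errors="replace").splitlines()
    for line in lines[-400:]:                 # recent tail of the log
        if any(word in line for word in ("ERROR", "FATAL", "Fail")):
            print(line)
else:
    print(f"{LOG} not found; check the CVM that ran the pre-upgrade checks")
```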