Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,101 Topics
- 2,923 Replies
Hi everybody, I'm new to Nutanix and I'm having my first node installed on Monday, so I'm very excited! I have the chance to dedicate a Nutanix host to a lab and will have time to play around with it. I've seen that 192.168.5.0/24 is a dedicated subnet for the CVM, but this is also the subnet in which my lab resides. Is it OK to use the CVM and the production VMs in the same subnet? What are the options? I've been told the CVM network cannot be changed. And is it OK to have the VMs and the IPMI in the same LAN? For a production environment I can understand that, security-wise and performance-wise, splitting the subnets is mandatory, but in my case things are a bit different. Thanks. Cheers, Fred.
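For anyone landing here with the same worry: on an ESXi-based node the 192.168.5.0/24 addresses live on an internal-only vSwitch between the hypervisor and its local CVM, so they should never appear on the physical LAN. A quick way to confirm this from the CVM, as a hedged sketch (the interface names and the .1/.2 addresses below are the typical defaults and may differ by release):

```bash
# On the CVM: eth0 normally carries the external/cluster IP, while eth1
# carries the internal 192.168.5.x link to the local hypervisor (assumed defaults).
ip addr show eth1        # expect something like 192.168.5.2/24

# The hypervisor end of the internal link (assumed default address):
ping -c 3 192.168.5.1
```

If eth1 shows a 192.168.5.x address while eth0 holds an address on your lab subnet, the internal network and your lab LAN are not actually sharing a wire, even though the prefixes overlap on paper.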
Trying to find the best procedure for the following. Had 36 nodes. Imaged all with Foundation and IP'd them in a standalone environment with private IPs on a basic switch, in an effort to image them all with ESXi at one time. No datastores were created yet using that process. Had to split up those 36 imaged nodes across 4 separate networks to create 4 separate clusters (9 nodes each). Each network is completely isolated from the others and not connected to the internet. Started with one block (4 nodes): moved it to the new network and manually re-IP'd everything for that one block to have valid Network1 IPs. Now I want to move the other two blocks to this first network. Am I forced to do a manual re-IP of each node now, after having used Foundation to IP them in the beginning with private IPs? Prism cannot see (discover) the new nodes via IPv6 after connecting them to the same network, and the IPv6 cluster_init page is not reachable either. Just wondering if there is a way to avoid re-IP'ing each node by hand.
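When IPv6 discovery fails after a network move, one generic way to find the CVMs' link-local addresses from any Linux box on the same L2 segment is below. This is standard Linux tooling rather than a Nutanix-specific procedure, and the interface name is a placeholder:

```bash
# Ping the all-nodes multicast address on the local segment; every
# IPv6-enabled host on that segment, CVMs included, should answer
# with its link-local (fe80::) address.
ping6 -c 3 -I eth0 ff02::1
```

With a responding address in hand, the first-boot page can be tried at port 2100 (http://[fe80::...%25eth0]:2100/cluster_init.html); note that link-local URLs need the zone suffix and that browser support for them is patchy, so reaching the page from a machine on the same flat segment tends to be the path of least resistance.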
Hello - In Prism on an 8-node cluster, we are getting an alert under Health showing that memory usage on 3 of the 8 CVMs is at 100%, and has been for about a week now as shown in the graph in Prism. However, those CVMs seem fine in vCenter: at a high level, guest memory % for the CVMs is around 40% or less. Thoughts? Thanks.
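A common cause of exactly this mismatch, offered as a likely explanation rather than a confirmed diagnosis: Prism reads memory from inside the CVM, where Linux deliberately fills otherwise-idle RAM with page cache, while vCenter's guest-memory metric estimates actively-touched pages from the hypervisor side. Checking from inside one of the flagged CVMs makes the difference visible:

```bash
# On the CVM: the 'used' column includes buffers/cache, so it can sit
# near 100% even when most of that memory is instantly reclaimable.
free -m

# The figure that matters is used minus buffers/cache
# (the "-/+ buffers/cache" line on the CentOS builds CVMs ran at the time).
```

If used-minus-cache is modest, the two tools are measuring different things and neither is lying.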
When I go to edit the Break/Fix options for my products, I can't change their location; I can only change the support personnel. We're just getting up and running with our Nutanix hardware, and we're moving one of our blocks to a DR data center in a few days; the block is currently assigned to our primary data center.
One of our newly installed Nutanix blocks shows 0 VMs in the web interface. There are no user VMs on this block, but wouldn't I at least see a Controller VM per node? (Three nodes currently, so I'd expect to see three Controller VMs.) The cluster is up and everything shows as healthy. We're running vSphere on these nodes, and when I connect to the nodes individually via vCenter, I can see the Controller VMs. What am I missing?
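One likely explanation, worth verifying against the release notes for your NOS version: the Prism VM count is a count of user VMs and intentionally excludes Controller VMs. To confirm the CVMs themselves are up and serving, a hedged check from any CVM:

```bash
# Lists every CVM in the cluster and the state of its services:
cluster status

# ncli can also enumerate the hosts along with their Controller VM addresses:
ncli host list
```

If all services report Up here, a zero in the Prism VM widget is cosmetic rather than a health problem.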
Hi, I was hoping someone could shed some light on some queries I have regarding snapshots. I am running low on space, to the point where I currently have no auto-rebuild resiliency. I also get a FAIL for snapshot_chain_length when checking with NCC: the value is 41 and ideally should be around 10. I have 3 protection domains with a total of 127 VMs. I take a daily snapshot but only retain one copy, and I have been getting too-many-snapshot alerts even when I had only snapped 50 VMs in one protection domain. - Can someone explain what snapshot_chain_length actually is and where the 41 comes from? - How much space is actually used by the snapshots, and how can I get this info (NOS 22.214.171.124)? - Each protection domain has 4 expired snapshots in addition to my one active snapshot. Do these actually take up space? What is their purpose? If I am not mistaken, they cannot be used for restores.
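For the space question, one hedged starting point (command names per the ncli of that era; verify them against your NOS version's documentation, and the PD name below is a placeholder):

```bash
# List the snapshots for one protection domain with their creation times:
ncli pd list-snapshots name=PD-Name

# Container-level usage, useful for watching space come back as expired
# snapshots are garbage-collected in the background:
ncli container list
```

On the chain-length point: each snapshot in a chain references the one before it, so a long chain means reads and cleanup have to walk many links; expired snapshots still hold space until the background curator process actually reclaims them, which is why they appear alongside the active one.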
I was looking for confirmation of a statement I recently heard: that the best practice for block power supply voltage is 220 volts, and that if 110 volts is used, it is less likely that a single power supply could handle the full load if the other one failed. Thank you.
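The reasoning behind that guidance usually comes down to I = P / V plus low-line derating. The numbers below are illustrative only; check the rating label on your specific PSU:

```bash
# Illustrative arithmetic only; the 1200 W block draw is hypothetical.
# If one PSU fails, the survivor must carry the whole block: I = P / V.
#
#   1200 W at 208-240 V:  1200 / 208 ≈ 5.8 A
#   1200 W at 110-120 V:  1200 / 110 ≈ 10.9 A
#
# Many server PSUs are also derated at low-line voltage (e.g. a supply
# rated ~1100 W at 220 V may only deliver ~800 W at 110 V), which is why
# a lone supply on 110 V may not carry a fully loaded block.
```
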
I currently have 2 x NX-1450, both running NOS 4.0.1 with Hyper-V. Everything is connected at 10Gb, and I'm using a physical Veeam server outside of Nutanix to back up both clusters; it is connected at 4Gb. We are currently suffering from poor backup performance: roughly a 30-40 MB/s processing rate for a VM on a node that isn't doing anything else at all. If I perform the same tests on an NX-3360, I can easily get a 160-180 MB/s processing rate, which more or less rules out my network/configuration. Is there anyone out there running NX-1050s with VMware or Hyper-V and Veeam who can share their configuration/processing rates?
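Before digging into Veeam settings, one generic way to rule raw network throughput in or out: iperf is standard tooling (not Nutanix-specific), and the host name below is a placeholder:

```bash
# On the Veeam server (receiver):
iperf -s

# On a host in the slow cluster (sender), a 30-second test:
iperf -c veeam-server.example.local -t 30

# Anything well above ~40 MB/s (~320 Mbit/s) here suggests the bottleneck
# is in the backup path (transport mode, source storage), not the wire.
```

If the wire checks out, the usual next suspects are the Veeam transport mode in use and the source-side read throughput on the NX-1050 nodes, which have fewer disks per node than the 3000 series.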
Has anyone experienced this? When updating ESXi 5.5.0 to 5.5.0 Update 1 using Update Manager in vSphere, the host never reconnects. In fact, somewhere during the upgrade process, while the host is "down", iSCSI gets turned ON, which causes the host to become unavailable to the vCenter server... The helpdesk was able to run some commands over SSH (esxcfg-swiscsi -d, then reboot) to disable iSCSI and reboot the host, which seems to fix the issue (you have to manually reconnect the host once it is back up...), but I was curious as to the "WHY" of it all... Thanks all! Rick
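For readability, here is the fix-up the helpdesk ran, as separate commands. esxcfg-swiscsi is the ESXi 5.x-era tool for the software iSCSI initiator; as always, confirm behavior against the VMware docs for your exact build:

```bash
# Disable the software iSCSI initiator that the upgrade left enabled:
esxcfg-swiscsi -d

# Reboot the host so it comes back clean, then reconnect it in vCenter:
reboot
```
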
Good afternoon. Can someone please sanity-check something for me? One of the BEarena team is running a diagnostics test which is taking 7-8 minutes to deploy each diagnostics VM. What is considered a normal amount of time? At this rate the 8-node cluster will take over an hour to run diagnostics. Thanks, Darryl
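For reference, the diagnostics VMs discussed here come from the diagnostics script bundled on the CVMs. The path and subcommands below are per NOS-era documentation; verify them on your release:

```bash
# From one CVM: deploy the diagnostics VMs and run the I/O tests...
/home/nutanix/diagnostics/diagnostics.py run

# ...and tear the diagnostics VMs and their containers back down afterwards:
/home/nutanix/diagnostics/diagnostics.py cleanup
```

Deploy time is dominated by copying the diagnostics VM image to each node, so several minutes per VM is not unusual on a busy or slow network, but it is worth timing one deploy against one cleanup to see where the minutes actually go.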