Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
Hi, I tried googling but can't find anything to help me.

Model: NUC7i5DNHE
2x SSD (500 GB + 250 GB)
1x 16 GB Cruzer Fit USB

The AHV installation works and I create a single-node cluster. After rebooting I have SSH/ping access to AHV and can log in as root.

Problem: the CVM does not answer to ping/web. I tried SSH/ping from AHV but get no answer:

[root@NTNX-eXXXXX-A ~]# ping 10.255.1.11
PING 10.255.1.11 (10.255.1.11) 56(84) bytes of data.
From 10.255.1.10 icmp_seq=1 Destination Host Unreachable
From 10.255.1.10 icmp_seq=2 Destination Host Unreachable
From 10.255.1.10 icmp_seq=3 Destination Host Unreachable

I've tried re-installing multiple times with the same result. Is the problem the 16 GB Cruzer Fit USB? It's the bootable media. The CVM gets the 500 GB SSD; data gets the 250 GB disk.

[root@NTNX-eXXXXX-A ~]# virsh list
 Id Name State
----------------------------------------------------
[root@NTNX-eXXXXX-A ~]#

I tried the log commands from other posts but wasn't able to get any output. Anyone got
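The empty `virsh list` output above suggests the CVM domain never started on the AHV host. A minimal diagnostic sketch, assuming a standard AHV/libvirt setup (the CVM domain name below is hypothetical; copy the real one from the list output):

```shell
# Show all domains, including ones that are defined but shut off.
# An empty list here means the installer never created the CVM domain.
virsh list --all

# If a CVM domain is listed but shut off, try starting it
# (replace the name with whatever "virsh list --all" actually reports).
virsh start NTNX-eXXXXX-A-CVM

# Attach to the console to watch for boot errors.
virsh console NTNX-eXXXXX-A-CVM
```

These commands only apply on the AHV host itself; if `virsh list --all` is empty, the installation likely failed before the CVM was provisioned, which would point back at the boot media or disk layout.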
A rather simple question :) Once the NFS datastore has been presented to the nodes, each node shows both the NFS datastore and the local disk as available storage. My question is: how do others deal with this scenario? What is to stop an admin from storing data on the local datastore instead of the NFS datastore, which is where you want them to store their VMs? Curious how others have dealt with this. Thanks, Sky
I cannot seem to get Active Directory authentication configured and cannot figure out why. After following the directions in the support article "Configuring Authentication", I receive the message "Directory config name is invalid". Here is what I have tried: using the IP rather than the host name, turning off the firewall on the DC, using a different AD host, trying different ports (e.g. 389 and 3268), trying a different browser (Firefox and IE), and specifying the full directory path (/ou=,dc=,dc=). Nothing seems to work; I just keep getting the error message "Directory config name is invalid". I can ping the DC host from the CVM using the FQDN; firewall issues have been ruled out by turning off the firewall; the LDAP URL is typed correctly. My thought is that this feature simply isn't working correctly on 4.1.3. Has anyone else experienced problems getting AD configured properly?
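One way to rule out basic LDAP connectivity before blaming Prism is to query the DC directly from the CVM with `ldapsearch`. A sketch under stated assumptions: the hostname, bind account, and base DN below are placeholders, not values from this thread.

```shell
# Simple bind against the DC on the standard LDAP port (389).
# Replace dc01.example.com, the bind account, and the base DN with your own.
ldapsearch -x -H ldap://dc01.example.com:389 \
  -D "svc_prism@example.com" -W \
  -b "dc=example,dc=com" "(sAMAccountName=svc_prism)" dn

# Repeat against the global catalog port (3268) if that is what
# the Prism directory URL points at.
ldapsearch -x -H ldap://dc01.example.com:3268 \
  -D "svc_prism@example.com" -W \
  -b "dc=example,dc=com" "(sAMAccountName=svc_prism)" dn
```

If both queries succeed from the CVM but Prism still rejects the configuration, that narrows the problem to how the directory entry is being entered (or to a product bug) rather than to network or AD reachability.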
Hello folks, I'm busy installing a 6-node cluster, and some errors and doubts have come up along the way. My environment: 3 blocks, 6 nodes (2 nodes/block), AOS 4.6.2.

1. When I create a new container and check the Enable Erasure Coding button, an error pops up: "Enabling EC-X will break the current block awareness!" Does this mean EC-X will disable block-level awareness?

2. As many of us know, if you need to locate the physical slot of a disk, you can simply turn on that drive's LED. But the LED function may be inaccurate or disabled; in that scenario you must locate the disk by a disk ID rule, e.g. shelf ID and slot/bay ID. However, the disk ID in the Prism hardware section seems to be a random number with no rule, so I can't determine the physical location of a disk.

3. When testing the removal of a node from the Nutanix cluster, a doubt appears: do I need to, or is it best practice to, execute Storage vMotion (rather than only vMotion of the VMs) before removing the node? I think the difference between the two methods is who does the data mov
I’m in the sales process and the customer is asking me about the physical requirements of an NX appliance (particularly the NX-1365-G6). Where can I find this information, such as power consumption, thermal dissipation, dimensions, etc.? I know there are some specs [url=https://www.nutanix.com/products/hardware-platforms]here[/url], but not all the products are listed there.
Hi, I apologize in advance for the long post! From the information I've reviewed, the Xpress models (I'm looking at Lenovo's offering) support a maximum of 4 nodes, and there is no Protection Domain. Let's pretend that I'm a cloud provider for multiple customers, and I host their (domain/file/print/email/SQL) servers in my 4-node domain. I want to use (Acronis/StorageCraft) in-VM backup software. These products store their backup images to a NAS/share. If I set up a second, storage-"heavy" 3-node Xpress model running Acropolis File Services as the NAS/share destination for my backup images, is there any restriction you can think of that would prevent or limit me from doing this? Right now we use StorageCraft and save to a Windows Storage NAS with RAID 10, and I'm concerned that writing to it will not be able to handle the I/O. I'm guessing that the 3-node Xpress AFS will easily handle intensive I/O writes. On weekends, the full backup runs a
Just had the cluster installed. I'm going to patch my 5.5 installs. I put the first node into maintenance mode, and it's just waiting for the CVM to power off or move. Should the CVM shut down on its own, or do I need to power it off before I patch and reboot? Thanks, jb
Let's say, for example, you're running ESXi 6.5 U1 and want to upgrade to the latest ESXi version, but you're confused about whether your current hardware and AOS will support it. Nutanix gives you a feature to easily check the compatibility of different hypervisors with an NX environment: the Compatibility Matrix. You just need to visit the compatibility matrix page, filter by your hardware and AOS, and you will see which hypervisors are supported. Still confused before the hypervisor upgrade? Refer to the following document for general guidelines before upgrading: Hypervisor-upgrade-guidelines
Hello all, I’m quite new to Nutanix and I have a question: can we add three nodes running different hypervisors to a single Nutanix cluster? For example, if we have three nodes with the hypervisors below: Node 1 - ESXi, Node 2 - AHV, Node 3 - Hyper-V. Can we add these three servers to one Nutanix cluster?
Dear all, I have a Nutanix 1350 cluster, and when I try to upgrade its Foundation I get the error below: "Could not find any node with foundation version < foundation-3.9-d31ad270 failed". Please advise, despite what the attached photo shows. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/a9f53265-1638-4e4a-9c35-80ad03f0a470.png[/img]
Hi all, according to the material I have on hand, Nutanix keeps a copy in local storage of the data required locally. So if some storage-intensive VMs are running on one node and the local storage on that node is not enough for the data of all local VMs, does some data still need to be retrieved from a remote node? I would like to know the exact behavior. And one more question, about DR: if I use vSphere, do I still need to buy an SRM license, or is SRM optional (e.g. only for some dedicated features)? For Hyper-V, since the DR features of Hyper-V are not as rich as VMware's, can we still use the "Protection Domain" feature in Nutanix for DR on Hyper-V? Thanks a lot in advance! Best regards, Teru Lei
Hi, I have some questions. We are considering whether our IT infrastructure can be operated with IPv6. I found that Prism Element can be configured with an IPv6 address; however, I don't know whether components such as Prism Central support it. Can Nutanix be configured with IPv6 addresses? Does Nutanix support IPv6?
We added a couple of storage nodes to a modest 3-node cluster that was running out of disk space. The alerts about running out of storage, and not having enough for redundancy, are gone. A few months in, and every time I look in Prism, under Hardware, I see that the storage nodes' "Total Disk Usage" has not gone up very much. The compute nodes' disk usage is about 5-6 times as much as the storage nodes (and the compute nodes actually have more storage onboard than the storage nodes). I understand that Nutanix tries to keep a VM's storage on the same host that provides its compute resources (and the copy is sharded and spread out among other nodes). Is that why I am seeing so little utilisation of the new storage nodes in Prism?
Hello, it made the news yesterday, but in case you missed it, there seems to be a bug in vSphere 5.5U1 that affects all NFS users. NetApp and VMware (at least; maybe Nutanix too?) are working on the subject. See the post below for more information: http://datacenterdude.com/vmware/nfs-disconnects-vmware-vsphere/ That may also explain Nutanix's delay in supporting 5.5U1? Did you guys catch this while working on the 5.5U1 support? Sylvain.
We are getting ready to deploy our first set of clusters, and in the pre-install brief we hit a little snag regarding power. The specs state that with dual power supplies you need 208 volts per power supply. I have read in other posts that the 3000 series should be OK with 120 V if you only have 3 nodes. Can anyone confirm this? My goal is to have redundancy if power is lost to a single power supply. If we get a 4th node we will go with 208 V, but currently I just don't have that kind of power, or more to the point a 208/240 V UPS.
# On March 4, I posted the following, but I had marked the wrong answer myself and the status of that thread became "solved". # It actually remains unresolved. I am posting it again because I want to resolve it by all means. # The problem is that "the CVM can only communicate with AHV". Dear all, I am trying to build Nutanix CE in a nested ESXi environment. I downloaded and prepared the latest version, ce-2018.05.01-stable.img.gz. The ESXi on which the AHV is built is 6.0. First we started with a single-node cluster; we plan to grow it into a 3-node cluster if it works. I assigned 192.168.0.27/24 to AHV (GW: 192.168.0.254), set up the CVM as 192.168.0.28/24 (GW: 192.168.0.254), and started the installation. After waiting a while, a message stating that the installation was successful is displayed, along with "Nutanix CVM IP: 192.168.0.28". However, only AHV can communicate with this CVM; it cannot communicate with other devices on the same subnet. If you che
We are moving from a traditional model with shared storage to Nutanix converged infrastructure, and I want to know the options for VM migration. I want the best solution for migrating my VMs without downtime.

Our current traditional environment:
- 4-node cluster (2 clusters)
- FC storage array
- ESXi 5.5
- vCenter 5.5
- 10 Gbps network
- vDS (Distributed Switch)

Our new Nutanix model:
- 4-node Nutanix cluster (2 clusters)
- NFS local storage
- ESXi 6.0
- vCenter 6.0
- 10 Gbps network
- NSX

Thanks for the suggestions.
Hello guys, I got a new environment to take care of and need some help here. The setup is Nutanix, 3 nodes, with a Hyper-V cluster. There is a requirement to use Veeam B&R as the backup solution. All Veeam components have to be virtualized, i.e. backup server, proxy server, repository, and tape server if required. The backup destination is a tape library that can be connected via FC. The documents I read about "Veeam on Nutanix with Hyper-V" mention a hybrid scenario, where a physical repository server can be used to connect the tape library. In my case there is no physical server available. What are the possible ways to get this done? 1. Is it mandatory to use a physical server to connect to the tape library? 2. Can an HBA installed in an NTNX node be used by a VM on Hyper-V (pass-through)? 3. Can I designate the Hyper-V parent OS as Veeam proxy and tape server and use the HBA directly from the parent OS? What about compatibility/support for these configurations? If you guys have any other way to get this running, pleas
Dear all, I am trying to build Nutanix CE in a nested ESXi environment. I downloaded and prepared the latest version, ce-2018.05.01-stable.img.gz. The ESXi on which the AHV is built is 6.0. First we started with a single-node cluster; we plan to grow it into a 3-node cluster if it works. I assigned 192.168.0.27/24 to AHV (GW: 192.168.0.254), set up the CVM as 192.168.0.28/24 (GW: 192.168.0.254), and started the installation. After waiting a while, a message stating that the installation was successful is displayed, along with "Nutanix CVM IP: 192.168.0.28". However, only AHV can communicate with this CVM; it cannot communicate with other devices on the same subnet. If I check the ARP table of another Linux machine on the same subnet, the correct MAC address of the CVM is registered there, but the MAC address of that Linux machine is not registered in the CVM's ARP table. Only the MAC address of AHV is in the CVM's ARP table. The CVM cannot communic
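The one-way ARP symptom described above (the CVM's MAC is visible to others, but the CVM never learns its peers' MACs) often points at the nested hypervisor's virtual switch dropping inbound frames. Assuming the usual nested-ESXi layout, where the CVM sits behind AHV's bridge inside an ESXi port group, a sketch of things to check (the interface name is an assumption; confirm with `ip link`):

```shell
# On the nested ESXi host, the vSwitch/port group security policy must
# accept Promiscuous Mode, MAC Address Changes, and Forged Transmits;
# otherwise frames destined for the CVM's "inner" MAC are silently dropped.

# From the CVM, watch whether ARP replies ever arrive while pinging a peer:
sudo tcpdump -n -i eth0 arp

# Inspect the CVM's ARP/neighbor table during the test:
ip neigh show
```

If `tcpdump` shows the CVM's ARP requests going out but no replies coming back, the drop is happening in the outer vSwitch rather than inside the CVM.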
I currently have 2 x NX-1450, both running NOS 4.0.1, and I'm using Hyper-V. Everything is linked up at 10 Gb, and I'm using a physical Veeam server outside of Nutanix, connected at 4 Gb, to back up both clusters. We are currently suffering from bad backup performance, say around a 30-40 Mb/s processing rate for one VM on a node that isn't doing anything at all. If I perform the same tests on an NX-3360, I can easily get up to a 160-180 MB/s processing rate, which kind of rules out my network/configuration. Is anyone out there running an NX-1050 with VMware or Hyper-V and Veeam who can share their configuration/processing rates?
We are running AOS version 184.108.40.206 and planning to upgrade it. When I checked the upgrade path, the maximum version I can upgrade to is 5.6.2. I need to know: if I want to upgrade to the latest version, which is 220.127.116.11, do I have to follow the path [b]18.104.22.168 >> 5.6.2 >> 5.9.2 >> 22.214.171.124[/b], basically a 3-step procedure, or can I only upgrade to 5.6.2? Please share your views.
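When a direct jump isn't supported, a multi-hop upgrade path like the one asked about above is effectively a shortest walk through a graph of supported hops. A small illustration of that idea (the hop table below is made up for demonstration and is not Nutanix's actual compatibility data):

```python
from collections import deque

def upgrade_path(supported_hops, start, target):
    """Breadth-first search for the shortest chain of supported upgrades."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in supported_hops.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no supported path exists

# Hypothetical hop table: each version maps to versions it can reach directly.
hops = {
    "5.1": ["5.6.2"],
    "5.6.2": ["5.9.2"],
    "5.9.2": ["5.10"],
}
print(upgrade_path(hops, "5.1", "5.10"))  # ['5.1', '5.6.2', '5.9.2', '5.10']
```

The real hop table must come from the vendor's published upgrade-path tool; the point of the sketch is only that each intermediate release (5.6.2, then 5.9.2 in the post above) is a mandatory stop when no direct edge exists.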