Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
AOS is 5.10.8. We need to install NGT on Windows Server 2016 with only the VSS feature enabled and not SSR. However, after installation, even though the SSR feature is not enabled, there is an SSR icon on the desktop. It can be opened and brings us to “http://localhost:5000”. I understand that the feature is not active, but I would like to avoid having it displayed on the desktop. How can I avoid having this icon on the desktop? (I'm looking for another way than deleting the icon manually after each NGT installation.) And what about this local web server? Could it be stopped?
I have two upgrade options showing in my Nutanix clusters. One is for a 5.15 release and the other is for a 5.16 release. I'm currently on a 22.214.171.124 release and generally keep my clusters updated to the latest release possible. Normally I'd go to the 5.16 release and not ask this question, but I heard from someone that 5.16 might have some problems in it. Would I be safe upgrading to the 5.16 release, or would it introduce instability into my environment?
Hi, We have a setup in the pre-production stage. The cluster has 3 nodes, each with 2 x 800 GB disks and 2 x 6 TB disks. We removed a single disk from one host and, after 10 minutes, reinserted the disk in the same slot. Now the cluster is showing 11 disks and is not allowing us to format the reinserted disk. How do we add this disk back to the current cluster? Thanks in advance. Vivek
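A minimal sketch of how one could confirm the disk state from a CVM before re-adding it; the disk serial below is a placeholder:
[code]
# List the physical disks the CVM can see
list_disks

# Show the disk entries the cluster still tracks and look for the stale/removed one
ncli disk list | grep -B2 -A8 "<disk-serial>"
[/code]
If the disk shows up as detached or unmounted, the usual path is the Repartition and Add action on the Prism Hardware > Diagram page rather than formatting it by hand.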
I have a cluster running on an NX-3060-G5 with XenServer hypervisor that I want to completely rebuild on Acropolis. I’m looking for documentation on how to do this. Our original cluster was built by a Nutanix engineer onsite and I recall he had to install the foundation software on another machine, but I don’t recall all of the steps he took. Can someone point me in the right direction?
We are in the process of identifying datacenter requirements for our Nutanix installation. One statement I recently heard was that the 1 gig ports on the nodes were primarily there to address security concerns and that the 10 gig ports would satisfy most customers. Is this true? Are there any other downsides to not using the 1 gig ports?
I need to import a RedHat 4.4 VM with IDE disks (RH 4.4 doesn't support virtio) from KVM (CentOS 6.6) into Acropolis 4.5. I have created a new VM from acli with an IDE disk:
acli vm.disk_create TEST_RH44 create_size=80G container=cnt1 bus=ide
acli vm.nic_create TEST_RH44 network=vlan.0
Now I have to import the source VM disk image (ASdisk01.img). How do I import it as an IDE disk on the new VM on Nutanix?
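One hedged approach, assuming the image service is available on this AOS release and that the source image is reachable over NFS from the cluster (the host, export path and image name below are placeholders):
[code]
# Stage the source disk image in the image service (raw/qcow2 sources are accepted)
acli image.create RH44_disk source_url=nfs://<kvm-host>/<export>/ASdisk01.img container=cnt1 image_type=kDiskImage

# Attach a clone of that image to the VM on the IDE bus
acli vm.disk_create TEST_RH44 clone_from_image=RH44_disk bus=ide
[/code]
Option names can differ between AOS versions, so it is worth checking acli image.create help and acli vm.disk_create help on the cluster first.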
Hi there, I'm facing a problem with Prism displaying a weird chart. As you can see above, the chart shows no value (blank) every 6 minutes. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/685iB57C026A9AF18DE6.png[/img] When I go to the Analysis page, the blanks are still present and there is some strange movement in the Memory line: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/686i56E0C99ADEDE0F4A.png[/img] Do you know where this could come from?
Hello, We are running out of network ports in our datacenter, so we are thinking of reclaiming the existing Nutanix IPMI network cables and using KVM dongles instead. I want to check with everyone here to make sure there are no functional issues with losing IPMI connectivity, or does Nutanix need IPMI connectivity all the time? Let me know your recommendations.
For a Nutanix cluster to be able to install a newly released hypervisor, the information about it must already be in the iso_whitelist file. This file can be downloaded from the Nutanix portal (search for ESXi 6.7 U3) and is required when installing/upgrading the hypervisor using Prism one-click. ESXi 6.7 U3 (build number 14320388) has been out since Aug 20, 2019. It is Nutanix policy to support newly released hypervisors within 90 days after they are released by the hypervisor vendors. As of this writing (Nov 18, 2019), Nutanix has released the iso_whitelist file for this release and supports installation of this build on Nutanix clusters running 5.10.6 and above. Below is a public KB regarding support for this release: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000CsgLCAS
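A quick, hedged way to sanity-check that the whitelist you downloaded actually mentions the build you intend to install, assuming the file is the JSON iso_whitelist from the portal:
[code]
# Confirm the ESXi 6.7 U3 build number appears in the downloaded whitelist
grep -c "14320388" iso_whitelist.json && echo "build is listed in the whitelist"
[/code]
The whitelist is then uploaded in Prism via the Upgrade Software > Hypervisor dialog before the one-click upgrade is attempted.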
My cluster was looking very nice in Prism (as usual) yesterday, but I happened to take a look at my NCC summary output and found that Genesis wasn't running on one of my CVMs. I was able to issue a 'genesis start' command and all was good again, but I'm kind of curious whether I should expect the Genesis service status to be reported up to Prism. I gather that, without Genesis running, the CVM won't play in any of the cluster reindeer games, so its running status seems kind of important to me. Am I wrong in this assumption? And if not, is there some way that I can get an alarm should this happen again (other than running/reviewing NCC)? Thanks in advance.
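For anyone else watching for the same thing, a hedged way to spot this from the CLI without a full NCC run; run from any CVM, and note that output formats vary a little between AOS versions:
[code]
# Print service state for every CVM and flag anything reported as DOWN
cluster status | grep -iE "CVM:|down"

# Or ask genesis on each node directly
allssh 'genesis status | head -5'
[/code]
Scheduling ncc health_checks run_all (Prism supports periodic NCC runs with emailed output) is one way to get notified without watching the CLI.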
Hi, We have recently upgraded our Nutanix cluster from 4.0.3 to 126.96.36.199 and are noticing slower logons to the Prism interface. Before the upgrade this was almost instant while now it might take up to 10 seconds. Has anyone had the same experience?
Hi folks! My client asked me a quite tough question today: since we are bound to Cisco Nexus for physical switches, we dug into a low-latency Nexus with SFP+ connections. It seems that the 3064-X is a good choice, but as far as I know, Cisco SFP+ compatibility with 3rd-party optics is hazardous. Two questions: Are there any hardware restrictions with the Nutanix Supermicro hardware apart from the SFP+ cable (Molex 074752 series, as far as I know)? Is there a compatibility matrix at this level of granularity somewhere? Best regards,
I'm putting together a design for a 3-node cluster with the 6000 series nodes. The first block is a 6235-G4; for the second block, I'm a bit confused. I could get another identical block with 1 node (6135-G4), but there is also the option of getting a 6135C, which is single socket, with less RAM, a smaller SSD and a weaker CPU. It's cheaper and will make the minimum 3-node cluster, but I'm wondering what kind of performance hit it will have on the overall solution. Am I better off getting a 6135-G4 and being done with it?
Hey guys, We are busy building a Nutanix 188.8.131.52 infrastructure with vSphere 5.5. We have six nodes in two blocks, and the design consists of 2 Nutanix clusters, each with its own vSphere cluster. HA is enabled on each cluster. We are getting an error on both Nutanix clusters in the Prism UI: [i]Virtual Machine auto start is disabled on the hypervisor of Controller VM[/i]. Nutanix best practices recommend enabling "Start and Stop Virtual Machines with the system" on each host and moving the Nutanix CVM to Automatic startup. But, according to this VMware KB, the automatic startup feature is disabled when moving an ESXi host into an HA-enabled cluster, just like the error message says. Is this expected behavior? Can this error message be disabled, since it is just not valid in combination with VMware HA-enabled clusters? Or is this still a configuration error?
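If it helps to compare notes, a hedged way to see what the host itself reports for CVM autostart, run from the ESXi shell (the CVM's vmid has to be looked up first; nothing below is specific to this alert):
[code]
# Find the CVM's vmid
vim-cmd vmsvc/getallvms | grep -i ntnx

# Show the host's current autostart sequence and whether the CVM appears in it
vim-cmd hostsvc/autostartmanager/get_autostartseq
[/code]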
Hello, A few of the IPMI ports on a new installation (3K and 6K series) show "no connect" in the BIOS. The ones that are working show "Dedicated LAN." Our networking folks have verified the ports as access ports, and are active with the correct VLAN ID configured. We will be swapping the ports on the switch between the working/non-working IPMI ports to confirm if the issue is on the switch or node side. Could there be some BIOS settings we may have missed? Any thoughts would be greatly appreciated. Thank you.
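In case it helps while you swap ports, a hedged check from the host side once a node is booted; ipmitool is the standard tool, and the raw query at the end is a Supermicro OEM command (an assumption for these boards, so verify against your platform docs):
[code]
# Show the BMC LAN configuration (IP, VLAN, MAC) for channel 1
ipmitool lan print 1

# Supermicro OEM query for the active IPMI LAN interface:
# 00 = dedicated, 01 = onboard/shared, 02 = failover
ipmitool raw 0x30 0x70 0x0c 0
[/code]
If the interface mode is dedicated but the link still shows "no connect", the switch/node swap test you describe is the right next step.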
Hello, In setting up Nutanix we do not allow it on the Internet. How much does this constrain what we can do with it, e.g. Life Cycle Management (LCM)? Although LCM allows us to point to a different source, we will still need to get the modules somehow. Although LCM lets us start an Inventory, it still seems to want to go out and get a manifest file, and then it fails. Can these be downloaded somehow? Where else are we going to find difficulty? Thank you...
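For what it's worth, a hedged sketch of the usual dark-site pattern: download the LCM dark-site bundle from the portal, serve it from an internal web server reachable by the CVMs, then point LCM's "Fetch updates from" URL at that directory (the bundle file name below is a placeholder for whatever the portal currently offers):
[code]
# On an internal web server reachable from the CVMs
mkdir -p /var/www/html/release
tar -xvzf lcm_dark_site_bundle-<version>.tar.gz -C /var/www/html/release

# The LCM source URL then becomes something like:
#   http://<webserver>/release
[/code]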
Hello All, One of our customers decided to have vMotion and ESXi management on different VLANs, and they will split the 10G connections. For this kind of setup, which VLAN should the CVM be in? Do you have any recommendations? Thank you so much...
ESXi 6.5 U2, with the vSwitch configured as 1 active uplink and 1 standby uplink on the Nutanix ESXi hosts. I don't understand why there is a standby adapter, or in which scenarios it is used. Can I use both 10 GB network adapters in active-active mode on a Virtual Standard Switch (VSS)?
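A hedged sketch of how to inspect and change the teaming policy from the ESXi shell, assuming the standard vSwitch is vSwitch0 and the uplinks are vmnic0/vmnic1 (adjust to your host):
[code]
# Show the current failover/teaming policy, including active vs standby uplinks
esxcli network vswitch standard policy failover get -v vSwitch0

# Make both 10G uplinks active (load-balancing policy is left at its current setting)
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic0,vmnic1
[/code]
Whether active-active or active-standby is appropriate depends on the upstream switch configuration, so this is only the mechanics, not a recommendation.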
I'm running Prism Central 184.108.40.206 and have recently upgraded 6 of our clusters to 220.127.116.11. Of those 6 clusters, 3 of them are still showing an Upgrade Status of 'Upgrading' in Prism Central. It's been over a month for one cluster, and I've restarted the Prism Central appliance to no avail. Has anyone else seen this?
Hi, I tried Googling but can't find anything to help me.
Model: NUC7i5DNHE
2x SSD (500 + 250 GB SSD)
1x 16 GB Cruzer Fit USB
The AHV installation works and I create a single-node cluster. After rebooting I get SSH/ping access to AHV and can log in with root.
Problem: the CVM does not answer to ping/web. I tried SSH/ping from AHV but get no answer:
[root@NTNX-eXXXXX-A ~]# ping 10.255.1.11
PING 10.255.1.11 (10.255.1.11) 56(84) bytes of data.
From 10.255.1.10 icmp_seq=1 Destination Host Unreachable
From 10.255.1.10 icmp_seq=2 Destination Host Unreachable
From 10.255.1.10 icmp_seq=3 Destination Host Unreachable
I've tried re-installing multiple times but hit the same issue. Is the problem the 16 GB Cruzer Fit USB? It's the bootable media. The CVM gets the 500 GB SSD; data gets the 250 GB disk.
[root@NTNX-eXXXXX-A ~]# virsh list
 Id    Name    State
----------------------------------------------------
[root@NTNX-eXXXXX-A ~]#
I tried the log locations from other posts but was not able to get any output. Anyone got any ideas?
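A hedged first check from the AHV host, since an empty virsh list suggests the CVM domain never started; the CVM domain name below is whatever virsh actually reports, not a known value:
[code]
# Include powered-off / defined-but-not-running domains
virsh list --all

# If the CVM domain is listed but shut off, try starting it and watch for errors
virsh start <cvm-domain-name>

# libvirt keeps per-domain logs here, which often show why the CVM failed to boot
ls /var/log/libvirt/qemu/
[/code]
If the CVM domain is missing entirely, the install likely failed at the CVM deployment stage, which on Community Edition installs often points at unsupported or slow boot media.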