Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
I have been getting the error below when I select Image Configuration. When I run image.list in acli, I can see the images:

image.list
Image name      Image type   Image UUID
Nuatnix Xtract  kDiskImage   f3b18c26-e461-44af-b3e1-bc295078fe40
Virto-Windows   kIsoImage    ffcbbf1d-7380-4803-af4c-fd16fa6795f3
Windows_2016    kIsoImage    15ec4036-703b-471d-9887-3cb6a569426e

Yet Prism reports: "Error occurred while getting image list"
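If Prism disagrees with acli, a quick cross-check is to ask the Prism gateway for the image list directly over the v2 REST API. A minimal sketch, assuming an AOS version that serves the v2.0 endpoint; <cvm-ip> is a placeholder for any CVM or the cluster VIP:

[code]
# Query the image list straight from the Prism gateway (prompts for the admin password)
curl -k -u admin "https://<cvm-ip>:9440/PrismGateway/services/rest/v2.0/images/"
[/code]

If this call errors while acli succeeds, the problem is likely in the Prism layer rather than the image service itself.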
Dear all, I have a Nutanix NX-1350 cluster, and when I try to upgrade its Foundation I get the error below, even though the attached screenshot suggests otherwise. Please advise.

"Could not find any node with foundation version < foundation-3.9-d31ad270" failed

(screenshot: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/a9f53265-1638-4e4a-9c35-80ad03f0a470.png)
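The upgrade pre-check compares the Foundation version on every node, so it can help to see what each CVM actually reports. A minimal sketch, assuming the standard CVM layout where Foundation records its version in a plain-text file:

[code]
# Print the Foundation version from every CVM in the cluster
allssh "cat /home/nutanix/foundation/foundation_version"
[/code]

A node already at or above foundation-3.9-d31ad270 would explain the "Could not find any node with foundation version <" message.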
Timeline: purchased 7/15/2016; set up/installed 10/24/2016 (by a Nutanix tech); office move 4/1/2017. (In late April there was a power outage in the San Francisco financial district; I was on the bus on the way to work when the power went out and never got to shut down the Nutanix cluster properly. The next morning I powered up the cluster and it ran fine. Awesome! No issues since then.)

Hardware: 2x NX-8235-G5, 1x NX-8135-G5
NOS: 4.7.2, NCC: 2.3.0, BIOS: 20160516, BMC: 03.28
Hypervisor: VMware vCenter Server 5.5 Update 2e (vSphere on Windows Server 2008 R2)

I need to upgrade vSphere from 5.5 U2e to whatever the latest 6.5 version is, and Nutanix NOS from 4.7.2 to 5.5.8. Do I:
1 - upgrade VMware to 6.5, then upgrade Nutanix to 5.5.8, or
2 - upgrade Nutanix to 5.5.8, then upgrade VMware to 6.5, or
3 - upgrade VMware to a compatible intermediate version, then upgrade Nutanix to a compatible intermediate version, then upgrade VMware to 6.5 and Nutanix to 5.5.8, or
4 - upgrade Nutanix to a compatible intermediate version, then upgrade VMware to a compatible intermediate version, then upgrade Nutanix to 5.5.8 and VMware to 6.5?
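Whichever order is chosen, it is worth confirming the running versions from a CVM before consulting the compatibility matrix. A small sketch using standard CVM commands:

[code]
# Confirm current cluster software versions before planning the upgrade path
ncli cluster info   # includes the running NOS/AOS version
ncc --version       # NCC version, useful for pre-upgrade health checks
[/code]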
Hi, I need some advice. I will buy an NX-3360-G5 with 3 nodes. 1) If one node fails (powers off), do we still have two nodes for data writes? 2) If one node fails, what happens exactly? 3) Do node 1's services fail over to the other nodes? Is that possible? I think it would result in a Zookeeper failure, because the quorum rule would no longer be satisfied.
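For the quorum concern in (3), the arithmetic is worth spelling out; this is generic Zookeeper behavior, sketched as a comment:

[code]
# Zookeeper quorum on a 3-node cluster:
#   ensemble size = 3, quorum = floor(3/2) + 1 = 2
# With one node powered off, 2 ZK members remain, so quorum still holds;
# the cluster keeps serving I/O and re-protects data on the surviving nodes.
[/code]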
Hi Team, I am new to Nutanix and have been asked to set up and configure a 2-node Nutanix cluster in our environment; the OS will be ESXi. Please advise step by step: what prerequisites do I need, and how many IPs do I need to connect the Nutanix box to the necessary devices? Model: NX-6235C-G5. Many thanks in advance, Ansar Ali
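As a rough starting point for the IP question: each node typically needs one IPMI, one hypervisor and one CVM address, plus a cluster virtual IP. A sketch with purely illustrative addresses:

[code]
# Hypothetical IP plan for a 2-node block (all addresses are examples only):
#   per node: 1x IPMI + 1x ESXi host + 1x CVM  ->  2 x 3 = 6 IPs
#   plus 1 cluster virtual IP (and optionally 1 data-services IP)
#   IPMI:  10.0.0.11-12
#   ESXi:  10.0.0.21-22
#   CVM:   10.0.0.31-32
#   VIP:   10.0.0.40
[/code]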
Impact: If the hostd service is not running, the AOS upgrade will fail.
Cause: The VMware hostd service may not be running.
Resolution: Check hostd status and restart it manually.
Node Id: ZM164S021332
Block Id: 16SM13330093
Block Type: NX-1065S
Cluster Id: 57606
Cluster Uuid: 00053fb4-2744-267d-0000-00000000e106
Cluster Name: NTX-DETACOOP01
Cluster Version: el7.3-release-euphrates-22.214.171.124-stable-266039dc19600d27bdc4b7134d02d1c7d267cea6
Cluster Ips: 192.168.39.13 192.168.39.14 192.168.39.15
Timestamp: Thu Aug 16 7:29:06 -03 2018
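The resolution step maps to two standard ESXi commands, run over SSH on the affected host; a minimal sketch:

[code]
# Check whether hostd is running, then restart it if needed (ESXi shell)
/etc/init.d/hostd status
/etc/init.d/hostd restart
[/code]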
Would I be able to create storage containers in my cluster for the purpose of "pinning" VMs only to hosts assigned to those containers, restricting VMs to the hosts that belong to the container? The need is to control VM migration during software upgrades because of multicasting issues with the hosts.
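Containers are a storage construct and are not tied to hosts, so they cannot pin VMs. If the cluster is AHV, VM-host affinity does what is being described; a hedged sketch with hypothetical VM and host names:

[code]
# Pin a VM to a set of AHV hosts (the VM name and host IPs are placeholders)
acli vm.affinity_set my-vm host_list=10.0.0.1,10.0.0.2
[/code]

On ESXi the equivalent is a DRS VM-to-host affinity rule configured in vCenter.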
A rather simple question :) Once the NFS datastore has been presented to the nodes, each node shows both the NFS datastore and its local disk as available storage. My question is, how do others deal with this scenario? What is to stop an admin from storing data on the local datastore instead of the NFS datastore, which is where you want them to store their VMs? Curious how others have handled this. Thanks, Sky
We have a scenario in which we are running a VMware vSphere environment on existing servers and planning to deploy a DR site. Is it possible to deploy a Nutanix AHV environment at the DR site and somehow replicate from the existing ESXi environment at the production site to the Nutanix DR site? We currently use Veeam Backup & Replication for VMware for backups. How feasible is such a requirement?
Hello, we would like to validate some design aspects. Our environment is based on Dell XC630 and XC730, Nutanix 4.6.4 (4.7 in the future) and ESXi 6.0 U2. From an administration perspective, we need to assign an administration-subnet IP to the IPMI interface. The CVM needs to be in a production subnet, and there is no routing from the production subnet to the administration subnet. Is it possible to deploy two network cards on the CVM so the CVM can communicate with the IPMI interfaces? Is that a validated and supported design by Nutanix? Thanks in advance for your answers.
Hi, I upgraded the BMC firmware from 03.24 to 03.40 via the IPMI UI today. The IPMI UI page confirms version 03.40 on all 3 nodes I upgraded, but Prism still does not show BMC firmware 03.40 on those 3 nodes even though the upgrade completed. I am on AOS 126.96.36.199 with AHV Nutanix 20160601.44. (screenshot: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2107iA6B3D2779D7382C9.png)

Additionally, running the NCC health check function fails with the messages below, although all 4 nodes have Intel CPUs in them:

Detailed information for bmc_bios_version_check:
Node 192.168.x.204 FAIL: No Intel CPU is found on the node.
Node 192.168.x.205 FAIL: No Intel CPU is found on the node.
Node 192.168.x.206 FAIL: No Intel CPU is found on the node.
Node 192.168.x.207 FAIL: No Intel CPU is found on the node.
Refer to KB 3565 (http://portal.nutanix.com/kb/3565) for details.
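When Prism and the IPMI UI disagree, querying the BMC directly from each host settles which firmware is actually running. A minimal sketch using the standard ipmitool utility:

[code]
# Ask the local BMC for its firmware revision (run on each host)
ipmitool mc info | grep -i "firmware"
[/code]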
We are moving from a traditional model with shared storage to Nutanix converged infrastructure, and I want options for VM migration. I want the best solution for migrating my VMs without downtime.

Our current traditional setup:
- 4-node cluster (2 clusters)
- FC storage array
- ESXi 5.5
- vCenter 5.5
- 10Gbps network
- vDS (Distributed Switch)

Our new Nutanix model:
- 4-node Nutanix cluster (2 clusters)
- NFS local storage
- ESXi 6.0
- vCenter 6.0
- 10Gbps network
- NSX

Thanks for the suggestions.
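One downtime-free path is to mount the Nutanix container as an NFS datastore on the legacy ESXi hosts and then storage-vMotion the VMs across. A hedged sketch; <cluster-vip>, <container-name> and the subnet are placeholders for your values:

[code]
# From a CVM: whitelist the legacy ESXi hosts so they may mount the container
ncli cluster add-to-nfs-whitelist ip-subnet-masks="10.0.0.0/255.255.255.0"

# On each legacy host: mount the Nutanix container as an NFS datastore
esxcli storage nfs add -H <cluster-vip> -s /<container-name> -v NTNX-datastore
[/code]

With the datastore visible on both sides, VMs can be moved with vMotion plus Storage vMotion and no outage.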
I have an 8-port 10GBase-T 10G switch and four nodes, each with dual 10G and dual 1G NICs. I also have a 50-port 1G switch. If I plug all 8 10G NICs into the 10G switch I will not have an uplink port left; however, I still have the 8 1G NICs. So this leaves me with a couple of options, and I was curious what best practice would be. A few options I have considered:
1. Use only one of the two 10G ports per node.
2. Configure AHV to use 10G for I/O only and 1G for connecting to the outside world (not sure if this is even doable; see the sketch below).
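Option 2 is doable on AHV by splitting the NICs across bridges. A sketch run from each node's CVM, assuming the default br0 bridge and that a second bridge (br1) is wanted for the 1G NICs:

[code]
# Show the current uplink layout
manage_ovs show_uplinks

# Keep only the 10G NICs on the default bridge (VM and storage traffic)
manage_ovs --bridge_name br0 --interfaces 10g update_uplinks

# Create a second bridge and give it the 1G NICs (e.g. management traffic)
manage_ovs --bridge_name br1 create_single_bridge
manage_ovs --bridge_name br1 --interfaces 1g update_uplinks
[/code]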
Just wondering the following: everywhere you look, Nutanix configurations show 2 or 4 SSDs with the rest being HDDs. With all-flash nodes now available on every model, is there anything stopping us from having, say, 6 SSDs and 4 HDDs, or 5 of each in a node supporting 10 disks? Does the Pro license also enable us to pin a VM, or part of it, to SSD?
Hi All, I upgraded our single-node cluster last night, and it went pretty smoothly. However, today I am noticing placeholder text (e.g. "vm_critical_alerts" instead of "Critical Alerts") and many, many fields coming back simply as "undefined". Is this a known issue, or did something go awry during the upgrade? (screenshot: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2304iCFA531E337844718.png) SSP is also not working, but I'm active in another thread for that. Thanks! Tim
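Placeholder labels like these often come from stale UI assets. After ruling out the browser cache with a hard refresh, a common non-disruptive step is restarting the Prism service across the CVMs; a minimal sketch:

[code]
# Restart the Prism service on every CVM (does not affect running VMs)
allssh "genesis stop prism"
cluster start
[/code]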
We recently had multiple Nutanix blocks installed and I have started configuring them for Active Directory authentication; however, logging on with AD accounts is super slow and takes several minutes. I have configured the authentication against an IP address, FQDN and domain name, but all are still unacceptably slow:
ldap://192.168.1.1:389
ldap://server.domain.org:389
ldap://domain.org:389
For the Prism role mapping I have configured AD groups and single users, and the logon is still super slow. There was a post about turning recursive authentication off; however, no NCLI command string was given. Is anyone else experiencing this issue? I would like to know the best practice for configuring AD authentication. Thanks for any assistance, David
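For the recursive-search setting, ncli does expose directory options, though flag names vary by AOS version; the sketch below is hypothetical, so confirm the exact syntax with `ncli authconfig` help on your cluster:

[code]
# Hypothetical: switch the directory to non-recursive group search
# (<directory-name> is the name shown under Authentication in Prism)
ncli authconfig edit-directory name=<directory-name> group-search-type=NON_RECURSIVE
[/code]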
Hey all! Just got my 3-node monster Nutanix box up and running and LOVE it! Quick theoretical question, though: I blew through my budget on this box, and would LOVE to have a DR setup somewhere else in the building (to combat fire, flood, earthquake, etc.). Could I take some spare hardware lying around and, as long as it meets the required specs, install the CE version and make that the DR target for my existing paid-for version of Nutanix? I've gotten mixed answers from various people, which is why I ask. Thanks in advance!
Hi all, we have a small 3-node DEV lab to test features before moving forward. One feature in 5.0 we are looking forward to is the Self-Service Portal. After upgrading to 5.0 I selected SSP, which then asks for an LDAP user. Although this user authenticates correctly from Prism > Authentication > Test, when entering the same user in the credentials for SSP I get the following error:

Server with access directory url: ldaps://****** is down

Has anyone seen this? Is LDAPS supported? As mentioned, it works for Prism authentication. This cluster is not under support; we are looking at a dev support option for it, but we do have many other clusters. Thanks, Jason
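Before digging into SSP itself, it is worth verifying that the LDAPS endpoint is reachable from a CVM and presents a full certificate chain, since SSP can be stricter about certificates than Prism. A minimal sketch; <dc-fqdn> is a placeholder for the domain controller:

[code]
# Check LDAPS reachability and inspect the certificate chain from a CVM
openssl s_client -connect <dc-fqdn>:636 -showcerts </dev/null
[/code]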
Hi All, I am having my first foray into Nutanix, using 3 x Dell XC servers and 2 x Dell X4012 switches (stacked); the ToR switches are stacked Dell N3024s. I come from a background of VMware on blades and SANs, so I am a little confused about the process for setting up the external networking. What VLANs etc. do I need on the switches for Nutanix management, vMotion, storage and so on? Are there any best-practice documents, or has someone completed a similar task and documented the procedure? Any hints, tips or configs gratefully received. Thanks, Eric
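As a rough orientation, the VLAN layout is simpler than a SAN design because storage traffic is just CVM traffic. A sketch with placeholder VLAN IDs, not a Nutanix mandate:

[code]
# Example VLAN plan (IDs are illustrative):
#   VLAN 10 - management: ESXi vmkernel + CVMs (keep hosts and CVMs on the same subnet)
#   VLAN 20 - vMotion vmkernel
#   VLAN 30 - VM guest traffic
# No separate storage VLAN is required: storage I/O and CVM-to-CVM replication
# ride the management VLAN over the 10GbE uplinks, so trunk these VLANs to every node.
[/code]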
Hi All, thanks for caring. My name is Rizki; you can call me R.P. Here is my problem: I am struggling to install/reimage an NX-1065-G5 block. My components for reimaging are AOS 188.8.131.52, Foundation 3.6 and hypervisor AHV-20160601.44. From what I can see in the compatibility matrix these components are compatible, but I get stuck at 12% of imaging, at "preparing ... installer". Can anybody advise me, or does anyone have a solution? Thanks, R.P
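When imaging stalls at a fixed percentage, the per-node logs on the Foundation VM usually show the step it is stuck on. A sketch, assuming the default Foundation log location:

[code]
# Follow the per-node imaging logs on the Foundation VM
tail -f /home/nutanix/foundation/log/node_*.log
[/code]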
Hello again, CE community! I'm running a Dell R730 with these specs: 2 sockets, 8 cores per socket, 256 GB RAM. I'm running ESXi to host a 4-node cluster and want to parcel the CPU and memory evenly among the 4 CVMs. Q1: Based on this, what should my CVM CPU/RAM settings be? Q2: When I allocate vCPU and RAM for VMs, how do I translate a bare-metal config spec (e.g. 4x WS2016 with 8 CPUs and 32 GB RAM) to Nutanix VM specs, given these constraints? I just need an example to shine light on how to break down physical > ESXi > Nutanix CVM > VM specs. Thanks in advance, Brit
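A worked example of one way to slice the host; the numbers are illustrative, not an official sizing rule:

[code]
# Host: 2 sockets x 8 cores = 16 cores (32 threads), 256 GB RAM
#   4 nested CE nodes -> roughly 8 vCPUs and ~60 GB RAM each (leave ESXi headroom)
#   inside each node the CVM takes its slice (CE commonly 4+ vCPUs, 16-32 GB RAM);
#   whatever remains per node is available for guest VMs
# Translating a bare-metal spec is 1:1 at the guest level: a "WS2016, 8 CPU, 32 GB"
# server becomes a VM with 8 vCPUs and 32 GB RAM; oversubscribe CPU cautiously,
# and never count on RAM oversubscription for the CVMs.
[/code]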