Hello everybody, let's assume we have one Nutanix block with 3 nodes, and each node has only 1 CPU socket with 10 cores. A VM can only run on one node. So, in this case, wouldn't it make more sense to configure a VM with 4 CPUs as 1 vCPU with 4 cores, rather than 4 vCPUs with 1 core each, since each node has only 1 physical socket? Or does Nutanix always recommend using 1 core per vCPU, no matter how many CPU sockets are available in a node? Best regards, Didi7
Hello, I'm not new to Nutanix, but I haven't focused on the installation steps for a long time. When I imaged nodes a year ago, the method was Foundation + Phoenix (with a NOS location selection) + a hypervisor image, and I believe Phoenix was effectively the same thing as NOS at that time. Now there seems to be a new way to image a node: Foundation + AOS (location selection) + hypervisor image, or even a Java applet + Foundation + AOS (location selection) + hypervisor image. I'd like to confirm the exact difference between NOS, AOS, and Phoenix. I have asked around, but no one can explain this clearly to me. What also confuses me is that the various guides make contradictory statements about it. Also, which Phoenix package should I use? One guide says you need to manually generate a Phoenix ISO from AOS, but another says you should use the Phoenix ISO downloaded from the Nutanix portal site! Look at this excerpt: "Phoenix is a tool used to install the Nutanix Controller VM and provision a hypervisor on a new or rep…
From this tech note - [url=http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf,]http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf,[/url] my impression is that Nutanix thin provisions a VM even if its disks are set to "Thick Provisioned Lazy Zeroed": [i]All Nutanix containers are thin provisioned by default; this is a feature of NDFS. Thin provisioning is a widely accepted technology that has been proven over time by multiple storage vendors, including VMware. As containers are presented by default as NFS datastores to VMware vSphere hosts, all VMs will also be thin provisioned by default. This results in dramatically improved storage capacity utilization without the traditional performance impact. Thick provisioning on a VMDK level is available if required for the limited use cases such as fault tolerance (FT) or highly demanding database and I/O workloads. Thick provisioning can be accomplished by cr…
Hello everyone, my environment: 6 x 3060-G4 nodes; per node: 2 x 480 GB SSD, 4 x 1 TB HDD (how do I find out whether a drive is SAS or SATA?); RF=2.

How is capacity calculated? Prism reports 22.18 TiB max capacity (physical), but I would expect the physical capacity to be 1 TB * 4 * 6 + 480 GB * 2 * 6 = 24 TB + 5760 GB. If RF2 needs two complete copies (like RAID 1), is the usable capacity just half of 24 TB + 5760 GB? And to survive one node failure (N+1), should I also reserve one extra node's worth of capacity (4 TB + 2 x 480 GB)?

I also have a basic question and a guess: suppose I had only 3 nodes (not 6), and one node failed. The lost replicas would be re-replicated to the remaining nodes because of RF, so I would have two copies again. But if a second node also fails later (leaving only one node), will the cluster go down? Is data lost? I think data shouldn't be lost, because one of the two copies still exists (3 nodes, 2 copies -> 2 nodes, 2 copies -> 1 node, 1 copy).
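For what it's worth, the arithmetic in the question can be sketched like this. This is a rough model only: it ignores CVM, filesystem, and metadata overhead (which is likely part of why Prism shows 22.18 TiB rather than the raw figure), and it assumes drive sizes are decimal TB/GB while Prism reports binary TiB:

```python
TB = 1000**4   # decimal terabyte, as drives are marketed
GB = 1000**3
TiB = 1024**4  # binary tebibyte, as Prism reports

nodes = 6
node_raw = 4 * 1 * TB + 2 * 480 * GB   # 4 x 1 TB HDD + 2 x 480 GB SSD per node

raw = nodes * node_raw
print(f"raw capacity:            {raw / TiB:.2f} TiB")   # ~27.07 TiB before any overhead

usable_rf2 = raw / 2                   # RF2 stores two complete copies of every extent
print(f"RF2 usable:              {usable_rf2 / TiB:.2f} TiB")

# To rebuild after one node failure (N+1), keep one node's worth of raw
# capacity free; the remainder, halved for RF2, is what you can safely fill.
safe_rf2 = (raw - node_raw) / 2
print(f"RF2 usable, N+1 reserve: {safe_rf2 / TiB:.2f} TiB")
```

The gap between the ~27 TiB raw figure and Prism's 22.18 TiB is consistent with decimal-vs-binary units plus per-node overhead, but the exact accounting is something Prism (or support) would have to confirm.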
Greetings community members! I am brand new to the Nutanix platform and have completed my basic initial configuration of my nodes on ESXi 5.5U2; however, I am having trouble finding (not through lack of Google-fu) a complete guide to what Nutanix recommends as best practices for configuring hosts for ESXi. This raises some questions, e.g.: Should I add anything to my nodes' Advanced Settings? Should I change the power management Active Policy to High Performance? What is the definitive guide to HA settings with respect to the nodes, VM monitoring, policies therein, etcetera? Not that this is an exhaustive list of questions, but I forgot to ask our Nutanix Pro Services rep before he left... and I could not immediately find what I need, unless I am looking in the wrong place! :$ [i]Edit: Some of what I asked is in the [url=https://portal.nutanix.com/#/page/docs/details?targetId=vSphere_Admin-Acr_v4_5:vSphere_Admin-Acr_v4_5]vSphere Administration Guide for Acropolis[/url] which is very h…
Dear Experts, my Nutanix environment is as follows: 4 blocks of Nutanix (8 nodes) connected to a Nexus access switch with 10G SFP+ cables. Hypervisor: VMware ESXi 5.5. Issue: in vCenter we see hosts keep disconnecting randomly, and vCenter hangs. Workaround attempted: continuous ping to the hosts; result: no packet drops. Can you please advise what needs to be done? Regards, Hamballi
Hi, I would like to ask whether Nutanix supports replacing SSDs/HDDs with drives of a different capacity. The reason I ask is that I want to maintain a certain hot-data ratio as data grows, but without growing compute capacity. My concern is that just adding an NX-6035C would let my data grow, but the SSD tier might not be enough to maintain the required hot-data ratio. Thanks.
Hello all, I am new to Nutanix XCP. One basic question regarding raw device mapping in ESXi on Nutanix infrastructure: is RDM supported on a Nutanix appliance running NOS 4.6? If yes, can you please share the procedure to create an RDM (physical mode)? Thanks, Bhoopathy
Can someone chime in on how licensing would work when using AHV? For example, we currently have Windows 2012 R2 Datacentre edition on each host, which gives us the ability to run unlimited licensed Windows Server VMs. I know that AHV doesn't require a license, but does that mean a license for each Windows virtual server will need to be purchased, hence invalidating our Datacentre licensing?
We have a 4-host Dell bundle. Per host: [list] [*]2 x Xeon CPU E5-2620 v3 @ 2.40GHz [*]256GB RAM [*]4.4 TB of storage, 400GB of which is SSD[/list]We got this output after running the NCC checks:

Node 10.0.0.58: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.55: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.57: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.56: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Refer to KB 1513 for details on cvm_memory_check
Detailed information for ldap_config_check:
Node 10.0.0.56: INFO: No ldap config is specified.
Refer to KB 2997 for details on ldap_config_check
Detailed information for cvm_numa_configuration_check:
Node 10.0.0.58: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.55: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.57: INFO: Number of vCPUs (8) on the CVM is more than the…
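The failing check above is just unit arithmetic: 16433468 kB works out to roughly 15.67 GiB of CVM memory, well under the 24 GB threshold the check enforces. A quick conversion (assuming "kB" in the NCC output means KiB, i.e. 1024 bytes):

```python
cvm_kb = 16_433_468                  # CVM memory as reported by NCC, in KiB
cvm_gib = cvm_kb * 1024 / 1024**3    # KiB -> bytes -> GiB
print(f"CVM memory: {cvm_gib:.2f} GiB")  # ~15.67 GiB, below the 24 GB threshold
```

The remedy implied by the check is to increase each CVM's memory allocation; KB 1513, referenced in the output, covers the details.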
Just implementing a new cluster: I've run Foundation with NOS 220.127.116.11 and ESXi 6. The ESXi hosts have not yet been connected to vCenter. One of the post-imaging steps is to set the correct time zone for the cluster, and it says that each CVM needs to be shut down serially. Is this a simple case of connecting to each ESXi host, shutting down its CVM, powering it back on, connecting to the CVM via http://cvmip to check that it is up, and then moving on to the next ESXi host and repeating? Thanks, Chris
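The "check it is up before moving on" step described above could be scripted along these lines. This is only a sketch, not an official Nutanix procedure: the IPs are placeholders, the actual shutdown/power-on still happens on each ESXi host, and it assumes a CVM that answers HTTP on its management IP has finished booting:

```python
import time
import urllib.request

# Hypothetical CVM IPs -- substitute the addresses Foundation assigned.
CVM_IPS = ["10.0.0.30", "10.0.0.31", "10.0.0.32"]

def cvm_is_up(ip: str, timeout: float = 5.0) -> bool:
    """True if the CVM answers HTTP, i.e. http://cvmip is reachable again."""
    try:
        urllib.request.urlopen(f"http://{ip}", timeout=timeout)
        return True
    except OSError:
        return False

def wait_for_cvm(ip: str, retries: int = 60, delay: float = 10.0,
                 timeout: float = 5.0) -> bool:
    """Poll the CVM until it responds, or give up after `retries` attempts."""
    for _ in range(retries):
        if cvm_is_up(ip, timeout):
            return True
        time.sleep(delay)
    return False

# For each ESXi host in turn:
#   1. gracefully shut down its CVM,
#   2. power the CVM back on,
#   3. call wait_for_cvm(ip) and only proceed to the next host on success.
```

The important part is the serialization: never touch the next host until the previous CVM is confirmed back, so the cluster never has more than one CVM down at a time.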
Hello, I am thinking about Citrix PVS on Nutanix all-flash blocks with the Acropolis Hypervisor. But there is no official support for AHV, which means I would have to run VMware or Hyper-V on my Nutanix cluster to use PVS. -> [url=http://support.citrix.com/article/CTX202032]http://support.citrix.com/article/CTX202032[/url] Is there any roadmap for this?
When trying to upgrade NOS or NCC I'm getting this error: "Error while executing HTTP REST request: Could not connect to Genesis". The updates have already downloaded, and this occurs even if I pick a lower version to upgrade to. I'm going from 4.1.4 to 18.104.22.168 (NOS) and from 2.01 to 2.1.4 (NCC). Any ideas? Is there a service I should restart, or something? Thanks! Mike
I need to change the IPMI IP address on 2 nodes of a 4-node Dell XC block. I know how to change the IP from the iDRAC console; however, how do I get Nutanix to pick up the new address on the node? I read through the admin manual, but I could not find anything related to changing the IPMI IP address. Thanks, David
What is the best approach to migrate data to a Nutanix Hyper-V failover cluster from a regular Hyper-V failover cluster? I used export from the old host (whitelisted in Prism) and it succeeds fine, but since the data is exported to the cluster FQDN, how do I know which node to import the files on to make best use of data locality? Is there a way to have the VM exported to a specific node?
Hi, is it possible to assign an additional IP to a Nutanix cluster? The scenario is as follows: 1. A class A network (10.x.x.x) is used for CVM and hypervisor communication, set up during configuration. 2. The cluster virtual IP is also from the 10.x.x.x range. 3. An additional IP range (172.x.x.x) has been added to the CVMs and hypervisors. 4. Is it possible to also add an additional cluster virtual IP from the 172.x.x.x range? Thanks
Hi, which 10G SFP+ LR (single-mode) modules are supported in Nutanix NX-8035-G5 and NX-6035C-G5 nodes with C-NIC-10G-2-SI 10G network interface cards? Do the SFP+ modules need special branding, as they do e.g. for use with Cisco switches? Thanks, Erik