Hi, is it possible to assign an additional IP to a Nutanix cluster? The scenario is as follows:
1. We have a class A network, 10.x.x.x, which is used for CVM and hypervisor communication, set up during configuration.
2. The cluster virtual IP is also from the 10.x.x.x range.
3. We added an additional IP range, 172.x.x.x, to the CVMs and hypervisors.
4. Is it possible to add an additional IP from the 172.x.x.x range to the cluster virtual IP?
Thanks
Hello everybody, let's assume we have one Nutanix block with 3 nodes. Each node has only 1 CPU socket with 10 cores. A VM can only run on one node. So, in this case, wouldn't it make more sense to give a VM with 4 CPUs 1 vCPU with 4 cores, instead of 4 vCPUs with 1 core each, since each node has only 1 CPU socket? Or does Nutanix always recommend using only 1 core per vCPU, no matter how many CPU sockets are available in a node? Best regards, Didi7
Right now we're about to push ESXi 5.5 U3b so we can move up to AOS 18.104.22.168. Since Nutanix doesn't support pushing out patches, we planned to do that with VUM, which is fine; at least it saves the time of having to bring down each node manually to upgrade to U3b. On top of patching, we're also installing the NFS VAAI VIB. In our lab we do it via the command line, but has anyone found a zip or package that VUM can push out, so we can just package our baseline with the updates and the VIB file? We can't find a zip file or anything else we can leverage so far. Since we have two data centers with a good number of nodes in the cluster at each, we'd love to save time if at all possible, so any advice is greatly appreciated. Thanks!
Just implementing a new cluster, and I've run Foundation with NOS 22.214.171.124 and ESXi 6. The ESXi hosts have not yet been connected to vCenter. One of the post-imaging steps is to set the correct time zone for the cluster. It says that the CVMs need to be shut down serially. Is this a simple case of connecting to each ESXi host, shutting down the CVM, powering it back on, connecting to the CVM via http://cvmip to check that it is up, then moving on to the next ESXi host and repeating? Thanks, Chris
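Since the steps are the same on every host, a scripted walkthrough may help. This is a rough sketch only: the SSH targets, CVM IPs, the cvm_shutdown invocation, and the VM id are all assumptions for illustration, and with DRY_RUN=1 it prints each command instead of running it, so the sequence can be reviewed first:

```shell
#!/bin/sh
# Sketch of a serial CVM restart loop -- one host at a time, waiting
# for the CVM to answer before moving on. All names/IPs are examples.
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# Hypothetical host:CVM pairs -- replace with your own.
HOSTS="esx01:10.0.0.31 esx02:10.0.0.32 esx03:10.0.0.33"

for pair in $HOSTS; do
    host=${pair%%:*}
    cvm=${pair##*:}
    # Shut the CVM down gracefully from inside the CVM.
    run ssh nutanix@"$cvm" "cvm_shutdown -P now"
    # Power it back on from the ESXi host (look up the VM id first
    # with: vim-cmd vmsvc/getallvms).
    run ssh root@"$host" "vim-cmd vmsvc/power.on VMID"
    # Wait until the CVM answers on HTTP before touching the next host.
    run sh -c "until curl -ks http://$cvm >/dev/null; do sleep 10; done"
done
```

The dry-run guard is there deliberately: review the printed sequence against the documented procedure before flipping DRY_RUN to 0.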
What is the best approach to migrate data to a Nutanix Hyper-V failover cluster from a normal Hyper-V failover cluster? I used export from the old host (whitelisted in Prism) and it succeeds fine, but since the data is exported to the cluster FQDN, how do I know which node to use when importing the files to make best use of data locality? Is there a way to have the VM exported to a specific node?
We have a 4-host Dell bundle. Per host: [list] [*]2 x Xeon CPU E5-2620 v3 @ 2.40GHz [*]256GB RAM [*]4.4 TB of storage, 400GB of which is SSD[/list]We got this error after running the NCC checks:

Node 10.0.0.58: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.55: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.57: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.56: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Refer to KB 1513 for details on cvm_memory_check
Detailed information for ldap_config_check:
Node 10.0.0.56: INFO: No ldap config is specified.
Refer to KB 2997 for details on ldap_config_check
Detailed information for cvm_numa_configuration_check:
Node 10.0.0.58: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.55: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.57: INFO: Number of vCPUs (8) on the CVM is more than the
Hi, I am able to create a folder using the NFS whitelist in the Nutanix SMB share, but I am not able to see the option to set permissions on it. Then I came to know that we cannot give permissions like that for a folder in the Nutanix share. Is that true? Currently we have shared a volume from one VM as a share instead. Thanks, Ritchie
Hi, when running NCC on one of our customers' clusters, the following output is generated:

FAIL: Remote site: HQ-NTNX
Number of vstores mapped for the local site on the remote site are not same.
Refer to KB 3335 ([url=http://portal.nutanix.com/kb/3335]http://portal.nutanix.com/kb/3335[/url]) for details on remote_site_config_check, or recheck with: ncc health_checks data_protection_checks remote_site_checks remote_site_config_check

Why is it so important to have the same number of mappings? When we are replicating from site A to site B, we can use different mappings than when replication is executed from site B to site A. Regards, Bart
When trying to upgrade NOS or NCC I'm getting this error: "Error while executing HTTP REST request: Could not connect to Genesis". The updates have already downloaded. This occurs even if I pick a lower version to upgrade to. I'm going from 4.1.4 to 126.96.36.199 (NOS) and 2.01 to 2.1.4 (NCC). Any ideas? Is there a service I should restart or something? Thanks! Mike
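For what it's worth, in similar cases it can be Genesis itself that needs a restart. A sketch of the checks worth trying from any CVM; these are standard AOS commands, but verify them against your version before running anything:

```shell
# Checklist held in a variable so it can be reviewed (and grepped)
# before anything is actually executed on the cluster.
CHECKLIST='
cluster status | grep -vi up       # spot CVMs with services down
allssh genesis status              # Genesis state on every CVM
allssh genesis restart             # restart Genesis cluster-wide
ncli cluster info                  # confirm the REST side responds again
'
printf '%s' "$CHECKLIST"
```

If a single CVM is the culprit, SSH to it and run `genesis restart` there alone rather than cluster-wide.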
The customer I currently support has a pair of Nutanix clusters running on the Dell PowerEdge XC630-10 hardware platform. They have really enjoyed the performance and overall gains (simplicity of the design, reduction in administration/alerts, etc.) since moving to the Nutanix platform. The topic of a server refresh has arisen, and we're beginning to collect all the metrics we will need to analyze to support this effort. After that's done, the planning and designing will begin. In the past, I've used the Nutanix Sizer, but that now seems to be locked away behind a Nutanix/partner login page. Does it still exist? Is there a version that supports the Dell hardware line? Any other workload-sizing resources for the theoretical migration planning would be much appreciated. Thanks in advance!
Hi, I have a standalone ESX 5.1 server with a VM that has multiple 2TB vmdk volumes (screenshot from WinSCP attached below). The VM is Windows 2008 R2. My Nutanix cluster is running 5.01, NCC 188.8.131.52, AHV 20160925.30 (Starter Edition). I am planning to use a Windows 2012 R2 server with WinSCP to get the vmdk files onto a Nutanix cluster container (I have multiple containers) and then use the Image service (via the Chrome browser) to upload/convert from the container. Here are my questions: [list=1] [*]Are there any limitations on the Image service when using the Chrome browser? [*]Any potential issues with these vmdk sizes? [*]What is the syntax for the URL to access the storage container, so I can upload the vmdk without using the UNC path? - Figured this one out: in the Image service use nfs://clusterip/Containername/nameofvmdk-flat.vmdk [*]Since I need to upload multiple vmdk files that are quite large, can/should I open multiple browsers to get simultaneous uploads happening?[/list]
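To make the answer to question 3 concrete, here is a tiny sketch of how the pieces assemble into the Image service URL. Every value below is made up for illustration:

```shell
# Build the Image service source URL from cluster IP, container name,
# and the flat vmdk file name -- all hypothetical example values.
CLUSTER_IP="10.0.0.50"
CONTAINER="ctr01"
VMDK="winsrv2008r2-flat.vmdk"   # the -flat data file, not the descriptor
URL="nfs://${CLUSTER_IP}/${CONTAINER}/${VMDK}"
echo "$URL"
```

Avoid spaces in the vmdk file name before uploading; they complicate the URL.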
I need to change the IPMI IP address on 2 nodes of a 4-node Dell XC block. I know how to change the IP from the iDRAC console; however, how do I get it to update in the Nutanix node? I read through the admin manual, but I could not find anything related to changing the IPMI IP address. Thanks, David
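Not a full answer, but for the BMC side you can also read and set the LAN config with ipmitool from the host. This is a sketch with assumed values: the channel number and addresses vary by platform (check `ipmitool lan print` first), and DRY_RUN=1 makes it print instead of execute. How Prism picks up the new address likely depends on your AOS version, so it's worth confirming that part with support:

```shell
# Sketch: BMC LAN settings via ipmitool -- values are hypothetical.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

run ipmitool lan print 1                       # current BMC network config
run ipmitool lan set 1 ipsrc static            # use static addressing
run ipmitool lan set 1 ipaddr 172.16.1.21      # hypothetical new IPMI IP
run ipmitool lan set 1 netmask 255.255.255.0
run ipmitool lan set 1 defgw ipaddr 172.16.1.1
```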
From this tech note - [url=http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf]http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf[/url] - my impression is that Nutanix thin provisions a VM even if the disks are set to "Thick Provisioned Lazy Zeroed". [i]All Nutanix containers are thin provisioned by default; this is a feature of NDFS. Thin provisioning is a widely accepted technology that has been proven over time by multiple storage vendors, including VMware. As containers are presented by default as NFS datastores to VMware vSphere hosts, all VMs will also be thin provisioned by default. This results in dramatically improved storage capacity utilization without the traditional performance impact. Thick provisioning on a VMDK level is available if required for the limited use cases such as fault tolerance (FT) or highly demanding database and I/O workloads. Thick provisioning can be accomplished by cr
Hi, I found the KB below stating that you can't virtualize domain controllers on Nutanix Hyper-V. [url=https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4]https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4[/url] Quote: "Why? Because Hyper-V wants to contact the AD server before it can power up any VM on Nutanix storage, and the AD server would not be available because the VM cannot be booted." I am of the understanding that this might have been an issue up until 2008 R2, but it should not be a problem when running Hyper-V on 2012 R2. Can anyone shed some light on the matter? I would like to call the support line, but that is not an option right now since it has nothing to do with our own Nutanix nodes. EDIT: Solved. It turns out to be due to the SMB3 share of the Nutanix cluster, which requires authentication from the domain.
Hello, I'm not new to Nutanix, but I haven't focused on the Nutanix installation steps for a long time. When I installed Nutanix a year ago with Foundation + Phoenix (NOS location selection) + hypervisor image, I understood Phoenix to be exactly the same as NOS at that time. Now there seems to be a new method to image a node: Foundation + AOS (NOS location selection) + hypervisor image, or even a Java applet + Foundation + AOS (NOS location selection) + hypervisor image. I'd like to confirm the exact difference between NOS, AOS, and Phoenix. I have asked people, but no one can explain this clearly to me. What also confuses me is that the various guides make different, self-contradictory statements about it. Also, which Phoenix package should I use? One guide says you need to manually generate a Phoenix ISO with AOS, but another says you should use the Phoenix ISO downloaded from the Nutanix portal site! Look at this excerpt: "Phoenix is a tool used to install the Nutanix Controller VM and provision a hypervisor on a new or rep
Hi, I am doing some tests creating and deleting large files in a SLES 12 installation in our AHV environment. I use an ext4 filesystem and have enabled the trim/discard feature for the filesystem and LVM. But when I delete a large file with random data (5 GB), the storage backend of the cluster does not see that the formerly used storage is no longer in use. I tried fstrim to initiate the cleanup, but that doesn't work. If I write zeros to the file/partition/filesystem, then the backend gets the storage back. Is trim/discard supported for telling the storage backend that filesystem space is no longer needed, or does anybody have experience with such a setup? Thank you for your help. Regards, Hans
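Not a direct answer, but before blaming the backend it may help to confirm that each layer in the guest actually passes discards through. A sketch of the checks (device names are examples, and DRY_RUN=1 just prints the commands):

```shell
# Sketch: verify discard support layer by layer inside the guest.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# Non-zero DISC-GRAN/DISC-MAX means the virtual disk advertises TRIM.
run lsblk --discard /dev/sda
# LVM only forwards discards on lvremove/lvreduce if issue_discards = 1;
# fstrim on a mounted LV works regardless, but check the setting anyway.
run grep -n "issue_discards" /etc/lvm/lvm.conf
# Either mount with -o discard for inline TRIM, or trim on demand:
run fstrim -v /mountpoint
```

If `lsblk --discard` shows zeros for the virtual disk, the guest never issues TRIM at all, and the question moves to the AOS/AHV version and disk bus type rather than the filesystem.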
Hello all, I am new to Nutanix XCP. One basic question regarding raw device mapping in ESXi on Nutanix infrastructure: is RDM supported on the Nutanix appliance (NOS 4.6)? If yes, can you please share the procedure to create an RDM (physical mode)? Thanks, Bhoopathy
Hello, I am thinking about Citrix PVS on Nutanix all-flash blocks with the Acropolis hypervisor. But there is no official support for AHV, which means I would have to run VMware or Hyper-V on my Nutanix cluster to use PVS. -> [url=http://support.citrix.com/article/CTX202032]http://support.citrix.com/article/CTX202032[/url] Is there any roadmap for this?
Dear experts, my Nutanix environment is as follows: 4 blocks of Nutanix (8 nodes) connected to Nexus access switches with 10G SFP cables. Hypervisor: VMware 5.5. Issue: from vCenter we see the hosts keep disconnecting randomly, and vCenter becomes unresponsive. Troubleshooting so far: we pinged the hosts continuously; result: no packet drops. Can you please advise what we need to do? Regards, Hamballi