Can someone chime in on how licensing would work when using AHV? For example, we currently have Windows 2012 R2 Datacentre edition on each host, which gives us the ability to run unlimited licensed Windows Server VMs. I know that AHV doesn't require a license, but does that mean a license for each Windows virtual server will need to be purchased, hence invalidating our Datacentre licensing?
Hello everybody, let's assume we have one Nutanix block with 3 nodes. Each node has only 1 CPU socket with 10 cores. A VM can only run on one node. So, in this case, wouldn't it make more sense to give a VM with 4 CPUs 1 vCPU with 4 cores, instead of 4 vCPUs with 1 core each, since each node has only 1 CPU socket? Or does Nutanix always recommend using only 1 core per vCPU, no matter how many CPU sockets are available in a node? Best regards, Didi7
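For reference, on AHV both values can be set explicitly when creating a VM. A minimal sketch using aCLI from a CVM, assuming the two VM names are hypothetical and that `num_vcpus`/`num_cores_per_vcpu` behave as in current AOS releases:

```shell
# Sketch, to be run in the aCLI shell on a CVM (not runnable elsewhere).
# 4 sockets x 1 core each (the commonly recommended default topology):
acli vm.create vm-sockets num_vcpus=4 num_cores_per_vcpu=1 memory=8G
# Alternative topology for the same 4 CPUs: 1 socket x 4 cores:
acli vm.create vm-cores num_vcpus=1 num_cores_per_vcpu=4 memory=8G
```

The cores-per-vCPU setting mainly matters for guest OS/application licensing that counts sockets; the scheduler places either topology on a single node regardless.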
Hi, I recently updated NOS to 22.214.171.124 and now when I try to look for new updates it gets stuck on Loading... and I can't see whether new updates are available. It's the same for all tabs (Acropolis, Hypervisor, Firmware, NCC, Foundation). [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/1341i0360FD9634A99644.jpg[/img] Could someone help me with this?
As most of you probably know, AOS 5.6 introduced the Volume Group Load Balancing function, well known as VGLB. As far as I know, the 5.6 version is on short-term support. I'm now deploying two Oracle 12c RAC clusters on two 8000-series 6-node AOS/AHV clusters with AOS 126.96.36.199. It involves using volume groups with multiple vdisks, network-related configuration, Linux-related tuning and so on... Of course, with 188.8.131.52 (so far the latest GA version on long-term support), I don't have the VGLB option; every volume group's I/O is managed by a single CVM. On the other hand, with AOS 5.6, I could distribute this load across every single CVM and every single node's storage in the cluster. Of course this heavily impacts resiliency, performance and resource distribution. I have two questions and need some suggestions. 1) Would it be better to upgrade to 5.6 even though it is on short-term support? 2) Is it possible to update the volumes' configuration "on the fly" with "vg.upd
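For context, in AOS 5.6 the load-balancing behaviour is toggled per volume group from aCLI. A hedged sketch, where "ora_vg" is a hypothetical volume group name and the flag name is my assumption based on the 5.6 VGLB feature documentation:

```shell
# Sketch, to be run in the aCLI shell on a CVM of a 5.6 cluster.
# Enable load-balanced iSCSI attachments on an existing volume group:
acli vg.update ora_vg load_balance_vm_attachments=true
```

Whether this can be applied "on the fly" to a VG with active RAC I/O is worth confirming with support before relying on it.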
Hi all, is there any guide to which ESXi versions/build numbers are supported (or unsupported) by Foundation? Sometimes I need to know which ESXi versions a given Foundation version can install. Or should I just use the newest Foundation version to install an older ESXi version? Does that always work? I hope there is some information I can consult when installing an older ESXi on newer Nutanix hardware using a newer Foundation, as some customers don't always use the newest ESXi. Thanks!
Just implementing a new cluster; I've run Foundation with NOS 184.108.40.206 and ESXi 6. The ESXi hosts have not yet been connected to vCenter. One of the post-imaging steps is to set the correct time zone for the cluster, and it says that each CVM needs to be shut down serially. Is this a simple case of connecting to each ESXi host, shutting down the CVM, powering it back on, connecting to the CVM via http://cvmip to check that it is up, then moving on to the next ESXi host and repeating? Thanks Chris
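For reference, one iteration of that rolling sequence can be sketched as below. This assumes SSH access to the CVMs; `cvm_shutdown` is the Nutanix-provided wrapper that shuts the CVM down gracefully:

```shell
# Sketch of one iteration (commands run on Nutanix CVMs, not runnable elsewhere).
# 1. On the CVM you are restarting, shut it down gracefully:
cvm_shutdown -P now
# 2. Power the CVM back on from its ESXi host (vSphere client or host shell).
# 3. From any other CVM, confirm all services are back UP before moving on:
cluster status
```

Waiting for `cluster status` to report all services UP on the restarted CVM before touching the next host is the important part of doing this serially.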
Hi, we're starting to look at using Spark on our Nutanix cluster. Not in a huge way, but to run some ETL processes in parallel. I'm under pressure to install Hadoop, or at least HDFS, on the cluster, but the entire concept of adding a distributed, resilient "filesystem" (actually I think it's more an object store) on top of the one already provided by Nutanix seems somewhat off. Is there a recommended way of doing this? I know that containers are exported to ESXi via NFS. Would that be usable? Would that be able to leverage Stargate for access from anywhere? All I really need is a globally available volume shared between all my nodes.
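Since containers are already exported over NFS, a shared volume can be sketched like this, assuming the worker IPs have been added to the container's filesystem whitelist and "sparkdata" is a hypothetical container name:

```shell
# Sketch, run on each Spark worker node (requires a reachable Nutanix cluster).
# Mount the Nutanix storage container over NFS as a shared volume:
sudo mkdir -p /mnt/sparkdata
sudo mount -t nfs <cluster-or-cvm-ip>:/sparkdata /mnt/sparkdata
```

Every node then sees the same namespace and the I/O is served by Stargate, so this may cover the "globally available volume" requirement without layering HDFS on top.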
Hi, I found the KB below stating you can't virtualize domain controllers on Nutanix Hyper-V. [url=https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4]https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4[/url] Quote: "Why? Because Hyper-V wants to contact the AD server before it can power up any VM on Nutanix storage, and the AD server would not be available because the VM cannot be booted." I am of the understanding that this might have been a thing up until 2008 R2, but it should not be a problem when running Hyper-V on 2012 R2. Could anyone shed some light on the matter? I would like to call the support line, but that is not an option right now since it has nothing to do with our own Nutanix nodes. EDIT: Solved. It turned out to be due to the SMB3 share of the Nutanix cluster, which requires authentication from the domain.
Hi, I am doing some tests with creating and deleting large files on a SLES 12 installation in our AHV environment. I use an ext4 filesystem and have enabled the trim/discard feature for the filesystem and LVM. But when I delete a large file with random data (5 GB), the storage backend of the cluster does not see that the formerly used storage is no longer in use. I tried fstrim to initiate the cleanup, but that doesn't work. If I write zeros to the file/partition/filesystem, then the backend gets the storage back. Is trim/discard supported as a way to tell the storage backend that filesystem space is no longer needed, or does anybody have experience with such a setup? Thank you for your help. Regards Hans
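One thing worth checking first is whether discard is actually advertised end-to-end inside the guest; if any layer (virtual disk bus, LVM, filesystem) drops it, fstrim is a no-op. A sketch, assuming the disk is /dev/sda and the ext4 filesystem is mounted at /data:

```shell
# Sketch, run inside the SLES guest.
# Non-zero DISC-GRAN/DISC-MAX values mean the device accepts discard:
lsblk --discard /dev/sda
# If LVM sits in between, issue_discards must be set in /etc/lvm/lvm.conf.
# Then either mount with online discard, or trim in batches:
sudo mount -o discard /dev/sda1 /data
sudo fstrim -v /data
```

If `lsblk --discard` shows zeros, the guest never sends TRIM to the backend, which would explain why only zero-writes reclaim space.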
Hi, I would like to ask whether Nutanix supports replacing SSDs/HDDs with different capacities. The reason I ask is that I want to keep a certain hot-data ratio as data grows, but without growing compute capacity. My concern is that just adding an NX-6035C would allow me to grow my data, but the SSD capacity may not be enough to maintain the required hot-data ratio. Thanks.
Hi, I have a standalone ESX 5.1 server with a VM with multiple 2TB (screenshot from WinSCP attached below) vmdk volumes. The VM is Windows 2008 R2. My Nutanix cluster is running 5.01, NCC 220.127.116.11, AHV 20160925.30, Starter Edition. I am planning to use a Windows 2012 R2 server with WinSCP to get the vmdk files onto a Nutanix cluster container (I have multiple containers) and then use the Image service (via the Chrome browser) to upload/convert via UNC to the container. Here are my questions: [list=1] [*]I'm wondering if there are any limitations on the Image Service when using the Chrome browser [*]Any potential issues with these vmdk sizes? [*]What is the syntax for the URL to access the storage container so I can upload the vmdk without using the UNC? (Figured this one out: in the Image Service, use nfs://clusterip/Containername/nameofvmdk-flat.vmdk) [*]Since I need to upload multiple vmdk files that are quite large, can/should I open multiple browsers to get simultaneous uploads happening?
Hello, I'm thinking about Citrix PVS on Nutanix all-flash blocks with the Acropolis Hypervisor. But there is no official support for AHV, which means I would have to run VMware or Hyper-V on my Nutanix cluster to use PVS. -> [url=http://support.citrix.com/article/CTX202032]http://support.citrix.com/article/CTX202032[/url] Is there any roadmap for this?
Hey, we recently upgraded one of our clusters to AOS 5.01. This process went flawlessly! After the upgrade I noticed some minor bugs which I would like to share: 1) Missing names: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2616i82890AFCEDDB58CB.png[/img] Minor, but worth mentioning. 2) Missing title: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2618i073B8D8311B00E3B.png[/img] Minor, but worth mentioning. 3) Missing export button. This is the main reason I'm posting (to prevent others from wasting their time on this); it took me quite a while to find this "invisible button". [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2619i018A0275429043E4.png[/img] With AOS 5.01, the (invisible) button is on the right side of the graph: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2620iEE0200D1B2B8CE66.png[/img] If I come across more bugs I will post them here. Seba
Hi, we are thinking of building a separate VMware Horizon View cluster and keeping the Nutanix cluster just for server workloads. We have a few scenarios for creating this, and one is to buy 4 new VMware hosts and connect them to the existing Nutanix cluster with iSCSI, simply because we already have enough storage in the existing cluster. My question is: will it be a bottleneck to use iSCSI? We will run about 260 Win7/Win10 task-worker VDI clients. Regards Tobias
The customer I currently support has a pair of Nutanix clusters running on the Dell PowerEdge XC630-10 hardware platform. They have really enjoyed the performance and overall gains (simplicity of the design, reduction in administration/alerts, etc.) since moving to the Nutanix platform. The topic of a server refresh has arisen, and we're beginning to collect all the metrics we'll need to analyze to support this effort. Once that's done, the planning and designing will begin. In the past, I've used the Nutanix Sizer, but it now seems to be locked away behind a Nutanix/Partner login page. Does it still exist? Is there one that supports the Dell hardware platform line? Any other workload-sizing resources for the theoretical migration planning would be much appreciated. Thanks in advance!
Hello, I ran into an error with my OS deployment via SCCM on my AHV cluster. I apply the Nutanix-related drivers in the task sequence with the following query: [b][i]Select * from Win32_ComputerSystem Where Model LIKE "KVM"[/i][/b] This worked for me all the time. After I migrated the cluster to AOS 5.0, this step is skipped in the task sequence with a message saying that the condition is FALSE. Could it be that a newly created VM on an AOS 5.0-based cluster reports a different Model? My running VMs on this cluster still show KVM as the model, so I'm a little confused.
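One thing to note about the query above: in WQL, LIKE without wildcards behaves like an exact match, so if AOS 5.0 appends anything to the Model string the condition fails. A hedged variant to try (after confirming the actual string a new VM reports, e.g. with `wmic computersystem get model` in the booted image):

```sql
-- Sketch: substring match instead of exact match, so a changed
-- Model string that still contains "KVM" keeps the step enabled.
SELECT * FROM Win32_ComputerSystem WHERE Model LIKE "%KVM%"
```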
What is the best approach to migrate data to a Nutanix Hyper-V failover cluster from a normal Hyper-V failover cluster? I used export from the old host (whitelisted in Prism) and it succeeds fine, but as the data is exported to the cluster FQDN, how do I know which node to use to import the files to make best use of data locality? Is there a way to have the VM exported to a specific node?