License-Free Virtualization for Your Enterprise
Hi, I'm looking to create clustered file servers using Windows Server Failover Clustering - the VMs will use in-guest iSCSI, and I'm running AHV. My understanding is that presenting Nutanix storage via iSCSI to WSFC servers is supported, however I'm confused about how to set up the networking, and the Nutanix ABS documentation doesn't make this clear. I'm used to creating a separate storage network / VLAN for all storage traffic, however the iSCSI target IP address is recommended to be on the same network as the CVMs / all of the Nutanix infrastructure. So the question is: do I create a separate network for storage and have it route to the iSCSI target IP address, or do I put the iSCSI initiators on the same network as the iSCSI target / data services IP? It doesn't seem logical to mix CVM traffic with storage traffic, but it's also not great to have to route storage traffic. Thanks, Adam
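Whichever VLAN design you settle on, the in-guest initiators simply need to reach the external data services IP. A minimal PowerShell sketch of the guest-side setup, with the portal address and target name as placeholders (this is not a definitive Nutanix procedure):
[code]
# Minimal sketch (address and target name are placeholders): point each WSFC
# guest's software iSCSI initiator at the cluster's external data services IP.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the data services IP as the discovery portal.
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.50"

# Discover the volume group target and connect persistently across reboots.
Get-IscsiTarget |
    Where-Object { $_.NodeAddress -like "*wsfc-vg*" } |
    Connect-IscsiTarget -IsPersistent $true
[/code]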
Is it possible to implement throttling at a per-VM level? The thought/scenario behind the question: on a couple of occasions we have found a VM (a different one each time) running away with storage controller IOPS. Of course the goal is to resolve the actual issue on the offending VM, but we would like to be able to implement throttling to ensure other VMs in the cluster don't suffer when this occurs. Disk I/O has been the scenario so far, but we could also see the need to throttle CPU, memory, and/or NIC throughput at some point.
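This is not claimed to be a Nutanix-native answer, but if the cluster in question runs ESXi, one stop-gap is a per-virtual-disk IOPS limit set through PowerCLI. A hedged sketch, with the vCenter address, VM name, and limit as placeholders:
[code]
# Hedged sketch (PowerCLI, only applicable on ESXi; names and values are
# placeholders): temporarily cap a runaway VM's virtual disks at 1000 IOPS each
# until the underlying issue is fixed.
Connect-VIServer -Server "vcenter.example.local"
$vm = Get-VM -Name "runaway-vm"
foreach ($disk in Get-HardDisk -VM $vm) {
    Get-VMResourceConfiguration -VM $vm |
        Set-VMResourceConfiguration -Disk $disk -DiskLimitIOPerSecond 1000
}
# Setting the limit back to -1 later removes the cap (unlimited).
[/code]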
Hi, I would like to ask whether Nutanix supports replacing SSDs/HDDs with drives of a different capacity. The reason I ask is that I want to maintain a certain hot data ratio as data grows, but without growing compute capacity. My concern is that while just adding an NX-6035C would let my data grow, the SSD tier may not be enough to maintain the required hot data ratio. Thanks.
Hi all! Is anyone out there running Cloudera in a Nutanix environment? Not just Hadoop, but specifically the enterprise Cloudera Hadoop distribution. We're in the process of migrating our production Cloudera cluster to Nutanix VMs and having a host of issues related to disk latency. I'm aware of the Nutanix Hadoop reference documents, but our configuration doesn't quite fit because it started on physical hardware, and the existing production configuration would be challenging to change at this point. We're not running YARN yet, still on MapReduce v1, for example, though we're moving in that direction in our dev environment. We have a storage container devoted to our HDFS directories, with virtual disks carved out of it in VMware, but we're seeing vdisk latencies *averaging* 300 ms with frequent spikes to several seconds. Our I/O is very write heavy; Nutanix advised us to disable EC-X, so we did, but it didn't help. We're running RF2 on that container, no deduplication,
Hello, we have just migrated our entire infrastructure to Nutanix. We are in the process of moving an old file server to a new Windows VM. We are running AHV as the hypervisor and are on 10 Gb Ethernet. I am experiencing slow network transfers between these two VMs on the same Nutanix cluster - so bad that the IOPS don't register above 300 in Prism. Any thoughts?
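A quick in-guest sanity check worth running first (a hedged suggestion, not a diagnosis): confirm the VMs are using the VirtIO network adapter rather than an emulated NIC, since that alone can cap throughput well below 10 GbE:
[code]
# Run inside each Windows VM: the InterfaceDescription should show the
# VirtIO Ethernet adapter, and LinkSpeed shows what the guest negotiated.
Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed
[/code]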
Hello folks, I installed a 3-node cluster with Hyper-V on 1 GbE interfaces. Now I want to migrate this setup to 10 GbE interfaces. If I disconnect the 1 GbE interfaces and plug in the 10 GbE interfaces, the host disconnects and no services are functional. I have to disconnect the 10 GbE interfaces and reconnect the 1 GbE interfaces to reach the host/cluster. Is there a document that describes the procedure for changing from 1 GbE to 10 GbE interfaces? Or could anybody give me some advice? If I check the NIC teaming on the Hyper-V hosts, there are four interfaces (2x 1 GbE, 2x 10 GbE) added to the NetAdapterTeam. Best regards, Freddie
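Since all four adapters are already members of the team, one hedged approach (adapter names below are placeholders; confirm yours with Get-NetLbfoTeamMember first) is to leave the 10 GbE links cabled and active in the team, then remove the 1 GbE members, so the host never loses its only path:
[code]
# Hedged sketch: list the current members of the existing team first.
Get-NetLbfoTeamMember -Team "NetAdapterTeam"

# With the 10 GbE links cabled, up, and carrying the right VLANs, drop the
# 1 GbE members one at a time (adapter names are placeholders).
Remove-NetLbfoTeamMember -Team "NetAdapterTeam" -Name "1GbE-Port1" -Confirm:$false
Remove-NetLbfoTeamMember -Team "NetAdapterTeam" -Name "1GbE-Port2" -Confirm:$false
[/code]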
Can anyone tell me whether inter-site traffic (not intra-site - I am talking about DR replication traffic between clusters) is moved to the backplane network when network segmentation is enabled? The Prism Web Console Guide v5.5 does not say so (it only mentions intra-cluster traffic): [quote]The traffic entering and leaving a Nutanix cluster can be broadly classified into the following types: [i]Backplane traffic[/i] Backplane traffic is intra-cluster traffic that is necessary for the cluster to function, and comprises traffic between CVMs, traffic between CVMs and hosts, storage traffic, and so on. (For nodes that have RDMA-enabled NICs, the CVMs use a separate RDMA LAN for Stargate-to-Stargate communications.) [i]Management traffic[/i] Management traffic is administrative traffic, or traffic associated with Prism and SSH connections, remote logging, SNMP, and so on. The current implementation simplifies the definition of management traffic to be any traffic that is not on the back
Apart from the pre-requisites listed below for starting a hypervisor upgrade (ESXi), I had to disable affinity rules under DRS to let the ESXi host go into maintenance mode and continue the update. I got stuck, since I found this part out the hard way... 1) The genesis.out log showed the target CVM holding the shutdown token and not letting it go. 2) Then from vCenter: manually shut down the CVM -> manually put the related ESXi host into maintenance mode -> exit maintenance mode -> start the CVM. 3) Go to Prism and the upgrade continues... and so does 2048... - Koji
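For anyone scripting the same workaround, a hedged PowerCLI sketch (the cluster name is a placeholder) that disables the DRS affinity rules before the upgrade and re-enables them afterwards:
[code]
# Hedged sketch: disable DRS affinity rules for the duration of the upgrade.
$rules = Get-DrsRule -Cluster (Get-Cluster -Name "NTNX-Cluster")
$rules | ForEach-Object { Set-DrsRule -Rule $_ -Enabled:$false }

# ... run the one-click hypervisor upgrade ...

# Re-enable the same rules afterwards.
$rules | ForEach-Object { Set-DrsRule -Rule $_ -Enabled:$true }
[/code]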
Hi all, I have a case open for this with Nutanix Support; however, while I'm waiting... I'm wondering if anyone can explain why, in a block that contains 12 HDDs and 6 SSDs (a 3-node block), Prism only shows disk statistics for the SSDs and no values for the HDDs in the list (all metrics are 0). [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/3031i88E73724C43B9D2E.jpg[/img]
I am suffering repeated crashes on my Domino server, and IBM support said the problem is possibly related to the disks. I did an analysis in Prism and there really are traces of problems - the storage bandwidth for this VM is much higher than for all the others. Is anyone running a Lotus Domino server on a Nutanix cluster? I would like to hear your feedback and feelings about it. I am using Nutanix with vSphere 5.5.
Hi, we're starting to look at using Spark on our Nutanix cluster - not in a huge way, but to run some ETL processes in parallel. I'm under pressure to install Hadoop, or at least HDFS, on the cluster, but the entire concept of adding a distributed, resilient "filesystem" (actually I think it's more of an object store) on top of the one already provided by Nutanix seems somewhat off. Is there a recommended way of doing this? I know that containers are exported to ESXi via NFS. Would that be usable? Would that be able to leverage Stargate for access from anywhere? All I really need is a globally available volume shared between all my nodes.
Our CVMs are configured with 32 GB of dynamic memory. Startup, minimum, and maximum are all set to 32 GB. I'm curious to know why Dynamic Memory is enabled on these VMs. Since the CVM's OS isn't fully Hyper-V integrated, we don't see memory demand from the hypervisor - which is the only benefit I can see of leaving Dynamic Memory enabled on a VM with all values set the same. As an explicit downside, since we're using Dynamic Memory on these VMs, we lose the vNUMA feature set (not that a 32 GB VM is [b][i]likely[/i][/b] to benefit from it). Thanks!
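For reference, a quick way to inspect those settings across hosts (the CVM name pattern is a placeholder; changing Dynamic Memory on a CVM is something to confirm with Nutanix Support, not something this sketch recommends):
[code]
# Hedged sketch: show the memory configuration of the CVMs on a Hyper-V host.
Get-VM -Name "NTNX-*-CVM" | Get-VMMemory |
    Select-Object VMName, DynamicMemoryEnabled, Startup, Minimum, Maximum
[/code]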
Hi, running a 3-node Lenovo HX1310 cluster on AHV 5.0.1. I will be opening a ticket with Nutanix Support; this is more of an FYI... I first saw this issue with an SBS 2011 Standard server (aka Windows 2008 R2 x64) after migrating from Hyper-V 2012 R2. I was/am having issues where the system would freeze. I first thought ShadowProtect backups were causing it, but then it started happening late at night. I think it's the WBEngine service (aka Block Level Backup). I set up a fresh test Windows 2008 R2 VM with two volumes, C: and E:. When I run chkdsk on the C: drive it gets to the end, stage 3 of 3, and then the VM powers off. I have to start it manually. So I tried it on the E: drive, which has no data and is freshly formatted. I run chkdsk, and the VM powers off. Update: Just tried the same chkdsk /f on a Windows 2012 R2 server and the same symptoms appear. Thanks in advance, G
As most of you probably know, AOS 5.6 introduced the volume group load balancing feature, known as VGLB. As far as I know, the 5.6 release is on short term support. Now I'm deploying two Oracle 12c RAC clusters on two 8000-series 6-node AOS/AHV clusters running AOS 5.5.x. It involves using volume groups with multiple vdisks, network-related configuration, Linux-related tuning, and so on... Of course, with 5.5.x (so far the latest GA version on long term support), I don't have the VGLB option; every volume group's I/O is managed by a single CVM. On the other side, with AOS 5.6, I could distribute this load across every CVM and every node's storage in the cluster. Of course this has a heavy impact on resiliency, performance, and resource distribution. I have two questions and need some suggestions. 1) Would it be better to upgrade to 5.6 even though it is on short term support? 2) Is it possible to update the volume group's configuration on the fly with "vg.upd
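On question 2, a heavily hedged note: on AOS 5.6 and later, per-VG vdisk load balancing is typically toggled with acli from a CVM. The VG name below is a placeholder, and the exact flag should be verified for your AOS release with Support before touching a live RAC cluster:
[code]
# Hedged sketch (run from any machine with SSH access to a CVM; the VG name is
# a placeholder). Verify the flag for your AOS release before using it live.
ssh nutanix@cvm-address 'acli vg.update ora-rac-vg1 load_balance_vdisk_attachments=true'
[/code]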
Hi all, just wanted to check if anyone else is having issues with Prism stopping collection of CPU and memory stats from the Hyper-V hosts themselves. The issue seems to be somewhat sporadic and I haven't been able to pinpoint any specific operation or time. Restarting the NutanixHostAgent service helps resolve the issue until it stops collecting again. We are currently running 4.6.1 and are using Logical Switches, which perhaps might be part of the problem. When this happens, the Runway function in Prism Central stops working properly, so while it is a "cosmetic" issue, it still affects other Nutanix functions. Kind regards
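For anyone hitting the same thing, a small sketch of the workaround described above, run against all hosts at once (host names are placeholders):
[code]
# Restart the NutanixHostAgent service on each Hyper-V host when Prism stops
# pulling host stats (workaround only; the underlying cause still needs a ticket).
$hosts = "hv-node-01", "hv-node-02", "hv-node-03"
Invoke-Command -ComputerName $hosts -ScriptBlock {
    Restart-Service -Name "NutanixHostAgent"
}
[/code]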
Nutanix Technical Marketing Engineer Andy Daniel recently wrote up a short paragraph as an overview of the storage efficiency features, including compression. Share what you are doing for your different workloads in your environments (Nutanix or otherwise). -- To optimize storage capacity and accelerate application performance, the Acropolis Distributed Storage Fabric uses data efficiency techniques such as deduplication, compression, and erasure coding. They are intelligent and adaptive, and in most cases require little or no fine-tuning. In fact, two levels of post-process compression are enabled in conjunction with cold data classification by default on new shipping clusters. Because they're entirely software driven, it also means existing customers can take advantage of new capabilities and enhancements by upgrading AOS. DSF provides both inline and post-process compression to maximize capacity. Many times, customers incorrectly associate compression with reduced performance, but this
Hello, I have turned on Windows deduplication for some of my file servers that reside on AHV, however the storage reduction is not reflected in Prism. Any thoughts as to why? I wish I could run capacity-tier deduplication on AHV; however, the last time I turned it on I received the following alert: "Do not replicate protection domain XXXXXXX comprising entities from the storage container that have deduplication enabled to a single node remote site XXXXXXX." (the XXXs denote omitted fields)
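One plausible explanation (hedged): Prism's data reduction metrics report savings produced by the Nutanix storage layer itself, while in-guest Windows deduplication only frees space inside the guest's file system. A quick in-guest check of what Windows dedup is actually saving:
[code]
# Run inside the file server guest: per-volume savings from Windows dedup
# (these live inside the guest and won't appear in Prism's data reduction stats).
Get-DedupVolume | Select-Object Volume, Enabled, SavedSpace, SavingsRate
Get-DedupStatus
[/code]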
Hi all, I just started writing a deployment script for VMs on AHV, cloning from a template. Using Sysprep, a lot of OS customization is already possible. However, I would like to know if there is a way to run a (PowerShell) script in a Windows 2012 R2 VM on AHV while it does not yet have any NIC configured. I know I can easily log in via the console and start any script I want, but I would like to run it fully automated as part of my deployment script from my client (for several VMs). Any ideas?
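One common pattern, offered as a hedged sketch rather than the definitive answer: bake the script into the template and have it fire on first boot, so no network or remoting is needed. The registry value name and script path are placeholders, and this assumes the RunOnce entry survives your Sysprep generalize pass; FirstLogonCommands in the unattend answer file is the more conventional alternative:
[code]
# Hedged sketch, run inside the template BEFORE sysprep (paths are placeholders).
# The script ships inside the image and runs at first logon, no network required.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce' `
    -Name 'FirstBootConfig' `
    -PropertyType String `
    -Value 'powershell.exe -ExecutionPolicy Bypass -File C:\Deploy\firstboot.ps1'
[/code]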
I have recently upgraded my vCenter from 6.0 to 6.5u1, and I'm now looking to upgrade my ESXi 6.0u3 installs to 6.5u1. My first attempt was to use the one-click hypervisor upgrade, and I was able to upload the ISO and MD5 checksum, but the pre-upgrade check errors out pretty quickly with a fairly generic error message saying "unable to determine version of bundle." I did a few re-download and re-upload attempts to verify my file integrity, but the error persisted. I then tried to do the upgrade old-school style using VUM, which is conveniently baked into my VCSA as of 6.5, and I got another generic error message inside Update Manager: "The upgrade contains the following set of conflicting VIBs:" with nothing but blank space following the colon. Typically I would have expected to see an individual VIB called out in list form, but there is just nothing there. I am fully patched to current using Nutanix LCM, so I suspect it is probably one of the LSI HBA updates that LCM has slipped in
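While waiting on the root cause, a hedged PowerCLI sketch (the host name is a placeholder) for listing the installed VIBs on a host, which can help identify the LSI/HBA package the blank error message fails to name:
[code]
# Hedged sketch: list installed VIBs on one host and filter to likely HBA vendors.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.local") -V2
$esxcli.software.vib.list.Invoke() |
    Where-Object { $_.Vendor -match "LSI|Avago|Broadcom" } |
    Select-Object Name, Vendor, Version
[/code]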
Hi, we need to migrate VMs from a Dell cluster to a Nutanix cluster. Both are AHV-based clusters. What would be the best method to achieve this? 1. Creating a Protection Domain and migrating, OR 2. Adding the Nutanix nodes to the existing Dell cluster and migrating the VMs. Regards, Jitendra Ingale.