Hi, I have a standalone ESX 5.1 server hosting a Windows 2008 R2 VM with multiple 2TB vmdk volumes (screenshot from WinSCP attached below). My Nutanix cluster is running 5.01, NCC 188.8.131.52, AHV 20160925.30 - Starter Edition. I am planning to use a Windows 2012 R2 server with WinSCP to copy the vmdk files onto a Nutanix storage container (I have multiple containers) and then use the Image service (via the Chrome browser) to upload/convert them from the container. Here are my questions:
[list=1]
[*]Are there any limitations on the Image service when using the Chrome browser?
[*]Are there any potential issues with vmdks of this size?
[*]What is the URL syntax for accessing the storage container so I can upload the vmdk without using a UNC path? - Figured this one out: in the Image service use nfs://clusterip/containername/name-of-vmdk-flat.vmdk
[*]Since I need to upload multiple vmdk files that are quite large, can/should I open multiple browsers to get simultaneous uploads happening?
[/list]
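A minimal sketch of the source-URL pattern the Image service accepts, assuming the flat (data) file sits at the top level of the container - adjust the path to wherever WinSCP placed it, and substitute your own cluster virtual IP and container name:
[code]
nfs://<cluster-virtual-ip>/<container-name>/<disk-name>-flat.vmdk
[/code]
The -flat.vmdk file is the one holding the actual disk data, which is why the URL points at it rather than at the small descriptor vmdk.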
Hi, when running NCC on one of our customer's clusters, the following output is generated:
FAIL: Remote site: HQ-NTNX
Number of vstores mapped for the local site on the remote site are not same.
Refer to KB 3335 ([url=http://portal.nutanix.com/kb/3335]http://portal.nutanix.com/kb/3335[/url]) for details on remote_site_config_check or recheck with: ncc health_checks data_protection_checks remote_site_checks remote_site_config_check
Why is it so important to have the same number of mappings? When we are replicating from site A to site B, we can use different mappings than when replication runs from site B to site A. Regards, Bart
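In case it helps to see both sides, one approach (just a suggestion) is to re-run the check NCC itself names from a CVM on each cluster, and then compare the vstore/container mappings configured for the remote site in Prism (Data Protection > Remote Sites) on site A versus site B:
[code]
ncc health_checks data_protection_checks remote_site_checks remote_site_config_check
[/code]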
I have this same error on 3 nodes - I have checked networking, time and connectivity, and it all seems OK. Running NCC health checks against different services gives me a range of things to check, and I'm not sure where to start. The hosts are Hyper-V.
Hi Team, I executed the NCC health checks run_all on two clusters and got the following message (at the end of the run_all script):
Detailed information for sar_stats_threshold_check:
ERR : Execution terminated by exception IndexError('list index out of range',):
Traceback (most recent call last):
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/ncc_utils/plugin_utils.py", line 128, in handle_exceptions
    result = fn()
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/base_plugin.py", line 740, in
    result = putils.handle_exceptions(lambda : check(*check_args), cls.canvas)
  File "/home/hudsonb/workspace/workspace/ncc-2.0.2-stable_release/builds/build-ncc-2.0.2-stable-release/ncc-python-tree/bdist.linux-x86_64/egg/ncc/plugins/health_checks/sar_checks.py", line 358, in check_threshol
What is the recommended CPU performance setting for AHV? I'm curious whether AHV has the ability to interact with the Intel throttling, or if it's better to just have it run at full speed all the time. The CPU power options are:
[list]
[*]Performance Per Watt (DAPC)
[*]Performance Per Watt (OS)
[*]Performance
[/list]
Acropolis Hypervisor 201602173, Dell XC730xd nodes with 2 x Xeon E5-2630 v3 @ 2.40GHz. The Dell Active Power Control (DAPC) mode allows the BIOS to manage the processor power states in order to achieve performance/watt maximized at all utilization levels and workload types while still meeting performance requirements. In the OS (Demand Based Power Management, DBPM) mode, the operating system controls the processor's power management. In the Maximum Performance mode, the processor runs at the highest frequency all the time.
[list]
[*]Performance Per Watt Optimized (DAPC): this mode allows the BIOS to manage the processor power states in order to achieve performance/watt maximized at all utilization levels and workload types while still meeting performance requirements.
[/list]
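If it helps to see what the hypervisor side actually reports, a quick look from the AHV host shell is sketched below. This assumes the cpufreq sysfs interface is exposed to the OS, which may not be the case when the BIOS (DAPC or Maximum Performance) keeps control of the P-states:
[code]
# Scaling governor in use, if the OS is allowed to manage P-states
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Current clock speed per core as seen by the kernel
grep "cpu MHz" /proc/cpuinfo
[/code]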
Right now we're about to push ESXi 5.5 U3b so we can move up to AOS 184.108.40.206. Since Nutanix doesn't support pushing out patches, we planned to do that with VUM, which is fine - at least it saves time over having to bring down each node manually to upgrade to U3b. On top of patching, we're also installing the NFS VAAI VIB. In our lab we do it via the command line, but has anyone found a zip or package that VUM can push out, so we can just package our baseline with the updates and the VIB file? We can't find a zip file or anything else we can leverage so far. Since we have two data centers with a good number of nodes in the cluster at each, we'd love to save time if at all possible, so any advice is greatly appreciated. Thanks!
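For reference, the manual install we do in the lab looks roughly like the sketch below - the datastore path and VIB filename are placeholders, so use whatever the portal download is actually named. As far as I know, VUM only imports offline bundles (a zip carrying the depot metadata), not bare .vib files, so the VIB would have to be wrapped as an offline bundle before it can be added to a baseline:
[code]
# Per-host manual install of the NFS VAAI plugin (placeholder path/filename)
esxcli software vib install -v /vmfs/volumes/<datastore>/nfs-vaai-plugin.vib

# Confirm the VIB is present afterwards
esxcli software vib list | grep -i vaai
[/code]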
Hi, I am actually able to create a folder on the Nutanix SMB share using the NFS whitelist, but I am not able to see the option to set permissions on that folder. I then came to understand that we cannot grant permissions like that for a folder on the Nutanix share - is that true? Currently we share a volume from one VM as the file share. Thanks, Ritchie
Hello, I ran into an error with my OS deployment via SCCM on my AHV cluster. I apply the Nutanix-related drivers in the task sequence with the following query: [b][i]Select * from Win32_ComputerSystem Where Model LIKE "KVM"[/i][/b] This has always worked for me. After I migrated the cluster to AOS 5.0, this step is skipped in the task sequence with a message saying that the condition is FALSE. Could it be that a newly created VM on an AOS 5.0-based cluster reports a different Model? My existing VMs on this cluster still show KVM as the model, so I'm a little confused.
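One way to narrow this down (a sketch, not a confirmed fix): check what a VM freshly created on the AOS 5.0 cluster actually reports, and if the string has merely gained a prefix or suffix, loosening the condition to [b][i]Select * from Win32_ComputerSystem Where Model LIKE "%KVM%"[/i][/b] should keep the driver step firing - this assumes the new Model value still contains "KVM" at all:
[code]
rem Run inside a test VM built on the AOS 5.0 cluster to see the reported values
wmic computersystem get manufacturer,model
[/code]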
Hi, I found the KB below stating you can't virtualize domain controllers on Nutanix Hyper-V. [url=https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4]https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TTGWCA4[/url] Quote: "Why? Because Hyper-V wants to contact the AD server before it can power up any VM on Nutanix storage, and the AD server would not be available because the VM cannot be booted." My understanding is that this may have been an issue up until 2008 R2, but it should not be a problem when running Hyper-V on 2012 R2. Can anyone shed some light on the matter? I would like to call the support line, but that is not an option right now since it has nothing to do with our own Nutanix nodes. EDIT: Solved. It turns out it is due to the SMB3 share of the Nutanix cluster, which requires authentication against the domain.
The customer I currently support has a pair of Nutanix clusters running on the Dell PowerEdge XC630-10 hardware platform. They have really enjoyed the performance and overall gains (simplicity of the design, reduction in administration/alerts, etc.) since moving to the Nutanix platform. The topic of a server refresh has arisen, and we're beginning to collect all the different metrics we will need to analyze to support this effort. After that's done, the planning and designing will begin. In the past I've used the Nutanix Sizer, but it now seems to be locked away behind a Nutanix/partner login page. Does it still exist? Is there one that supports the Dell hardware platform line? Any other workload-sizing resources for the theoretical migration planning would be much appreciated. Thanks in advance!
Hello Nutanix, my demo cluster is currently having a problem removing a host from the cluster. The host I wanted to remove has an SSD showing the status "Marked for removal but not detachable", and it has stayed in that status for a long time with nothing changing, even though the host itself was reported as successfully removed. I have since tried reinstalling this host, but the status still shows up, and I cannot do anything with the newly reinstalled host, including expanding the cluster with it. I should note that before the reinstall, this host was running as a cluster member.
Hi, I am doing some tests with creating and deleting large files on a SLES 12 installation in our AHV environment. I use an ext4 filesystem and have enabled the trim/discard feature for the filesystem and LVM. But when I delete a large file of random data (5 GB in size), the storage backend of the cluster does not see that the formerly used storage is no longer in use. I tried fstrim to initiate the cleanup, but that doesn't work. If I write zeros to the file/partition/filesystem, then the backend gets the storage back. Is trim/discard supported as a way to tell the storage backend that filesystem space is no longer needed, or does anybody have experience with such a setup? Thank you for your help. Regards, Hans
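For comparison, the sequence I would sanity-check is sketched below - the device and mount-point names are placeholders, and whether the discards actually reach the backend depends on the AOS/AHV version exposing SCSI UNMAP on the vdisks in the first place:
[code]
# Mount ext4 with online discard (alternatively rely on a periodic fstrim)
mount -o discard /dev/vgdata/lvdata /data

# issue_discards only affects LVM itself freeing extents (lvremove/lvreduce);
# filesystem-level discards pass through the LV regardless
grep issue_discards /etc/lvm/lvm.conf

# Manual trim of the mounted filesystem, verbose so the trimmed amount is visible
fstrim -v /data
[/code]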
Thought I'd post this rather than bugging our SE over and over ;) I'm trying to get a disaster recovery plan in place to protect against a major disaster such as a server room fire. We have a couple of geographically separated clusters which I have set up as remote sites - call them site A and site B. I've set up a PD with a consistency group containing the VMs to protect, and it periodically snapshots from site A to B. Now, in the case of a fire/destroyed cluster at site A, obviously step one is rebuilding a cluster at that site with the appropriate configs, but then: with a brand new cluster in place, what is the standard process to retrieve the snapshots from site B back to the new site A? So far, I've activated the PD on site B (as if it were an unplanned outage, and left it that way) and recreated the remote-site link on the newly built site A. I am thinking I now create a new PD to shuffle the VMs/consistency group back to site A, then activate there, and I am done? I know this
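For what it's worth, a rough sketch of how I'd look at it from the CLI once the remote-site links exist in both directions - the ncli protection-domain verbs below are an assumption on my part (particularly migrate as the planned fail-back action), so verify them against ncli's built-in help on your AOS version before relying on this:
[code]
# On site B: list remote sites and protection domains, confirm the rebuilt site A is reachable
ncli remote-site ls
ncli pd ls

# On site B: planned migration of the active PD back to the rebuilt site A (assumed syntax)
ncli pd migrate name=<pd-name> remote-site=<site-A-name>
[/code]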
As most of you probably know, AOS 5.6 introduced the Volume Group Load Balancing function, well known as VGLB. As far as I know, the 5.6 version is a short-term support release. I'm now deploying two Oracle 12c RAC clusters on two 8000-series 6-node AOS/AHV clusters running AOS 220.127.116.11. It involves volume groups with multiple vdisks, network-related configuration, Linux-related tuning and so on. Of course, with 18.104.22.168 (so far the latest GA version in long-term support) I don't have the VGLB option, so every volume group's I/O is managed by a single CVM. On the other hand, with AOS 5.6 I could distribute this load across every CVM and every node's storage in the cluster. Of course this heavily impacts resiliency, performance and resource distribution. I have two questions and need some suggestions. 1) Could it be better to upgrade to 5.6 even though it is in short-term support? 2) Is it possible to update the volume groups' configuration "on the fly" with "vg.upd
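On the second question, a sketch of what I believe the acli side looks like on 5.6 - the load_balance_vm_attachments flag is an assumption on my part, so confirm it against the acli vg.update help on your build, and existing attachments may need to be re-established before the change takes effect:
[code]
# Inspect the current volume group configuration (placeholder VG name)
acli vg.get <vg_name>

# Assumed flag for enabling load-balanced attachments on an existing VG
acli vg.update <vg_name> load_balance_vm_attachments=true
[/code]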
Hi all, finally, after a long POC, I'm a new Nutanix customer :D I'm waiting for two NX-1365S blocks with 6 nodes, and the idea is to create a new vSphere 6 cluster and migrate the existing VMs from the old vSphere 5.5 U2. I've just checked the Nutanix support page and I see the latest supported version is 6.0 U1a build 3073146. This version seems to be affected by the CBT issue; is it possible to install the corrective patch ([url=http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2136854)]http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2136854)[/url] after the installation? Or install 6.0 U1b directly? Thanks in advance, and have a good weekend!
Hi, I upgraded the BMC firmware from 03.24 to 03.40 via the IPMI UI today. On the IPMI UI page I can see that the 3 nodes I upgraded are now on BMC 03.40, but Prism does not show the same even though the upgrade completed - in Prism those 3 nodes still do not show BMC firmware 03.40. I am using AOS 22.214.171.124 with AHV Nutanix 20160601.44. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/2107iA6B3D2779D7382C9.png[/img] Additionally, I get this FAIL message when running the NCC health check, even though all 4 nodes have Intel CPUs in them. The messages look like this:
Detailed information for bmc_bios_version_check:
Node 192.168.x.204 FAIL: No Intel CPU is found on the node.
Node 192.168.x.205 FAIL: No Intel CPU is found on the node.
Node 192.168.x.206 FAIL: No Intel CPU is found on the node.
Node 192.168.x.207 FAIL: No Intel CPU is found on the node.
Refer to KB 3565 ([url=http://portal.nutanix.com/kb/3565]http://portal.nutanix.com/kb/3565[/url]) for details
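If you want to confirm what the nodes themselves report, independent of Prism, a quick check is sketched below - run it from each AHV host, assuming ipmitool is available there (it normally is on AHV):
[code]
# Ask the local BMC for its firmware revision
ipmitool mc info | grep -i firmware
[/code]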
Hi, we are thinking of building a separate VMware Horizon View cluster and dedicating the Nutanix cluster to server workloads. We have a few scenarios for doing this, and one is to buy 4 new VMware hosts and connect them to the existing Nutanix cluster with iSCSI, simply because we already have enough storage in the existing cluster. My question is: will using iSCSI be a bottleneck? We will run about 260 Win7/Win10 task-worker VDI clients. Regards, Tobias
We are using Apache CloudStack 4.8.0 with VMware 5.5 to provide IaaS services, and we want to use Nutanix as the underlying infrastructure. Is this implementation supported? Are there any recommendations, best practices for deployment, or a reference architecture? We also plan to have HA for one class of our VMs, whereas others should not be replicated. What kind of mapping between CloudStack and VMware organisational units and Nutanix protection domain strategies should we have to achieve this goal? If none of the above is supported, is Apache CloudStack on the Nutanix roadmap, on VMware or AHV? Thank you in advance