Tools and Technologies
Hey guys, we are busy building a Nutanix 3.5.3.1 infrastructure with vSphere 5.5. We have six nodes in two blocks, and the design consists of two Nutanix clusters, each with its own vSphere cluster. HA is enabled on each cluster. We are getting an error on both Nutanix clusters in the Prism UI: "Virtual Machine auto start is disabled on the hypervisor of Controller VM". Nutanix best practices recommend enabling "Start and Stop Virtual Machines with the system" on each host and setting the Nutanix CVM to Automatic startup. But according to this VMware KB, the automatic startup feature is disabled when an ESXi host is moved into an HA-enabled cluster, which is exactly what the error message says. Is this expected behavior? Can this error message be suppressed, since it is simply not valid in combination with VMware HA-enabled clusters? Or is this still a configuration error?
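Not an answer to the HA question, but for what it's worth, here is a minimal way to inspect and set the autostart entry from the ESXi shell, assuming you can SSH to the host. <vmid> is a placeholder for the CVM's ID from getallvms, and the update_autostartentry argument order (vmid, start action, start delay, start order, stop action, stop delay, heartbeat setting) is as I recall from VMware's KB, so verify it before use:

# Show the host's autostart defaults and the current autostart sequence.
vim-cmd hostsvc/autostartmanager/get_defaults
vim-cmd hostsvc/autostartmanager/get_autostartseq

# Find the CVM's vmid, then add or update its autostart entry.
vim-cmd vmsvc/getallvms | grep -i CVM
vim-cmd hostsvc/autostartmanager/update_autostartentry <vmid> "powerOn" "120" "1" "guestShutdown" "120" "systemDefault"

Whether vSphere actually honours that entry once the host sits in an HA-enabled cluster is exactly the open question above.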
I've been working on a VDI project with Nutanix of late. The project is starting to wrap up now and I've started to blog about 'Quick Tips'. I thought I would share them here for all to see; hopefully they will be of help to others. The first one went up today: Nutanix Networking | Quick Tip. Thanks!
Hello, it made the news yesterday, but in case you missed it, there seems to be a bug in vSphere 5.5U1 that affects all NFS users. NetApp and VMware (at least, maybe Nutanix too?) are working on the issue. See the post below for more information: http://datacenterdude.com/vmware/nfs-disconnects-vmware-vsphere/ That may also explain Nutanix's delay in supporting 5.5U1? Did you guys catch this while working on the 5.5U1 support? Sylvain.
What is the official stance on using the Driver Rollup ISOs VMware has been releasing? Can they be used if the ESXi version is in accordance with the Nutanix-approved version compatibility list?
I wonder what the upgrade path(s) to NOS 4.0 are. I'm currently on 3.5.1 and looking to skip 3.5.2 and 3.5.3 to go straight to the new features.
We have a shiny new VMware NTNX environment, yay! Question: I'd like to be able to measure the virtual disk metrics for a specific workload (backup), preferably without any layer in between, so I can really see what the workload does. The idea is to use a virtual machine on NTNX as a backup server and see what parameters I should use for the backup storage. Is it possible to present a container on the CVM as an SMB share and write my backup to it? Or would it be best to use the Windows NFS client to access an export on the CVM? Background: I'd like to purchase a separate storage platform for the backups, but since I'm not really sure what kind of IO pattern and throughput I need to sustain, I'd like to test and measure before we buy.
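In case it helps with the measuring part, here is a rough sketch of one way to do it, assuming a Linux backup VM, a container exported over NFS, and that the client IP has been added to the container's filesystem whitelist in Prism first. 10.0.0.50 and backup-ctr are placeholders:

# Mount the container over NFS from the backup VM.
sudo mount -t nfs 10.0.0.50:/backup-ctr /mnt/backup

# Simulate a backup-style workload (large sequential writes) and record
# the throughput and latency fio reports.
sudo fio --name=backup-sim --directory=/mnt/backup --rw=write --bs=1M --size=10G --ioengine=libaio --direct=1 --iodepth=8

That at least gives hard numbers for the IO pattern before sizing the separate backup platform.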
Hi, I have run the command below to show my hypervisor password from one of the CVMs, but no password comes out. Can anyone help?

Command: zeus_config_printer 2> /dev/null | grep "hypervisor {" -A 3

Result:
nutanix@NTNX-14SM36060045-A-CVM:192.168.1.208:~$ zeus_config_printer 2> /dev/null | grep "hypervisor {" -A 3
hypervisor {
  address_list: "192.168.1.204"
  username: "root"
}
--
hypervisor {
  address_list: "192.168.1.206"
  username: "root"
}
--
hypervisor {
  address_list: "192.168.1.207"
  username: "root"
}

Apart from that, can anyone share where the hypervisor and CVM password configuration is stored in the Nutanix system? (The book says it is stored in Zeus, but I don't know the exact path/location.) Thanks. ;) Regards, Zack
Hello, we are in the process of moving our VMware site to Nutanix. With 3 locations, I would like some design tips for setting up the NTNX sites/cluster(s). I am very new to Nutanix. Situation: 3 physical sites, 2 sites with 4 NTNX hosts and 1 site with 3 NTNX hosts. Every site has a dedicated fiber connection. The two 4-host sites are for production; the 3-host site will be OTAP (dev/test/acceptance/production) and/or extra hosts for production. How do I design the NTNX cluster(s) with minimal risk of losing data and the most redundancy? If someone needs more info, please let me know. grtz
Today I've been working on a new PoC NX-7110 install at an existing NX-3460 customer. This customer has extremely tight networking standards, which has led to a number of issues in adding the additional node and expanding the cluster. The customer has the following existing setup within their current Nutanix cluster:
vmk0: external IP address for ESXi management only (VLAN 901)
vmk1: vMotion (VLAN 902)
vmk4: Nutanix management port (VLAN 903, a completely isolated, non-routable subnet for NFS traffic only)
Controller VM (VLAN 903)
Due to the requirement to have the CVM and vmk4 on the completely isolated NFS VLAN, this caused issues during the cluster expand. We've resolved these issues with manual ncli intervention. I don't want to dive into this; the details would be extremely long and tedious. Nutanix Engineering can speak to Soby if they need more info. By way of feature request, would it be possible to develop additional fields during the cluster setup / expand which takes in
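As a side note for anyone reproducing this kind of isolated-VLAN layout, a few stock esxcli checks from the ESXi shell make it easy to confirm the vmkernel interfaces and the VLAN tags on their port groups before and after the expand (nothing Nutanix-specific here, just the standard commands):

# List vmkernel interfaces and their IPv4 configuration.
esxcli network ip interface list
esxcli network ip interface ipv4 get

# Show standard vSwitch port groups with their VLAN IDs.
esxcli network vswitch standard portgroup list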
Dear all, configuring an MS cluster for SQL runs into a VMware vSphere limitation regarding physical compatibility mode (RDM). How is this configured when using Nutanix? Regards,
I'm currently working on a Nutanix deployment with VMware vSphere. I've cobbled together all my notes from the changes I made and placed them in a blog post here; hopefully others will find this useful. http://www.vwired.co.uk/2014/04/14/nutanix-configuration-with-vsphere-5-5/
Has anyone gotten the cluster_init page to work for setting up a new block from their Mac? I have done 4 or 5 installs and I can never get the page to come up. What am I doing wrong? How do I detect the IPv6 nodes online?
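For what it's worth, one approach that is often suggested for discovering the nodes from a Mac is pinging the IPv6 all-nodes link-local multicast address on the interface that's plugged into the same flat network as the block (en0 is a placeholder for your interface):

# Every IPv6 host on that segment answers; the CVMs' link-local
# addresses will be among the replies.
ping6 -c 3 ff02::1%en0

# The cluster_init page should then be reachable on one of the discovered
# CVMs; the port/path below is as I recall from the Nutanix setup docs, and
# browsers can be picky about link-local addresses with zone IDs:
#   http://[fe80::xxxx:xxxx:xxxx:xxxx%en0]:2100/cluster_init.html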
Has anyone tested Nutanix with fio rather than the diagnostics VM? When we test Nutanix, all cores of the host's processors are almost 100% loaded. We started testing on 3.5.2.1 and got some disks marked offline; we brought them back and upgraded to 3.5.3.1. Right after the install the results were OK (13k IOPS for a 400 GB disk on a 3350), but the next day (all data cold) we got awful results, sometimes with zero counters, for example 0 read / 0 write. Nutanix is in the upper consoles on the slide. Are these normal results? An ncc check says everything is OK. For the test on the slide we used a 100 GB disk, so we should be on one node.

Fio config:

>sudo fio read_config.ini
root@debian:~# cat test_vmtools.ini
[readtest]
blocksize=4k
filename=/dev/sdb
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32

>sudo fio write_config.ini
root@debian:~# cat test_vmtools.ini
[writetest]
blocksize=4k
filename=/dev/sdb
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32

>sudo fio rw_config.ini
root@debian:~# cat test_vmtoo
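Since the third config got cut off above, here is a hypothetical mixed job along the same lines as the read and write jobs, purely for illustration (the 50/50 split and job name are assumptions, not the original file):

# Write a mixed random read/write job file and run it.
cat > rw_config.ini <<'EOF'
[rwtest]
blocksize=4k
filename=/dev/sdb
rw=randrw
rwmixread=50
direct=1
buffered=0
ioengine=libaio
iodepth=32
EOF
sudo fio rw_config.ini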
Is there any special guide on how to set up Cisco 10 Gigabit switches for Nutanix environments? I'm using vSphere 5.0 U3 with Nutanix. Should jumbo frames be enabled? Is it possible to get 9 Gbit/s with a single process using iperf inside the CVM? I get 1.57 Gbit/s. Good hints from Intel (the Intel NICs are onboard). Which PCI Express speed: x4, x8, x16? http://www.intel.com/support/network/sb/CS-025829.htm "This graph is intended to show (not guarantee) the performance benefit of using multiple TCP streams."

PCI Express Implementation | Encoded Data Rate | Unencoded Data Rate
x1  | 5 Gb/sec  | 4 Gb/sec (0.5 GB/sec)
x4  | 20 Gb/sec | 16 Gb/sec (2 GB/sec)
x8  | 40 Gb/sec | 32 Gb/sec (4 GB/sec)
x16 | 80 Gb/sec | 64 Gb/sec (8 GB/sec)

http://dak1n1.com/blog/7-performance-tuning-intel-10gbe
Maybe a useful script: "For me, I had 10 machines to test, so I scripted it instead of running any commands by hand. This is the script I used:" https://github.com/dak1n1/cluster-netbench/blob/m...
http://www.vmware.com/pdf/10GigE_performance.pdf
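A couple of quick checks along those lines, with 192.168.1.10 standing in for an iperf server on the 10 GbE segment (addresses are placeholders):

# Single TCP stream vs. four parallel streams, 30 seconds each; the gap
# between the two is the multi-stream benefit the Intel graph refers to.
iperf -c 192.168.1.10 -t 30
iperf -c 192.168.1.10 -t 30 -P 4

# If jumbo frames are enabled end to end (vSwitch, vmkernel port, physical
# switch ports), an unfragmented 8972-byte ping from the ESXi shell should succeed:
vmkping -d -s 8972 192.168.1.10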
I'm deploying 4 Nutanix blocks for my customer. They are a security-conscious company, so changing the IPMI/ESXi/CVM passwords was a given after the install completed. We've experienced no end of issues after changing the passwords and now have a support call open to try to get the issue resolved. Don't get me wrong, the support is good, but I get the impression that changing the passwords is something of an unknown. I'd be interested to hear other people's experiences.