Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
I'm currently working on a Nutanix deployment with VMware vSphere. I've pulled together all my notes from the changes I made and put them in a blog post here; hopefully others will find it useful. http://www.vwired.co.uk/2014/04/14/nutanix-configuration-with-vsphere-5-5/
[img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/160iFF41C41040150583.png[/img]
Has anyone tested Nutanix with fio rather than with the diagnostics VM? When we test Nutanix, all cores on the host processors are loaded to almost 100%. We started testing on 126.96.36.199 and some disks were marked offline; we brought them back and upgraded to 188.8.131.52. Right after the install the results were OK (13k IOPS for a 400 GB disk on a 3350), but the next day (all data now cold) we got awful results, sometimes with zero counters, for example 0 read / 0 write. The Nutanix consoles are the upper ones on the slide. Are these results normal? An ncc check says everything is OK. For the test on the slide we used a 100 GB disk, so we should be staying on one node.

Fio config:
[code]
>sudo fio read_config.ini
root@debian:~# cat test_vmtools.ini
[readtest]
blocksize=4k
filename=/dev/sdb
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32

>sudo fio write_config.ini
root@debian:~# cat test_vmtools.ini
[writetest]
blocksize=4k
filename=/dev/sdb
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32
[/code]
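For comparison, a steady-state run of the same job can help separate a cold-tier dip from a real stall. A minimal sketch, reusing the poster's parameters and only adding a 10-minute time-based window (assumed values):

[code]
# Same random-read job, but time-based: IOPS are averaged over 600 s
# of steady state instead of a short burst served from the hot tier.
sudo fio --name=readtest --filename=/dev/sdb --rw=randread \
    --blocksize=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=600 --time_based --group_reporting
[/code]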
Is there any special guide on how to set up Cisco 10 Gigabit switches for Nutanix environments? I'm using vSphere 5.0 U3 with Nutanix. Should jumbo frames be enabled? Is it possible to get 9 Gbit/s with a single process using iperf inside the CVM? I get 1.57 Gbit/s.

Good hints from Intel. The Intel NICs are onboard. Which PCI Express speed: x4, x8, x16? http://www.intel.com/support/network/sb/CS-025829.htm This graph is intended to show (not guarantee) the performance benefit of using multiple TCP streams.

[b]PCI Express Implementation | Encoded Data Rate | Unencoded Data Rate[/b]
x1 | 5 Gb/sec | 4 Gb/sec (0.5 GB/sec)
x4 | 20 Gb/sec | 16 Gb/sec (2 GB/sec)
x8 | 40 Gb/sec | 32 Gb/sec (4 GB/sec)
x16 | 80 Gb/sec | 64 Gb/sec (8 GB/sec)

http://dak1n1.com/blog/7-performance-tuning-intel-10gbe

Maybe a useful script: I had 10 machines to test, so I scripted it instead of running the commands by hand. This is the script I used: https://github.com/dak1n1/cluster-netbench/blob/m...
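The multiple-stream point is easy to check from inside the CVM with iperf's parallel mode. A quick sketch, where 10.0.0.2 is a placeholder for a second CVM or host running "iperf -s":

[code]
# Single stream first, then four parallel TCP streams (-P 4) for comparison
iperf -c 10.0.0.2 -t 30
iperf -c 10.0.0.2 -t 30 -P 4
[/code]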
I'm deploying 4 Nutanix blocks for my customer. They are a security-conscious company, so changing the IPMI/ESXi/CVM passwords was a given after the install completed. We've experienced no end of issues after changing the passwords and now have a support call open to try and get them resolved. Don't get me wrong, the support is good, but I get the impression changing the passwords is something of an unknown. I'd be interested to hear other people's experiences.
Hello everyone, I was wondering if it's possible to have a different VLAN ID/subnet range for each of the traffic types below:
- Hypervisor management (ESXi)
- Nutanix cluster administration
- Nutanix cluster replication / AutoPath

Best of all would be to have replication and AutoPath on separate VLANs too. The rationale here is to comply with the customer's internal security policies regarding DMZ virtualization. We are allowed to use VLANs and are not forced to use different physical ports, but the security team (worldwide bank) is concerned about ESXi and Nutanix being on the same VLAN. Sylvain.
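For the ESXi/CVM side of the split, VLAN tagging at the port-group level is usually all that's needed; whether replication traffic can ride its own VLAN is a Nutanix-level question. A minimal sketch on a standard vSwitch (the port-group names and VLAN IDs are placeholders, run per host):

[code]
# Tag hypervisor management and CVM traffic with separate VLAN IDs
esxcli network vswitch standard portgroup set -p "Management Network" -v 10
esxcli network vswitch standard portgroup set -p "VM Network" -v 20
[/code]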
We have an NX-3050 and we frequently have to rebuild Linux VMs because their ext4 filesystems corrupt and drop into read-only mode. Our research has pointed us to articles where the Linux kernel has issues with SSDs. Has anyone else experienced this and, if so, how did you solve it? Edit: We created a container that bypassed the SSDs and we have not yet seen the issue there, but we would love to re-engage the SSDs on our servers. The Linux version/distro is Ubuntu 12.04.3 LTS. One of the articles we found relating to this is: [url=http://askubuntu.com/questions/262717/ubuntu-12-04-ssd-root-frequent-random-read-only-file-system]http://askubuntu.com/questions/262717/ubuntu-12-04-ssd-root-frequent-random-read-only-file-system[/url] I hope this helps if any of you have experienced this issue.
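When a guest drops to read-only, the kernel log normally records the trigger just before the remount, which helps tell an ext4 journal abort from a SCSI-level error underneath it. A quick check from inside an affected VM (device names here are examples):

[code]
# Look for the errors that preceded the read-only remount
dmesg | grep -iE 'ext4|i/o error|aborting journal'
# Check whether the superblock was marked with errors
sudo tune2fs -l /dev/sdb1 | grep -i 'state'
[/code]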
We would really like to use the distributed switch in our VMware cluster, but there seems to be a lack of documentation on how to properly implement it with Nutanix. For example, I seem to remember that the Nutanix Controller VMs should not be moved from the switch they were installed on, and that they have to be able to talk to vSphere and/or the ESXi hosts. But if I want to put vSphere and the ESXi hosts onto the dvSwitch, how do I do this? I might not be wording this question correctly, but one of the goals would be to have traffic separated by VLAN and protected by QoS where necessary. Is there documentation on how to do this correctly in a Nutanix environment? Has anyone tried to do this?
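One sanity check during a staged migration is to confirm from each host that the CVM's port group is still on the standard vSwitch, with an uplink attached, before moving the remaining vmnics over. A sketch using standard esxcli listings:

[code]
# Standard vSwitches on this host, with their uplinks and port groups
esxcli network vswitch standard list
# Distributed vSwitches the host has already joined
esxcli network vswitch dvs vmware list
[/code]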