Hi guys,
I'm still digging to give correct answers to my manager, who is weighing the pros and cons of Nutanix.
As I need to produce a design without knowing exactly where he wants to go, I'm making assumptions. I understand they want to put NSX on it, along with vRA and vCloud SP.
I've found a couple of recommendations / tips on NSX over Nutanix, but nothing that can be used for the design part (it's more about implementation, which is quite cool!). Has anyone integrated it on a 4.5 cluster?
Does Nutanix plan to give us a technical note / any other cool material on NSX?
Cheers,
If you click on the URL it will take you to the portal. If you copy and paste it, it will not work, as this tool truncates it.
I got the original URL fixed yesterday, but you can still access the content through the Support Portal under Solutions Documentation. So please use the original URL or the one on the Support Portal: https://portal.nutanix.com/#/page/solutions/details?targetId=SN-2040_VMware_NSX_for_vSphere:SN-2040_VMware_NSX_for_vSphere. Please make sure to copy the complete URL.
vcdxnz001
Thanks for the URL. Unfortunately it is still not working... perhaps it was truncated when you posted it. 🙂
Thanks for bringing this to our attention. We will get the link fixed. In the meantime you should be able to access the document through the Nutanix Support Portal. The URL is https://portal.nutanix.com/#/page/solutions/details?targetId=SN-2040_VMware_NSX_for_vSphere:SN-2040_VMware_NSX_for_vSphere.
Hi bbbburns, vcdxnz001,
I am in a similar situation as below. Is it possible for you to fix the links in the above post, please? I am getting a 404 Page Not Found error while trying to access the documents.
Cheers
Sushil
Yes, thanks for your involvement in this thread! I'll read this again and again... I was on NSX for a while, as I passed my VCP ;)
Thomas
Following up on this thread - we've finally released our Nutanix Solution Note covering VMware NSX for vSphere.
Check out the blog announcing it here:
http://next.nutanix.com/t5/Nutanix-Connect-Blog/Nutanix-Validates-Two-Crucial-Deployment-Scenarios-with-VMware/ba-p/7580
The document can be found here:
http://www.nutanix.com/go/vmware-nsx-for-vsphere.html
What version of NSX are you running where you experienced that? NSX 6.2 doesn't even give you the option to choose a DFW default rule of "deny", but that was an option during setup in previous versions. Obviously, if you deploy NSX with the default rule as deny, it's going to put every VM on an island, including your CVMs. I'm just interested to hear whether you had NFS issues with a default rule of allow, or possibly another firewall rule impacting them.
I personally chose not to have the CVMs on an NSX logical network (and they're exempted from the DFW as well), simply because it's the underlying layer everything rides upon, and I didn't want to run into a scenario where an issue with NSX also caused me a storage problem. I've gone so far as to dedicate separate physical NICs to the CVM/storage traffic, with the other physical NICs on a different dvSwitch for NSX/regular VM networking. It might be overkill, but I feel more comfortable that way given some of the bugs I've found in newer releases of VMware's products recently.
This may be useful:
Please exclude the CVM IP addresses and the cluster VIP from the NSX firewall settings; otherwise, the NFS mount breaks.
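For reference, here is a minimal sketch of how that exclusion might be scripted against the NSX-v REST API. The exclusion-list endpoint (PUT /api/2.1/app/excludelist/{vm-moref}) is how I recall it from the NSX-v 6.x API guide, and the manager address, credentials, and CVM MoRef IDs below are placeholders, so please verify everything against the API reference for your version:
```python
import requests
from requests.auth import HTTPBasicAuth

# Sketch only: add each Nutanix CVM to the NSX-v DFW exclusion list so that
# distributed-firewall rules never touch CVM/NFS traffic.
# Endpoint recalled from the NSX-v 6.x API guide (PUT /api/2.1/app/excludelist/{memberID});
# confirm it against the API reference for your exact NSX version.

NSX_MANAGER = "nsxmgr.example.local"          # hypothetical NSX Manager FQDN
AUTH = HTTPBasicAuth("admin", "changeme")     # placeholder credentials
CVM_MOREFS = ["vm-101", "vm-102", "vm-103"]   # example vCenter MoRef IDs of the CVMs

for moref in CVM_MOREFS:
    url = f"https://{NSX_MANAGER}/api/2.1/app/excludelist/{moref}"
    resp = requests.put(url, auth=AUTH, verify=False)  # lab only: certificate check disabled
    resp.raise_for_status()
    print(f"Added {moref} to the DFW exclusion list")
```
Note that the exclusion list works per VM, so it only covers the CVMs themselves; the cluster VIP is not a VM, so if you have rules matching on IP you may also want an IP set containing the CVM IPs and the VIP, with an explicit allow rule near the top of the rule base.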
Hi,
Many thanks for sharing! I'll give you feedback from the trenches when it's up and running ;)
Cheers,
Thomas
Hi,
I'm doing the testing for NSX with Nutanix that will be documented in an upcoming tech note. I'm also planning to put a blog post together once I've done the testing. The results so far are very good. At a high level, Nutanix is invisible to NSX and it really just works. The recommended versions are NSX 6.2 with vSphere / ESXi 6.0 U1a (this assumes NSX-V). This gives the best features and also the highest performance. I have tested 4.8 GB/s, i.e. line rate, per host with 2 x 10GbE NICs (it scales linearly) and an average latency of 76 µs between nodes.
The design patterns for NSX in terms of the vSphere clusters are the same as VMware recommends: Management Cluster, Edge Services Cluster(s), and Resource/Compute Cluster(s). Installation / configuration is also the same. And as our CVM is just a VM, it works on an NSX vWire too.
An important consideration is the MTU size on the underlay network. In my environment I'm using 9216 on the physical switches and 9000 on the vDS. Due to the VXLAN overhead, this makes the maximum MTU of any VM 8950 bytes. In my case the CVMs are configured with MTU 8950 and sit on an NSX vWire/VXLAN stretched between two different ToR switches. I've configured a leaf-spine network architecture with L2 ToR to L3 spine.
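As a quick illustration of that MTU budget (just the arithmetic from the paragraph above, using the standard 50-byte VXLAN encapsulation overhead):
```python
# Sanity check of the MTU budget described above.
VXLAN_OVERHEAD = 50            # outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN header 8
underlay_mtu = 9216            # physical leaf/spine switch ports
vds_mtu = 9000                 # vDS / VTEP vmkernel interface
vm_mtu = vds_mtu - VXLAN_OVERHEAD   # largest frame a VM (or CVM) can send over a vWire

assert underlay_mtu >= vds_mtu, "the underlay must carry the full encapsulated frame"
print(vm_mtu)                  # 8950, matching the CVM MTU used above
```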
In terms of platform model selection, the 1065G4 is a great choice for management clusters and ESG clusters; the 3060G4, 8035G4, or any of the other models make great resource/compute clusters.
Nutanix works with vRA, vCloud SP, etc., so nothing changes there; again, the platform is invisible to them. The value Nutanix brings is a greatly simplified architecture that is quick to deploy, expands on demand, is very simple to manage, and self-heals when things go wrong. Deploy in minutes, run a one-click upgrade at lunch time, and spend more time doing what you want to do.
If you have any specific questions I'd be happy to answer them.
Let me see if I can get you some info