Hello everyone,
I was wondering if it's possible to have a different VLAN ID/subnet range for each of the different traffic types below:
- Hypervisor Management (ESXi)
- Nutanix Cluster administration
- Nutanix Cluster replication / AutoPath
And the very best would be to even have replication & AutoPath on different VLANs.
The rationale here is to comply with customer internal security policies regarding DMZ virtualization.
We are allowed to use VLANs and are not forced to use different physical ports, but the security team (a worldwide bank) is concerned about ESXi & Nutanix being on the same VLAN.
Sylvain.
Hi shuguet,
Putting the HV/ESXi mgmt traffic on a separate VLAN should be no problem (it's even mentioned in the install guide).
As for the "Nutanix Cluster Administration" and the "replication traffic", I don't see that supported currently.
Why?
If you do the VLAN split-up at the hypervisor (vSwitch) level, you would need a 3rd/4th vNIC in your CVM (to configure the additional IPs for the additional networks).
That would of course be no problem for the Linux the CVM runs on, but I can't see any configuration support for binding the different Nutanix services to those interfaces.
If you "forward" the VLAN-tagged traffic into the CVM and do the split-up inside the VM, it would still be the same problem.
The only (theoretical) way I see is to configure an Open vSwitch inside the CVM with the 10G bond vNICs as uplink (carrying tagged traffic) and a native VLAN on which the primary Nutanix CVM IP resides. In addition you'd need some "OpenFlow/Open vSwitch rule" magic (here it gets tricky :P) which tags e.g. replication traffic differently.
absurd :-)
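Just to make the idea a bit more concrete, a purely illustrative sketch (bridge/interface names, VLAN IDs and the replication port are assumptions on my side, nothing that is supported):
    # Open vSwitch bridge inside the CVM, with the uplink vNICs bonded
    ovs-vsctl add-br br0
    ovs-vsctl add-bond br0 bond0 eth0 eth1
    # carry tagged traffic; the primary CVM IP stays on the native VLAN (10 here)
    ovs-vsctl set port bond0 vlan_mode=native-untagged tag=10 trunks=10,20
    # the "rule magic": re-tag replication traffic (2020 assumed here as the cerebro port) onto VLAN 20
    ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=2020,actions=mod_vlan_vid:20,normal"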
Edit: you may post this into the Suggestion Box / Product Features category.
Yeah, I was thinking of it maybe as an advanced configuration done with help from support or something.
I'm well aware of the technical limitations of the UI, but as you pointed out, the Linux VM would be more than happy with more vNICs.
It's just a matter of adding the option, so if it's not possible right now, I will take your advice and post in the suggestion section.
Sylvain.
It's part of the solution, but the main concern is to split CVM Management & CVM-to-CVM replication.
As the first kind of traffic is management (often routed/remote) & the second is storage (most of the time L2 only, local), it would make a lot of sense to split them into 2 VLANs.
Sylvain.
Rereading the docs reminded me of the address-list option of the remote-site command used for replication.
Normally you would use this parameter for the CVM IP addresses, but the docs also talk about a theoretical (VPN) tunnel address.
To me this looks like you would, at least from a technical point of view, be able to add a second vNIC to the CVM (in a different VLAN, (un)tagged by the ESXi vSwitch) and use that IP for replication traffic.
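If that flies, the remote site would then simply be defined with the replication-VLAN addresses of the peer CVMs. Assuming current ncli syntax, something along the lines of (site name and IPs are made-up placeholders):
    ncli remote-site create name=dr-site address-list=10.20.0.11,10.20.0.12,10.20.0.13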
(Check whether the cerebro service is listening on 0.0.0.0 or whether it is bound to a specific interface.)
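A quick way to check that on the CVM (2020 is an assumption for the cerebro port; use whatever port it actually shows):
    # 0.0.0.0:<port> means it listens on all interfaces; a specific address means it is interface-bound
    sudo netstat -tlnp | grep cerebro
    ss -ltn | grep 2020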
If that works, Nutanix will possibly support it.
I'm still wondering about the "tunnel" comment in the docs.
From the partner site:
Description
Sometimes a network design requires the CVMs and ESXi hosts to be on separate networks. Current versions of the Nutanix cluster software do not allow this, and the ha.py failover script will not function properly. A workaround is detailed below; however, this still requires the use of addresses within the CVMs' network to assign to the ESXi hosts (in addition to the primary management addresses assigned on the hosts outside of the CVM network).
Solution
Workaround:
- Create a storage VMkernel port group on each ESXi host.
- Assign an IP address within the CVMs' subnet. This will allow the CVM to communicate with the host to get VM/CPU/memory statistics and to automatically mount NFS datastores via the HyperInt API.
- Unselect vMotion, management traffic, fault tolerance logging and iSCSI on this port group.
- Put the ESXi IP address (the new vmk that you created) into the CVM's NFS whitelist (see the command sketch after this list), where x.x.x.x is the IP address of the new VMkernel port group and n.n.n.n is the subnet mask.
- Repeat the whitelist step for each of the new VMkernel IP addresses you created in the first step (optionally, you can add the entire CVM subnet instead).
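For reference, a hedged sketch of what the port group / VMkernel creation and the whitelist entry could look like from the command line, assuming a standard vSwitch and current esxcli/ncli syntax (port group name, vmk number and VLAN ID are placeholders):
    # on each ESXi host: dedicated storage port group on the existing vSwitch, tagged with the CVM VLAN
    esxcli network vswitch standard portgroup add --portgroup-name=CVM-Storage --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=CVM-Storage --vlan-id=10
    # VMkernel interface in that port group, with an address from the CVMs' subnet
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=CVM-Storage
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=x.x.x.x --netmask=n.n.n.n --type=static
    # on a CVM: whitelist the new vmk address for NFS access
    ncli cluster add-to-nfs-whitelist ip-subnet-masks=x.x.x.x/n.n.n.n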