Hi,
We have a Nutanix cluster currently running on Dells and are looking to move the CVMs to a new subnet. The new subnet has ESXi management for each host, but on a different vmk. Is it possible to point the CVMs at the new vmk to get around the 'CVM subnet is currently different than the Hypervisor subnet' issue?
Cheers
Thomas
You should check this link before changing the IP address.
I'm just curious, what's the driver here? Why not keep vmk0 and CVM eth0 in the same subnet, and just add a secondary vmk3/4/5/etc for whatever else you need to do?
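Adding a secondary vmk is a quick per-host change. A rough sketch via esxcli (the vmk number, portgroup name, and addressing below are placeholders for your environment):

  # create the new vmkernel interface on an existing portgroup
  esxcli network ip interface add --interface-name=vmk3 --portgroup-name="Secondary-Mgmt"
  # assign it a static address on the new subnet
  esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
  # optionally tag it for management traffic
  esxcli network ip interface tag add --interface-name=vmk3 --tagname=Management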
I've been presented with a networking solution where the two TOR switches have a VLAN between them just for CVM traffic. The plan was to change the CVMs onto this new VLAN, then add a secondary ESXi management network and get them to use that.
Reading the doco on re-IPing seems to suggest that would work? Though I believe I would need to change the CVM IPs manually, and there doesn't seem to be any doco for that.
Cheers
We do have a KB that mostly covers this: https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008bRUCAY
That said, it sounds like you want to do this online. As laid out in the KB, you'd have to follow the re-IP procedure in the admin guide, which requires a small downtime window.
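At a high level, the offline flow looks roughly like this. This is a sketch only; whether the external_ip_reconfig helper is present depends on your NOS/AOS version, so follow the KB and admin guide for your release:

  # from any CVM, after shutting down or migrating guest VMs
  cluster stop                 # stop cluster services on all CVMs
  external_ip_reconfig         # guided CVM re-IP script, if your version includes it
  # once the CVMs come back up on the new addresses:
  cluster start
  cluster status               # verify all services are UP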
Past that, I'm curious, what's the real driver here? Is there a technical or business requirement driving this?
Asking as, honestly, the vast majority of our customers don't do this, at least not in this manner. Most of the isolation/segmentation requests we get are for isolating the environment on a separate set of physical switches, not just a VLAN on the same switches.
Thanks for the info. The requirement is coming from the network team: they want to drive all the replication traffic onto that specific VLAN, as it only exists between the two TOR switches. The aim is that traffic doesn't have to leave the rack.
It looks like I can do that by changing the current IPs of the CVMs to that subnet and adding ESXi management interfaces there, then having a dual-homed box for management.
Downtime is OK as this is our first Nutanix setup and it's all in test mode at the moment.
Cheers
Quick thought on not leaving the rack: all Nutanix CVM traffic is east-west anyway, so regardless of VLAN, it would stay on the TORs if all nodes in the cluster are attached to the same TORs.
My understanding from networking is that the TORs are not linked (apart from that isolated VLAN). Each node has one link into each TOR, so potentially traffic from one node would go out of the rack and back in through the other TOR to get to another node.
I believe this is due to not being able to run a vPC across the TORs.
Cheers
Just because you aren't running a vPC down to the node doesn't necessarily mean the TORs aren't vPC-enabled (assuming Cisco Nexus).
Assuming the TORs are vPC-enabled, they will have vPC-enabled VLANs for everything, so that failover is "sane".
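For context, the vPC side is standard NX-OS config; a rough sketch, assuming Nexus TORs (the domain ID, keepalive addressing, and port-channel number are placeholders):

  feature vpc
  vpc domain 10
    peer-keepalive destination 10.0.0.2 source 10.0.0.1
  interface port-channel1
    switchport mode trunk
    vpc peer-link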
If you want to delve into this more, reach out at jon at nutanix dot com and I'm happy to chat through this on a webex
Thanks Jon. I'm told the TORs can't be vPC-enabled (Nexus 9000 series); it would need to have the vPC back on the 7000s.
Our networking team is reviewing and may still reach out to you. These are our first Nutanix boxes; impressed so far, it's just this networking issue that's slowing down the implementation.
Cheers
Well, every implementation is different, but we've seen a TON of Nexus 9000s, especially the 9300s, and they vPC just fine, both "up" (to something like a 9500, 7000, 7700, etc.) and "down" (to Nutanix or other devices).
Not trying to twist your arm, just want to make this as easy as possible for y'all. Feel free to reach out at jon at nutanix dot com; happy to chat with you and your network team to work through the options.
Cheers,
Jon