Solved

Change Virtual Switch Name

  • 29 November 2017
  • 4 replies
  • 1237 views

Badge +1
Is it possible to change the name of the virtual switch? The installed scenario is Hyper-V 2012 R2 with Nutanix 5.1.3. We have an existing Hyper-V cluster we plan to move all VMs off of and onto the new Nutanix cluster. We use VMM 2016. After bringing the Nutanix-backed Hyper-V cluster into VMM, we cannot select the option to move VMs from one cluster to another due to logical network inconsistencies, even though the IP space is the same. For example, the old Hyper-V cluster uses "Public" as the logical network name for VMs, and the new Nutanix cluster was built with "ExternalSwitch" as the name. Changing the name within Hyper-V on the Nutanix cluster results in the CVMs losing communication and the Nutanix cluster failing.

Is there a way to shut down the Nutanix cluster, change the network switch name the CVMs connect to, rename the switch in Hyper-V Virtual Switch Manager, and then bring the Nutanix cluster back online?

Best answer by mmcghee 29 November 2017, 22:39


This topic has been closed for comments

4 replies

Userlevel 7
Badge +35
Thanks for contributing your first post, PASouza - hope to meet you at .NEXT in person next year!

bbbburns is our rockstar networker; are you able to share some insights here?
Userlevel 4
Badge +18
Hi PASouza

You can technically rename the virtual switch. You'd want to make sure you do that from PowerShell and not Hyper-V Manager, since a rename from Hyper-V Manager will also rename the management OS virtual NIC (also named "ExternalSwitch"), which we don't want. You'd also need to change each CVM to point to the renamed virtual switch, as you point out. And you'd ideally either do this one node at a time or stop the cluster before proceeding.
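A minimal PowerShell sketch of that rename, run on each Hyper-V host. The switch names and the CVM name pattern below are examples based on this thread, not verified against your environment:

# Rename only the virtual switch; a rename from Hyper-V Manager would also rename the host vNIC.
Rename-VMSwitch -Name "ExternalSwitch" -NewName "Public"

# Repoint the CVM's network adapters at the renamed switch (CVM name pattern is an assumption).
Get-VM -Name "NTNX-*-CVM" |
    Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName "Public"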

But even after this I'm not sure it fully solves the problem. As a first step, before trying to modify the default configuration, you'll want to assign the logical networks that the existing VMs use to the Nutanix cluster. This can be done against either a standard switch or a logical switch. A standard switch (which is the default) requires selecting the logical network under the hardware properties of the host, against the NIC team (image below).
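If you prefer scripting it, a rough SCVMM PowerShell equivalent looks like this. The host name, logical network name, and NIC filter are placeholders, so treat it as a sketch rather than exact syntax:

# Find the host and the logical network the old cluster's VMs use.
$vmHost  = Get-SCVMHost -ComputerName "ntnx-node-01"
$logical = Get-SCLogicalNetwork -Name "Public"

# Pick the teamed physical adapter that backs the external switch on this host.
$nic = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
    Where-Object { $_.ConnectionName -like "*Team*" }

# Associate the existing logical network with that adapter.
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $nic -AddOrSetLogicalNetwork $logical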



You can alternatively apply a logical switch that leverages an uplink port profile which is configured to also use the required logical networks. SCVMM 2016 allows you to do this with the "convert to logical switch" option, which is again under the host properties, then under virtual switches (image below).


The logical switch must be configured to match the settings of the standard switch for this to work. Microsoft details the requirements under the convert section here: https://docs.microsoft.com/en-us/system-center/vmm/network-switch?view=sc-vmm-1711

Please try applying the logical network to the standard or logical switch (if you haven't already) to see if that helps you get around the migration issue. If not, you can try the rename route. I'm sure support can step you through that process, or you can ping me at mmcghee@nutanix.com and I can help.

Thanks,
Mike
Badge +1
Thanks much, mmcghee. Your first suggestion, modifying the hardware settings for the Nutanix nodes by adding the VM logical network from the source cluster, allows the Nutanix cluster/nodes to be selected as a destination for a VM migration. However, we ran into a problem with the hosts in the source cluster not having access to the Nutanix SMB share.

We added a storage device of type "SAN and NAS devices discovered and managed by SMI-S provider" to the VMM fabric by creating a new Run As account with the Prism portal admin credentials and using it to configure the storage device. When it came in, it listed both containers (the default storage container and NutanixManagementShare), with the default container (named NTX-Container in our instance) showing as "File share managed by Virtual Machine Manager". When attempting to add that file share storage to the old cluster, the job fails because it cannot grant access to the file share for all of the computer accounts: VMM attempts to use the default domain Run As account to access the share rather than the Prism admin account configured for the storage device.

Any suggestions as to where we might go from here? I feel like we may have chased the rabbit down the wrong hole, as I now cannot seem to remove the added storage device from the fabric and start over.
Userlevel 4
Badge +18
Hi PASouza

You will need to present the targeted Nutanix SMB share (container) to the old cluster, so I think you're on the right path. I'm not sure if we support our SMI-S provider with SCVMM 2016 yet (we will with our 5.5 release). Please double-check that the Nutanix shares show up with available space under the "File Servers" view (and do not say 0 GB available capacity). If they show up as 0 GB, what you'll need to do is remove the registered provider and manually add the SMB share (\\ntnxclustername\containername) to the cluster in SCVMM (be sure to do a rescan after adding) for both the old and new cluster. You'll also need to add the SCVMM server and the nodes in the old cluster to the Nutanix cluster whitelist (you should do this first). You can do this for each container separately or for the whole cluster if you like. This will allow the old cluster and SCVMM to access the targeted SMB share for migration, independent of any Run As account. Hopefully this gets you further.
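For reference, a rough SCVMM PowerShell sketch of the manual share registration. The share path and cluster names are placeholders, and you should whitelist the SCVMM server and the old cluster's hosts in Prism before running it:

# Placeholder UNC path to the Nutanix container.
$share = "\\ntnxclustername\containername"

# Register the share with both the old and the new cluster (names are examples).
foreach ($clusterName in "OldHyperVCluster", "NutanixCluster") {
    $cluster = Get-SCVMHostCluster -Name $clusterName
    Register-SCStorageFileShare -FileSharePath $share -VMHostCluster $cluster
}

# Rescan the hosts so the newly registered share shows up.
Get-SCVMHost | Read-SCVMHost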

Thanks,
Mike