Setting up new Dell EMC XC430 Xpress. Need help with network connections.

Badge +1
Hi Guys,

My office is getting 3 nodes of the Dell EMC XC430 Xpress and I was thinking of setting them up myself instead of using professional services. I am a VCP and have prior experience setting up ESXi using traditional SAN/hypervisor methods. Hyper-converged is very new to me, so I need some help from you guys on the networking part. I've read that the XC430 Xpress is very easy to set up and that the factory configuration script sets up everything, including VMware ESXi.

I have read from the Dell documentation here

I will be using 2 x Dell X4012 10Gb SFP+ switches to connect to the nodes.
I believe there will be 1 x iDRAC port, 4 x 1Gb ports, and a 2 x 10Gb SFP+ card in the PCIe slot.

Where are these connections supposed to go? My understanding is that the 1 x iDRAC port and the 4 x 1Gb ports connect to my LAN, which is the same network as my work laptop? The 2 x 10Gb SFP+ ports connect to each of the 2 x Dell 10Gb switches? The Dell documentation says 10Gb goes to the production network. Is "production" the user LAN, or rather a private iSCSI-style network like the one used with a traditional iSCSI SAN and ESXi hypervisors?

Can I use private IPs on the 10Gb production network, like 192.168.20.x? Dell specifies not to use 192.168.5.x because those addresses are used by the CVMs. Is the IP address setup the same as for the ESXi hosts?

Do I need to enable things like MTU 9000 jumbo frames, as in normal iSCSI SAN setup procedures?

Dell says "Connect the management workstation for cluster set up to the same production network subnet where the XC Xpress appliance resides". How do I connect a laptop to SFP+ connections?

Thanks everyone for your help.

8 replies

Badge +3
The first thing you'll want to do is log in to the support portal, go to Downloads > Foundation, and download the Java applet. The XC series servers work best using the Java applet. Plug all of your servers and your laptop into a simple 1Gb switch (both the iDRAC and the 1Gb port). Do not use your SFP+ connections to image the nodes. Change the IP address on your laptop to an address within the range of that network.
In the applet, you can configure each of the networking interfaces. If you are using a 192.168.10.x network, that is okay. Place the ESXi IP addresses and the CVM IP addresses on that network. Your iDRAC IPs can be on a separate private network.

CVM IPs =
Hypervisor IPs =

If you need to change the iDRAC, you can do so through the standard iDRAC portal as you would on any other Dell server. It's just easier to get things up and running if they are in the same VLAN.

Use the Field Installation Guide for instructions and reference.

After you walk through the applet instructions, disconnect the cables and connect your 1Gb and 10Gb links to your switches. When you log in to ESXi on each node, move your 10GbE network interfaces from standby to active.
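For reference, here's a minimal sketch of making that active/standby swap from an ESXi shell. The vmnic numbering is an assumption (vmnic0/vmnic1 as the 1Gb ports, vmnic2/vmnic3 as the 10GbE ports); check `esxcli network nic list` on your own hosts first:

```shell
# List physical NICs and the current failover policy on vSwitch0
esxcli network nic list
esxcli network vswitch standard policy failover get -v vSwitch0

# Promote the 10GbE uplinks to active and demote the 1Gb uplinks to standby
# (vmnic names are assumptions -- verify with 'esxcli network nic list')
esxcli network vswitch standard policy failover set \
    -v vSwitch0 -a vmnic2,vmnic3 -s vmnic0,vmnic1
```

The same change can also be made in the vSphere client under the vSwitch's NIC teaming settings.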

Hope this helps!

Userlevel 2
Badge +12
I have set up quite a lot of XC nodes, just not the Xpress stuff...

iDRACs go on your management network. If you are going to use the 10Gb ports and don't have a requirement for the 1Gb ports, you can leave them unplugged.

We keep the standard MTU of 1500 for the 10Gb ports, set them as trunk ports, and set a native VLAN on the 10Gb ports to use for the host and cluster network.

All traffic, cluster traffic and VM traffic alike, will go over the 10Gb ports.
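As an illustration only, a node-facing trunk port with a native VLAN for the host/cluster network might look like the sketch below. This uses Cisco-IOS-style syntax as an assumption (the Dell X-Series switches are mainly web-managed), and VLAN 20 (host/cluster) and VLAN 30 (guest VMs) are example values:

```
! Hypothetical IOS-style config for one 10Gb node-facing port
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk native vlan 20
 switchport trunk allowed vlan 20,30
 no shutdown
```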

I should add: we don't use the Java applet; instead we configure an IP on one of the CVMs and then use the local Foundation at http://cvm_ip:8000
This allows us to remotely set up clusters. Follow Adam's directions for the standard procedure.
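If it helps, a rough sketch of giving a CVM a temporary static IP from its console (the CVM is CentOS-based; the interface name and addresses below are example assumptions, and the Field Installation Guide has the authoritative steps):

```shell
# On the CVM console, edit the eth0 config (all values are examples):
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
#   BOOTPROTO=none
#   IPADDR=192.168.20.11        # example address
#   NETMASK=255.255.255.0
#   GATEWAY=192.168.20.1

# Restart networking, then browse to http://192.168.20.11:8000
sudo service network restart
```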

This is all based on the usual XC installs, not the Xpress, though...

Badge +1
Hi Leslie8888 - adding a few more thoughts to the other replies.

You can actually go a couple of different ways for the "production network":
  • Use only the 1GbE network
  • Use only the 10GbE network
  • Put CVM/hypervisor traffic on 10GbE and VM traffic on 1GbE

Before summarizing the options, here are some things to consider and a few questions:
  • The production network in the diagram refers to a 10GbE network that carries both the CVM/hypervisor traffic and the guest VM (user VM) traffic.
  • The CVM and hypervisor management IPs must be on the same subnet, as required by the Nutanix software. This is also the subnet your workstation has to be on in order to do the discovery and configuration.
  • Which network is your vCenter located on (if you have it set up already)? Since the hosts and CVMs need to be on the same subnet, if you want to put the hosts and CVMs on the 10GbE network, the vCenter must be on a network with a route to it in order to manage the hosts. The answer to this question will probably push you towards one of the options.
  • Be sure to use Dell PowerTools Fabric Manager 1.0.0, available at the link, to verify that the top-of-rack network config is adequate.

Everything on 1GbE
Running the XC Xpress solution on 1GbE is supported, but the preference is to run on 10GbE if possible. With the limited cluster size of Xpress, 1GbE should be adequate. Going this route, you would leave your 10GbE ports disconnected, connect all the 1GbE ports to the LAN switch, and perform the discovery and configuration from your laptop, which you mentioned is on this network. The downside is that you aren't able to leverage your 10GbE switches, but this would require the smallest number of changes to your existing environment, based on your description of it.

Everything on 10GbE
Based on what I know about the environment, I don't think this is a likely option unless you want to reconfigure your network, but I can briefly describe it. If you ran the whole environment on your 10GbE network, all the infrastructure management services (Prism, vCenter, etc.) and guest VMs would have IPs there, and they would only be accessible from that network or from a network with a route to it (which I don't expect there to be, since you stated it was a private network).

Use both 1GbE and 10GbE
  1. You can run the Xpress solution (CVMs and hypervisor management) on 10GbE and run your guest VMs on 1GbE by configuring ESXi host networking properly on each of your nodes. This is a little more complex and depends on your level of comfort with ESXi host networking. Going this route, you need to either create a new vSwitch for only the 1GbE vmnics and remove the 1GbE vmnics from the existing vSwitch0 on each of your ESXi hosts (my preference), or create a separate portgroup on the existing vSwitch0 and override the NIC teaming failover so that the new portgroup only uses the 1GbE vmnics and the existing VM Network portgroup only uses the 10GbE vmnics. Do not make any changes to the "vSwitchNutanix" vSwitch on each host, as it is used for the I/O data path. This option also requires that you have a machine connected to the 10GbE network for configuration and ongoing management, or that you configure a route between the networks (probably not trivial unless you are also the network owner).
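A rough sketch of the first approach (a new vSwitch dedicated to the 1GbE vmnics) from an ESXi shell. The vmnic and portgroup names are assumptions, so check `esxcli network nic list` on your hosts first, and leave vSwitchNutanix untouched:

```shell
# Create a new standard vSwitch for guest VM traffic on the 1Gb NICs
esxcli network vswitch standard add -v vSwitch1

# Move the 1Gb uplinks (assumed vmnic0/vmnic1) off vSwitch0 onto vSwitch1
esxcli network vswitch standard uplink remove -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink remove -v vSwitch0 -u vmnic1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1

# Add a portgroup for the guest VMs (the name is an example)
esxcli network vswitch standard portgroup add -v vSwitch1 -p "VM Network 1GbE"
```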

Where possible, it's good to have the iDRACs configured on a separate network from production, even if they are both 1GBE.

If you do end up going the route where you need to connect your workstation to the 10GBE switches to configure the cluster, these handy little converters do the trick:

Please reach out to me directly if you've got any more questions! We'd love your feedback on the self-deploy experience if you decide to go that route.
Badge +1
Thanks Guys,

I plan to set it up the way I did when I set up an EqualLogic plus 3 x Dell R620 as ESXi hosts:
a 10Gb network solely for storage traffic, and a 1Gb network for VMware and iDRAC, on the same network as the user LAN.

Badge +1
Hi Guys,

Thanks for your replies to my thread. You guys are really helpful and are experts in Nutanix hyper-converged technology.

I would like your help with the configuration of the XC430 Xpress. I've heard that setup is quite simple, as the VMware hypervisor and configuration come pre-installed from the factory. I would prefer to avoid professional services if the configuration is relatively simple for me to handle.

So the XC430 allows a single 10Gb network to act as both the storage and VM production network? Sorry, but coming from the traditional SAN-to-ESXi world, this is new to me, as a traditional SAN setup separates the storage traffic from the hypervisor VM traffic.

I will be getting a 48-port Cisco 3650 1Gb switch just for the VMware traffic, plus 2 x Dell X4012 10Gb SFP+ switches just for the storage.

To simplify the networking, can I just use the 2 x 10Gb SFP+ switches to carry everything, storage plus VMware traffic, and leave the 1Gb ports unused?

That would mean configuring jumbo frames on both 10Gb switches plus a 2-port LAG between them, then uplinking 1 x 1Gb cable from the Cisco 3650 to each of the 10Gb SFP+ switches so that they are all in the production network, according to the Dell diagram I posted in my thread? Thanks.

Badge +1
Can I get some help based on the diagram I created? If I run everything on the 10Gb network, will the 10Gb switches still help, given that my Cisco 3650 backbone network is 1Gb?

Badge +1
Hi Leslie8888 - just replied with answers to your private message!
Userlevel 2
Badge +12
I would use a separate subnet for the iDRAC management range.

But you will gain a benefit from the 10Gb for the Nutanix cluster network (storage) and inter-VM communications; obviously, any connectivity that passes through your Cisco will be restricted to 1Gb.