I'm looking at using Cisco IP Communicator on a Horizon View persistent desktop and would like some input on what others suggest for voice traffic prioritization. Some options I see:
1> Saying "It's on 10GB, it's fine"
2> Using a separate network (port group and VLAN) specifically for voice on a VSS with 1GB uplinks
3> Using a separate network (port group and VLAN) specifically for voice on a VDS with 1GB uplinks pinned to the port group
4> Using a separate network (port group and VLAN) specifically for voice on a VDS with 10GB uplinks
5> Using a separate network (port group and VLAN) specifically for voice on a VDS with 10GB uplinks and traffic prioritization on the ToR switches for the voice VLAN, using DSCP (Differentiated Services Code Point) markings for audio priority.
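For option 5, a rough sketch of what the ToR marking policy could look like, in Cisco IOS-style MQC. The voice subnet, names, and interface here are assumptions (not from this environment), and exact syntax varies by platform:

```
! Assumed voice subnet 10.10.200.0/24 -- adjust to the real voice VLAN's subnet
ip access-list extended VOICE-SUBNET
 permit ip 10.10.200.0 0.0.0.255 any
!
class-map match-all VOICE
 match access-group name VOICE-SUBNET
!
policy-map MARK-VOICE
 class VOICE
  set dscp ef
!
! Applied inbound on the (assumed) host-facing uplink
interface TenGigabitEthernet1/0/1
 description Uplink to ESXi host
 service-policy input MARK-VOICE
```

DSCP EF (46) is the conventional marking for real-time voice media; queuing for EF still has to be configured end to end for the marking to matter.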
Any thoughts community?
I found this article that references some Cisco UC recommended practices.
http://docwiki.cisco.com/wiki/QoS_Design_Considerations_for_Virtual_UC_with_UCS
Key takeaway was this suggestion:
"If the Nexus 1000V is not deployed, it is still possible to provide some QoS, but it would not be an optimal solution. For example, you could create multiple virtual switches and assign a different CoS value for the uplink ports of each of those switches. For example, virtual switch 1 would have uplink ports configured with a CoS value of 1, virtual switch 2 would have uplink ports configured with a CoS value of 2, and so forth. Then the application virtual machines would be assigned to a virtual switch, depending on the desired QoS system class. The downside to this approach is that all traffic types from a virtual machine will have the same CoS value. For example, with a Unified CM virtual machine, real-time media traffic such as MoH traffic, signaling traffic, and non-voice traffic (for example, backups, CDRs, logs, Web traffic, and so forth) would share the same CoS value."
So I'm thinking of using port-based CoS assignment and a new vSwitch, with the 2 x 1GB uplinks connected to those prioritized ports.
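A sketch of that port-based CoS idea on a Catalyst-style access switch. The interface range is an assumption, and the `mls qos` syntax is platform-specific (newer platforms do this with MQC instead):

```
mls qos
!
! Assumed ports facing the 2 x 1GB uplinks of the new voice vSwitch
interface range GigabitEthernet1/0/10 - 11
 description 2 x 1GB uplinks from voice vSwitch
 switchport mode trunk
 mls qos cos 5
 ! Remark ALL ingress frames on these ports to CoS 5,
 ! which is the downside Cisco describes: every traffic
 ! type from VMs on this vSwitch gets the same CoS value
 mls qos cos override
```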
But after that's done, should a second virtual adapter be assigned to the endpoint VMs for "voice-only" traffic (which would be on a port group on the new vSwitch)? Or should I have the entire VM use a single virtual adapter and put that on a port group on the new vSwitch?
Two virtual adapters help with traffic isolation and scalability, but increase management overhead.
Thoughts?
Thanks for the question @DaemonBehr
@KeesBaggerman @bbbburns @dlink7 are you guys able to provide an insight here?
@DaemonBehr
Why IP Communicator instead of Cisco Jabber? I understand that the requirements for one or the other aren't always directly in your control, but if you have the option to do Cisco Jabber you can use the VXME Plugin:
http://www.cisco.com/c/en/us/products/collateral/collaboration-endpoints/virtualization-experience-media-engine/datasheet-c78-734102.html
This plugin allows the media to be streamed directly between the thin clients, thus you can keep all of your RTP audio traffic out of the data center and directly between endpoints. Your QoS marking and classification is then done at the switches near the endpoints and you avoid hairpinning RTP audio back and forth to the data center. The caveat here is this won't work on a zero client.
If you remove the RTP traffic from the VM then you no longer have to look at giving special QoS to all traffic from this VM. If you have no choice but to use IP Communicator, I still don't like the extra complexity that a dedicated network adapter for voice-enabled VMs would require. Having two network adapters in the same VM (one dedicated for voice) could lead to problems with one-way audio if IP Communicator doesn't handle it properly.
The ideal scenario removes the RTP from the datacenter, or remarks and reclassifies the voice traffic coming out of the data center where it's feasible to do so.
That's a long way of saying "Option 1 - It's on 10GB, it's fine," with a note to try to avoid the scenario in the first place, or to mark the traffic where you can.
Jason Burns | CCIE Voice #20707 | Solutions & Performance Engineer | jason.burns@nutanix.com | @bbbburns
Does the audio stream fall in line with PCoIP? If it does, we can optimize the PCoIP stream and set up a DSCP marking for it.
Yes, without VXME the audio will be in line with the rest of the desktop streaming session.
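If the audio stays inside the PCoIP session, one approach is to classify the PCoIP transport at the access switch and mark the whole session. A hedged IOS-style sketch, assuming the default PCoIP port of 4172 and AF41 as the marking (interface is an assumption):

```
! PCoIP uses TCP/UDP 4172 by default
ip access-list extended PCOIP
 permit udp any any eq 4172
 permit tcp any any eq 4172
!
class-map match-all PCOIP-CLASS
 match access-group name PCOIP
!
policy-map MARK-PCOIP
 class PCOIP-CLASS
  set dscp af41
!
interface GigabitEthernet1/0/1
 description Port facing the thin/zero client
 service-policy input MARK-PCOIP
```

The trade-off is that marking the PCoIP session prioritizes the entire desktop stream (display, USB, audio together), since the audio can't be separated out of the session.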
@bbbburns
Awesome info. I didn't know that about the VXME plugin for Cisco Jabber. We will definitely look at using that. In addition, it looks like we will create a new vSwitch with the 2 x 1GB uplinks and put our "endpoint" desktops that require a softphone on a port group there.
This gives us physical segmentation for simpler traffic analysis and zero chance of contention with other types of traffic from the host.
There are some zero clients in play as well, so direct endpoint to endpoint RTP audio will not work in all instances. Another reason to separate the traffic physically.
Again, thank you for your insight. It is greatly appreciated.
I'm actually revisiting this now, as using the 1GB ports greatly increases port utilization and reduces how many nodes fit per switch.
I may have to go back to a 2 x 10GB uplink on a VDS (with LBT and NIOC) and hope that there is enough bandwidth to thwart any contention.
Cisco Jabber with VXME is a great option, but the client doesn't want to leave IP Communicator yet.