We did this for a client and it worked well, but we found that the use case did not work well with AHV because of internal policies, etc., and now they want to switch back to ESXi. What process can be used to do that? What I have scoped out so far is this:
[list]
[*]Take the VMs offline in Prism
[*]Mount an external NFS datastore to the CVM
[*]Copy the VMs to a secondary NFS datastore off the Nutanix cluster
[*]Reconfigure the 6 Nutanix nodes for ESXi 6.0U1
[*]Deploy the vCenter Server Appliance and create a datacenter and HA cluster
[*]Add the ESXi hosts to the HA cluster
[*]Add the external NFS mount to the ESXi hosts
[/list]
This is where I get stuck. Using vCenter Converter Standalone 6 or StarWind V2V Converter, I should be able to import the third-party disk format, but I would need to create an NFS mount from within the Windows guest OS or on the workstation running the converter program. Am I on the right track, or is there a better way?
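One way around the Windows-guest NFS mount: if a Linux host can see the external NFS share, qemu-img can convert the copied disks straight to VMDK with no converter GUI at all. A rough sketch, assuming the AHV vdisks were copied off as raw images (step 3 above), qemu-img is installed, and with a hypothetical mount point and file layout:
[code]
#!/usr/bin/env python3
"""Batch-convert raw AHV disk images on the external NFS share to VMDK.

Hedged sketch: the mount point and file layout are hypothetical, and it
assumes qemu-img is installed and the AHV vdisks were copied off as raw
images."""

import subprocess
from pathlib import Path

NFS_MOUNT = Path("/mnt/migration")   # hypothetical: external NFS share mounted on a Linux host
OUT_DIR = NFS_MOUNT / "vmdk"
OUT_DIR.mkdir(exist_ok=True)

for raw_disk in sorted(NFS_MOUNT.glob("*.raw")):
    target = OUT_DIR / (raw_disk.stem + ".vmdk")
    # streamOptimized keeps the output compact for the copy to the ESXi datastore.
    subprocess.run(
        ["qemu-img", "convert", "-p",
         "-f", "raw", "-O", "vmdk",
         "-o", "subformat=streamOptimized",
         str(raw_disk), str(target)],
        check=True,
    )
    print(f"converted {raw_disk.name} -> {target.name}")
[/code]
Note that streamOptimized VMDKs are a transport format; you would typically still clone them into a runnable flat/thin disk on the ESXi side with vmkfstools -i before booting the VM.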
Hi, I understand the use case for the 6135C now. It adds storage to the cluster, but not compute. It actually works for this use case, as the client doesn't need extra compute. Thanks.
The workload is the backend database for a global financial company. They are looking at alternatives to their legacy vendors that provide the same performance in a smaller footprint at reduced cost. I'll let you know the result.
I'm actually revisiting this now, as using the 1GbE ports greatly increases port utilization and reduces the number of nodes per switch. I may have to go back to a 2 x 10GbE uplink on a VDS (with LBT and NIOC) and hope that there is enough bandwidth to thwart any contention. Cisco Jabber with VXME is a great option, but the client doesn't want to leave IP Communicator yet.
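For anyone scoping the same 2 x 10GbE VDS route, enabling NIOC and switching a port group to LBT can be scripted with pyVmomi. A rough sketch only; the vCenter address, credentials, switch name, and port group name below are hypothetical placeholders, not details from this thread:
[code]
#!/usr/bin/env python3
"""Hedged pyVmomi sketch: enable Network I/O Control on a VDS and set
Load-Based Teaming (LBT) on one port group. All names and credentials
are placeholders."""

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",  # placeholder
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the VDS by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-Prod")  # hypothetical name
view.Destroy()

# Enable NIOC so traffic types can be given shares/limits under contention.
dvs.EnableNetworkResourceManagement(enable=True)

# Set one port group's teaming policy to "route based on physical NIC
# load" (LBT), so busy VMs get rebalanced across the two 10GbE uplinks.
pg = next(p for p in dvs.portgroup if p.name == "VM-Network")  # hypothetical name
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
port_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_policy.uplinkTeamingPolicy = \
    vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_loadbased"))
spec.defaultPortConfig = port_policy
pg.ReconfigureDVPortgroup_Task(spec=spec)

Disconnect(si)
[/code]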
[url=http://next.nutanix.com/member/profile?mid=8908]@bbbburns[/url] Awesome info. I didn't know that about the VXME plugin for Cisco Jabber. We will definitely look at using that. In addition, it looks like we will create a new vSwitch with the 2 x 1GbE uplinks and put our "endpoint" desktops that require a softphone on a port group there. This gives us physical segmentation for simpler traffic analysis and zero chance of contention with other types of traffic from the host. There are some zero clients in play as well, so direct endpoint-to-endpoint RTP audio will not work in all instances. Another reason to separate the traffic physically. Again, thank you for your insight. It is greatly appreciated.
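The vSwitch/port-group plumbing for that layout is scriptable from the ESXi shell (which ships a Python interpreter). A minimal sketch; the vmnic numbers, the vSwitch and port group names, and the VLAN ID are placeholders, not details from this thread:
[code]
#!/usr/bin/env python
"""Create a dedicated softphone vSwitch from the ESXi shell.

Hedged sketch: vmnic numbers, names, and the VLAN ID are placeholders.
The same steps can be run as plain esxcli commands over SSH instead."""

import subprocess

VSWITCH = "vSwitch-Voice"
PORTGROUP = "Endpoint-Desktops"
UPLINKS = ["vmnic2", "vmnic3"]   # the two spare 1GbE ports
VLAN_ID = "150"                  # hypothetical voice-access VLAN

def esxcli(*args):
    """Run an 'esxcli network vswitch standard ...' subcommand."""
    subprocess.check_call(["esxcli", "network", "vswitch", "standard"] + list(args))

# New standard vSwitch with both 1GbE uplinks attached.
esxcli("add", "--vswitch-name", VSWITCH)
for nic in UPLINKS:
    esxcli("uplink", "add", "--uplink-name", nic, "--vswitch-name", VSWITCH)

# Port group the softphone desktops land on, tagged for the voice VLAN.
esxcli("portgroup", "add", "--portgroup-name", PORTGROUP, "--vswitch-name", VSWITCH)
esxcli("portgroup", "set", "--portgroup-name", PORTGROUP, "--vlan-id", VLAN_ID)
[/code]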
I found this article that references some Cisco UC recommended practices: http://docwiki.cisco.com/wiki/QoS_Design_Considerations_for_Virtual_UC_with_UCS The key takeaway was this suggestion: [i]"If the Nexus 1000V is not deployed, it is still possible to provide some QoS, but it would not be an optimal solution. For example, you could create multiple virtual switches and assign a different CoS value for the uplink ports of each of those switches. For example, virtual switch 1 would have uplink ports configured with a CoS value of 1, virtual switch 2 would have uplink ports configured with a CoS value of 2, and so forth. Then the application virtual machines would be assigned to a virtual switch, depending on the desired QoS system class. The downside to this approach is that all traffic types from a virtual machine will have the same CoS value. For example, with a Unified CM virtual machine, real-time media traffic such as MoH traffic, signaling traffic, and non-voice traffic ..."[/i]