Hi,
We are thinking of building a separate VMware Horizon View cluster and using the Nutanix cluster just for server workloads.
We have a few scenarios for doing this. One is to buy four new VMware hosts and connect them to the existing Nutanix cluster with iSCSI, simply because we already have enough storage in the existing cluster.
My question is: will iSCSI be a bottleneck? We will run about 260 Win7/Win10 task-worker VDI clients.
Regards
Tobias
Keep in mind, ABS does not support VAAI for VMFS volumes on iSCSI, so this is not the ideal use case.
Also keep in mind that this thought pattern is what got a lot of early VDI deployments (think 2006-2012 era) into trouble: they appeared to have plenty of storage capacity available, but it's actually IO latency, IO throughput, and storage processor capacity that make all the difference, not available GB/TB.
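For a rough sense of scale, here's a quick back-of-envelope estimate; the per-desktop IOPS figures are generic planning assumptions, not measurements from your environment:

```python
# Back-of-envelope VDI IOPS estimate (assumed planning figures, not measurements).
VM_COUNT = 260            # task-worker desktops from the original question
STEADY_IOPS_PER_VM = 15   # assumed steady-state IOPS for a light task worker
BOOT_IOPS_PER_VM = 60     # boot/login storms are far heavier than steady state
WRITE_RATIO = 0.7         # VDI steady state is typically write-heavy

steady_iops = VM_COUNT * STEADY_IOPS_PER_VM
boot_iops = VM_COUNT * BOOT_IOPS_PER_VM

print(f"Steady state: ~{steady_iops} IOPS (~{steady_iops * WRITE_RATIO:.0f} writes/s)")
print(f"Boot/login storm: ~{boot_iops} IOPS")
```

Even with conservative numbers, that is thousands of sustained, mostly random, write-heavy IOPS, which is why latency and controller headroom matter far more than raw capacity.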
Lots of different options here, but one of them would be to purchase a small number of nodes just for VDI, so that you get all of the scale-out benefits of Nutanix+VDI and don't have to worry about the system bottlenecking itself artificially.
Happy to chat more if you'd like.
Hi Jon,
Thanks for your answer.
Will the IO be an issue? The 260 VDI clients are currently running in the existing Nutanix cluster; I just want to move them to other hosts to get more CPU/RAM.
VAAI could be an issue, but I don't know if we use that much for VDI.
Regards
Tobias
You can get Nutanix nodes with very light storage configurations but still run Nutanix for CPU/memory. That way they add some SSD and storage controller capacity to the cluster, and everything works smoothly, especially migration: you could add the hosts to the Nutanix cluster, make them their own VMware cluster, and just compute-vMotion the VMs over to the new hosts. Easy as that.
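If it helps to picture that last migration step, here is a minimal pyVmomi sketch of a compute-only vMotion (host and resource pool change, datastore untouched); the vCenter address, credentials, and VM/host names are placeholders, and PowerCLI or the vSphere client would do the same job:

```python
# Compute-only vMotion sketch using pyVmomi.
# Hostnames, credentials, and VM/host names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; use real certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "vdi-desktop-001")
target_host = find_by_name(vim.HostSystem, "esxi-vdi-01.example.local")

# Change only the host and resource pool; the VM keeps its existing datastore,
# so the data stays on the Nutanix storage it is already using.
spec = vim.vm.RelocateSpec()
spec.host = target_host
spec.pool = target_host.parent.resourcePool

task = vm.RelocateVM_Task(spec)
print("vMotion task started:", task.info.key)

Disconnect(si)
```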
I don't work in Sales, but if you'd like to play around with some configuration ideas, drop me an email at jon at nutanix dot com and I'd be happy to put our heads together on the technical bits.
RE VAAI
What I'm talking about is VAAI for block storage, which all VMs use in one fashion or another when you run a VM on a VMFS volume. Specifically, there is a VAAI primitive called ATS (Atomic Test and Set) that we do not (yet) support on ABS. It provides array-assisted LUN locking in VMFS; before ATS was introduced, LUN locking was a huge problem in VDI and server virtualization.
Also, if you run VMs on an ABS volume, we lose all ability to manage those VMs at a per-VM level, so you will see the VMs "drop out of" Prism reporting.
Lastly, a single ABS volume gets one oplog, whereas normally each VM virtual disk gets its own oplog. This has to do with how write caching works within Nutanix: you would go from a good amount of write cache per VM to drastically less write cache shared across many VMs on one ABS volume.
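To make the difference concrete, a toy calculation; the per-vdisk oplog size here is an assumed round number purely for illustration:

```python
# Illustrative write-cache comparison (oplog size is an assumed figure).
VM_COUNT = 260
OPLOG_PER_VDISK_GB = 6   # assumption for the sake of arithmetic

# Native Nutanix vdisks: every VM disk gets its own oplog (assuming one vdisk per VM).
native_cache_gb = VM_COUNT * OPLOG_PER_VDISK_GB

# All VMs packed onto a single ABS volume: one oplog shared by everyone.
abs_cache_gb = 1 * OPLOG_PER_VDISK_GB

print(f"Per-VM vdisks: ~{native_cache_gb} GB of write cache across {VM_COUNT} VMs")
print(f"Single ABS volume: ~{abs_cache_gb} GB shared by all {VM_COUNT} VMs")
```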
Net net, while ABS could technically do the job, it's really meant for different use cases: bare-metal servers, or scale-out storage for high-end database workloads (VM or physical).