
This post was written by Kees Baggerman and Martijn Bosschaart 

 

Provisioning Services (PVS) is a bit older than MCS and requires a separate server in your infrastructure in addition to the XenDesktop controller. For availability and scalability, you would of course deploy more than one PVS server.

 

A big difference between MCS and PVS is that PVS is primarily a network-based technology. It can be deployed in non-persistent scenarios.

 

Citrix Provisioning Services solution 

 

After installing Citrix PVS, an administrator or consultant prepares a device (the master target device) for imaging. This means installing all required software on that device (e.g., MS Office, PDF readers and writers, the Citrix PVS target device tools) and applying all needed optimizations for this particular image.

 

The administrator then creates a vDisk image from the master target, and this image is saved on the Citrix PVS server or storage device—a file share or a storage system that the Citrix PVS server can communicate with (iSCSI, SAN, NAS, and CIFS). 

 

Once the vDisk is available from the network, the image can be streamed from that location to a target VM; this allows the target VM to boot directly across the network. The Citrix PVS server will stream the content of the vDisk to the target device(s) on demand and the target device will act as if it’s running from a local drive.

 

image-1.png

 

image-2.png

 

A vDisk can be assigned either to a single target device in Private Image mode, which allows changes to be made to the image, or to multiple target devices in Standard Image mode, which resets the image to its default state when a target device reboots.

 

Understanding Write Cache 

 

While Citrix PVS is great for image management, there are certain items within a Windows configuration that would gain from persistency, such as event logs, antivirus definitions, and App-V cache. “Write cache” is the Citrix PVS feature that allows differential writes to be saved for persistent items within the Windows configuration. When data is written to the image with a configured write cache, it is written to the write cache file rather than the base image itself. 

 

The options for write cache are as follows: 

  • Cache in device RAM 
  • Cache in device RAM with overflow on hard disk 
  • Cache on device hard disk 
  • Cache on server 

 

When the target device boots, the write cache information is checked to determine whether the cache file is present. If the cache file is not present, the data is read from the original vDisk file.
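To make the redirection concrete, here is a minimal, hypothetical Python sketch (not Citrix code; the class and block layout are invented for illustration) that models how a write cache overlays a read-only vDisk: writes land in the cache, and reads fall back to the base image for blocks that were never modified.

```python
class StreamedDisk:
    """Toy model of a PVS-style target disk: a read-only base vDisk
    plus a per-device write cache that captures all changes."""

    def __init__(self, base_vdisk: dict[int, bytes]):
        self.base = base_vdisk          # block number -> block data (read-only)
        self.write_cache = {}           # differential writes live here

    def write_block(self, block: int, data: bytes) -> None:
        # The base image is never touched; the change goes to the cache.
        self.write_cache[block] = data

    def read_block(self, block: int) -> bytes:
        # Modified blocks are served from the cache; untouched blocks
        # are streamed from the original vDisk.
        if block in self.write_cache:
            return self.write_cache[block]
        return self.base[block]

    def reboot(self) -> None:
        # In Standard Image mode the cache is discarded on reboot,
        # so the device reverts to the unmodified base image.
        self.write_cache.clear()


# Usage: block 2 is modified, block 1 still comes from the base image.
disk = StreamedDisk({1: b"base-1", 2: b"base-2"})
disk.write_block(2, b"changed")
assert disk.read_block(1) == b"base-1"
assert disk.read_block(2) == b"changed"
disk.reboot()
assert disk.read_block(2) == b"base-2"
```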

 

Cache on device hard disk 

 

This option requires a local hard disk; the cache is stored in a file on that disk, which is mounted as the second drive of the target device. The cache file grows as needed, but will never outgrow the original vDisk.

 

Pros:
  • Faster than server cache
  • Enables HA

Cons:
  • Slower than cache in RAM

 

 

Cache in device RAM 

 

The cache is stored in client memory, which is taken from the client’s main configured memory. The maximum amount of cache can be set in the vDisk properties. All the data written to the cache in device RAM will be read from RAM too, resulting in fast performance.

 

Pros:
  • Faster than cache on local HD
  • Enables HA

Cons:
  • When memory limits are reached, the device stops (the blue screen of death)

 

 

Cache in device RAM with overflow on hard disk

 

Pros:
  • Faster than cache on local HD
  • No BSOD when RAM is full

Cons:
  • Needs proper sizing; when RAM is full, performance could be impacted by slower HD performance

 

 

Note: Cache in device RAM with overflow on hard disk requires Provisioning Services 7.1 or later and Windows 7 or Windows Server 2008 R2 or later.
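For a rough sense of what sizing this mode involves, the hypothetical sketch below estimates the host RAM reservation and the likely overflow; all figures and the function itself are illustrative assumptions, not Citrix or Nutanix guidance.

```python
def plan_ram_cache(vms_per_host: int,
                   ram_cache_mb_per_vm: int,
                   expected_writes_mb_per_vm: int) -> dict:
    """Back-of-the-envelope sizing for 'cache in RAM with overflow on disk'.
    All inputs are assumptions you must measure in your own environment."""
    ram_reserved_mb = vms_per_host * ram_cache_mb_per_vm
    overflow_mb_per_vm = max(0, expected_writes_mb_per_vm - ram_cache_mb_per_vm)
    return {
        "ram_reserved_on_host_mb": ram_reserved_mb,
        "overflow_per_vm_mb": overflow_mb_per_vm,
        "overflow_per_host_mb": overflow_mb_per_vm * vms_per_host,
        # If overflow is non-zero, part of the write I/O will hit the
        # (slower) disk tier instead of RAM.
        "writes_served_from_ram_pct": round(
            100 * min(expected_writes_mb_per_vm, ram_cache_mb_per_vm)
            / max(expected_writes_mb_per_vm, 1), 1),
    }


# Example: 100 desktops, 256 MB RAM cache each, ~1 GB of writes per session.
print(plan_ram_cache(vms_per_host=100,
                     ram_cache_mb_per_vm=256,
                     expected_writes_mb_per_vm=1024))
```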

 

Cache on server 

 

When cache on server is selected, the cache is stored on the Citrix PVS server, or on a file share or SAN connected to the PVS server. It is slower than RAM cache because all reads and writes have to traverse the network to the server and are served from a file there. The cache is deleted when the device reboots, so on every boot the device reverts to the base image.

 

Pros:
  • Enables HA

Cons:
  • All writes go across the network
  • Needs centralized storage
  • Changes do not persist after a reboot

Benefits of Citrix PVS 

 

When implementing or managing a traditional Citrix XenApp or XenDesktop environment, one of the most time-consuming tasks, and one with a big impact on user experience, is establishing uniform behavior across all servers within the farm. Administrators can of course solve this problem with automation and orchestration, but that requires discipline: it is all too easy to modify a single server by hand just to save time.

 

By using Citrix PVS, change management for Citrix XenApp or XenDesktop farms is forced into line with desktop management processes: changes and patches are managed centrally within a golden image. Changes to the image are pushed out when the target devices boot up, which keeps your image builds consistent, because all of your target devices use the same shared copy of the image.

 

Downsides for Citrix PVS 

 

Write Cache Placement 

 

There are many things to consider when deciding on write cache placement. If administrators don’t follow sizing guidelines and best practices, there is a risk of BSODs, bad performance, or even data loss because changes aren’t consistent on reboot. The issues with write cache placement can hold administrators back from making the right architectural decisions. 

 

Distribution of vDisks 

 

In a traditional environment, vDisks need to be managed per PVS server, with a copy of the vDisk placed on every active PVS server. 

 

This process can be done manually or with scripted automation. However, both of these processes are error prone when an update to the vDisk is required, potentially leading to a mismatch between vDisk versions on the various PVS servers.
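The version-mismatch risk described above is essentially a file-consistency problem. As a hypothetical illustration (not a supported Citrix tool; the store paths and vDisk name are made up), a script could hash the vDisk in each PVS server's local store and flag copies that differ:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a (potentially large) vDisk file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def check_vdisk_consistency(store_paths: list[Path], vdisk_name: str) -> None:
    """Compare the same vDisk file across several PVS store paths and
    report any copy whose contents differ from the first one found."""
    hashes = {p: file_hash(p / vdisk_name)
              for p in store_paths if (p / vdisk_name).exists()}
    if not hashes:
        print(f"{vdisk_name}: not found in any store")
        return
    reference = next(iter(hashes.values()))
    for path, digest in hashes.items():
        status = "OK" if digest == reference else "MISMATCH"
        print(f"{path}: {status}")

# Hypothetical store locations; adjust to your own PVS store paths.
check_vdisk_consistency(
    [Path(r"\\pvs01\store"), Path(r"\\pvs02\store")],
    "golden-image.vhdx",
)
```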

 

How Nutanix helps with Citrix PVS 

 

Write Cache Placement 

 

Because of the Nutanix Distributed File System, we can simplify write cache placement. There’s no local disk management and our solution isn’t typically constrained by IOPS. As a result, an architecture can be simplified by directing the base-image VM’s write cache to the Nutanix data store.

 

image-3.png

 

By using the Nutanix infrastructure for your workloads, there is always an optimized I/O path available due to auto-tiering in the Nutanix technology stack. Auto-tiering prevents RAM from filling up, which could result in a BSOD or decreased performance from hitting HDDs.

 

image-4.png

 

As the image above shows, read I/O first hits the in-memory cache, while write I/O first hits the SSDs; this design ensures the fastest I/O and data consistency. When written blocks become cold, the CVM moves them to the cold tier, freeing up space for hot blocks on the flash tier.

 

Distribution of vDisks 

 

NDFS simplifies Citrix PVS vDisk distribution by presenting SMB shares to Hyper-V on the same physical node as the PVS server. This approach delivers the benefits of shared storage, such as manageability and fault tolerance, without the drawbacks, such as a single point of failure or an I/O bottleneck. It also gains performance from data localization, with data served from RAM, SSD, or HDD as needed.

 

In sum, Citrix MCS and PVS are valid solutions for efficient image distribution, offering simple VM deployment and centralized control. In addition, Nutanix provides high performance and data protection, eliminating the risk of catastrophic BSODs.

 

Read more about our Winning Best of Show at Citrix Synergy 2015 

Citrix MCS and PVS on Nutanix: Enhancing XenDesktop VM Provisioning with Nutanix Part 1


This post was written by Kees Baggerman and Martijn Bosschaart 

 

Citrix XenDesktop now offers a fully integrated desktop virtualization suite for both virtual desktop infrastructures (VDI) and hosted shared desktops (HSD)—the latter is better known as XenApp. Add to this the powerful HDX protocol and the Flexcast stack and you’ve got the world’s most advanced and widely used desktop virtualization product. From a single broker console, Citrix Studio, you can deploy all types of desktop and application workloads, either persistent or non-persistent, and each of these can be derived from master images and cloned on the spot.

 

In this blog we will focus on XenDesktop provisioning methods, and, in particular, how Nutanix simplifies and enhances both storage infrastructure configuration and overall deployment.

 

Although MCS and PVS have their own unique features and advantages, which we discuss below, their primary function is to enable the rapid deployment of copies of a golden master virtual machine to multiple users simultaneously, which saves administrators lots of time and ensures a consistent and reliable environment. Instead of patching hundreds, or even thousands, of PCs, admins now update a handful of master images that are then rolled out almost instantaneously. Another advantage is that, because you are managing single images, a rollback to a previous version is basically a matter of managing snapshot file versions. Like Nutanix, Citrix likes to keep it simple. As we will show, though, the Nutanix Distributed File System takes the simplicity of XenDesktop to another level with streamlined management, reduced rollout time, and enhanced performance.

 

Origins of Machine Creation Services

 

The first provisioning technique we want to discuss is Machine Creation Services. MCS is, in terms of Citrix’s offerings, still the new kid on the block, even though it has been around since the 5.0 release of XenDesktop, back in 2010. With the release of XenDesktop 5.0, Citrix moved towards a simpler, scalable, and agile architecture, one that would lead to the end of the Independent Management Architecture (IMA). The older XenDesktop versions prior to 5.x were still based on IMA, and this proved to be a huge problem in large environments because IMA was meant to run in XenApp-only environments with a maximum of 1,000 servers. At the time this scenario was fairly unusual, and so it was not considered a problem. Before long, however, XenDesktop customers began trying to run thousands of desktops and quickly hit IMA’s limits. This required a new architecture, and thus the Flexcast Management Architecture (FMA) was born. 

 

FMA was at first only available for VDI workloads, as XenApp was still a separate product and would continue to have another three full versions (5.0, 6.0, and the final 6.5) based on IMA. Only with the release of XD 7.0 did the XenApp workload make its way to FMA, first as a hosted shared desktop option in XD, and then brought back as XenApp 7.5 exclusively for that specific workload. When XD 5.0 was released, MCS became available as well, and its design was focused on simplifying XenDesktop and lowering setup time. The new version significantly simplified both broker installation and group rollouts of desktop VMs. With XD 5.0, an admin could now do the entire setup of a XenDesktop farm in just a couple of hours and still have plenty of time to drink coffee while doing it.

 

However, when MCS became available, storage infrastructure did not have the advantages that Nutanix makes possible. While Provisioning Services in XenDesktop is partially network based, MCS is a storage-centric provisioning technology. Its adoption was therefore slowed by the state of the technology four-to-five years ago, as the SANs of that age couldn’t handle the increased IOPS requirement, and they still can’t do it well today. 

 

This is where Nutanix comes in. We manage these challenges on multiple fronts. To give you a better understanding of what Nutanix offers XenDesktop users, we’ll first deep dive a bit into MCS.

 

MCS – the inner workings

 

When non-persistent environments use MCS, the broker will copy the master image to each configured datastore specified by the Studio host connection. This can either be a local datastore on each host or a shared datastore on a SAN or NAS. The admin can then select the available datastores, which are read from the hypervisor cluster (through a VMware vCenter, Microsoft SCVMM, or XenCenter interface). After this copy is complete (which can take a while depending on the number of datastores configured), all the VMs in the catalog are then pointed to these local copies. 

 

MCS works roughly as shown in the figure below. I say “roughly” because each supported hypervisor has its own specific MCS implementation in terms of disk management, but the net effect is the same.

 

Image1.png

 

To make each VM unique, and to allow data to be written, MCS uses two extra disks alongside the master disk.

 

The ID disk is a very small disk (max 16 MB) that contains identity information; this information provides a unique name for the VM and allows it to join Active Directory (AD). The broker fully manages this process; the admin only has to provide or create AD accounts that the VMs can use. The broker then creates a unique ID disk for every VM. 

 

The difference disk, also known as the write cache, is used to separate the writes from the master disk, while the system still acts as if the write has been committed to the master disk. 

 

From a VM perspective, this chain of disks appears as a single volume. While the base OS disk, the ID disk, and the difference disk are separate, the end user experiences a unique, writeable VDI VM.
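As a rough, hypothetical sketch of how a broker-like process might assemble this per-VM disk set (one shared master reference, a tiny unique identity disk, and an initially empty difference disk), with all names and fields invented for illustration:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class McsVm:
    """Toy model of an MCS-provisioned VM: one shared master disk reference
    plus two per-VM disks (identity and difference/write cache)."""
    name: str
    master_disk: str                                      # shared, read-only
    identity: dict = field(default_factory=dict)          # tiny ID disk contents
    difference_disk: dict = field(default_factory=dict)   # per-VM writes

def create_catalog(master_disk: str, ad_accounts: list[str]) -> list[McsVm]:
    """Create one VM per pre-staged AD account, each with a unique ID disk."""
    return [
        McsVm(
            name=account,
            master_disk=master_disk,
            identity={"machine_name": account,
                      "machine_id": uuid.uuid4().hex},
        )
        for account in ad_accounts
    ]

catalog = create_catalog("Win10-Master-2015-06-17.vmdk", ["VDI-001", "VDI-002"])
print(catalog[0].identity["machine_name"], "->", catalog[0].master_disk)
```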

 

In the screenshot below, there are two virtual disks in use by the VM (just after a shutdown) in addition to the base disk. The identity disk is about 7 MB and the delta.vmdk disk is a difference disk mounted to the VM. These are the two disks mounted in the VM config (vmx).

 

image2.png

 

In VMware environments, however, this is not the actual difference disk file the changes are written to. MCS on VMware utilizes a VMDK disk chain with multiple child disks. On Hyper-V and XenServer, MCS utilizes VHD-chaining, quite similar to VMware, but slightly different in execution and disk naming. 

 

The delta.vmdk file you see in the above screenshot is actually just a disk descriptor file that references the golden master and diverts writes to a child disk, or REDO log. This is not to be confused with a snapshot. 

 

So MCS more or less sets up a read-only “disk-in-the-middle,” which is not much more than a redirector. However, it allows Citrix to effectively control the persistent or non-persistent behavior from within Studio, as it does not have to empty or delete the disk configured in the VMX file (ESX cleans up the REDO file on reboot).

 

When using a persistent desktop, this redirector disk does not exist; differential writes are written directly into the configured difference disk in the VMX file that is the only child to the master disk. This disk is not altered on reboots, which allows user changes to persist. 

 

The master disk used in both persistent and non-persistent scenarios is not stored inside the VM's folder, but is placed in its own folder in the datastore root. In non-persistent mode, the delta disk's VMDK descriptor file contains a reference to the base disk (see below):

 

image3.png
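The reference to the base disk lives in the delta disk's plain-text VMDK descriptor, typically as a parentFileNameHint entry. The small sketch below is a hypothetical helper for inspecting that reference; verify the field name against your own descriptor files.

```python
import re
from pathlib import Path
from typing import Optional

def vmdk_parent(descriptor_path: Path) -> Optional[str]:
    """Return the parentFileNameHint from a VMDK descriptor file, if present.
    Works only on the small text descriptor, not on the flat/extent data files."""
    text = descriptor_path.read_text(errors="ignore")
    match = re.search(r'parentFileNameHint\s*=\s*"([^"]+)"', text)
    return match.group(1) if match else None

# Hypothetical path to an MCS delta disk descriptor on a datastore.
descriptor = Path("delta.vmdk")
if descriptor.exists():
    print(vmdk_parent(descriptor) or "no parent reference found (standalone disk)")
```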

 

The folder the base disk is placed in gets its name from the catalog name in Studio, followed by the actual filename, which is taken from the date and timestamp when the disk was created. 

 

When Citrix Studio gives the command to boot a non-persistent VM, you can see more files popping up as the REDO file is created:

 

image4.png

 

The hypervisor redirects all writes within the VM to the delta (diff) disk, which in turn get redirected to a separate REDO log file. The REDO log file is not a snapshot, but a child disk chained to the diff disk, which is a child of the master image. 

 

If you copy a 1 GB file to the desktop of the VM, you can see the child disk get bigger:

 

Image5.png

 

Now that I have explained the basics of MCS on the disk level, let’s take a look at how MCS manages disk distribution. 

 

Rolling out Master Disks 

 

When you create a new catalog of VMs that use a non-persistent disk, you are first asked to select a snapshot of the master VM, which will then become the master disk. If you select a chained snapshot, Studio will first flatten the snapshot chain into a new master disk. 

 

This disk will be copied to all the datastores configured in the Studio host connection. These are full copies, and so the more datastores you use to spread the IOPS load, the longer this process takes. 
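A rough, back-of-the-envelope calculation shows why the datastore count matters; the throughput figure below is an assumption, not a measured value.

```python
def estimated_rollout_minutes(master_disk_gb: float,
                              datastore_count: int,
                              effective_copy_mb_per_s: float = 100.0) -> float:
    """Very rough estimate of MCS master-disk rollout time when a full copy
    is pushed to every configured datastore. The throughput figure is an
    assumption; measure your own environment."""
    total_mb = master_disk_gb * 1024 * datastore_count
    return total_mb / effective_copy_mb_per_s / 60

# Example: a 40 GB master image copied to 16 local datastores vs. 1 shared one.
print(f"{estimated_rollout_minutes(40, 16):.0f} min")   # many local datastores
print(f"{estimated_rollout_minutes(40, 1):.0f} min")    # single shared container
```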

 

This is the first thing we can solve with Nutanix.

 

To overcome the IOPS burden, Citrix admins around the world have resorted to local-storage-based architectures, which keep the write cache off the SAN. While this solves part of the problem, it also creates new issues: you lose centralized storage management and have to deal with decentralized islands of disks.

 

For XenDesktop, this means that you now have to configure these local datastores in the host configuration option in Studio. If you have a big farm this means a lot of clicking (or scripting).

 

image7.png

 

Configuring five hosts is not a problem, but in a bigger enterprise environment with, perhaps, dozens of hosts, this can become cumbersome. This is especially true if you need to take out a host for maintenance mode, or if a host is down during deployment of an image, because you would first have to deselect it to prevent the deployment phase from erroring out. 

 

The problem is that when you roll out a new image, the flattened snapshot is going to be copied to all the datastores selected. The more datastores, the longer the copy is going to take—up to hours, depending on the size of your master disk. 

 

Nutanix Distributed File System 

 

Nutanix solves both the IOPS and configuration hassle with the Nutanix Distributed File System.

 

image7.png

 

While the NDFS is made up of local storage, the disks of all the nodes in the cluster are drawn together into a storage pool. This storage pool is then presented back to the hosts as shared NFS containers.

 

This means that every host sees the same central datastore and XenDesktop no longer requires local storage configuration.

 

Within Studio, you select only the single container to which you want to roll out your VDI VMs.

 

image8.png

 

Instead of using the option “Local,” you select “Shared” and choose the NFS datastore that has been mounted on the hosts in the cluster you configured in the Studio host connection. Studio gets this information from vCenter.

 

Now when you roll out a new master disk, the copying process takes place only once, and at most takes several minutes. The NDFS file system makes sure the data gets distributed across the cluster.

 

Nutanix also brings “data locality” to the table, which enhances MCS even further. The fastest way for a VM to access its data is to make sure it’s being served locally. Data locality ensures that the data always follows the VM and stays as close as possible. This is not only a performance booster, but it also prevents unnecessary traffic across the network between nodes.

 

This system works great for VMs that have their own VMDK for both reads and writes. With MCS, however, the VMs no longer have a local disk (besides the write cache)—rather, they read from a centralized image. This would cause extra reads to traverse the network.

 

Studio creates the flattened snapshot copy of the master VM on the host it is currently placed on, so the master disk might only be local to a subset of VMs, as the VMs are most likely spread out. This would mean not every VM would benefit from local data. Those VMs not local to the master disk would need to grab reads over the network, although writes to the difference disk would still remain local.

 

Shadow Clones 

 

This is where Shadow Clones come in. Shadow Clones automatically detect when multiple VMs are reading from a single virtual disk file. Once this is detected, the master disk is marked read-only, which allows the system to cache the disk locally on every host in the cluster and thus serve all read I/O locally.
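Conceptually, this is similar to detecting a multi-reader disk and promoting it to an immutable, locally cacheable object. The sketch below is an illustrative model only, not the actual NDFS implementation.

```python
from collections import defaultdict

class MultiReaderDetector:
    """Toy model of the Shadow Clone idea: if many hosts read the same vDisk,
    treat it as immutable and let every host cache it locally."""

    def __init__(self, reader_threshold: int = 2):
        self.readers = defaultdict(set)      # vdisk -> set of reading hosts
        self.locally_cached = set()          # vdisks served from local cache

    def record_read(self, vdisk: str, host: str) -> None:
        self.readers[vdisk].add(host)
        if len(self.readers[vdisk]) >= self.reader_threshold:
            # Many hosts read the same disk: mark it read-only/cache-friendly
            # so read I/O can be served from each host's local cache.
            self.locally_cached.add(vdisk)

    def serve_read(self, vdisk: str, host: str) -> str:
        self.record_read(vdisk, host)
        return "local cache" if vdisk in self.locally_cached else "remote owner"


detector = MultiReaderDetector()
print(detector.serve_read("master-disk", "host-a"))   # first reader: remote owner
print(detector.serve_read("master-disk", "host-b"))   # second reader: local cache
```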

 

image9.png

 

As we can see, the Nutanix Distributed File System drastically simplifies the Studio configuration by offering all the benefits of shared storage to XenDesktop Studio. Using only a single datastore target reduces rollout time and enhances performance by utilizing local storage for speed. And NDFS offers Shadow Clones to optimize data placement, removing the need for local storage management.

 

Stay Tuned For Part 2, PVS with Nutanix, and read more about our Winning Best of Show at Citrix Synergy 2015

 
