Provisioning Services (PVS) is a somewhat older technology than MCS and requires a separate server in your infrastructure in addition to the XenDesktop controller. For availability and scalability, you would of course deploy more than one PVS server.
A big difference between MCS and PVS is that PVS is primarily a network-based (streaming) technology. It is well suited to non-persistent scenarios.
Citrix Provisioning Services solution
After installing Citrix PVS, an administrator or consultant will prepare a device (i.e., a master target device) for imaging. This will mean installing all required software on that device (e.g., MS Office, PDF readers and writers, the Citrix PVS target device tools) and all needed optimizations for this particular image.
The administrator then creates a vDisk image from the master target, and this image is saved on the Citrix PVS server or on a storage device—a file share or a storage system that the Citrix PVS server can communicate with (via iSCSI, SAN, NAS, or CIFS).
Once the vDisk is available from the network, the image can be streamed from that location to a target VM; this allows the target VM to boot directly across the network. The Citrix PVS server will stream the content of the vDisk to the target device(s) on demand and the target device will act as if it’s running from a local drive.
A vDisk can be assigned either to a single target device in Private Image mode, which allows changes to be made to the image, or to multiple target devices in Standard Image mode, which reverts each device to the default image when it reboots.
Understanding Write Cache
While Citrix PVS is great for image management, there are certain items within a Windows configuration that benefit from persistence, such as event logs, antivirus definitions, and App-V cache. "Write cache" is the Citrix PVS feature that captures these differential writes so that such items can persist. When data is written to an image with a configured write cache, it goes to the write cache file rather than to the base image itself.
The options for write cache are as follows:
- Cache in device RAM
- Cache in device RAM with overflow on hard disk
- Cache on device hard disk
- Cache on server
When the target device boots and data is requested, the write cache is checked first to determine whether the data is present in the cache file. If it is not, the data is read from the original vDisk file.
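This copy-on-write read path can be sketched in a few lines of Python. The block size, class, and method names below are illustrative, not PVS internals:

```python
# Minimal sketch of the write-cache read/write path: writes land in a
# differential cache, reads check the cache first and fall back to the
# shared, read-only vDisk. All names and sizes are illustrative.
BLOCK_SIZE = 4096

class StreamedDisk:
    def __init__(self, base_image):
        self.base_image = base_image   # read-only vDisk blocks
        self.write_cache = {}          # differential writes

    def write_block(self, block_no, data):
        # Writes never touch the base image; they go to the cache file.
        self.write_cache[block_no] = data

    def read_block(self, block_no):
        # Check the write cache first; fall back to the shared vDisk.
        if block_no in self.write_cache:
            return self.write_cache[block_no]
        return self.base_image.get(block_no, b"\x00" * BLOCK_SIZE)

disk = StreamedDisk({0: b"base"})
disk.write_block(1, b"patched")
assert disk.read_block(1) == b"patched"   # served from the write cache
assert disk.read_block(0) == b"base"      # served from the base vDisk
```

Because every target device reads the same base image and keeps its own write cache, discarding the cache on reboot is what produces the clean, refreshed image in Standard Image mode.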
Cache on device hard disk
This option requires a local hard disk; the cache is stored in a file on the local disk, which is mounted as a second drive on the target device. The cache file grows as needed, but will never outgrow the original vDisk.
- Faster than cache on server
- Slower than cache in RAM
Cache in device RAM
The cache is stored in client memory, which is taken from the client's main configured memory. The maximum cache size can be set in the vDisk properties. All data written to the cache in device RAM is also read from RAM, resulting in fast performance.
- Faster than cache on local hard disk
- When the memory limit is reached, the device stops (blue screen of death)
Cache in device RAM with overflow on hard disk
- Faster than cache on local hard disk
- Needs proper sizing; when RAM is full, performance can suffer from slower hard disk I/O
- No BSOD when RAM is full
Note: Requires Provisioning Services 7.1 or later, and Windows 7 or Windows Server 2008 R2 or later.
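The overflow behavior can be sketched as a bounded RAM buffer that spills blocks to disk when the budget is exceeded. The eviction policy and sizes here are assumptions for illustration, not the actual PVS implementation:

```python
# Illustrative sketch of "cache in RAM with overflow on hard disk": hot
# writes stay in a bounded RAM buffer; when the budget is exceeded, the
# least recently used blocks spill to a disk-backed store instead of the
# device crashing. Policy and structures are assumptions, not PVS internals.
from collections import OrderedDict

class OverflowCache:
    def __init__(self, ram_budget_blocks):
        self.ram_budget = ram_budget_blocks
        self.ram = OrderedDict()   # block_no -> data, LRU order
        self.disk = {}             # overflow store (stands in for the local HD)

    def write(self, block_no, data):
        self.ram[block_no] = data
        self.ram.move_to_end(block_no)
        while len(self.ram) > self.ram_budget:
            # Spill the least recently used block to disk rather than failing.
            old_no, old_data = self.ram.popitem(last=False)
            self.disk[old_no] = old_data

    def read(self, block_no):
        if block_no in self.ram:
            return self.ram[block_no]
        return self.disk.get(block_no)   # slower path: hits the hard disk
```

The trade-off in the list above falls out of this structure: reads that hit the RAM buffer are fast, while reads that have spilled to disk pay hard disk latency, which is why sizing the RAM budget correctly matters.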
Cache on server
When cache on server is selected, the cache is stored on the Citrix PVS server, or on a file share or a SAN connected to the PVS server. It is slower than RAM cache because every read and write has to traverse the network to the server and be served from a file. The cache is deleted when the device reboots, so on every boot the device reverts to the base image.
- All writes go across the network
- Needs centralized storage
- Changes do not persist across reboots
Benefits of Citrix PVS
When implementing or managing a traditional Citrix XenApp or XenDesktop environment, one of the most time-consuming tasks—one that can have a big impact on user experience—is establishing uniform behavior across all servers within the farm. Administrators can address this with automation and orchestration, but those processes only help if they are followed consistently; it is all too easy to modify a single server by hand just to save time.
By using Citrix PVS, change management for Citrix XenApp or XenDesktop farms is forced into line with desktop management processes; changes and patches are managed centrally within a golden image. Changes to the image are pushed out when the target devices boot up; this keeps your image builds consistent, because all of your target devices use the same shared copy of the image.
Downsides for Citrix PVS
Write Cache Placement
There are many things to consider when deciding on write cache placement. If administrators don't follow sizing guidelines and best practices, there is a risk of BSODs, poor performance, or even data loss because changes do not persist across reboots. These concerns can hold administrators back from making the right architectural decisions.
Distribution of vDisks
In a traditional environment, vDisks need to be managed per PVS server, with a copy of the vDisk placed on every active PVS server.
This can be done manually or with scripted automation. However, both approaches are error prone when the vDisk needs updating, potentially leading to a version mismatch between the copies on the various PVS servers.
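One way to catch such drift is to compare checksums of each server's copy of the vDisk. The helper below is a hypothetical sketch; the server names and paths in the usage comment are placeholders, not real infrastructure:

```python
# Hypothetical consistency check for manually distributed vDisks: hash each
# server's copy and report any that differ from the first one listed.
import hashlib
from pathlib import Path

def file_digest(path):
    # Hash the file in 1 MiB chunks so large vDisks don't exhaust memory.
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_mismatches(copies):
    # copies: mapping of server name -> path to that server's vDisk copy.
    digests = {server: file_digest(path) for server, path in copies.items()}
    reference = next(iter(digests.values()))
    return [server for server, d in digests.items() if d != reference]

# Example usage (server names and paths are placeholders):
# stale = find_mismatches({
#     "pvs01": r"D:\vDisks\golden.vhdx",
#     "pvs02": r"\\pvs02\vDisks\golden.vhdx",
# })
# if stale:
#     print("Out-of-date copies on:", stale)
```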
How Nutanix helps with Citrix PVS
Write Cache Placement
Because of the Nutanix Distributed File System (NDFS), we can simplify write cache placement. There's no local disk management, and our solution isn't typically constrained by IOPS. As a result, an architecture can be simplified by directing the base-image VM's write cache to the Nutanix data store.
By using the Nutanix infrastructure for your workloads, there is always an optimized I/O path available due to auto-tiering in the Nutanix technology stack. Auto-tiering prevents RAM from filling up, which could result in a BSOD or decreased performance from hitting HDDs.
As the above image shows, read I/O first hits the in-memory cache, while write I/O first hits the SSDs; this design ensures the fastest I/O and data consistency. When written blocks become cold, the CVM moves them to the cold tier, freeing up space on the flash tier for hot blocks.
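The hot/cold migration idea can be sketched as follows; the age threshold, data structures, and migration policy are assumptions for illustration, not the actual CVM implementation:

```python
# Illustrative sketch of auto-tiering: new writes land on the flash tier,
# and blocks that haven't been touched recently migrate to the cold tier,
# keeping flash free for hot data. Thresholds are arbitrary examples.
import time

class TieredStore:
    def __init__(self, cold_after_s):
        self.cold_after_s = cold_after_s
        self.flash = {}  # block_no -> (data, last_access_time)
        self.cold = {}   # block_no -> data

    def write(self, block_no, data):
        # New writes always land on the fast tier.
        self.flash[block_no] = (data, time.monotonic())

    def read(self, block_no):
        if block_no in self.flash:
            data, _ = self.flash[block_no]
            self.flash[block_no] = (data, time.monotonic())  # re-warm the block
            return data
        return self.cold.get(block_no)   # slower path: cold tier

    def migrate_cold_blocks(self):
        # Background task: demote blocks idle longer than the threshold.
        now = time.monotonic()
        for block_no in list(self.flash):
            data, last_access = self.flash[block_no]
            if now - last_access > self.cold_after_s:
                self.cold[block_no] = data
                del self.flash[block_no]
```

Because cold data is demoted continuously, the flash tier never fills up the way a fixed RAM write cache can, which is the property the section above relies on to avoid BSODs and performance cliffs.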
Distribution of vDisks
NDFS simplifies Citrix PVS vDisk distribution by presenting SMB shares to Hyper-V on the same physical node as the PVS server. This method delivers the benefits of shared storage, such as manageability and fault tolerance, without the drawbacks, such as a single point of failure and an I/O bottleneck. This scenario also gains performance from data localization, serving data from RAM, SSD, or HDD as needed.
In sum, Citrix MCS and PVS are valid solutions for efficient image distribution, with simple VM deployment and centralized control. In addition, Nutanix provides high performance and data protection, eliminating the risk of a catastrophic BSOD.
Read more about our winning Best of Show at Citrix Synergy 2015.