Author Archives: SteveD

Configure Horizon View 6 Cloud Pod Architecture

Now that View 6 has become GA, it's time to look further into one of the many new features!  One feature that grabbed my attention is the Cloud Pod Architecture feature, which essentially allows you to link multiple View pods over a WAN connection (to another datacentre).  This has always been a limiting factor in large View architectures, which were essentially restricted to one pod per datacentre, or multiple independent pods per datacentre.  Back in 2012 at VMworld US, the ability to bring this added dimension to View designs was discussed in session EUC1470 – Demystifying Large Scale Enterprise View Architecture.  This will dramatically change the way large View environments can be designed.  Let's drill further into the details (from the official VMware docs):-

View Cloud Pod Architecture

  • With the Cloud Pod Architecture feature, you can link together multiple View pods, to provide a single large desktop brokering and management environment.
  • In a traditional View implementation, you manage each pod independently. With the Cloud Pod Architecture feature, you can join together multiple pods to form a single View implementation called a pod federation.
  • A pod federation can span multiple sites and datacentres, and simultaneously simplify the administration effort required to manage a large-scale View deployment.
  • View Connection Server instances in a pod federation use the Global Data Layer to share key data. Shared data includes information about the pod federation topology, user and group entitlements, policies, and other Cloud Pod Architecture configuration information.
  • View Connection Server instances communicate in a Cloud Pod Architecture environment by using an interpod communication protocol called the View InterPod API (VIPA).
  • You use the lmvutil command line tool to configure and manage a Cloud Pod Architecture environment. lmvutil is installed as part of the View installation. You can use View Administrator to view pod health and desktop session information.
  • Before you begin to configure the Cloud Pod Architecture feature, you must make decisions about your Cloud Pod Architecture topology. Cloud Pod Architecture topologies can vary, depending on your goals, the needs of your users, and your existing View implementation. If you are joining existing View pods to a pod federation, your Cloud Pod Architecture topology is typically based on your existing network topology.
  • In a Cloud Pod Architecture environment, a site is a collection of well-connected pods in the same physical location, typically in a single datacentre. The Cloud Pod Architecture feature treats pods in the same site equally.
  • When you initialize the Cloud Pod Architecture feature, it places all pods into a default site called Default First Site. If you have a large implementation, you might want to create additional sites and add pods to those sites.
  • The Cloud Pod Architecture feature assumes that pods within the same site are on the same LAN, and that pods in different sites are on different LANs.
  • Ports – 22389 (Global Data Layer LDAP instance) and 8472 (View InterPod API (VIPA) interpod communication).
  • Limitations
    • 20,000 desktops, 4 pods, 2 sites and 20 Connection Servers
    • This release does not support using the HTML Access feature. With HTML Access, end users can use a Web browser to connect to remote desktops and are not required to install any client software on their local systems.
    • This release does not support using remote Windows-based applications hosted on a Microsoft RDS host. 

Jumping into the Lab

Having been fortunate enough to have access to View 6 beta, I decided to take a closer look and have a play with this new feature in my lab…

Summary of steps

  1. Initialize feature
  2. Join Pod(s) to Pod Federation
  3. Find and Change Pod Names
  4. Create and Configure Sites
  5. Move Pods to relevant Site
  6. Create & Configure a Global Entitlement (future post)
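The steps above can be sketched as a single lmvutil sequence. This is a dry-run sketch that only echoes each command: `<domain>` is a placeholder for your Active Directory domain, and the server and pod names are the ones from my lab. Remove the leading `echo` to run the commands for real on a Connection Server.

```shell
# Dry-run sketch of the lmvutil sequence (echo only). Assumption: <domain>
# is a placeholder; replace it and the server/pod names with your own.
AUTH='--authAs steve.dunne --authDomain <domain> --authPassword "*"'

echo "lmvutil $AUTH --initialize"                                                        # 1. once, on the first pod (ViewCS6)
echo "lmvutil $AUTH --join --joinServer ViewCS6 --userName <domain>\steve.dunne --password \"*\""  # 2. from the second pod
echo "lmvutil $AUTH --updatePod --podName Cluster-ViewCS6 --newPodName London-Pod-1"     # 3. rename pods
echo "lmvutil $AUTH --createSite --siteName 'London Datacentre'"                         # 4. create sites
echo "lmvutil $AUTH --assignPodToSite --podName London-Pod-1 --siteName 'London Datacentre'"       # 5. move pods to sites
```

Each step below walks through these commands in detail, with the output from my lab.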

Lab Environment

  • Two datacentres – London and New York City (NYC).
    • London – View Connection Server = ViewCS6
    • New York City – View Connection Server = ViewCS6-nyc

The goal is to join the London pod to the New York City pod, creating a Pod Federation with two unique Sites called London and NYC.

Please be aware this example is for lab purposes only; in production you would have at least two Connection Servers in each pod, to provide redundancy and availability.  My lab has its limits: I do not have two physical datacentres or two domains, which would be the minimum requirement for a typical real-world environment.

This is purely to test the configuration of the Cloud Pod Architecture feature at a simplistic level.

Initialize feature

Before you configure a Cloud Pod Architecture environment, you must initialize the Cloud Pod Architecture feature. You need to initialize the Cloud Pod Architecture feature only once, on the first pod in a pod federation. When you add pods to the pod federation, the new pods join the initialized pod.

Run the following command from any View Connection Server instance in the London pod.  I will run it from ViewCS6, a Connection Broker running in my London datacentre.

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --initialize

Note: “*” Will prompt you to enter your password.

Note: The command switches start with two hyphens (dash dash); you may not be able to see these clearly in the text or graphics below.  Check the official documentation for confirmation if unsure.


During the initialization process, View sets up the Global Data Layer on each View Connection Server instance in the pod, configures the VIPA interpod communication channel, and establishes a replication agreement between each View Connection Server instance.

List Pods

The Pod is now created and called Cluster-ViewCS6, and the owning site is Default First Site (we can change the names later to suit our needs).  Run the following to list the pods.

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --listPods


Join additional Pods to Pod Federation

During the Cloud Pod Architecture initialization process, the Cloud Pod Architecture feature creates a pod federation that contains a single pod.  You can use the lmvutil command to join additional pods to the pod federation.  Joining additional pods is optional.

The New York City datacentre runs a single pod.  On any View Connection Server in that pod (in my case ViewCS6-nyc), we need to join the pod to the pod federation.

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --join --joinServer ViewCS6 --userName <domain>\steve.dunne --password "*"


After you finish joining the pod(s) to the pod federation, the pod(s) begin to share health data. You can view health data on the dashboard in View Administrator.


From the above, you can see the Health of the Local Pod (ViewCS6-NYC), and the Remote Pod, ViewCS6 (running from London).

Change Pod Names

The Cloud Pod Architecture feature assigns default names to the pods in a pod federation. You can use lmvutil commands to list the names of the pods in your pod federation, and change the default names to names that reflect your network topology.

From my View Connection Server (ViewCS6 in London), I used the following:-

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --updatePod --podName "Cluster-ViewCS6" --newPodName "London-Pod-1"

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --updatePod --podName "Cluster-ViewCS6-NYC" --newPodName "NYC-Pod-1"

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --listPods


We now have two pods, correctly named:-

NYC-Pod-1, containing ViewCS6-NYC

London-Pod-1, containing ViewCS6

Create Sites and move pods to relevant site

On any Connection Server instance in the pod federation (I selected ViewCS6), run the following to create two sites (one for London and one for New York), and move the pods to the relevant sites.

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --createSite --siteName "London Datacentre"

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --assignPodToSite --podName "London-Pod-1" --siteName "London Datacentre"

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --createSite --siteName "New York City Datacentre"

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --assignPodToSite --podName "NYC-Pod-1" --siteName "New York City Datacentre"


Verify our configuration

lmvutil --authAs steve.dunne --authDomain <domain> --authPassword "*" --listPods


Horizon View Administrator Dashboard

Viewing the graphic below – the New York View broker (left-hand side) and the London View broker (right-hand side).  Both sites show the Remote Pods with the relevant names.

Ignore the ‘red’ status of the two Connection Servers; this is because I haven’t yet installed the certificates.  Both servers are functioning fully after the above changes.


Wrapping Up

You can achieve much more using the lmvutil tool, including the creation and configuration of Global Entitlements and assigning a Home Site to a user or group.  I encourage you to check out the official VMware documentation, ‘Administering View Cloud Pod Architecture’.

I hope the above brings some insight into the configuration of this new feature.

ThinApp Optimise and Book Review

ThinApp has been around for years, and application virtualisation is extremely useful.  Although I’ve used ThinApp in the past, I needed to brush up in a few areas and recommit some knowledge to memory.  Specifically, I wanted to focus on a few key areas, such as performance and the optimisation of large packages (sandboxes).  I’m a fan of Packt Publishing and recalled them releasing a ThinApp 4.7 Essentials book, so I decided to run through it and gather the information I needed.  The author is Peter Bjork, a VMware Senior Specialist from the EUC EMEA Practice.


As a result, I actually found this publication the most comprehensive and complete guide to ThinApp I could find.  Although the book is over a year old (and in technology that’s a long time), from my understanding many of the core concepts and principles outlined in it are still very relevant to today’s latest version (v5.0.1).  You can find a video resource covering the major ThinApp v5 release at the end of this post.

Back to the book, which from front to back covers all the sections you’d expect: a high-level ThinApp overview, architecture in depth, the capturing process, packages in all their detail, and settings and configuration.  Many of the sections are accompanied by guides and insightful screenshots to enhance the learning experience.  Additionally, all the various enterprise deployment and update methodologies for ThinApp packages are discussed.  Design and implementation considerations are covered, along with a troubleshooting section and the various tools used, finally wrapping up with a useful references section pointing to VMware’s official documentation.

For those of you considering the end user computing VMware VCAP exams (DTD and DTA), ThinApp technology is covered in both.  Although I’m under NDA having taken both exams, what I will say is: master this book and you’ll pretty much take care of the ThinApp questions.  As I was reading through the book, I noticed a couple of things which would definitely have improved and helped my own exam experience.

Returning to my original intent, I decided to round up some notes, which may prove useful for others or myself in the future.  I would encourage anyone wanting to drill deeper, to purchase the book or reference the official VMware docs.


  • Set user expectations
  • Size of application\package can impact performance (execution)
  • Slowness of applications
    • At launch or during run\execution?
  • Where do the ThinApp package and Sandbox reside? Removable media, network or local?
  • AV should be set to scan only the package .exe, not data files\folders
  • Detect background processes causing slowness – use the Process Explorer and Process Monitor tools
  • ThinApp log monitor
  • Slow registration of packages
    • Where are the packages located? How many packages?
    • Method – ThinReg, MSI or 3rd party
    • Physical, virtual or remote desktops?
  • Performance is heavily dependent on network and storage
  • Impact of additional validation such as login scripts and 3rd-party tools

Isolation Modes

  • Merged
    • Full access to the local system, files and registry
    • Changes are added to the local system
    • Virtualised elements will end up in the sandbox
  • WriteCopy
    • Allows reading from the local system, but modifications are created in the sandbox
  • Full
    • Cannot write to or interact with the local system, files and registry
    • Protects the application from seeing conflicting elements on the local system

Prevent Sandbox bloating

  • Size of sandbox depends on isolation mode and behaviour of application
  • How application needs to behave, defines isolation mode
  • Does the application actually require a sandbox?
  • Investigate sandbox during testing of a package, to learn behaviour (files created)
  • Disable application updaters
  • Could be caused by the sandbox ‘locking’ when the application closes down.
    • Use the Process Explorer tool to troubleshoot
  • Specific apps may require the sandbox elsewhere (away from the user profile) if there is too much bloating
  • Package.ini parameter ‘SandboxPath=’
    • Even if packaging policy specifies a certain location for sandboxes (roaming profile), the policy should allow a different location for those few applications that create massive sandboxes and don’t need to be roamed.
  • Package.ini parameter ‘RemoveSandboxOnExit=’
    • Deletes the sandbox when the app is closed, if user settings are not required
  • Choose not to sandbox user settings by using the Merged isolation mode on %AppData% and all its subfolders
    • Configured using the ##Attributes.ini file
    • The sandbox will be smaller and you can more easily manipulate the settings by accessing the native location; however, changes made by the package are no longer held in one place

Other tuning options?

  • Package.ini parameter ‘CachePath=’ – location of cached stub executable files
    • Specify a location away from the Sandbox directory
  • Package.ini parameter ‘CompressionType=’ – None or Fast
    • For performance choose None; only files other than executables & DLLs are compressed
    • You can change this behaviour using OptimizeFor=Disk
  • Package.ini parameter ‘ExcludePattern=’ – files\folders to exclude from the build
    • Add the parameter into ##Attributes.ini files as well; this way the exclusion will only be active in that specific folder (where the ##Attributes.ini resides)
  • Ensure the application is captured on a CLEAN operating system (Windows)
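Pulling the parameters above together, a hypothetical Package.ini fragment might look like the following. The paths, patterns and folder macros are illustrative only; verify each against the official ThinApp Package.ini reference before using them in a build.

```ini
[BuildOptions]
; Illustrative values only – verify parameters and folder macros against
; the official ThinApp Package.ini reference.
SandboxPath=%Local AppData%\Thinstall
RemoveSandboxOnExit=1
CachePath=%Local AppData%\ThinAppCache
CompressionType=None
ExcludePattern=*.tmp,*.bak
```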

Sizing your streaming file share

  • Size the repository as you would any other file share; the load from packages is the same as from any other file type
  • Initial execution of files requires disk I\O
  • If an application consumes 5 MB when launched (use Wireshark to measure), and ten users launch the application at the same time, the load on the network will be 50 MB.  The first launch of a package will always be slower than subsequent launches; upon first launch the sandbox is initiated, which takes around a second or so
  • If the package creates a large sandbox upon first launch, the package might not be a good candidate for streaming
  • Store packages and user sandboxes on different file servers, so you won’t hit the same server twice across the network during execution of the package
  • Disable AV scanning of the package repository
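The back-of-the-envelope sizing above can be scripted for whatever numbers your own Wireshark measurements produce; the two figures here are just the example values from the bullet.

```shell
# Rough peak streaming load, per the sizing note above.
# Assumption: per-launch transfer was measured with Wireshark.
per_launch_mb=5       # MB pulled across the network per application launch
concurrent_users=10   # users launching the package at the same time
echo "Peak network load: $(( per_launch_mb * concurrent_users )) MB"
```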


ThinApp bootcamp video series – Design, implementation & scripting etc

ThinApp Blog

ThinApp Performance Enhancing

A great video from the above bootcamp series from VMware

ThinApp v5.0 Overview

VMware Horizon View 5.3 – 3D vDGA deployment

Over the course of the last month, I’ve been engaged with a client on a Horizon View plan and design. One of the main business drivers and use cases is full-scale workstation replacement for CAD users (approx. 8).

Therefore, I’ve spent quite some time researching the technology and also getting hands-on with the graphics acceleration offerings in Horizon View 5.3, namely vSGA and vDGA.  You’ll find plenty of resources already out there covering the differences, advantages and disadvantages of each.  An excellent write-up can be found here.

My primary resource for this deployment was the VMware whitepaper Graphics Acceleration in VMware Horizon View Virtual Desktops.

Let’s step through a couple of deployment stages:-

Use Case and Requirements

  • Provide full scale workstation replacement for CAD users (approx 8)
  • Performance doesn’t necessarily have to match or exceed that of a physical workstation.
  • Performance should be suitable for CAD users compared to existing physical workstations.  The ability to rotate, zoom and interact with models with no excessive lag or jitter would be considered a success.

The above may be slightly vague; however, the pilot or POC is there to validate whether the solution can replace the existing physical desktops going forward.  The customer is aware performance may not match physical, as those dedicated workstations have up to 24GB RAM and x2 CPUs (6 cores each).  Of course, vSphere 5.5 can handle virtual machines of that size, if required.

Environment and Hardware

The hardware for the project had already been procured before the engagement started.  These are the resources that were available (design constraints).

  • x2 Dell PowerEdge R720 – 128GB RAM, x2 CPUs (12 cores each), 2.3GHz core speed and local SSD storage
    • ESXi 5.5 and vCenter 5.5 (vCSA)
  • x2 Nvidia K1 GRID cards per host, offering 8 GPUs per host (4 per card)
  • Shared storage – NetApp array
  • Dell Wyse P25 Zero Client, with latest v4.2 firmware applied.


Design Considerations

There are a number of decisions or choices here; some are typical for any vSphere and Horizon View design, but they all require thought, along with consideration of their impact on other parts of the design.

  • Virtual Machine resources (RAM\CPU) – small, medium or large VMs? If considering large VMs for CAD users, with perhaps 4 vCPUs and 8GB RAM, how does this affect CPU co-scheduling, HA, host density or other virtual machines?
  • Storage – Will fast shared storage be used? Does this provide the required I\O, latency and throughput? Can local SSD storage be used?  Consider the impact of local storage, in terms of availability, manageability and maintenance operations
  • Pool settings (video RAM) – 64MB to 512MB?  If choosing 512MB, consider the additional overhead and available resources of the cluster.
  • vSphere features – Impact from using Direct I\O passthrough device to virtual machines – No HA, DRS and vMotion.
  • View Pool type – Automated or Manual?  Due to the passthrough device, vDGA can only use a Manual pool, not Automated or Linked Clones (which support only software rendering and vSGA).
  • Balance CAD workloads (if heavy and demanding) across K1 hardware and ESXi hosts
  • GPU – How do you carve up your GPU selection across your available slots?
  • Network – LAN vs WAN? Consider the expected latency of the connection, as higher latency will impact the CAD session.
  • As a result of the networking, how do you tune PCoIP accordingly?  Max\Min image quality, image caching, FPS & max session bandwidth?
  • Endpoints – Are proposed client devices powerful enough to handle the workload of 3D? Tera1 devices can support up to 30fps, Tera2 devices up to 60fps.

Some of the above can be clarified further by a desktop assessment of those existing physical workstations used by CAD users.

Otherwise, use your pilot or POC to validate the above, and adjust accordingly.  Test, test and test!

Prepare ESXi hosts

Two display adapters – If the high-end NVIDIA card is set as the primary adapter, Xorg will not be able to use the GPU for rendering.

If you have two GPUs installed, the server BIOS may give you the option to select which GPU should be the Primary and which should be the Secondary. If this option is available, make sure the standard GPU is set as Primary and the high-end GPU is set as Secondary.

After the Nvidia hardware has been installed, you’ll need to install the drivers into ESXi 5.5 using the following:-

  • # vim-cmd hostsvc/maintenance_mode_enter
  • # localcli software vib install --no-sig-check -v /<path-to-vib>/NVIDIA-VMware-319.65-1OEM.550.0.0.1331820.x86_64.vib
  • # vim-cmd hostsvc/maintenance_mode_exit

Following the driver install, configure the GPUs for passthrough using ESXi>Advanced Settings>DirectPath I\O Configuration

Note: These drivers are only required for vSGA mode.  With vDGA the GPU is passed directly to the virtual machine, and the Nvidia drivers installed in the guest operating system are used.  Typically, most deployments utilise both vSGA and vDGA, so I would recommend installing the drivers into ESXi at the beginning, to ensure all the relevant pieces are in place for different scenarios.

Full details can be found here Graphics Acceleration in VMware Horizon View Virtual Desktops

Additional checks:-

  • Ensure Intel VT-d is enabled in the BIOS, or check using esxcfg-module -l | grep vtddmar
  • Check PCI Passthrough (green flag) of K1 GRID devices via ESXi>Advanced Settings

Prepare Parent Image

After installation of relevant applications such as Solidworks, it was time to fine tune the image.  Below is a checklist of things to cover off:-

  • VMware HW version 9 or 10 (only 128MB Video RAM available with v8).
  • VMware HW – Video card (Auto detect settings)
  • Minimum 4GB RAM and 2 vCPU
  • Configure PCI passthrough (Nvidia K1 GRID GPU) device.
  • VMXNET3 adapter
  • Install latest Nvidia drivers – 332.76 into Windows (reboot)
  • Check Windows Device Manager for Nvidia device
  • Install Horizon View Agent 5.3 (reboot)
  • Customise Windows – Enable Windows Aero, Themes service, Let Windows choose and Enable Transparency
  • Run the VMware OS optimisation tool.  Be careful not to disable settings required for the 3D experience (see above).
  • Enable the proprietary NVIDIA capture APIs by running "C:\Program Files\Common Files\VMware\Teradici PCoIP Server\MontereyEnable.exe" -enable
  • Reboot virtual machine
  • Registry changes, if required (see Performance Tips below)
  • Activate the Nvidia display adapter.  Connect to the VM for the first time via PCoIP in full screen (use a Manual Pool) from an endpoint at native resolution, or the VM will use the Soft 3D display adapter.  vDGA does not work through vSphere console sessions.
  • After connecting via PCoIP, run dxdiag.exe and check Display tab for Nvidia GPU and driver

Note: After initial testing had been performed, I removed the PCI device from the Parent Image above, and then cloned off the remaining virtual machines.  If you forget to remove the PCI device, you won’t be able to clone the virtual machine.  After the virtual machines had been cloned and joined to the domain, a unique PCI device was assigned to each VM.

VM          ESXi Host   PCI Device   GPU         Usage
CAD-PC-01   ESXi 1      06:00:0      K1 GRID 1   High
CAD-PC-02   ESXi 1      07:00:0      K1 GRID 2   Low
CAD-PC-03   ESXi 1      08:00:0      K1 GRID 3   Low
CAD-PC-04   ESXi 1      09:00:0      K1 GRID 4   High
CAD-PC-05   ESXi 2      06:00:0      K1 GRID 5   High
CAD-PC-06   ESXi 2      07:00:0      K1 GRID 6   High
CAD-PC-07   ESXi 2      08:00:0      K1 GRID 7   Low
CAD-PC-08   ESXi 2      09:00:0      K1 GRID 8   Low

Horizon View Pool Configuration

  • Manual Pool & Dedicated assignment
  • PCoIP and 2 monitors (max allowed)
  • Users cannot choose protocol
  • 3D rendering – Hardware
  • Video RAM – 512MB

After VMs have been configured or re-configured in vCenter, you must power off, and on, existing virtual machines for the 3D Renderer setting to take effect. Restarting or rebooting a virtual machine does not cause the setting to take effect.

Performance Benchmarking

A couple of cool benchmark tools you can use are as follows:-

Due to restrictions with software downloads, I was only able to run the performance benchmark tool provided by Solidworks.

Performance Tips

  • Virtual Machine – Minimum 4GB and 2 vCPU, plus VMXNET3
  • PCoIP FPS (30); if the application requires more, increase to 60-120fps
  • Tera2 Zero client devices only support up to 60fps
  • Enable PCoIP Image Caching, as Tera2 Zero client devices running firmware v4.1 can take advantage (View 5.2 onwards)

Note: The above setting is geared more towards bandwidth savings, which may be required even on a local LAN.  However, the whitepaper VMware View 3D Graphics Performance Study including Solidworks provides the following point of interest:-

In initial performance testing, it was quickly discovered that the sophisticated image caching techniques in View 5.2 ensured that any repetitive interaction with the CAD applications was rapidly cached such that, in some cases for the remainder of the test, View was able to source up to 90% of the total remoted pixels from the image cache. Accordingly, simple model rotations or model animations are not suitable operations for examining the real-world performance of the system.

Real world usage and interaction of CAD may result in the Image Cache being less effective, however I personally see no harm in enabling the feature.  Every little bit of help goes a long way!

  • Enable the ‘Disable Build to Lossless’ setting, reducing the amount of PCoIP traffic and the load on the VM and endpoint device.
  • Enable relative mouse (if app cursor control is uncontrollable) – Only supported through software client
  • Solidworks – Tools>Options>Performance Toggle between hardware\software rendering (the default is hardware).
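As a sketch, a couple of the PCoIP tweaks above (frame rate and build-to-lossless) are normally applied through the View PCoIP GPO template, which writes session variables to the registry on the agent. The variable names below are assumptions on my part; confirm them against the PCoIP GPO documentation before importing anything.

```reg
Windows Registry Editor Version 5.00

; Assumed session-variable names – confirm against the PCoIP GPO template.
; 0x3c = 60 fps; enable_build_to_lossless = 0 disables build-to-lossless.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin]
"pcoip.maximum_frame_rate"=dword:0000003c
"pcoip.enable_build_to_lossless"=dword:00000000
```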

Registry Changes

1. CAD and CATIA related.  Occasionally, when working with CAD models (turning and spinning), you may find that objects move irregularly and with a delay, although the objects themselves are displayed clearly, without blurring.

  • HKLM\SOFTWARE\VMware, Inc.\VMware SVGA DevTap\

Value Name: MaxAppFrameRate=dword:00000000

If this registry key does not exist, the value defaults to 30.  Consider setting it to match the configured PCoIP frame rate (i.e. 60-120).

This change can negatively affect other applications. Use with caution and only if you are experiencing the symptoms mentioned above.

Note: I did not apply the above setting.  As advised in the VMware whitepaper, I was only prepared to make this change, if I noticed the above behaviour.  For my particular deployment, the CAD application performance was more than acceptable for the end user.

2. VMs using VMXNET3 (improve video playback performance, if required)

  • HKLM\System\CurrentControlSet\Services\Afd\Parameters

Value Name: FastSendDatagramThreshold

Data Type: REG_DWORD

Value: 1500
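For convenience, both registry changes above can be combined into a single importable .reg file. This is a sketch only: the MaxAppFrameRate value of 0x78 (120) is just an example, and as noted, that change should only be applied if you actually see the symptoms described.

```reg
Windows Registry Editor Version 5.00

; Change 1 – only apply if CAD objects move irregularly (0x78 = 120 fps)
[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware SVGA DevTap]
"MaxAppFrameRate"=dword:00000078

; Change 2 – VMXNET3 video playback tweak (0x5dc = 1500)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Afd\Parameters]
"FastSendDatagramThreshold"=dword:000005dc
```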

Note: Both registry changes require a reboot of Windows.

Performance Monitoring

To monitor performance within Windows:-

  • Use the Nvidia Control Panel provided as part of the install of Nvidia software.
  • Use the command nvidia-smi  to monitor the usage of your GPU.  This can be found in C:\Program Files\Nvidia Corporation\NVSMI

The most useful nvidia-smi metric is located at the right of the middle section, showing the percentage utilisation of each GPU’s cores at a given point in time.

The nvidia-smi command should be run within Windows and not the ESXi host, as the hardware device (GPU) is being passed directly to the operating system with vDGA.

Other tools:

Performance Results

Solidworks provides a performance benchmark tool which was run twice. The results were compared to a previous benchmark run against a physical system.

  • VM – x2 vCPU, 4GB, dedicated Nvidia K1 GRID GPU and NetApp storage
  • Physical – x2 CPU (6 cores each), 12GB RAM, Quadro 4000 PCIe graphics and local SATA storage
Attribute                 VDI – Test 1   VDI – Test 2   Physical
Graphics                  15.6           33.2           88.3
Processor                 53.8           59.3           143.9
I\O                       200.7          45.9           282.3
Overall                   270.2          138.5          514.5
Rendering                 29.9           33             42.9
Real View Performance     10             111.3          165.2

Note: The differences between VDI Test 1 and Test 2 in terms of configuration were minimal; the second test was performed with the virtual machine joined to the domain and able to access CAD files\databases on the network.

Unfortunately, I’ve been unable to validate the above results further and identify the cause of the differences.  Given the spec of the virtual machine compared to the physical workstation, you would have expected the results to vary.  My access at the customer site is extremely limited, with no access to the physical workstation for further inspection or analysis.

The main point here is that the VDI solution more than matches up to the physical workstation.  Based on these results and the positive feedback from the customer, the 3D solution gets the thumbs up!

Further Recommendations

A couple of further tweaks can be implemented to drive performance higher, and reduce the above benchmark ratings, if required.

  • Existing configuration is limited at 30fps (Windows limit), increase if the application requires a higher frame rate.
  • Consider increasing the VM vCPU count from 2 to 4, if additional rendering performance is required.  From my research via the Solidworks forums, the biggest hitter when it comes to improving rendering is CPU.
  • Consider running the VMware OS Optimisation tool if further savings are required.
  • Consider placing the CAD virtual machines on fast storage or local SSD, to improve I\O performance.

Final Thoughts

The above configuration resulted in excellent initial performance.  The end user reported that the performance and capabilities of the CAD application were comparable to the physical PC.  I was concerned the differences in hardware might disappoint the end user; however, the Nvidia and Horizon View technology stood up well and exceeded expectations.  It would have been nice to test and tweak the configuration further using other benchmark tools, and to investigate the differences between the physical and virtual platforms in more depth, however with limited access onsite this was not possible.

For further reading, check out my 3D Resources page

3D Graphics, Horizon View, vDGA, Nvidia – Resources

Below is a collection of excellent resources for Virtualisation and 3D technology, specifically around Nvidia and Horizon View.  A number of these resources were utilised during a recent 3D project I’ve been working on.

PQR Whitepaper



Enterprise Virtualization

Grid Boards 


VMware ESXi 5.5 driver

Windows 7 64-bit Nvidia driver

Nvidia GPU Tech Conference 2014 Video Sessions

Smackdown GPU Optimized VDI Solutions: 2014 Edition

Delivering High-Performance Remote Graphics with NVIDIA GRID Virtual GPU

The State of the Industry: How GPU Technologies Are Set to Empower the VDI Experience


Deployment guide

3D performance study 

VMware Communities – Performance Tuning

Thread 1

Thread 2

Thread 3 - Tips

vSGA in action

vDGA in action

VMware vExpert 2014 Award

A little late to the party, however I’m delighted to announce I’ve been awarded and recognised as a VMware vExpert for 2014!


For those readers who are not familiar with VMware vExpert, here’s an extract from the official pages at VMware.

The annual VMware vExpert title is given to individuals who have significantly contributed to the community of VMware users over the past year. The title is awarded to individuals (not employers) for their commitment to sharing their knowledge and passion for VMware technology above and beyond their job requirements.

vExperts are book authors, bloggers, VMUG leaders, tool builders, and other IT professionals who share their knowledge and passion with others. These vExperts have gone above and beyond their day jobs to share their technical expertise and communicate the value of VMware and virtualization to their colleagues and community.

VMware vExpert 2014 Announcement

When I started my journey into blogging back in October last year, I didn’t have any thoughts or considerations around the vExpert award.  My target was to provide content and information back into a community that has served me greatly since my beginning with VMware back in 2009.  Therefore, to have been successful within 6 months, joining an elite group of folks within the community, many of whom I look up to, is a great feeling.

vExpert applications are accepted quarterly, and applications for Q2 of 2014 are now open: Q2 applications

Congratulations to all the other vExperts!

Horizon Mirage – Failed to authenticate device


Horizon Mirage has been available for some time now.  Although I’ve been exposed to the capabilities this solution brings, attended a couple of Mirage-specific VMworld 2013 sessions and completed the Mirage VMware Hands-On Labs, I had yet to get my hands dirty in my own environment.  The other week, I managed to spend a few days installing, configuring and working my way through some of the features and use cases Mirage brings to market.  I’ll assume readers are up to speed on Mirage in general and the benefits it brings; there are numerous blog posts and other content already out there covering this.  VMware offers a Mirage Fundamentals e-learning course, and I was pleasantly surprised by the content.

For anyone with a few hours spare looking to bring themselves up to speed, I would encourage and recommend viewing this offering; I found it beneficial and a good refresher for some of the Mirage terms and concepts.


Firstly, my home lab is simply a powerful workstation with 64GB RAM and three SSD drives, running VMware Workstation 10. There is nothing fancy or complex.  For the most part, it’s more than sufficient for my requirements, although of course it has its limitations.

Component-wise, the Mirage lab consists of virtual machines for the Mirage Server and the Mirage Management Server.  In addition, I have a Windows 7 Reference Machine and a Windows XP Reference Machine.  Finally, Windows 7 and Windows XP clients (virtual machine endpoints).


Having installed all the components (above), the Mirage dashboard showed a clean bill of health.  I centralised the devices (endpoints) where I had installed the Mirage client; however, the endpoints never started the scanning phase or began centralising to the datacentre, like below…


From the Inventory in Mirage, all CVDs were still showing ‘Pending Assignment’ and no progress. I even tried installing the client onto a couple of laptops, to verify whether a limitation or configuration issue existed with my lab.

From the Pending Devices, I promoted my Windows 7 Reference Machine to a Reference CVD.  I then attempted to capture a Base Layer using the wizard.  Same result (no progress): the clients (endpoints) and the Mirage Server were simply not communicating.

I verified the usual network checks, such as open ports via telnet, and ping\DNS tests among all components.
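These checks can also be scripted. A minimal Python sketch, assuming a hypothetical server hostname and the commonly documented Mirage client-to-server port (verify both against the Ports and Protocols list for your version):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical server name; 8000 is the commonly documented
# Mirage client-to-server port - add others from the official list.
for port in (8000,):
    state = "open" if port_open("mirage-server.lab.local", port) else "closed"
    print(port, state)
```

Note that a name-resolution failure surfaces as a closed port here, so test DNS for the hostname separately if the results look odd.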

Mirage Ports and Protocols -

At this point, the Mirage clients were also showing as ‘disconnected’ – I found the following KB article and followed a few of the steps:

Continuing my investigation, I checked the Event Logs within Mirage, and I stumbled across this warning which had been generated from the Mirage Server (source).


A small extract below, from the logs on the Mirage Server (Program Files>Wanova>Mirage Server>Logs)

2014-02-07 14:43:02,335 CTX:(null) [ 27] DEBUG Wanova.Server.Common.Volumes.RealVolumeMounter Creating non-SIS file system for: ([Name='DefaultVolume', Description='The default volume', Path='C:\MirageStorage', Capacity=42,947,571,712, FreeSpace=26,736,009,216, State=Mounted, Id=616766272, UserState=Accepting, ]), optimized path: C:\MirageStorage, verification: True

2014-02-07 14:43:02,335 CTX:(null) [ 27] WARN Wanova.Server.Server.ServerCore Client authentication failed (unexpected exception), request-id=10

System.IO.InvalidDataException: StorageId.dat is not found.

At this point, I recalled during the setup of my Mirage Management Server, leaving the default path of C:\MirageStorage for storage. I checked the VMware documentation @

My Scenario:-

The UNC path to the storage is required whenever Horizon Mirage is installed on more than one host, for example, when the Management server and one or more other servers are each on separate machines.

Typical smaller environment (lab or pilot):-

The use of local storage, for example E:\MirageStorage, is supported for smaller environments where a single server is co-located on the same machine as the Management server.

Indeed, the Mirage Server was unable to communicate with this storage path, since it’s a local path on the Mirage Management Server.  I verified storageid.dat existed on the Management Server under C:\MirageStorage\Nonsis


I performed the following steps:-

  • Within Mirage, browse to System Configuration>Volumes
  • Right-click the DefaultVolume and select Unmount
  • On the Mirage Management Server, create a share with the relevant permissions for C:\MirageStorage
  • Right-click the DefaultVolume and select Edit
  • Change ‘Path’ to the UNC path of the new share, for example \\MirageServer\MirageStorage
  • Right-click the DefaultVolume and select Mount
  • Restart Mirage Server service and Mirage Management service.
  • Double check the status of Mirage Server from System Configuration>Servers
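As a quick sanity check of the new volume path, something along these lines helps (the share name is illustrative; the Nonsis\storageid.dat location matches what I verified earlier):

```python
import os

def check_mirage_storage(path):
    """Hedged sanity check of a Mirage storage volume path.
    A UNC path is required whenever the Mirage Server and the
    Management Server run on separate machines."""
    problems = []
    if not path.startswith("\\\\"):
        problems.append("not a UNC path")
    elif not os.path.exists(os.path.join(path, "Nonsis", "storageid.dat")):
        problems.append("storageid.dat not found under Nonsis")
    return problems

print(check_mirage_storage(r"C:\MirageStorage"))  # → ['not a UNC path']
```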

Now my endpoints were centralising to the Mirage Server!  I was also able to capture a Base Layer successfully from my ‘Reference CVD’.


In hindsight, because I’m running a lab, I could have setup the Mirage Server and Management Server on the same machine (not recommended for production!). The Mirage Server could then happily see the default path for storage – C:\MirageStorage.

However, my preference is to install\configure as close to a real-world deployment as possible, as it promotes good habits and cements my knowledge.  Of course, sizing is always an exception in a home lab. The Mirage Server recommendation of 16GB RAM (1,500 endpoints) is impossible to justify in most folks’ lab environments!  I’ve re-sized my server to 4GB and it’s running smoothly enough.

Ultimately, the point of this post is to help anyone who comes across the same Mirage Server error (‘failed to authenticate device’), or whose Mirage client constantly sits at ‘disconnected’, and wonders why.  Hopefully, the above provides some guidance and starting points to troubleshoot your issue.

External References

Horizon Mirage Essentials book

Mirage Architecture and Use Cases 

Mirage Design and Architecture Detail

Mirage Lab Setup Guide -

Excellent Mirage blog - HorizonFlux

Microsoft Office and VDI – Performance tips

A couple of months back, during an engagement, I was troubleshooting end-user performance problems.  To keep it short and sweet: Microsoft Office (Outlook and Word) was monopolizing the CPU, and the virtual desktop became unresponsive.

Note: Microsoft Office 2013 had been deployed in the parent image.

Platform: vSphere 5.1 and Horizon View 5.2

Whilst investigating and searching for a solution, I came across many articles with tweaks\tips, which I pulled together in a quick Word document.  I thought it would be useful to share with others, as I had searched Google extensively for ‘microsoft office vdi performance’.

Known Performance Issues

Highly recommend downloading this report -

Windows related settings

  • Disable Windows Search (linked to Outlook)

  • Set ‘Adjust for best performance’ to disable unnecessary Windows visual effects and animations

Microsoft Office 2013 (some apply to Office 2010)

  • Disable Office 2013 animations via registry

  • Check and ensure latest patches are applied
      • MS Word>File>Account>Update Options

  • Use Office customization tool to disable unnecessary settings
  • Install Office 2013 with only necessary applications and features (not all defaults are required)
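Two of the tweaks in this list (animations and hardware graphics acceleration) can be applied together via a .reg fragment.  The key path and value names below are the documented Office 2013 (15.0) settings, but test them against your build before baking anything into the parent image:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\Graphics]
"DisableAnimations"=dword:00000001
"DisableHardwareAcceleration"=dword:00000001
```

Since these live under HKCU, for pooled desktops deliver them per-user (for example via GPO preferences) rather than relying solely on the parent image.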


Microsoft Outlook

  • Set to Online mode (no Cached mode)

  • Ensure no OST file being created or stored in user profile on network
  • Disable ‘Reading Pane’ view
  • Disable ‘Live Preview’
  • Disable ‘Hardware Graphics Acceleration’
  • Manage ‘COM Add-Ins’ and disable any unnecessary add-ins

Microsoft Word

  • Disable ‘Reading View’ aka Reading Pane
  • Set the following:-

Note: The above certainly isn’t a comprehensive list of all the tips\tweaks needed to cure every issue!  It is, however, a good round-up from my research.  Whether each tweak proves effective or not, it’s a case of picking out the ones which best suit the environment, then test, test, test and measure the results!

In this instance, because some of the above tweaks didn’t make a major difference for Office 2013 (results with Office 2010 could vary), Office 2010 was re-installed due to growing pressure from the user base, and the virtual desktop performance was much improved.


VCAP-DTA Section 10 Notes

Section 10 – Troubleshoot a View Implementation

  • Objective 10.1 – Troubleshoot View Pool creation and administration issues
  • Objective 10.2 – Troubleshoot View administration management framework issues
  • Objective 10.3 – Troubleshoot end user access
  • Objective 10.4 – Troubleshoot network, storage, and vSphere infrastructure related to View 

**See View Administration guide p379**

Troubleshooting View Components


Troubleshooting KB articles

Virtual desktop is not available

Desktop not available message

Black screen when connecting via PCoIP

Black screen when connecting via PCoIP (2)

Unable to connect using PCoIP

VCAP-DTA Section 9 Notes

Section 9 – Configure Persona Management for a View Implementation

Objective 9.1 – Deploy a Persona Management Solution

  • Create a Persona Management repository
    • Separate from Windows Roaming profile location.  Create share\UNC on file server.
    • Or you can use existing Windows profile location (Persona repository location – leave blank)
    • Set path for new repository in ViewPM.adm settings
  • Implement optimized Persona Management GPOs
    • Add ViewPM.adm to Computer Configuration>Policies>Administrative Templates via new GPO or Local Policy (gpedit.msc)
    • Settings are under VMware View Agent Configurations
    • Settings to enable for optimization are under the following folders:
      • Roaming and Synchronization
      • Folder Redirection
  • Implement optimized Windows Roaming Profiles with Persona Management

Deployments where users access both View desktops managed by Persona Management and standard Windows desktops managed by roaming profiles can cause problems. The best solution if the desktops are in the same domain is to use different profiles for the two desktop environments. To accomplish this:

  • Configure Windows roaming profiles (either with Windows GPO settings or on the user object in Active Directory)
  • Configure View Persona Management and Enable Persona repository location
  • Enable Override Active Directory user profile path, if it is configured

This prevents Windows roaming profiles from overwriting a View Persona Management profile when the user logs out of the desktop.

If users will share data between Windows roaming profiles and View Persona Management profiles, configure Windows folder redirection. In the folder redirection group policy settings for user profile folders, be sure to include %username% in the folder path, for example \\lab-dc\personadata\%username%\MyVideos

You can specify files and folders within users’ personas that are managed by Windows roaming profiles functionality, instead of View Persona Management. You use the Windows Roaming Profiles Synchronization policy to specify these files and folders.

Persona Management deployment guide

Objective 9.2 – Migrate a Windows Profile

  • Ensure pre-requisites are met for a profile migration
    • Run the migration utility on a Win 7 or Win 8 32-bit system, because most V1 profiles are 32-bit
    • Run the migration utility on a Win 7 or Win 8 physical computer or virtual machine
    • Ensure network access to the V1 profiles, and log in as administrator on the Win 7\Win 8 machine
    • Ensure users are not logged in and don’t use their profiles until the migration is complete

User profile migration

  • Perform profile migration using migprofile.exe

When you install View Agent with the View Persona Management setup option on a virtual machine, the migprofile.exe utility is installed in the directory \VMware\VMware View\Agent\bin.

    • migprofile.exe [/s:source_path] [/t:target_path] [/r-:] [/takeownership] [config_file]

The following example migrates all V1 user profiles under the \\file01\profiles folder to the same location. V2 user profiles are created with .V2 appended to each user’s root folder name. The utility takes ownership of the user profiles during the migration:

    • migprofile.exe /s:\\file01\profiles\* /takeownership
    • migprofile.exe “/s:\\test\c$\documents and settings\user” /t:\\file01\profiles\ /takeownership /r-
  • Modify migration configuration file 

Note: Copy the content from the official documentation into notepad, then save as .xml to create the configuration file.  Modify as needed.

Migration file configuration

VCAP-DTA Section 8 Notes

Section 8 – Secure a View Implementation

 Objective 8.1 – Configure and Deploy Certificates

Configure two Factor/Smart Card Authentication including truststore

  • Obtain the root CA certificate or export the certificate from the Microsoft CA (PKI)
  • Verify that the keytool utility is added to the system path on your View Connection Server or security server host (see Administration guide). Location below:-
    • VMware\VMwareView\Server\jre\bin
    • On your View Connection\Security Server, use the keytool utility to import the root certificate into the server truststore file.
    • keytool -import -alias alias -file root_certificate.cer -keystore truststorefile.key
    • Copy the truststore file to the SSL gateway configuration folder on the View Connection\Security server.
      • VMware\VMwareView\Server\sslgateway\conf\truststorefile.key
    • Create or edit the file in the SSL gateway configuration folder on the View Connection\Security Server:
      • VMware\VMware View\Server\sslgateway\conf\
    • The example shown below specifies that the root certificate for all trusted users is located in the file lonqa.key, sets the trust store type to JKS, and enables certificate authentication:
      • trustKeyfile=lonqa.key
      • trustStoretype=JKS
      • useCertAuth=true
    • Restart the View Connection\Security Server service.
    • Configure Smartcard authentication via View>Config>Servers>Connection Servers
      • Optional or Required
    • Configure any smartcard removal policies in same window
    • Restart View Connection Server service

Configuring smartcard authentication

VMware Pubs 

  • Configure and deploy View certificates
    • Install Microsoft CA role as ‘root CA’ on server
    • Use provided template for request.inf  - VMware KB
    • Amend as required (Country can only be 2 letters). Include SAN entries, if required.
    • Use the certreq tool to generate a request:   certreq -new request.inf request.txt
    • Browse to CA server http://server/certsrv and choose Request Certificate>Advanced Certificate
    • Submit a certificate request using Base-64-encoded…
    • Paste the request.txt contents into the ‘Saved Request’ field and submit.
    • Change Certificate template to ‘Web Server’
    • Download certificates and import into certificates local computer on specific server (Connection\Security\Composer). Import into Personal>Certificates
    • Open the properties of the old certificate and rename its ‘Friendly name’ to ‘old-vdm’
    • Open properties of new certificate, ensure ‘Friendly name’ is set to ‘vdm’
    • Restart View Connection Server service
  • Configure certificate revocation checking using the file

Create or edit the file in the SSL gateway configuration folder on the View Connection\Security Server – VMware\VMware View\Server\sslgateway\conf\


Example file:-
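A minimal sketch of what such a file can contain for CRL checking; the property names are as per the View documentation, and the CRL URL is a placeholder for your CA’s published CRL:

```properties
enableRevocationChecking=true
allowCertCRLs=true
crlLocation=http://ca.lab.local/CertEnroll/lab-CA.crl
```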






Smartcard revocation checking

  • Perform a certificate replacement using sviconfig

View Composer requires an SSL certificate that is signed by a CA (certificate authority). If you intend to replace an existing certificate or the default, self-signed certificate with a new certificate after you install View Composer, you must import the new certificate and run the SviConfig ReplaceCertificate utility to bind your new certificate to the port used by View Composer

  • Stop View Composer service
  • Change to the Composer installation directory: Program Files (x86)>VMware>Composer
  • sviconfig -operation=replacecertificate -delete=false
  • Start View Composer service

Objective 8.2 – Harden View Components and View Desktops

**See View Administration and View Security guides**