Horizon Mirage has been available for some time now. Although I’ve been exposed to the capabilities this solution brings, attended a couple of Mirage-specific VMworld 2013 sessions and completed the Mirage VMware Hands-On Labs, I had yet to get my hands dirty in my own environment. The other week, I managed to spend a few days installing, configuring and working my way through some of the features and use cases Mirage brings to market. I’ll assume readers are up to speed on Mirage in general and the benefits it brings; there are numerous blog posts and other content already out there covering this. VMware offers a Mirage Fundamentals e-learning course, and I was pleasantly surprised by the content.
For anyone with a few hours spare looking to get up to speed, I’d encourage and recommend viewing this offering; I found it beneficial and a good refresher on some of the Mirage terms and concepts.
Firstly, my home lab is simply a powerful workstation with 64GB RAM and three SSD drives, running VMware Workstation 10. There is nothing fancy or complex. For the most part, it’s more than sufficient for my requirements, although of course it has its limitations.
Component-wise, the Mirage lab consists of virtual machines for the Mirage Server and the Mirage Management Server. In addition, I have a Windows 7 Reference Machine and a Windows XP Reference Machine. Finally, there are Windows 7 and Windows XP clients (virtual machines acting as endpoints).
Having installed all of the above components, the Mirage dashboard showed a clean bill of health. I successfully added the devices (endpoints) where I had installed the Mirage client; however, the endpoints didn’t start the scanning phase or begin centralising to the datacentre, as below…
From the Inventory in Mirage, all CVDs were still showing ‘Pending Assignment’ with no progress. I even tried installing the client onto a couple of laptops, to rule out a limitation or configuration issue within my lab.
From Pending Devices, I promoted my Windows 7 Reference Machine to a Reference CVD. I then attempted to capture a Base Layer using the wizard. Same result (no progress): the clients (endpoints) and the Mirage server were simply not communicating.
At this point, the Mirage clients were also showing as ‘disconnected’ – I found the following KB article and followed a few of the steps:
Continuing my investigation, I checked the Event Logs within Mirage and stumbled across this warning, generated by the Mirage Server (source).
A small extract from the logs on the Mirage Server (Program Files>Wanova>Mirage Server>Logs):
2014-02-07 14:43:02,335 CTX:(null) [ 27] DEBUG Wanova.Server.Common.Volumes.RealVolumeMounter Creating non-SIS file system for: ([Name='DefaultVolume', Description='The default volume', Path='C:\MirageStorage', Capacity=42,947,571,712, FreeSpace=26,736,009,216, State=Mounted, Id=616766272, UserState=Accepting, ]), optimized path: C:\MirageStorage, verification: True
2014-02-07 14:43:02,335 CTX:(null) [ 27] WARN Wanova.Server.Server.ServerCore Client authentication failed (unexpected exception), request-id=10
System.IO.InvalidDataException: StorageId.dat is not found.
At this point, I recalled leaving the default storage path of C:\MirageStorage during the setup of my Mirage Management Server. I checked the VMware documentation at http://pubs.vmware.com/horizonmirage-43/index.jsp#com.vmware.horizonmirage.installation.doc/GUID-F87CF7C1-F436-44EA-B8F1-C27492C14355.html
The UNC path to the storage is required whenever Horizon Mirage is installed on more than one host, for example, when the Management server and one or more other servers are each on separate machines.
Typical smaller environment (lab or pilot):-
The use of local storage, for example E:\MirageStorage, is supported for smaller environments where a single server is co-located on the same machine as the Management server
Indeed, the Mirage Server was unable to reach this storage path, since it’s a local path on the Mirage Management Server. I verified that StorageId.dat existed on the Management Server under C:\MirageStorage\Nonsis
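A quick way to sanity-check this condition is to look at the configured volume path: a local drive path is only visible to the machine it lives on, whereas a UNC path is reachable from a separate Mirage Server. A minimal sketch of that check (a hypothetical helper, not part of the Mirage tooling; the Nonsis/StorageId.dat location is taken from the log above):

```python
import os

def check_mirage_storage(path):
    """Classify a Mirage storage path and look for the StorageId.dat
    marker file. Hypothetical diagnostic helper for illustration only."""
    # UNC paths start with a double backslash, e.g. \\host\share
    kind = "UNC" if path.startswith("\\\\") else "local"
    # Mirage keeps StorageId.dat under the Nonsis folder of the volume
    marker = os.path.join(path, "Nonsis", "StorageId.dat")
    return {"path": path, "kind": kind, "marker_present": os.path.exists(marker)}

# A local path only works when the Mirage Server and the Management
# Server share one machine; a separate Mirage Server needs the UNC form.
print(check_mirage_storage(r"C:\MirageStorage")["kind"])              # local
print(check_mirage_storage(r"\\MirageServer\MirageStorage")["kind"])  # UNC
```

If the path classifies as local but the Mirage Server runs on another machine, you’ve reproduced exactly the failure above.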
I performed the following steps:-
- Within Mirage, browse to System Configuration>Volumes
- Right-click the DefaultVolume and select Unmount
- On the Mirage Management Server, create a share with the relevant permissions for C:\MirageStorage
- Right-click the DefaultVolume and select Edit
- Change ‘Path’ to the UNC path of the newly created share, for example \\MirageServer\MirageStorage
- Right-click the DefaultVolume and select Mount
- Restart the Mirage Server service and the Mirage Management service
- Double-check the status of the Mirage Server from System Configuration>Servers
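The Windows-side pieces of the steps above (sharing the folder, restarting the services, and the resulting UNC path) could be scripted. A dry-run sketch that just builds the command strings, so it runs anywhere; the share, host and service names are illustrative assumptions, so verify them against your own install and use tighter permissions than Everyone outside a lab:

```python
def storage_fix_commands(local_dir, share_name, mgmt_host):
    """Return the Windows commands behind the manual steps above.
    Service display names are assumptions -- check services.msc."""
    return [
        # 1. Share the storage folder on the Management Server
        f'net share {share_name}="{local_dir}" /GRANT:Everyone,FULL',
        # 2. The UNC path to enter as the volume 'Path' in the console
        f"\\\\{mgmt_host}\\{share_name}",
        # 3. Restart both services so they pick up the new volume path
        'net stop "Wanova Mirage Server" && net start "Wanova Mirage Server"',
        'net stop "Wanova Mirage Management Server" && net start "Wanova Mirage Management Server"',
    ]

for line in storage_fix_commands(r"C:\MirageStorage", "MirageStorage", "MirageServer"):
    print(line)
```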
Now my endpoints were centralising to the Mirage Server! I was also able to capture a Base Layer successfully from my ‘Reference CVD’.
In hindsight, because I’m running a lab, I could have set up the Mirage Server and the Management Server on the same machine (not recommended for production!). The Mirage Server could then happily see the default storage path, C:\MirageStorage.
However, my preference is to install and configure as close to a real-world deployment as possible, as it promotes good habits and cements my knowledge. Of course, sizing is always an exception in a home lab. The Mirage Server recommendation of 16GB RAM (for 1,500 endpoints) is impossible to justify in most folks’ lab environments! I’ve re-sized my server to 4GB and it’s running smoothly enough.
Ultimately, the point of this post is to help anyone who comes across the same Mirage server error (‘failed to authenticate device’), or whose Mirage client constantly sits at ‘disconnected’, and wonders why. Hopefully, the above provides some guidance and starting points to troubleshoot your issue.
Mirage Lab Setup Guide – thatsmyview.net
Excellent Mirage blog – HorizonFlux