View Planner 3.0 as VDI Benchmark (VMworld 2013 – TEX5760)

October 10, 2014
  • Benchmarking – Why do we care?
    • Understand VDI performance scope
    • Benchmarking and showcasing solution(s)
    • Drive consistency and predictability around solution
  • Understanding VDI performance scope – it's complex
    • Hypervisor
    • Storage
    • VMware View
    • Clients
    • vCenter
  • Which metrics to focus on? IOPS, Read\write latency, bandwidth, CPU usage, % Ready Time, Host CPU\RAM
  • End user experience is critical to measure
    • What methodology – which tools to use, which user profile, capacity planning, etc.
  • VDI benchmark – Why now?
    • More interest in benchmarking VDI solutions
    • Confusing published results
    • Need a standard structure and methodology
    • View Planner 3.0 – Benchmark provides a standard methodology to evaluate and design effective VDI solutions
  • VDI has to scale well and you have to measure scalability accurately
  • View Planner 3.0 design\ideas
    • Scalable
    • Support for common desktop user applications
    • Client side timing and execution
    • Measure end user experience
    • QoS methodology clearly defines metrics for scalability
    • Repeatability, methodology has low run to run variance
    • Ease of use and reporting
  • Methodology
    • User\VM consolidation with QoS in mind
    • After each steady-state run, evaluate combined QoS
    • Consolidate more users\VMs until QoS no longer passes
    • The VDImark score is based on how many users can be consolidated on the system under test while still passing QoS
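As a rough sketch (not View Planner's actual code), the consolidation methodology above is a simple loop: run at a given user count, and keep adding users while combined QoS still passes. Here `run_steady_state` is a hypothetical stand-in for an actual steady-state benchmark run that returns whether QoS passed, and the start/step counts are illustrative:

```python
def find_vdimark(run_steady_state, start=50, step=25):
    """Sketch of the VDImark methodology: consolidate more users\\VMs
    on the system under test, re-evaluating combined QoS after each
    steady-state run; the score is the highest user count that still
    passes QoS.

    run_steady_state(n_users) -> bool is a hypothetical stand-in for
    a real benchmark run.
    """
    users = start
    best = 0
    while run_steady_state(users):
        best = users   # this consolidation level passed QoS
        users += step  # try to consolidate more users
    return best        # VDImark: highest passing user count
```

In practice each `run_steady_state` call is a full multi-hour benchmark run, so the step size is a trade-off between precision of the final score and total test time.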
  • View Planner benchmark specifications
    • Applications and versions – Office 2010, IE9, Firefox, 7-Zip & Adobe Reader, etc.
    • Workload iterations per user – User has to run x iterations
    • Thinktime (2 times) – Sleep time after every operation
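The per-user workload specification above can be pictured as a loop, assuming a hypothetical list of operation callables; the operation mix, iteration count, and latency bookkeeping here are illustrative, not the benchmark's actual specification:

```python
import time

def run_user_workload(operations, iterations, think_time=2.0):
    """Illustrative per-user workload loop: each user runs the
    operation mix for a fixed number of iterations, sleeping for
    the configured think time after every operation."""
    latencies = []
    for _ in range(iterations):
        for op in operations:
            start = time.monotonic()
            op()  # e.g. open a document, load a web page
            latencies.append(time.monotonic() - start)
            time.sleep(think_time)  # think time between operations
    return latencies
```

The recorded per-operation latencies are what later feed the Group A/B QoS evaluation.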
  • Scoring steady state iterations (divided into 3 phases)
    • Ramp Up – VMs powered on, connected, start workload slowly
    • Steady State – Every user now performing operations
    • Ramp Down – Users slowly stop workload
    • View Planner measures only the steady state phase, for accuracy
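A minimal sketch of that filtering idea, discarding ramp-up and ramp-down samples so only steady-state operations are scored (the timestamped-sample shape is an assumption for illustration):

```python
def steady_state_only(samples, ramp_up_end, ramp_down_start):
    """Keep only latency samples recorded during steady state,
    dropping ramp-up and ramp-down for accuracy.

    samples: list of (timestamp, latency) tuples; the window
    boundaries are in the same time units as the timestamps.
    """
    return [lat for (ts, lat) in samples
            if ramp_up_end <= ts < ramp_down_start]
```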
  • Scoring – VDIMark QoS Metric
    • Easy to understand operational latency metric
    • Groups based on characteristics
      • Group A – Interactive operations
      • Group B – IO Operations
      • Group C – background load operations
    • Evaluation based on reference set of thresholds for each group
    • The score for the run is valid when both of the following conditions are met
      • 95th percentile of all Group A operations < 1 second
      • 95th percentile of all Group B operations < 6 seconds
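As a sketch (not View Planner's actual code), the pass/fail evaluation reduces to a percentile check over the recorded per-operation latencies; a nearest-rank percentile is assumed here, and latencies are in seconds:

```python
def percentile_95(latencies):
    """95th percentile via the nearest-rank method on a sorted copy."""
    ordered = sorted(latencies)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

def qos_passes(group_a_latencies, group_b_latencies,
               threshold_a=1.0, threshold_b=6.0):
    """The run passes only when both group thresholds are met:
    Group A (interactive) 95th percentile < 1 s, and
    Group B (IO) 95th percentile < 6 s."""
    return (percentile_95(group_a_latencies) < threshold_a and
            percentile_95(group_b_latencies) < threshold_b)
```

Using the 95th percentile rather than the mean means a run fails if even a modest tail of users sees slow interactive response, which matches the end-user-experience focus of the benchmark.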
  • View Planner Run Modes
    • Remote mode – one client connects to one desktop (compliant with benchmark mode). Characterizes end user experience and networking\latency
    • Local mode – no clients required; good for storage characterization; less hardware (not compliant with benchmark mode)
    • Passive mode – fewer clients required, slightly less hardware (not compliant with benchmark mode)
  • Benchmark Flow Chart
    • Single VM Local Mode run – Standard benchmark profile (all apps) and 1 iteration
      • Verifies the desktop setup is ready for deployment
    • Single VM Remote mode run with RDP
      • Verifies the client setup is ready
    • Single VM Remote mode run with PCoIP
      • Verifies the View setup is ready (via a desktop pool)
    • Multi-VM runs with PCoIP
      • Scale runs
      • Identify the VDImark your system supports
    • Analyse results
      • Group A and Group B QoS – Below threshold? Has the run passed?
      • VDImark score for system
  • Run Flow Chart
    • Setup View Planner Harness – Deploy & configure
      • Save VC, View and AD info
    • Setup Golden Desktop and Client VM – View Planner user guide
    • Define the workload
    • Define the run parameters – How many VMs? What protocol? What VMs to bring in from vCenter to perform this run?
    • Analyse the results
      • Group A and Group B QoS – Below threshold?
    • Repeat – define the workload and run parameters, then analyse, until the VDImark is identified
  • View Planner Use Cases – Platform characterization
    • CPU Architecture comparison
    • CPU scaling
    • Storage characterization
    • Features evaluation
    • User density for special custom applications
    • Study for different hardware\software stack configurations

VMworld 2014 Session notes – EUC1513 – Stress Testing Your Horizon View System

View Planner 3.0 – download and documentation

Note:  The above is my understanding and interpretation of the information presented, whilst scribbling down these notes quickly through the session.  The usual VMware disclaimer also applies from the session.  I encourage you to either watch the session if you have access, alongside performing your own research on the technology.

I hope the above proves useful and provides further clarity into View Planner.
