Tintri 540 – Real World VDI Performance

November 15, 2013

During a recent engagement, I’ve been working on a VDI refresh of around 800 desktops for a college.  I had previously worked with the college last year, performing a VMware vSphere and View upgrade to v5.0.  A major pain point at the time was the performance of the infrastructure and the students’ virtual desktops.  A full assessment identified the storage architecture as one of the major bottlenecks.  Multiple RAID-5 LUN groups (5 disks each) had been provisioned, with as many as 100 virtual desktops or Linked Clones located on each datastore.  The lack of spindles, IOPS and throughput, combined with the x4 write penalty of RAID-5 and issues such as VMFS (SCSI reservation) locking, made the architecture poorly suited to desktop workloads, which typically have a 20% read, 80% write IO profile and differ greatly from server workloads.
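
To put the old layout in perspective, here is a minimal back-of-the-envelope sketch of how the RAID-5 write penalty erodes usable IOPS under a write-heavy profile.  The per-spindle figure and per-desktop maths are illustrative assumptions, not measurements from the original environment.

```python
# Rough effective-IOPS estimate for a 5-disk RAID-5 group serving a
# write-heavy VDI profile. The 150 IOPS/spindle figure is an illustrative
# assumption, not a measured value from this environment.

SPINDLES = 5                # disks per RAID-5 group
IOPS_PER_SPINDLE = 150      # assumed per-spindle capability
RAID5_WRITE_PENALTY = 4     # each frontend write = 4 backend IOs
READ_RATIO, WRITE_RATIO = 0.2, 0.8   # typical steady-state VDI mix

raw_iops = SPINDLES * IOPS_PER_SPINDLE

# Frontend IOPS the group can sustain once the write penalty is applied:
# raw = frontend * (read% * 1 + write% * penalty)
frontend_iops = raw_iops / (READ_RATIO * 1 + WRITE_RATIO * RAID5_WRITE_PENALTY)

desktops_per_datastore = 100
iops_per_desktop = frontend_iops / desktops_per_datastore

print(f"Raw backend IOPS:      {raw_iops:.0f}")
print(f"Usable frontend IOPS:  {frontend_iops:.0f}")
print(f"Per desktop (100 VMs): {iops_per_desktop:.1f} IOPS")
# ~750 raw -> ~220 usable frontend IOPS, roughly 2 IOPS per desktop
```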

Fast forward to the summer of 2013: with a new budget available, a project was launched to eliminate these performance pain points and optimise the environment.  The platform was planned around VMware Horizon View 5.2 and a new storage solution.  In terms of storage choice, there are numerous great technologies out there from many excellent vendors, though a discussion diving into the pros and cons of each solution is beyond the scope of this post.  Ultimately it comes down to a variety of factors, including (but not limited to) requirements, constraints and budget.  The main requirements were for virtual desktop performance to be within 10% of native physical desktops, to ease administration of the solution and to reduce the complexity of the infrastructure.

Tintri 540

Tintri storage ticked all the relevant boxes and has received rave reviews throughout the industry. Following a successful proof of concept and extensive load testing, this was the chosen platform.  There are many advantages to the solution and you can read more about the technology at www.tintri.com, including whitepapers and reference architectures.  A few specific items are listed below:

  • NFS solution – simple, leverages the existing Ethernet infrastructure and eliminates one of the previous problem points, SCSI reservations (VMFS locking).
  • Minimal configuration and setup required.
  • A self-optimising storage appliance, with no requirement to continually tune/tweak.
  • Consists of eight 3TB hard disks and eight 300GB SSD (MLC), providing the required total capacity (13TB), and a good amount of flash to serve read and write IO.
  • Instant performance bottleneck visualization. Real-time VM and vDisk (VMDK) level insight on IO, throughput, end-to-end latency and other key metrics.
  • Support for up to 1,000 VMs, which provides enough capacity for this project now and allows for expected scaling and future growth.
  • Supports up to 75,000 IOPS. All read and write IO is delivered from flash and provides low latency performance for VMs.

There are many more features of the Tintri appliance, such as replication, snapshots and the ability to pin specific VMs/VMDKs to flash.  Refer to the external references at the end of the post for more detail.

Note: The Ethernet-based infrastructure in use for this implementation consists of dedicated, redundant switches servicing storage traffic only.  The connectivity between the stacked switches, and between the switches and the Tintri appliance, is 10GbE.  Due to budget constraints, the ESXi hosts use 1GbE.  Tintri recommend up to 80 VMs per host on 1GbE, which is within the target VM-to-host ratio for the project.
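
As a quick sanity check of that 1GbE guidance, the sketch below works out the per-VM bandwidth available at 80 VMs per host.  The protocol overhead allowance is an assumption for illustration.

```python
# Sanity check of per-VM bandwidth on a single 1GbE host uplink at the
# recommended 80 VMs per host. The overhead figure is illustrative only.

LINK_GBPS = 1.0                 # host NIC speed (Gb/s)
PROTOCOL_EFFICIENCY = 0.9       # rough allowance for TCP/NFS overhead
VMS_PER_HOST = 80               # Tintri guidance for 1GbE hosts

usable_mbps = LINK_GBPS * 1000 / 8 * PROTOCOL_EFFICIENCY   # MB/s
per_vm_mbps = usable_mbps / VMS_PER_HOST

print(f"Usable host bandwidth: ~{usable_mbps:.0f} MB/s")
print(f"Per VM at {VMS_PER_HOST} VMs/host: ~{per_vm_mbps:.1f} MB/s")
# ~112 MB/s usable, ~1.4 MB/s per desktop - reasonable for steady-state
# VDI, which tends to be IOPS-bound rather than throughput-bound.
```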

Inside the Tintri Dashboard and Real World Performance

The initial deployment of virtual desktops running Windows 7 consisted of 715 desktops, the majority Linked Clones, with fewer than 20 Persistent desktops.  Virtual desktops remain powered on, controlled by each Horizon View pool policy, to allow quick access and logon to each desktop.  Various desktop workloads are in use, but none of these are extreme use cases in terms of IO profile; they are typically task workers and knowledge workers, with a small number of power users, which is fairly standard.  As always, perform your own assessment of physical and virtual desktops before proceeding with your own project, identify your use cases, and then map these to your different pools.  Ensure the environment is sized correctly, handling the peaks and allowing for additional overhead.
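
As a rough illustration of that sizing exercise, the sketch below models a peak IOPS requirement from per-use-case desktop counts.  The counts, per-desktop IOPS figures and multipliers are hypothetical planning assumptions, not the actual assessment data from this project.

```python
# Rough peak-IOPS sizing model per use case. All figures below are
# illustrative planning assumptions, not this project's assessment data.

use_cases = {
    # name: (desktop_count, steady_state_iops_per_desktop)
    "task worker":      (400, 8),
    "knowledge worker": (290, 15),
    "power user":       (25, 30),
}

PEAK_MULTIPLIER = 2.0   # allowance for logon/boot storms vs steady state
HEADROOM = 1.2          # extra overhead for growth and maintenance operations

steady = sum(count * iops for count, iops in use_cases.values())
peak = steady * PEAK_MULTIPLIER * HEADROOM

print(f"Steady-state IOPS: {steady:,.0f}")
print(f"Design peak IOPS:  {peak:,.0f}")
```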

Let’s dive into the Tintri management tool.

  • The dashboard provides real-time insight and monitoring, and you are able to drill down further into all of the metrics for further analysis.  Tintri pulls in metrics from vCenter and the VMs to provide deep insight.  Under the main IOPS, Throughput, Latency and Flash % ratio counters, you can observe 7-day averages (at 10-minute intervals).  On the right-hand side, you can view which VMs are changing in terms of performance and space, and by how much.

[Screenshot: Tintri dashboard]

  • The Diagnose>Hardware screen allows visibility into the status of the hardware – disks, fans, memory, CPU and controllers.

[Screenshot: Diagnose > Hardware view]

  • IOPS – can be monitored in real time, or over 4 hours, 12 hours, 2 days or 7 days.  Tintri can track every single IO from every single VM.  You can click on different points of the chart to view specific offenders (such as VDI-T2-48 below), or hover over a point to bring up the data on screen.  You can drag the mouse cursor left or right to scroll through time as you see fit.

[Screenshot: Datastore IOPS chart]

As you can observe from the above, during the first week of production for 715 virtual desktops (with a peak of 400 concurrent active sessions), total IOPS generally remained under 4,000, with increases caused by various logon storms throughout the day as students log in.  The dramatic peaks are largely due to replica VMs being read from, or maintenance (recompose) operations.

Note: Horizon View Storage Accelerator (VSA) is enabled on each pool.  This can dramatically decrease the read IO required from the backend storage system, as the feature caches common blocks across the desktops and serves these blocks up from a content based read cache (CBRC), which utilises physical RAM (max size 2048MB) on each ESXi host.

You can read more about the VSA feature here – http://blogs.vmware.com/euc/2012/05/optimizing-storage-with-view-storage-accelerator.html
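
To illustrate the effect, here is a simplified model of how a host-side read cache offloads read IO from the array.  The hit rate used is an assumed value for illustration; actual effectiveness depends on how much block content the desktops share.

```python
# Simplified model of how View Storage Accelerator (CBRC) offloads read IO
# from the array. The hit rate is an assumed value, not a measured one.

total_iops = 4000           # observed steady-state total from the dashboard
read_ratio = 0.2            # typical steady-state VDI read share
cbrc_hit_rate = 0.7         # assumed share of reads served from host RAM

reads = total_iops * read_ratio
backend_reads = reads * (1 - cbrc_hit_rate)

print(f"Guest read IOPS:            {reads:.0f}")
print(f"Read IOPS hitting array:    {backend_reads:.0f}")
print(f"Read IOPS absorbed by CBRC: {reads - backend_reads:.0f}")
```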

  • IOPS v Throughput

The ability to compare two charts side by side is very useful.  Below is the comparison between IOPS and throughput in MBps.  We can see total IOPS peak at 10,444 around 6:10 PM (likely due to refresh/recompose operations), with 8,396 read IO (yellow) and 2,048 write IO (blue).  The replica disk shown below is contributing 13% to the overall total IOPS.

[Screenshot: IOPS and throughput side by side]
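
A quick breakdown of that 6:10 PM peak, using the figures reported in the chart above:

```python
# Read/write share of the peak, and the replica disk's contribution,
# taken from the values shown on the IOPS chart.

total_iops = 10444
read_iops, write_iops = 8396, 2048
replica_share = 0.13        # replica disk contribution reported by Tintri

print(f"Read share:   {read_iops / total_iops:.0%}")   # ~80% - replica/recompose reads
print(f"Write share:  {write_iops / total_iops:.0%}")  # ~20%
print(f"Replica IOPS: ~{total_iops * replica_share:.0f} of {total_iops}")
```

Note how the mix during this maintenance peak inverts the usual 20% read, 80% write steady-state profile, as the Linked Clones read heavily from the replica disk.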

  • Latency

Latency is a vital statistic because it measures how long a single IO request takes end to end, from the VM (guest OS) to the storage disks.  If latency is consistently greater than 20-30ms, the all-round performance of the storage and the virtual machines will suffer greatly.

As shown below, green indicates latency occurring at the host, rather than in the network, storage or disk.  The total latency is 2.68ms, made up of host (2.05), network (0.12), storage (0.51) and disk (0).  Maintaining consistent latency around this point will provide excellent end-to-end performance.

[Screenshot: Latency breakdown]

  • Flash %

The chart shows the amount of IO (read and write) being served from flash.  As you can see, 100% is being served from flash, with only a couple of small dips to around 98% observed.  This ensures we maintain the best possible performance, with IO being served as quickly as possible from flash rather than mechanical, spinning disk.

[Screenshot: Flash hit percentage]
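
To illustrate why keeping the flash hit percentage high matters so much, below is a simple weighted-average latency model.  The per-tier service times are assumed, order-of-magnitude figures rather than measurements from this array.

```python
# Weighted-average model of effective IO latency versus flash hit rate.
# Per-tier latencies are assumed, order-of-magnitude values.

FLASH_LATENCY_MS = 0.5      # assumed SSD service time
DISK_LATENCY_MS = 10.0      # assumed spinning-disk service time

for flash_hit in (1.00, 0.98, 0.90, 0.50):
    effective = flash_hit * FLASH_LATENCY_MS + (1 - flash_hit) * DISK_LATENCY_MS
    print(f"Flash hit {flash_hit:>4.0%}: ~{effective:.2f} ms average IO latency")
# In this model, even a drop to 90% from flash roughly triples average latency.
```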

  •  Virtual Machines

This is a real-time view, enabling you to review all virtual machines running on the datastore and sort by various columns, such as name, IOPS, MBps and latency, with latency further broken down into host, network, storage and disk.

[Screenshot: Virtual Machines view]

  • Contributors

On each of the graphs presented on the left, ‘Contributors’ are shown down the right-hand side, giving visibility into individual virtual machines and their contribution to the overall totals. Below we can clearly see a couple of Replica VMs recording high IOPS, a result of Linked Clones reading from the parent image (Replica) disk.

[Screenshot: Contributors]

  • 7 day zoom – IOPS v Latency

Finally, taking advantage of the side-by-side view again, below is a 7-day view of IOPS and latency.  We can clearly see the peaks and troughs of IOPS throughout the week.  The majority of IO activity on the Tintri is writes (blue), with View Storage Accelerator helping to reduce the read IO requirement from the storage.

Total end-to-end latency (host, network, storage and disk) remains consistently low at around 3ms, with the occasional spike here and there, which is to be expected.  As before, green indicates latency occurring at the host rather than in the network, storage or disk.

I would imagine the host latency is a result of ESXi (the VMkernel) having to take on and process new storage IO for all of the virtual machines residing on the host, so guest (OS) latency could contribute to this.  That is my assumption; I haven’t investigated it thoroughly to confirm, because I’m more than happy with 3ms latency end to end.  However, I’ll continue to monitor this.

[Screenshot: 7-day IOPS vs latency]

Thoughts & Conclusion

  • The Tintri 540 appliance has impressed all round, both operationally in terms of reduced management and in its ability to handle all workloads during peak periods.
  • Performance is matching expectation, with Tintri dealing with all IO and throughput consistently well, whilst maintaining high flash % (IO served from SSD) and low end to end latency, which is critical.
  • Virtual desktop performance is within the 10% requirement of native physical performance, with comparative metric testing seeing virtual desktops exceed physical performance in some areas.
  • Straightforward setup and configuration, with no additional performance tuning required.
  • Excellent real-time visualization tools providing insight and diagnostics capabilities into many different metrics.

Note: I’m not employed by Tintri, nor have I been asked or sponsored to compose this blog.  These are my own personal findings and thoughts, having worked with the above technology recently.

External References

http://www.tintri.com/sites/default/files/field/pdf/document/tintri_family_datasheet_v2.0.pdf

http://www.tintri.com/sites/default/files/field/pdf/document/tintri-540-specsheet-v2.0.pdf

http://www.tintri.com/sites/default/files/field/pdf/whitepapers/vmware-tintri-vdi-reference-architecture-testing-results.pdf

http://www.tintri.com/sites/default/files/field/pdf/document/tintri-OS-datasheet-v2.0.pdf

http://www.tintri.com/sites/default/files/field/pdf/document/Tintri_VMwareView_TechSolutionOverview.pdf

http://go.tintri.com/vmware-view-best-practices-whitepaper/

http://www.vmware.com/files/pdf/partners/tintri/VMware-view-tintri-best-practices.pdf
