Home Lab v2.0

Background

It’s been almost 18 months since I decided to take the plunge and invest in a home lab. I felt this was crucial to learning new technology, keeping up to date and working towards certifications like VCAP-DTA and VCAP-DCA. Fortunately, I achieved those goals and my initial investment has paid off. It was fairly obvious at the time of purchase that I’d probably need to scale out and make a further investment to enhance my lab.

Within a month of buying my original system, I upgraded my workstation from 32GB to 64GB.  I didn’t quite appreciate the options available to me when I purchased my workstation, and immediately felt the need for additional compute resource (RAM).

Whilst my free StarWind iSCSI shared storage solution worked well enough (with performance and stability limitations), I always knew that somewhere down the line I’d want a more robust external solution with more power and capability.

Storage NAS

All the talk around the VMware community for the past few years, in terms of a NAS for a home lab, has pretty much centered around Synology. So I read people’s blogs, spoke to colleagues, and set about choosing the right Synology for me.

Requirements and Choices

  1. Performance over capacity – I don’t require TBs of capacity.
  2. About 500GB of SSD storage for VMs would be ideal to start with.
  3. 3 to 4TB for data – lab ISOs, documents, photos and music.
  4. vSphere VAAI offload, to reduce the load on my workstation and nested ESXi hosts.
  5. Ability to use two NICs with LACP to increase throughput in future, if required.

DS412+ (4 bay NAS) – Obviously cheaper, but would I outgrow it in the near future? Perhaps I could replace the disks with larger capacity disks down the line. Options include RAID-1 for SATA (data) and RAID-1 for SSD (although not recommended), or two free slots for two volumes, each containing a single SSD.

DS1513+ (5 bay NAS) – Basically a 5 disk RAID-5 (then split into multiple volumes if needed) to get the most out of this device. Sure, you could get creative with two RAID-1 pairs plus a single-disk volume; it depends on your needs really. In truth, I was always going back and forth between the 4 bay and 8 bay models.

DS1813+ (8 bay NAS) – Costs around £140 more for 3 extra bays, which isn’t too bad. It gives plenty of options, all sorts of volume configurations, plus the ability to use both NFS and iSCSI. My initial feeling was to leave a couple of bays free for flexibility in the medium and long term, as I couldn’t really afford to max out the NAS with 8 disks straight away. This NAS also has four NICs, although that’s very much overkill for my environment.
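The usable-capacity arithmetic behind the options above is simple. A quick sketch (disk counts and sizes are just illustrative examples, not my final layout):

```shell
# Usable capacity: RAID-5 loses one disk's worth to parity; RAID-1 mirrors a pair.
disks=5          # e.g. a fully populated DS1513+
size_tb=3        # per-disk capacity in TB
raid5=$(( (disks - 1) * size_tb ))
raid1=$size_tb
echo "RAID-5 across $disks disks: ${raid5}TB usable"
echo "RAID-1 pair: ${raid1}TB usable"
```

So a fully populated 5 bay RAID-5 gives 12TB usable from 15TB raw, whereas each RAID-1 pair halves its raw capacity.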

DS412+ Selected – Why?


Simply because this solution fitted my requirements and budget most closely. I was edging towards the DS1813+, the top end model with everything you need now and in the future (power, performance, flexibility, scalability and capacity), but the cost of the unit, and doubts over whether I’d actually get the most out of it, steered me clear. Sound advice from my Xtravirt colleagues Ather Beg and Seb Hakiel also helped with my decision. In the end I couldn’t justify the cost: the DS1813+ alone would have set me back around £700, whereas the DS412+ cost around £435, more in line with my budget.

The device is VAAI capable, which was a key requirement as stated above. Four bays offer enough choice and flexibility for now; if I need to expand I can buy another unit (2 bay or 4 bay) or larger disks. You never know, in 12 months or more some new technology may have surpassed the Synology devices!

Two 1Gb NICs provide enough throughput, with LACP as an option if I need it later.

The device is also small, compact, fairly quiet and sits tucked away in the corner.

Disks

I’ve added these disks to my DS412+

Volumes

I’ve added these volumes to my DS412+

No redundancy for volumes 2 and 3 – this is a slight concern, and I would have preferred protection for my virtual machines; in production this would certainly be crazy. However, from what I’ve read, RAID-1 for SSDs isn’t recommended for performance reasons, and for a home lab you might get away without it. I just wanted to ensure I get the maximum performance from my SSDs, which is also why I chose the Samsung 840 Pro model, a clear market leader. I plan to use Veeam to back up my virtual machines to external USB 3.0 devices.

Network Hardware

My original setup used a bog-standard, cheap 1Gb unmanaged switch. I wanted a more efficient and advanced switch for the iSCSI traffic travelling to my Synology device. The Netgear has enough features (at a reasonable cost) to give me options going forward, including QoS, VLANs, Jumbo Frames and LACP.
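If I do enable Jumbo Frames for iSCSI, the MTU has to be raised end to end: on the Netgear, on the Synology interface, and on the ESXi side. A rough sketch of the ESXi commands (the vSwitch name, vmkernel port and Synology IP are assumptions for illustration, not my actual config):

```shell
# Raise the MTU on the iSCSI vSwitch and its vmkernel port (names assumed)
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify end to end: 8972 bytes = 9000 minus IP/ICMP headers; -d forbids fragmentation
vmkping -d -s 8972 192.168.1.50   # hypothetical Synology address
```

If the vmkping fails while a normal ping works, something in the path (switch port, NAS interface) is still at the default 1500 MTU.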

VMware Workstation Setup

Fairly simple here, nothing fancy or too complex. I’m sure there are better ways to configure the networking (comments welcome), however this setup works well for my environment.

Two nested ESXi 5.5 hosts – hardware version 8 with eight NICs each

NIC #   NIC Type   Network Type   Virtual Network   Use       Physical Adapter
1       vmxnet3    Bridged        VMnet0            Mgmt      Intel onboard 1Gb
2       vmxnet3    Bridged        VMnet0            Mgmt      Intel onboard 1Gb
3       vmxnet3    Bridged        VMnet0            VMs       Intel onboard 1Gb
4       vmxnet3    Bridged        VMnet0            VMs       Intel onboard 1Gb
5       e1000      Bridged        VMnet1            iSCSI     Intel PCIe 1Gb
6       e1000      Bridged        VMnet1            iSCSI     Intel PCIe 1Gb
7       e1000      Bridged        VMnet0            vMotion   Intel onboard 1Gb
8       e1000      Bridged        VMnet0            vMotion   Intel onboard 1Gb

Note:- You can change the adapter type by editing the .vmx file and changing the relevant ethernetN.virtualDev line. For example:

  • ethernet0.virtualDev = "e1000"
  • ethernet1.virtualDev = "vmxnet3"
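The switch can also be scripted. This is only a sketch against a throwaway copy of a minimal .vmx (on a real system you’d edit the actual file, with the VM powered off):

```shell
# Demo only: build a minimal throwaway .vmx rather than touch a real VM
VMX=/tmp/nested-esxi-demo.vmx
printf 'ethernet0.virtualDev = "vmxnet3"\nethernet5.virtualDev = "vmxnet3"\n' > "$VMX"

# Flip ethernet5 from vmxnet3 to e1000, then show the result
sed -i 's/^ethernet5.virtualDev = .*/ethernet5.virtualDev = "e1000"/' "$VMX"
cat "$VMX"
```

Note the straight quotes: curly quotes pasted from a blog will stop Workstation parsing the line.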

Virtual Network

Note:- Ignore VMnet2 – Although setup and ready, I don’t currently have this in use with virtual machines running in Workstation.

Networking Bottleneck

Previously, my nested ESXi network adapters regularly showed ‘Status down’. There was no way to recover apart from resetting the nested ESXi virtual machine, which really bugged me!

  • Occurred when using e1000 adapters for the ESXi adapters.
  • Much more stable with vmxnet3 adapters – however, network performance/throughput suffered in my nested ESXi environment. Perhaps a hardware limitation or configuration issue? I could never get to the bottom of it, having explored different avenues.
  • iSCSI traffic via vmxnet3 to the Synology device frequently caused write errors within virtual machines and when trying to install a Windows OS. As soon as I flipped back to e1000 adapters for iSCSI on my nested ESXi hosts, this problem went away.

The addition of the extra NIC, splitting iSCSI storage traffic onto a dedicated NIC with all other traffic on the onboard NIC, looks to have solved the issue (fingers crossed). With this in mind, I could probably set all the NICs attached to my nested ESXi hosts back to e1000. However, I’ll only change this if I notice issues in the future; for now things are ticking over nicely.

Early Impressions and Tests

  • Cloning a template
  • Deploying a Horizon View pool (including replica disks and Linked Clones)

Coming soon…
