Homelab 2015.3

[This is a snapshot of the Hardware Page above, taken on March 7th, 2015. The Hardware Page is a live document, whereas this post will remain static.]

This is my home lab. It is something I have been developing, you could argue, since I was 15 years old. My first PC was built from parts I had requested for Christmas; as far as I knew, no one in my family had ever built their own PC. My parents were very much in doubt about whether I had selected the correct components or whether they were even compatible. I was as sure as I could be from the hours I had spent reading on the subject. Christmas came, and by the end of the day I had Windows running on the newly built PC. I was pumped. I became known as the “computer guy.” People would frequently donate hardware to me, and I kept everything in organized bins sorted by component. By 2008 I had something like seven computers. Of those, one was the primary desktop and the rest were used for testing out OSes and learning. In 2010 I moved myself and all my possessions to New Mexico; by then I had acquired even more hardware and PCs. This is when I learned about virtualization.

Learning about the existence of virtualization was a game changer. After reading as much as I could about the concept, I needed some hands-on experience. Around the same time I decided that I wanted one large, highly available storage pool, better known as RAID or storage redundancy. I bought a SuperMicro server motherboard, the associated components, and a Norco server case with 20x hot-swappable 3.5″ bays. This was version 1 of my modern homelab. For the next six months I toyed around with different configurations and OSes, finally settling on ESXi as the hypervisor and Solaris for providing data storage via NAS and SAN. As I write this page, it is 2015 and I am still using ESXi and Solaris. Both are stable, enterprise products, and even testing other OSes leaves me with the feeling that I am using the best on the market. Below is a list of the hardware that I use to further my knowledge and craft.



  • Servers:
    • 2x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs ESXi 5.5
      • 32GB RAM
      • 2x Intel L5630 CPU – 4 cores at 2.26GHz, 40W TDP
      • 4 ports of 1Gb Ethernet
      • 1 port 4Gb Fibre Channel PCIe card
      • No hard drives or SSDs; all storage is presented via Fibre Channel
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs Windows Server 2012 Hyper-V Core
      • 32GB RAM
      • 2x Intel L5630 CPU – 4 cores at 2.26GHz, 40W TDP
      • 4 ports of 1Gb Ethernet
      • 1 port 4Gb Fibre Channel PCIe card
      • No hard drives or SSDs; all storage is presented via Fibre Channel
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs ESXi 5.5
      • 24GB RAM
      • 1x Intel L5630 CPU – 4 cores at 2.26GHz, 40W TDP
      • 4 ports of 1Gb Ethernet
      • 1 port 4Gb Fibre Channel PCIe card
      • 2x 2.5″ 500GB WD Black drives in RAID1, for times when I need to restart the NAS and take the Fibre Channel storage offline
    • 1x Dell PowerEdge R510, a 2U dual-socket Nehalem server
      • Runs Solaris 11.2
      • 128GB RAM
      • 2x Intel E5640 CPU – 4 cores at 2.66GHz, 80W TDP
      • LSI 9201-16e – 4 external SFF-8088 ports for 16 lanes of SAS/SATA
      • Dell H200 – flashed with LSI IT firmware
      • 4-port 4Gb Fibre Channel PCIe card – presents block storage to the R610s above
      • 1x Norco DS-12D DAS, which connects to the R510 with SFF-8088 to SFF-8088 SAS cables
      • Storage (ZFS):
        • 12x drives in RAIDZ2 – ~45TB of block space – primary data storage
        • 4x 240GB Intel 730 SSD
          • 2 vdevs – two striped mirrors, better known as RAID10 outside of ZFS
          • ~400GB zvol presented over Fibre Channel, formatted with VMFS5
          • Compression turned on (1.37x ratio as of this writing)
          • Dedupe turned on (1.70x ratio as of this writing)
        • 2x 2TB WD enterprise drives in a mirror
          • Backup for the SSD array
          • Primary storage for Hyper-V
  • Power
    • 1x Synaccess NP-16, a 2U networked PDU
      • Controls Power for PoE injector, Modem, Servers, KVM, and Switch.
    • 1x Synaccess NP-8, a 1U networked PDU
      • Controls Power for Cisco Lab
    • 2x CyberPower BRG1500AVRLCD UPS 1500VA/900W 12 Outlets AVR
    • P3 International P4460 Kill A Watt EZ Electricity Usage Monitor
  • Networking
    • Home Infrastructure
      • Cisco Catalyst 2960G – 48-port 1GbE
      • Cisco Catalyst 2950 – 24-port 100Mb
      • Cisco Aironet 3702i – 802.11ac Wave 1 WAP in autonomous mode
    • Cisco Lab
      • Switches
        • 1x Cisco Catalyst 3550 24-port PoE 100Mb
        • 2x Cisco Catalyst 3550 48-port 100Mb
      • Routers
        • 1x Cisco 3640
        • 1x Cisco 1821
        • 2x Cisco 2610XM
  • Trendnet TK-803R KVM switch
  • 2U pull-out drawer with a 15″ monitor and a keyboard with integrated mouse
  • Dell PowerEdge 2420 24U rack
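For the curious, the ZFS layout above can be sketched with roughly the following Solaris 11.2 commands. This is a sketch, not my exact build: the pool and device names are made up (a real build would use the c0t…d0 names that `format` reports), and the COMSTAR view setup is simplified.

```shell
# 12-disk RAIDZ2 vdev for bulk storage (hypothetical device names)
zpool create tank raidz2 disk0 disk1 disk2 disk3 disk4 disk5 \
                         disk6 disk7 disk8 disk9 disk10 disk11

# Four SSDs as two striped mirrors (RAID10-style)
zpool create fast mirror ssd0 ssd1 mirror ssd2 ssd3

# ~400GB zvol with compression and dedupe for the ESXi datastore
zfs create -V 400g -o compression=on -o dedup=on fast/esxi

# Present the zvol as a block LUN over Fibre Channel via COMSTAR;
# create-lu prints the LU name (a GUID) used by add-view
stmfadm create-lu /dev/zvol/rdsk/fast/esxi
stmfadm add-view <LU-name-from-previous-command>
```

The R610s then see the LUN through their 4Gb FC HBAs and format it as VMFS5.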


Home Workstation:

The tower was built around three years ago; the only change I have made since is switching to an Intel 530 SSD. If you do decide to buy an SSD, I highly recommend buying only quality drives. Intel and Samsung make great products and back their quality with long warranties. The tower houses an Intel i5-2500K overclocked to 4.0GHz on a Cooler Master Hyper 212 EVO. Overclocking really is not my thing, but the motherboard made it remarkably easy, it has been rock-solid stable, and I have gained a 21% increase in clock speed, which is noticeable. The PC drives two monitors: one is used for VMs and background movies, and the other is a Samsung U28D590D 4K monitor. I like the Samsung, but it does have its drawbacks; among other things, TN panels are less than optimal for color, even for my color-deficient eyes.


Regrettably, I use Windows at home for many reasons, but the moment FreeBSD has support for VirtualBox’s Extension Pack, I will be switching.

Work Workstation:

At work I have the option of using the OS of my choice. We are a heavily Windows AD corporate environment. This makes managing our equipment really easy, but it still requires us to administrate from a Windows machine on some level. I have PC-BSD installed, with a Windows 7 virtual machine running under VirtualBox. I do not need USB passthrough to the Windows VM, which frees me to use my first choice, PC-BSD. On Mondays I run ~80+ reports and use Unix tools to parse the raw data; PC-BSD has been a great base to do this on. In addition, I like the built-in ZFS on root and how the project is managed – big thanks to Kris and his efforts. I pretty much cannot recommend PC-BSD enough; I am enthralled by it. /rant.

Kyle, yes that is the Rat Padz you gave me 10+ years ago.


I briefly touched on the operating systems used on the hardware above. As you might gather, the homelab is built primarily for virtualization, with a focus on cost-effective high performance. Everything is virtualized, even my primary gateway router running pfSense, through the magic of VLANs. Fortunately, VMUG has the Advantage program, which EVAL Experience is a part of. This allows me to license and use really cool features like vMotion on my home lab for a relatively low cost.
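The VLAN plumbing for the virtualized router amounts to tagging port groups on the ESXi standard vSwitch so pfSense sees WAN and LAN as separate networks over one physical uplink. A minimal sketch (the port group names and VLAN IDs here are made up, not my actual config):

```shell
# Tag port groups on an ESXi 5.5 standard vSwitch with VLAN IDs;
# pfSense gets one vNIC in each port group
esxcli network vswitch standard portgroup set -p "WAN" --vlan-id 10
esxcli network vswitch standard portgroup set -p "LAN" --vlan-id 20

# Verify the port group to VLAN mapping
esxcli network vswitch standard portgroup list
```

The physical switch port feeding that uplink then carries both VLANs as an 802.1Q trunk.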

As mentioned above, I utilize ZFS dedupe on the Solaris 11.2 host for the ESXi zvol, and I have been really satisfied with its performance. That said, there is absolutely a noticeable performance degradation compared to a non-deduped pool, but the savings in SSD disk space and in the ARC make up for it. In most cases I would not recommend dedupe; in my particular case, however, multiple VMs share the same base OS installation, so my savings have been pretty high. I pulled these numbers while typing this up: a space-savings ratio of 1.37x from compression and 1.70x from dedupe. This translates to 119GB of allocated space (data actually written to disk) backing 204GB of referenced data (what the VMs perceive as written to disk).
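As a quick sanity check on those figures, the dedupe ratio is simply referenced space divided by allocated space (the small difference from the reported 1.70x comes from the GB numbers being rounded):

```shell
# Pool figures at the time of writing (GB)
allocated=119     # data physically written to disk
referenced=204    # data the VMs perceive as written

# Dedupe ratio = referenced / allocated
awk -v a="$allocated" -v r="$referenced" \
    'BEGIN { printf "dedupe ratio: ~%.2fx\n", r / a }'
# prints "dedupe ratio: ~1.71x"
```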

As far as what is being virtualized: I have several installations of FreeBSD-based OSes, including pfSense, OPNsense, PC-BSD, and TrueOS. These provide everything from virtualized desktops to a gateway router, an offline CA, a VPN access server, and nginx for testing and ad blocking. There are also many Windows NT 6.1 and 6.3 based VMs running as tools for breaking and learning, or what I like to call career-based learning. Lastly, there are several Linux distributions that I run for testing, breaking, and learning; some of my own accord and some for lab learning for work.

I have a passion for computing and for teaching others; if you have any questions, or anything you’d like to mention to me, I really encourage and appreciate it.
