Homelab

(Last update: March 26, 2016)

Introduction

This is my home lab. It is something I have been developing, you could argue, since I was 15 years old. My first PC was built from parts I had requested for Christmas; as far as I knew, no one else in my family had ever built their own PC. My parents very much doubted whether I had selected the correct components, or whether they were even compatible. I was as sure as I could be from the hours I had spent reading on the subject. Christmas came, and by the end of the day I had Windows running on the newly built PC. I was pumped. I became known as the “computer guy.” People would frequently donate hardware to me, and I kept everything in organized bins sorted by component. By 2008 I had something like seven computers. Of those, one was my primary desktop and the rest were used for testing out OSes and learning. In 2010 I moved myself and all my possessions to New Mexico, having acquired still more hardware and PCs by then. This is when I learned about virtualization.

Learning of the existence of virtualization was a game changer. After reading as much as I could about the concept, I needed some hands-on experience. Around the same time, I decided that I wanted one large, highly available storage pool – better known as RAID, or storage redundancy. I bought a SuperMicro server motherboard, the associated components, and a Norco server case with 20x hot-swappable 3.5″ bays. This was version 1 of my modern homelab. For the next six months I toyed around with different configurations and OSes, finally settling on ESXi as the hypervisor and Solaris for providing data storage via NAS and SAN. Up through 2015 I continued to use Solaris and ESXi, but then decided to switch to FreeBSD as my primary platform after having run it in VMs for years for various services.

What is a home lab?

A homelab is a tool for learning. It is common in the system administration subset of IT, but the idea exists across industries – in the automotive world, for example, it is called “the shop [at home].” Not everyone wants or needs a homelab; it is most common among those who are passionate about the technology, love to learn, and are typically ambitious in their careers.

Hardware:

Below is a list of hardware that I use to further my knowledge and craft.

Homelab:

  • Servers:
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs FreeBSD-CURRENT – testing / bhyve
      • 32GB RAM
      • 2x Intel L5630 CPUs – 4 cores at 2.13GHz, 40W TDP
      • 4x 1Gb Ethernet ports
      • 1-port 4Gb Fibre Channel PCIe card
      • No hard drives or SSDs; all storage is presented via Fibre Channel
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs ESXi 6.0 – general labbing
      • 32GB RAM
      • 2x Intel L5630 CPUs – 4 cores at 2.13GHz, 40W TDP
      • 4x 1Gb Ethernet ports
      • 1-port 4Gb Fibre Channel PCIe card
      • No hard drives or SSDs; all storage is presented via Fibre Channel
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs Windows Server 2012 Hyper-V Core – Windows labbing
      • 32GB RAM
      • 2x Intel L5630 CPUs – 4 cores at 2.13GHz, 40W TDP
      • 4x 1Gb Ethernet ports
      • 1-port 4Gb Fibre Channel PCIe card – VM storage
      • 2x 2.5″ 750GB WD Black drives in a mirror provided by the PERC card
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server
      • Runs FreeBSD 10.3 – poudriere build host
      • 48GB RAM
      • 2x Intel X5650 CPUs – 6 cores at 2.66GHz, 95W TDP
      • 4x 1Gb Ethernet ports
      • 1-port 4Gb Fibre Channel PCIe card
      • No hard drives or SSDs; all storage is presented via Fibre Channel
    • 1x Dell PowerEdge R610, a 1U dual-socket Nehalem server (hostname: cailin)
      • Runs FreeBSD 10.3 – core machine – NAS, SAN, virtual machine host
      • 192GB RAM
      • 2x Intel L5630 CPUs – 4 cores at 2.13GHz, 40W TDP
      • 4x 1Gb Ethernet ports
      • 4-port 4Gb Fibre Channel PCIe card – presents block storage to the R610s above
      • LSI SAS9200-8E – connects the SuperMicro DAS below
      • 4x 2.5″ 750GB WD Black drives in a striped mirror (aka RAID10 in non-ZFS terms)
    • 1x SuperMicro SC847E16-RJBOD1 (45-bay JBOD DAS)
      • Connects to the FreeBSD machine above via SAS cables
      • Custom Arduino-based PWM fan controller built by yours truly
      • 45x 3.5″ bays – 24 bays in the front, 21 in the rear
      • 16x assorted-brand 5TB drives as a stripe of 8 mirrors (effectively RAID10)
        • Great IOPS for spinning disks
        • ~36.4TB of usable space (see the capacity sketch after this list)
      • 2x spare 5TB drives kept on hand for the high failure rate of consumer drives abused in this environment
    • 1x Dell PowerEdge R510, a 2U dual-socket Nehalem server (hostname: essos)
      • Runs FreeBSD 10.3 – offsite storage using ZFS replication
      • 24GB RAM
      • 2x Intel E5640 CPUs – 6 cores at 2.66GHz, 95W TDP
      • LSI SAS9200-8E – connects the SuperMicro DAS for large local transfers
      • Dell H200 – flashed with LSI IT firmware
      • 12x assorted-brand 5TB drives in a single RAIDZ3 vdev
        • Slow archival location
        • ~40.9TB of usable space (see the capacity sketch after this list)
  • Power
    • 3x Ubiquiti mPower Pro networked PDUs
    • 2x CyberPower BRG1500AVRLCD UPS 1500VA/900W 12 Outlets AVR
    • P3 International P4460 Kill A Watt EZ Electricity Usage Monitor
  • Networking
    • Home Infrastructure
      • Cisco Catalyst 2960G – 48-port 1GbE
      • Cisco Aironet 3702i – 802.11ac Wave 1 WAP in Autonomous mode
    • Cisco Lab
      • Switches
        • 1x Cisco Catalyst 3650 – 24-port PoE 100Mb
        • 2x Cisco Catalyst 3550 – 48-port 100Mb
        • 1x Cisco Catalyst 2950 – 24-port 100Mb
      • Routers
        • 1x Cisco 3640
        • 2x Cisco 1841
        • 2x Cisco 2610XM
  • Dell PowerEdge 2420 24U rack
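
For the curious, the usable-space figures above are straightforward arithmetic: the pool geometry (half the drives lost to mirroring in a stripe of mirrors, three drives' worth of parity in RAIDZ3) plus the conversion from the decimal terabytes drives are sold in to the binary units ZFS reports. A quick sketch – pool layouts are taken from the list above, and overhead such as ZFS metadata is ignored:

    #!/usr/bin/env python3
    """Back-of-the-envelope usable space for the two pools above.

    Drives are sold in decimal terabytes (10**12 bytes) while ZFS reports
    binary units (2**40 bytes per TiB), which is where the ~36.4 and ~40.9
    figures come from.
    """
    TB = 10**12   # decimal terabyte, as drives are marketed
    TiB = 2**40   # binary tebibyte, as zpool/zfs report

    drive = 5 * TB

    # Main pool: 16 drives as a stripe of 8 two-way mirrors -> half the raw space.
    stripe_of_mirrors = 8 * drive
    print(f"stripe of mirrors: {stripe_of_mirrors / TiB:.1f} TiB")  # ~36.4

    # Offsite pool: 12 drives in one RAIDZ3 vdev -> 3 drives' worth of parity lost.
    raidz3 = (12 - 3) * drive
    print(f"raidz3:            {raidz3 / TiB:.1f} TiB")             # ~40.9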

Old photos (circa mid-2015, pre-SuperMicro DAS):

[Photo gallery: IMG_5012, IMG_5016, IMG_5017, IMG_5018, IMG_5020, IMG_5021, IMG_5022, IMG_5023]

Home Workstation:

The tower was built around three years ago; the only change I have made since is switching to a Samsung 840 EVO SSD. If you do decide to buy an SSD, I highly recommend buying only quality ones. Intel and Samsung make great products and back their quality with long warranties. The tower houses an Intel i5-2500K overclocked to 4.0GHz on a Cooler Master Hyper 212 EVO. Overclocking really is not my thing, but the motherboard made it remarkably easy, it’s been rock-solid stable, and I’ve gained a 21% increase in clock speed over the stock 3.3GHz, which is noticeable. The PC drives a Samsung U28D590D 4K monitor. I like the Samsung, but it does have its drawbacks; among other things, TN panels are less than optimal for color reproduction, even to my color-deficient eyes.

[Photo: IMG_5001]

Work Workstation:

At work I have the option of using the OS of my choice. We are a heavily Windows/AD corporate environment, driven largely by our vendors and end users, which requires us to administer from a Windows machine on some level. I have PC-BSD installed with a few Windows guests running in VirtualBox. I run weekly reports from 80+ data sources and use Unix tools to parse the raw data. In addition, per-VM ZFS datasets make it simple to test updates on the Windows guests and roll them back; a sketch of that workflow follows.
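
Here is a minimal sketch of that rollback workflow. The pool and dataset names are hypothetical – it assumes one ZFS dataset per VM, e.g. tank/vms/<name> – and zfs(8) does all the real work; Python just wraps the command line:

    #!/usr/bin/env python3
    """Snapshot a per-VM ZFS dataset before an update; roll back if it breaks.

    Assumes one dataset per VM, e.g. tank/vms/<name> – illustrative names,
    not this machine's actual layout.
    """
    import subprocess

    def zfs(*args):
        # Run a zfs(8) subcommand, raising on failure.
        subprocess.run(["zfs", *args], check=True)

    def snapshot(dataset, tag):
        # e.g. zfs snapshot tank/vms/win-test@pre-update
        zfs("snapshot", f"{dataset}@{tag}")

    def rollback(dataset, tag):
        # -r destroys snapshots newer than the target before rolling back
        zfs("rollback", "-r", f"{dataset}@{tag}")

    if __name__ == "__main__":
        ds = "tank/vms/win-test"    # hypothetical per-VM dataset
        snapshot(ds, "pre-update")  # checkpoint, then apply updates in the VM...
        # ...and if the update misbehaves:
        rollback(ds, "pre-update")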

[Photo: 20150306_195514] Kyle, yes, that is the Rat Padz you gave me 10+ years ago.

Software:

I briefly touched on the operating systems used on the hardware above. My homelab is built primarily for virtualization with a focus on cost-effective high performance. The entire core infrastructure is run on FreeBSD using iohyve and iocage. This includes an internal webserver for my lab notes, a private blog/journal, a DNS resolver, a separate ad blocker (a 1px and 301 responder – sketched below), the primary pfSense gateway, inbound VPN, outbound VPN, a Debian-based mFi appliance, a poudriere build host for my Raspberry Pis (which flip-flops between p2v and v2p thanks to bhyve and Fibre Channel), and a RADIUS server – just to name some of the guests run from one machine, either as type-2 virtualized guests (bhyve) or via process separation (jails).
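
The ad blocker is the one piece simple enough to sketch here. The idea: point DNS for ad domains at a local responder that answers image requests with a transparent 1×1 GIF and everything else with a 301. Below is a minimal Python illustration of the concept, not the actual responder running in the lab:

    #!/usr/bin/env python3
    """Minimal 1px/301 ad sink – an illustration of the idea only."""
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Smallest valid transparent 1x1 GIF (43 bytes).
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
             b"!\xf9\x04\x01\x00\x00\x00\x00"
             b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

    class AdSink(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.lower().endswith((".gif", ".png", ".jpg", ".jpeg")):
                # Image requests get a transparent pixel so pages render cleanly.
                self.send_response(200)
                self.send_header("Content-Type", "image/gif")
                self.send_header("Content-Length", str(len(PIXEL)))
                self.end_headers()
                self.wfile.write(PIXEL)
            else:
                # Everything else gets a permanent redirect to a dead end.
                self.send_response(301)
                self.send_header("Location", "http://0.0.0.0/")
                self.end_headers()

        def log_message(self, *args):
            pass  # keep the sink quiet

    if __name__ == "__main__":
        # A real responder would sit on port 80; 8080 runs unprivileged.
        HTTPServer(("", 8080), AdSink).serve_forever()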

For my ESXi lab, VMUG fortunately offers the Advantage program, which EVAL Experience is a part of. This allows me to license and use really cool features like vMotion at a relatively low cost in my home lab.

As for the Hyper-V lab, I use it so infrequently that I usually rebuild it from scratch each time, often opting to break the machines at work instead. My employer really is the only reason I use Windows.

You might also find it odd that I run such a slow processor in the core machine (cailin); this is almost entirely due to the fan speed profiles it allows. With the L5630s the fans spin quite slowly – in fact, it passes the opposite-sex test. All the other processors I have tested produce an audible increase in fan speed, which is not worth it considering the CPU load on this machine is quite low despite everything running on it. When I do want some quick compute power, especially for the poudriere build host, I boot the one machine that has the X5650 processors. It’s loud, but it knocks out the package building quickly.
