DIY Home NAS Adventures


I’ve been wanting to build my own NAS for some time now. My home network has become littered with various hard drive enclosures in various RAID setups attached to various computing boxen by various connection protocols, and I only foresee it becoming more so unless I do something about it. And the prebuilt boxen offered by Synology, QNAP, Drobo, etc. don’t offer the flexibility and defenses against bit-rot that I require.

So I decided to build my own NAS this weekend.

Emboldened by recent research, including ideas gleaned from the excellent articles written by Brian Moses on his blog, I made the financial and experiential plunge, ordering a large care package from newegg.com with a smaller supporting role from amazon.com. Because I wasn’t finding a whole lot of what I’ll call prosumer information for aspiring DIY NAS builders, here are my thoughts and notes for anyone else considering such an endeavor.

The Situation

My current storage situation goes something like this:

  • a digital music library (rips, purchases, and “lent”): 4x1TB drives, RAID10, using btrfs over USB2, for mpd, Sonos, and DLNA
  • a digital photography library (film scans and digital captures): 4x1TB drives, RAID10, using ZFS over USB3
  • Time Machine backups for my Macs: 2x4TB drives, RAID1, using ZFS over USB3
  • an archive/dumping ground for older files: 2x1TB drives, RAID1, using ZFS over USB3
  • miscellaneous drives of miscellaneous things: miscellaneous sizes, mostly using ZFS over USB

The RAID setups are in 4-bay and 2-bay hard drive enclosures I’ve picked up and ordered over the years. The ZFS RAIDs are attached to a single Mac Mini I was using as the central server, and the btrfs RAID is attached to an older Zotac Z-Box running Ubuntu Linux.
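
For reference, the ZFS “RAID10” above (the photo array) is just a pool of striped mirrors. A minimal sketch of creating one from four 1TB drives looks something like this (the pool name and device paths are placeholders, not my actual disks):

    # Create a striped-mirror ("RAID10") pool from four drives.
    # The pool name and /dev/disk/by-id paths are hypothetical examples.
    zpool create photos \
        mirror /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B \
        mirror /dev/disk/by-id/ata-DRIVE_C /dev/disk/by-id/ata-DRIVE_D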

For ZFS on the Mac Mini, I use OpenZFS on OS X, which is a great offering, and I’m indebted to the volunteer developers who work on it. However, the USB enclosures holding the ZFS RAID drives don’t always play well with OpenZFS or the Mac Mini… the enclosures have a tendency to enter sleep mode at odd times and don’t seem to consistently honor the Mini’s USB signals telling them that they should stay awake for file operations. That causes all sorts of ZFS IO issues, including pool corruption. Plus, OpenZFS is still under heavy development, so it’s not the speediest of file systems, especially to a USB3 enclosure with 4 drives. And the Mac Mini just acted flaky with OpenZFS.
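
When one of those enclosures drops out mid-operation, getting a pool healthy again usually goes something like the following sketch (“photos” is a stand-in pool name):

    zpool status -v photos   # show device errors and any files with bad checksums
    zpool clear photos       # reset the error counters once the enclosure is back
    zpool scrub photos       # re-verify every block against its checksums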

My NAS Wishlist and Needlist

So, what can I do to make this all better? What would be a better setup?

My base-level requirements are a NAS that:

  • supports archive-quality file systems like btrfs and ZFS–no bit-rot of valuable files, please.
  • supports various configurations of software RAID–I’m not a fan of hardware RAID controllers and their potentially proprietary setups.
  • offers plenty of room for consolidating existing hard drives into one box from many.
  • has at least gigabit ethernet built into it.
  • isn’t slow, CPU- and IO-wise.
  • is manageable and not dumbed down–I’m comfortable doing sysadmin tasks on a command line.
  • offers decent configurability and tweakability–‘cuz, ya know, I’m a tweakin’ geek 🙂
  • isn’t insanely expensive.
  • has replaceable parts if something dies.

Add to those some really-nice-to-haves, like:

  • plenty of speedy internal data connections, preferably using 12Gb/s SAS with full SATA III compatibility.
  • opportunities for external expansion if necessary, probably to some kind of backplane (SAS preferred 🙂)
  • storage for more than 8×3.5″ drives internally–because I don’t foresee my drive usage and RAID configurations decreasing in the future; plus, wiggle room is always nice.
  • ease of access to the drives–not necessarily meaning I need to hot-swap them, but if that’s available, then cool.
  • a small enough case footprint that it can sit on my desk if need be, or get tucked away easily in my small home office.
  • current snappy CPU, motherboard, memory, and data paths–I’d like them to last me at least a few years without bogging down.
  • being as quiet and unobtrusive as possible.

The Setup

Armed with those ideas, needs, and wants, and spending way too much time thinking about it all, here’s what I came up with and ordered:

  • Fractal Design Node 804 micro-ATX case
  • ASRock Z170M Extreme4 motherboard
  • Corsair RM550x 550W power supply
  • Intel Core i5-6600 CPU with fan and heatsink
  • Corsair Vengeance 16GB DDR4 2133MHz memory
  • LSI SAS 9300-8i PCI Express 3.0 12Gb/s host bus adapter
  • 2x SFF-8643 Mini-SAS HD to 4x SATA forward breakout cables
  • 2×4-port SATA power splitter cables
  • extra SATA III data cables (just to cover my butt in case they’re needed)

All in all, after a few promotional discounts, the damage came out to around US$900–more than I was initially wanting to spend (I was hoping for US$500 or so), but if it at least met my high expectations for it, then I’d be happy.

For a boot drive, I use a Sandisk Ultra Fit USB3 64GB flash drive. And for the OS I decided to use Arch Linux, as it’s become my Linux distro of choice.

The Experience

All of the pieces-parts arrived on the same day yesterday, so I took the opportunity on a quiet Friday workday to begin assembling everything and moving most of the existing drives over to the new rig.

I’m a bit rusty at putting a computer together from scratch, but most things came together easily with only a few hiccups. The minor issues I ran into mostly had to do with some out-of-date documentation and some initially sloppy cable management on my part, but I took my time and got everything installed well enough to test-boot the system and start the Linux setup.

The system booted up fine the first time without any Magic Smoke leaving 🙂 But I ran into the first and biggest hurdle: booting up took minutes as the LSI SAS card initialized itself and loaded its option ROM into the system.

At first I thought the card was malfunctioning, but it turned out that the card initialization was just slooooooooooow. And it was sloooooooow at each reboot.

I poked around in the ASRock motherboard settings, but no setting stood out as the obvious culprit. I poked around in the LSI card settings, but the options there were limited and nothing seemed to help either. Still, I was happy that the card seemed to work, and after the (numerous) reboot delays, I got Arch Linux installed and it happily saw the drives. Success! I spent most of a long, late night getting packages and modules installed in Arch, doing initial testing, and making sure things ran correctly.

This morning I decided to do some more digging into what might be causing the slow booting of the box–the boot lag was going to be intolerable once I put the box into day-to-day use. Something I saw after initialization of the SAS card stuck in my mind: it reported that the MPT3 ROM was installed to the system successfully.

When I had initially looked at the motherboard firmware settings, I made sure that UEFI boot-up was enabled, that the system looked for UEFI boot media first, and that as few legacy BIOS features as possible were getting in the way. However, I had left the UEFI Compatibility Support Module (CSM) enabled for legacy BIOS booting, which also kept some legacy capabilities active.

During my morning googling into the SAS card’s slow booting, I learned about legacy BIOS option ROMs, which enable a BIOS to use add-on cards and other peripherals. I remembered the CSM settings and wondered if something there might be causing the delays, since the card initialization looked a lot like a legacy BIOS initialization to me. Because I’m only using UEFI for my Arch Linux setup, I disabled CSM and made sure that the system would boot only UEFI-enabled devices. I also made sure that my Arch flash drive was the first boot device, and that the system wasn’t trying to boot any of the SAS-connected drives. Saving this configuration and rebooting the system resulted in *much* faster booting/rebooting. Success! (well, mostly…)
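
For anyone chasing the same issue, it’s easy to confirm from the running system that Linux actually booted via UEFI rather than through the CSM; these are stock commands, nothing specific to my build:

    # This directory only exists if the kernel was booted via UEFI.
    ls /sys/firmware/efi

    # Inspect (and, if needed, reorder) the firmware's UEFI boot entries.
    efibootmgr -v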

It was only a partial success because Arch now failed to boot fully: it couldn’t mount the music RAID10 array attached to the SAS card, which I had listed in my /etc/fstab. Some quick thinking led me to make sure that the SAS kernel module (mpt3sas) and the btrfs file system kernel module were both loaded at boot time. Adding those modules to my Arch mkinitcpio.conf settings and regenerating the boot images allowed the system to boot and mount the drives successfully. w00t!
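
Roughly, the fix amounted to the following. The module names are the ones I mentioned above, but the fstab line is a generic example with a placeholder UUID and mount point rather than my real ones:

    # /etc/mkinitcpio.conf: pull the HBA and btrfs modules into the early boot image
    MODULES="mpt3sas btrfs"

    # regenerate the initramfs for the stock Arch kernel preset
    mkinitcpio -p linux

    # /etc/fstab: example entry for a btrfs array
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/music  btrfs  defaults  0 0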

The Current Status

So as of now, I have a fully functioning DIY NAS that not only meets, but actually *exceeds* my expectations. Copying files between the ZFS arrays is blazing fast compared to what it was before: watching the iostats while copying gigs of files between arrays showed sustained write throughput of 100+MB/s, at times pushing 200MB/s. Scrubs of the arrays to make sure they were healthy took less than 3 hours, where before they would have taken around 12 hours. I’m happy with that 🙂
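
For the curious, those numbers came from just watching the pools during the big copies with a couple of stock ZFS commands along these lines (“photos” again standing in for a real pool name):

    zpool iostat -v photos 5   # per-vdev read/write bandwidth, refreshed every 5 seconds
    zpool scrub photos         # kick off a scrub; progress shows up in zpool status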

I still have some software configuration and cable cleanup to do on the box, but the NAS is already pretty much fully functional. As my adventures with it continue, I’ll post more of my experiences. But for now, it’s looking very good!