DIY Home NAS Adventures

Published 2016-12-11, by John

I’ve been wanting to build my own NAS for some time now. My home network has become littered with various hard drive enclosures in various RAID setups attached to various computing boxen by various connection protocols, and I only foresee it becoming more so unless I do something about it. And the prebuilt boxen offered by Synology, QNAP, Drobo, etc. don’t offer the flexibility and defenses against bit-rot that I require.

So I decided to build my own NAS this weekend.

Emboldened by recent research, including ideas gleaned from the excellent articles written by Brian Moses on his blog, I made the financial and experiential plunge, ordering a large care package from one retailer, with a smaller supporting role from another. Because I wasn’t finding a whole lot of what I’ll call prosumer information for aspiring DIY NAS builders, here are my thoughts and notes for anyone else considering such an endeavor.

The Situation

My current storage situation goes something like this:

The RAID setups are in 4-bay and 2-bay hard drive enclosures I’ve picked up and ordered over the years. The ZFS RAIDs are attached to a single Mac Mini I was using as the central server, and the btrfs RAID is attached to an older Zotac Z-Box running Ubuntu Linux.

For ZFS on the Mac Mini, I use OpenZFS on OS X, which is a great project, and I’m indebted to the volunteer developers who work on it. However, the USB enclosures holding the ZFS RAID drives don’t always play well with OpenZFS or the Mac Mini… the enclosures have a tendency to enter sleep mode at odd times and don’t consistently honor the Mini’s USB signaling telling them they should stay awake for file operations. That causes all sorts of ZFS I/O issues, including pool corruption. Plus, OpenZFS on OS X is still under heavy development, so it’s not the speediest of file systems, especially to a USB3 enclosure with 4 drives. And the Mac Mini just acted flaky with OpenZFS.

My NAS Wishlist and Needlist

So, what can I do to make this all better? What would be a better setup?

My base-level requirements are a NAS that:

Add to those some really-nice-to-haves, like:

The Setup

Armed with those ideas, needs, and wants, and spending way too much time thinking about it all, here’s what I came up with and ordered:

All in all, after a few promotional discounts, the damage came out to around US$900, more than I initially wanted to spend (I was hoping for US$500 or so), but if it at least met my high expectations for it, then I’d be happy.

For a boot drive, I use a Sandisk Ultra Fit USB3 64GB flash drive. And for the OS I decided to use Arch Linux, as it’s become my Linux distro of choice.

The Experience

All of the pieces-parts arrived on the same day yesterday, so I took the opportunity on a quiet Friday workday to begin assembling everything and moving most of the existing drives over to the new rig.

I’m a bit rusty at putting a computer together from scratch, but most things came together easily with only a few hiccups. The minor issues I ran into mostly had to do with some out-of-date documentation and some initial sloppy cable management on my part, but I took my time and got everything installed to at least get it to test-boot and start the Linux setup.

The system booted up fine the first time without any Magic Smoke leaving 🙂 But I ran into the first and biggest hurdle: booting up took minutes as the LSI SAS card initialized itself and loaded its option ROM into the system.

At first I thought the card was malfunctioning, but it turned out that the card initialization was just slooooooooooow. And it was sloooooooow at each reboot.

I poked around in the ASRock motherboard settings, but no setting seemed to be the obvious culprit. I poked around in the LSI card settings, but the options there were limited and nothing seemed to help either. Still, I was happy that the card seemed to work, and after the (numerous) reboot delays, I got Arch Linux installed and it happily saw the drives. Success! I spent most of a long, late night getting packages and modules installed in Arch, doing initial testing, and making sure things ran correctly.

This morning I decided to do some more digging into what might be causing the slow booting of the box, since the boot lag was going to be intolerable once I put it into day-to-day use. Something I saw after initialization of the SAS card stuck in my mind: it reported that the MPT3 ROM was installed to the system successfully.

When I had initially looked at the motherboard firmware settings, I made sure that UEFI boot-up was enabled, that the system looked for UEFI boot media first, and that as few legacy BIOS capabilities as possible were getting in the way. However, I had left the UEFI Compatibility Support Module (CSM) for legacy BIOS booting enabled, which also enabled some legacy capabilities.

During my morning googling to research the SAS card’s slow booting, I learned about legacy BIOS option ROMs, which let a BIOS use add-on cards and other peripherals. I remembered the CSM settings and wondered if something there might be causing the delays; the card initialization looked a lot like a legacy BIOS initialization to me. Because I’m only using UEFI for my Arch Linux setup, I disabled CSM and made sure that the system would boot only UEFI-enabled devices. I also made sure that my Arch flash drive was the first boot device, and that the system wasn’t trying to boot any of the SAS-connected drives. Saving this configuration and rebooting resulted in *much* faster booting/rebooting. Success! (well, mostly…)

It was only a partial success: Arch now failed to fully boot because it couldn’t mount the btrfs RAID10 music array attached to the SAS card, which I had listed in my /etc/fstab. Some quick thinking led me to check that the SAS kernel module (mpt3sas) and the btrfs file system kernel module were both loaded at boot time. Adding those modules to my Arch mkinitcpio.conf settings and regenerating the boot images allowed the system to boot and mount the drives successfully. w00t!
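For anyone hitting the same mount failure, the fix amounted to one config edit and one command, roughly as sketched below. This is a sketch for a current Arch install; depending on your mkinitcpio version, the MODULES line may use quotes instead of parentheses, and your module list may differ if you use a different HBA or file system.

```shell
# In /etc/mkinitcpio.conf: bake the SAS HBA driver and the btrfs
# module into the initramfs so fstab-listed arrays on the card can
# be mounted early in boot.
MODULES=(mpt3sas btrfs)

# Then regenerate the boot images for all installed kernel presets.
mkinitcpio -P
```

An alternative, if you don’t want boot to hard-fail on a data array, is marking the fstab entry with the `nofail` mount option so a missing array doesn’t drop you into an emergency shell.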

The Current Status

So as of now, I have a fully functioning DIY NAS that not only meets but actually *exceeds* my expectations. Copying files between the ZFS arrays is blazing fast compared to what it was before: watching the iostats while copying gigs of files between arrays showed 100+ MB/s write throughput, at times pushing 200 MB/s. Scrubs of the arrays to make sure they’re healthy took less than 3 hours, where before they would have taken around 12 hours. I’m happy with that 🙂
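If you want to reproduce that kind of health and throughput check, the commands are quick; the pool name `tank` below is a placeholder for your own pool.

```shell
# Watch per-device throughput (MB/s) and utilization, refreshing
# every 5 seconds; iostat comes from the sysstat package on Arch.
iostat -mx 5

# Kick off a scrub of the pool, then check its progress and
# overall health once it finishes.
zpool scrub tank
zpool status tank
```

`zpool status` reports scrub duration and any checksum errors found, which is what makes the before/after scrub-time comparison easy.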

I still have some configuration and cleanup to do on the box (for software and cable cleanup), but the NAS is already pretty much fully functional. As my adventures continue with it, I’ll post more of my experiences. But for now, it’s looking very good!