Goodbye, ZFS; Hello, btrfs…


(subtitled: …At least for now.)

I’m a huge enthusiast when it comes to ZFS. But recent issues and concerns with it have caused me to abandon it (at least for now) in favor of using btrfs for some of my critical data.

For the last several months, I’ve used the GreenBytes/ZEVO community version of ZFS on my Macs. I serve up a music library in RAID10 from a Mac Mini attached to my living room stereo, keep my photos in RAID10 attached to an iMac in my home office, and use ZFS for the Time Machine drives on each computer. I want it to work, as I have a lot of data managed by it.
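For reference, a ZFS “RAID10” is just a pool of striped mirrors. A minimal sketch of that kind of setup, with hypothetical pool and disk names rather than my actual ones:

```sh
# Create a ZFS "RAID10" pool: two mirrored pairs, striped together.
# Disk names are placeholders; on a Mac they'd be /dev/diskN devices.
zpool create music \
  mirror /dev/disk2 /dev/disk3 \
  mirror /dev/disk4 /dev/disk5

# Carve out a filesystem for the library and check pool health.
zfs create music/library
zpool status music
```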

The trouble is that ZEVO ZFS has problems serving the subvolumes of a root ZFS volume over AFP, and there’s no freely available helper plugin to work around it. Apple’s SMB implementation on Lion and Mountain Lion is sketchy, even between Macs. I tried using Samba instead of Apple’s SMB, but with the latest Samba versions (the 3.6 and 4.0 series) I’d lose the shares *and* the ZFS volumes. Mounting the ZEVO volumes on FreeBSD, or on Linux using ZFSonLinux, wasn’t always successful either: sometimes the on-disk format of the ZEVO volumes differed from what FreeBSD and Linux expected, and those on-disk formats even varied from one volume setup to another on the Macs themselves. Tie all this together with ZFS support on the Mac and on Linux being beta-quality ports, and I felt like I was fighting a losing battle out of loyalty.
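The version skew at least has some visibility from the command line. Assuming a pool named music (hypothetical), something like this shows what the importing system supports versus what’s on disk:

```sh
# See which pools are visible to import, without importing them.
zpool import

# After importing, compare the on-disk pool and filesystem versions
# against what this platform's implementation supports.
zpool get version music
zfs get version music
zpool upgrade -v   # lists the pool versions this build understands
```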

Enter btrfs. It has many of the same benefits as ZFS, is newer, and is native to Linux. It isn’t as easy to use as ZFS (I’m a huge fan of ZFS’s command-line interface), and some of its concepts conflict with ZFS’s, but it’s easy enough to learn and set up, and more and more documentation exists in the wild to give examples and help with issues. It’s still maturing, but it seems to be getting steadily more bulletproof. It allows the RAID10 setups I prefer, and it’s almost as fast as Linux’s native ext4 filesystem.
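To give a flavor of the conceptual difference, creating a RAID10 filesystem in btrfs is one mkfs call rather than a pool-plus-datasets setup. A sketch, with placeholder device names and mount point:

```sh
# Make one btrfs filesystem across four whole disks,
# with both data (-d) and metadata (-m) laid out as RAID10.
mkfs.btrfs -L music -d raid10 -m raid10 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Mounting any one member device mounts the whole filesystem.
mount /dev/sdb /srv/music
```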

So I thought I’d give btrfs a go. So far I’ve backed up my music RAID10 to a single drive for swapping between setups, torn down the old ZFS music RAID10, created a new btrfs RAID10, and copied the music back over to it. My Linux host is a Zotac ZBOX with a dual-core Celeron processor running Ubuntu 13.04, and so far everything is serving, connecting, and working well. It’s almost frightening how little work I need to do with the btrfs RAID10 now that it’s up… I pessimistically watch the lights on my RAID enclosure, waiting for them not to blink with activity when reading from or writing to the drives. But I’m continually thwarted (in a good way :). The drives are found at boot, and should now mount at boot as well.
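For anyone curious, mounting at boot boils down to an fstab entry keyed on the filesystem’s UUID. Roughly, with placeholder values:

```sh
# Find the filesystem UUID (shared by all member devices).
sudo blkid /dev/sdb

# /etc/fstab entry -- any one member device could be listed instead,
# but the UUID survives device renaming across boots:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/music  btrfs  defaults  0  0
```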

My only concern is that no partitions show up on the btrfs drives in gdisk. No GPT info, no EFI partition, no dummy partitions… nothing. This is presumably because btrfs was handed the whole raw devices, so it writes its superblocks directly to them with no partition table at all (much as mdadm does with whole-disk members). But from a user’s perspective, it would be great if gdisk showed at least *one* partition marked as btrfs or btrfs RAID. Otherwise it all just seems like magic… the data is being stored and read, but from where? How? How do I know btrfs is properly set up on a drive when using this tool? How can I confirm what the btrfs commands return?
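In the meantime, the way to confirm things seems to be to ask btrfs and blkid directly, since they read the superblocks that gdisk ignores. Device names and mount point are placeholders:

```sh
# Show every btrfs filesystem and which devices belong to it.
sudo btrfs filesystem show

# Confirm the on-disk btrfs signature that gdisk doesn't report.
sudo blkid /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Per-profile usage: confirms data and metadata really are RAID10.
sudo btrfs filesystem df /srv/music
```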

So that’s that for now. I hope it continues to work as well as it has so far. If anything goes wrong, I’ll probably freak out, but with some reading and digging there should be ways to recover the data. I wish ZFS were more mature on OS X and Linux, and with the OpenZFS initiative it looks like it soon will be. But for now, I guess I’m a btrfs fan 🙂