Home NAS, Part 1: Reliable Storage with ZFS and Ubuntu 16.04
July 10, 2016

Project Goals

This project should provide a large amount of reliable storage that performs well, at a reasonable cost, using components that are power-efficient and quiet enough for a home office. My previous storage layout was a collection of data disks plus two parity disks managed by SnapRAID. That worked well for the most part, but it had enough disadvantages that I reconsidered it as a long-term solution.

Reasons why I upgraded to ZFS from SnapRAID:

- SnapRAID parity is only as current as the last sync, so files added or changed between runs are unprotected; ZFS protects data in real time.
- SnapRAID only detects silent corruption when a scrub is run manually, while ZFS checksums every block and can self-heal from parity on read or scrub.
- A ZFS pool presents one large filesystem with native compression and snapshots, rather than a collection of independent data disks.

Hardware

The build uses eight 4TB drives: six HGST HMS5C4040ALE640 and two Seagate ST4000DM000, visible in the zpool status output later in this post.

Operating System

Ubuntu 16.04 ships ZFS support in the stock kernel; installing the userspace tools is a single command:
    sudo apt install zfsutils-linux
  
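Before creating a pool, it's worth a quick sanity check that the kernel module actually loaded:

    sudo modprobe zfs
    lsmod | grep zfs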

Configuring ZFS

After installing ZFS, you'll want to determine the right arrangement for your disks based on your own requirements and expectations. I chose a single RAIDZ2 vdev for three reasons: storage efficiency, since only two of the eight disks are given up to parity; performance, since it easily saturates gigabit ethernet; and simplicity, since one vdev is not a bottleneck for the 1-2 concurrent operations typical of a home environment. The alternative was four mirrored vdevs, but their 50% space overhead makes that a poor fit here, and RAIDZ2 can lose any two disks while staying online. With mirrors, the worst case is a single disk of fault tolerance, and resilvering a replacement heavily taxes the lone surviving disk in that pair. Mirrors do resilver faster, though, and the extra vdevs would provide much better IO operations per second for the pool; see the capacity comparison below.
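For concreteness, here is the raw capacity math for eight 4TB drives under each layout (before filesystem overhead and base-2 conversion):

    # RAIDZ2, one 8-disk vdev:  (8 - 2 parity) x 4 TB = 24 TB usable (75% efficient)
    # 4 mirrored vdevs:         (8 / 2)        x 4 TB = 16 TB usable (50% efficient)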

    # create an eight-disk RAIDZ2 pool, forcing 4K sector alignment (ashift=12)
    sudo zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

    # re-import the pool so the disks are tracked by their stable by-id names
    sudo zpool export tank
    sudo zpool import -d /dev/disk/by-id tank

    # pool-wide defaults, inherited by every child dataset
    sudo zfs set compression=lz4 tank
    sudo zfs set recordsize=1M tank
    sudo zfs set atime=off tank
    sudo zfs set xattr=sa tank

    sudo zfs create tank/dataset
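To confirm the properties took effect and are inherited by the new dataset:

    sudo zfs get compression,recordsize,atime,xattr tank/dataset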

Ideally, you would specify the disks for the pool using their /dev/disk/by-id/ paths, but in this particular case ZFS would not create the pool with those names and I had to fall back to the short names. The fix, shown above, is to export the pool and then point zpool import at the by-id directory. This makes no functional difference, but it means sudo zpool status shows each drive's full model and serial number, and the serial number will help you pull the correct disk in the event of a failure.
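If you ever need to map a by-id name back to its short device name, or read off a serial number from the shell, listing the symlinks works well; filtering out the -partN entries keeps the output readable:

    ls -l /dev/disk/by-id/ | grep -v part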

    sudo zpool status
      pool: tank
     state: ONLINE
      scan: scrub repaired 0 in 5h13m with 0 errors on Wed Jul 27 03:12:02 2016
    config:
    
        NAME                                         STATE     READ WRITE CKSUM
        tank                                         ONLINE       0     0     0
          raidz2-0                                   ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHGJURH  ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHG426H  ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHGXPVH  ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHG429H  ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHGHPAH  ONLINE       0     0     0
            ata-HGST_HMS5C4040ALE640_PL1331LAHGL19H  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_Z306592W          ONLINE       0     0     0
            ata-ST4000DM000-1F2168_Z3065TOV          ONLINE       0     0     0
    
    errors: No known data errors
  
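Scrubs like the one shown in the output above should run on a schedule so that silent corruption is caught early. A minimal sketch using cron, assuming the pool name tank and a hypothetical file name of zfs-scrub:

    # /etc/cron.d/zfs-scrub: scrub the pool at 02:00 on the first of every month
    0 2 1 * * root /sbin/zpool scrub tank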

