DIY NAS with OMV, SnapRAID, MergerFS, and Disk Encryption


At the beginning of the year I replaced my Drobo with a do-it-yourself NAS build inspired by JDM_WAAAT’s community. While I didn’t follow one of his builds to the letter, it was an essential resource for getting up to speed on suitable server hardware. But let’s back up a bit: why even bother building a NAS instead of buying a pre-built one? A good pre-built NAS (Synology, QNAP) just works, but comes with constraints and might not offer every feature you seek. A DIY NAS offers ultimate flexibility, but requires you to be the captain of the ship at all times.

My goals were full disk encryption, data protection (integrity, bit-rot), and flexibility in mixing drive sizes. Here’s how I got there:

1. Hardware

I bought the following hardware for a total of €190.74 (incl. shipping).

Note: Data and parity drives excluded. As of mid 2020 I run 2x Seagate Exos X14 12TB, 1x WD Red 4TB, and 1x WD Red 3TB drive. My Borg drive is a WD Elements 12TB.

Item                                        Price in €
Supermicro motherboard X9SCi-LN4F (used)         29.99
Intel Xeon E3-1225 (used)                        22.89
Samsung 8GB DDR3 PC12800 ECC UDIMM (used)        35.00
be quiet! Pure Power 10 400W (used)              32.99
be quiet! Pure Rock Slim CPU cooler              17.99
Fractal Design Define R4 (used)                  23.50
Ugreen SATA III 90 degree cables                  7.59
Crucial BX500 120GB SSD                          20.79

The Supermicro X9SCi-LN4F motherboard comes with IPMI and 4x GbE LAN ports. I can upgrade the RAM to 32GB, and add a beefier, or more power-efficient, Xeon E3-12XX v2 CPU down the road. The x16 PCI-E 3.0 slot allows me to add a SAS card to expand the possible drive capacity.

This NAS is my main data repository; I’m not using it to host other production software (although the hardware is capable of it). It only runs when I need it. If your use case is a 24/7 NAS, I’d look into a more power-efficient build. JDM_WAAAT’s Xeon CPU Comparison Sheet and PassMark’s CPU Benchmarks are great resources to start with.

2. Operating system (OS)

During my research three operating systems came up time and time again: Unraid, FreeNAS, and OpenMediaVault (OMV). FreeNAS is FreeBSD based, while Unraid and OMV are Linux based (Slackware and Debian respectively). Unraid is a proprietary solution, so it wasn’t what I was looking for after coming from a Drobo. FreeNAS uses OpenZFS, which is fantastic for data protection, but its RAM requirements and limitations in mixing drives pushed me to OMV with SnapRAID.

Unlike a traditional RAID5/6 solution that computes parity data in real time, SnapRAID is closer to a backup solution, because the parity data is only updated on request by the user. Among other things, this allows you to restore accidentally deleted files and to fix data corruption.

Other advantages (from the official FAQ):

  • All your data is hashed to ensure data integrity and to avoid silent corruption.
  • If the failed disks are too many to allow a recovery, you lose only the data on the failed disks. All data on the other disks is safe.
  • You can start with already filled disks. (In this guide we add full disk encryption, so this doesn’t apply.)
  • The disks can have different sizes.
  • You can add disks at any time.
  • It doesn’t lock-in your data. You can stop using SnapRAID at any time without the need to reformat or move data.
  • To access a file, a single disk needs to spin, saving power and producing less noise.
  • You can have up to six parity levels compared to the one of RAID5 and the two of RAID6.

SnapRAID fits my goals of data protection (integrity, bit-rot) and flexibility in mixing drive sizes perfectly. Traditional RAID has the speed advantage, but it isn’t as flexible. SnapRAID published a helpful comparison between SnapRAID, Unraid, ZFS, and Btrfs.
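To make the configuration concrete, here is a minimal snapraid.conf sketch for a build like this one. The drive labels and paths are illustrative (they match the labeling scheme used later in this guide), and the OMV SnapRAID plugin generates this file for you:

```
# Illustrative snapraid.conf for one parity and two data drives
parity /srv/dev-disk-by-label-parity1/snapraid.parity

# Keep multiple copies of the content list (file metadata and hashes)
content /var/snapraid.content
content /srv/dev-disk-by-label-data1/snapraid.content
content /srv/dev-disk-by-label-data2/snapraid.content

# The data drives to protect
data d1 /srv/dev-disk-by-label-data1/
data d2 /srv/dev-disk-by-label-data2/
```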

3. OMV installation

Download the current stable OMV image (this guide is based on OMV4), and verify its signature and checksum:

# Import key (the --keyserver option needs a keyserver address)
gpg --keyserver <keyserver> --recv-keys <PGP key ID>

# Verify fingerprint
gpg --fingerprint <PGP key ID>

# Verify signature
gpg --verify openmediavault.iso.asc openmediavault.iso

# Verify checksum
shasum -a 256 openmediavault.iso
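If the project publishes a checksum file, you can also let the tool do the comparison for you. A minimal sketch using GNU sha256sum (on macOS, shasum -a 256 -c works the same way); the file and hash below are stand-ins, not the real OMV image:

```shell
# Create a stand-in download (the real case would be the OMV ISO)
printf 'hello\n' > /tmp/omv_demo.iso

# A checksum file contains "<hash>  <filename>" lines; this hash
# is the SHA-256 of the stand-in file above
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/omv_demo.iso" > /tmp/omv_demo.iso.sha256

# -c recomputes the hash and compares it against the checksum file
sha256sum -c /tmp/omv_demo.iso.sha256
```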

Create a USB boot stick with OMV on it (commands shown for macOS):

# Get correct disk
diskutil list

# Unmount if busy, e.g. for /dev/disk1 
diskutil unmountDisk /dev/disk1 

# Add OMV image (writing to a raw device requires root)
sudo dd if=openmediavault.iso of=/dev/disk1 bs=4096

Boot from the USB boot stick, and follow the install guide. It’s super easy. My notable install settings:

  • Use SSD on USB port as system drive.
  • Use US keyboard layout.
  • Set hostname to: omv-1

4. OMV setup

Disclaimer: Use this guide for OMV4 at your own risk. I’m not responsible for any data loss or damaging results. It’s a good idea to always keep at least one backup of your data when migrating to a new system, and to keep it around for a while in case you want to roll back. You might want to verify this setup in a virtual machine at first.

In your browser on another machine, enter the IP address or hostname.local (from the install settings) of the NAS to access the Web UI. If you don’t know either, log in as root on the NAS and run ip address to get it.

Let’s first check and update all packages. I generally manage my NAS through SSH, but you can also use the Web UI to update your system.

omv-update # Wrapper script to update the system, see 5.2. Updating packages.

4.1. General settings and drive preparation

These are my general settings. Adjust to your preference.

  • General settings > Auto logout: 30 minutes
  • General settings > Web Admin Password: Change
  • Date & Time: Berlin
  • Power Management > Power button: Shutdown
  • Monitoring: Disable (On an SSD or USB flash drive, the constant writes for performance stats/charts will reduce its lifetime. I use htop to monitor performance when needed.)
  • Update Management > Check > Mark all > Upgrade

Let’s also prepare all drives:

Warning: Wipe will delete all existing data on the drives! Make sure to have at least one backup, if they contain data that you want to keep.

  • Storage > Disks (for all, but OS SSD) > Wipe
  • Storage > Disks (for all, but OS SSD) > Edit: Power 1, Acoustic Disabled, Spindown 60min, Enable write-cache
  • Storage > SMART: Enable
  • Storage > SMART > Devices: Edit > Activate for each drive
  • Storage > SMART > Scheduled Tests: Add short self-test for each drive

4.2. Plugins

First, install OMV-Extras to gain access to more plugins.

Then, install the openmediavault-flashmemory plugin through the Web UI, complete its optional setup (/etc/fstab adjustments), and reboot.

Note: I didn’t uncomment the swap partition, as I’m currently only running 8GB of RAM. This gives me room to compile something.

Afterwards, install the following plugins through the Web UI:

openmediavault-luksencryption # To encrypt all data drives.
openmediavault-snapraid # To create SnapRAID.
openmediavault-unionfilesystems # To create the MergerFS data pool.

4.3. Create encrypted drives

Warning: All data on the drives will be deleted. If you’ve followed this guide to the letter, no data was added after the Wipe in 4.1. General settings and drive preparation.

In Storage do the following for all data and parity drives:

  1. Go to Encryption > Create > Select drive
  2. Set encryption passphrase > Create
  3. Unlock drive
  4. Go to File Systems > Create > Select encrypted drive: Label data drives as data1, data2 etc., and parity drives as parity1, parity2 etc.
  5. Mount
  6. Repeat step 1-5 for all remaining drives
  7. Backup all encryption headers: Encryption > Select drive > Recovery > Backup

4.4. Create SnapRAID

In Services > SnapRAID do:

  1. Add > Select data1 and name it the same.
  2. Check: Content and Data
  3. Repeat 1-2 for all available data drives.
  4. Add > Select parity1 and name it the same.
  5. Check: Parity
  6. Repeat 4-5 for all available parity drives.
  7. Backup Config

Note: I have not scheduled SnapRAID jobs at the moment, because I only turn on my NAS manually from time to time, to rsync data on it. That’s when I’ll also run snapraid sync and snapraid scrub. If you run this setup 24/7 you can schedule a job for snapraid sync to automatically build the parity information.
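For a 24/7 setup, the scheduling could look like the cron fragment below. The schedule, binary path, and log file are illustrative, and the OMV SnapRAID plugin can also create scheduled jobs for you through the Web UI:

```
# /etc/cron.d/snapraid -- sync parity every night at 03:00,
# and scrub part of the array once a week on Sunday
0 3 * * *   root /usr/bin/snapraid sync  >> /var/log/snapraid.log 2>&1
0 5 * * sun root /usr/bin/snapraid scrub >> /var/log/snapraid.log 2>&1
```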

4.5. Create MergerFS pool

MergerFS is a union filesystem that simplifies the storage and management of files across drives. You can think of it as a transparent (replaceable) layer on top of the data drives that provides a single point of access to all of them. This is helpful when you set up shared folders, because you can always store data on the pool, and MergerFS will automatically decide which drive it goes to.

In Storage > Union Filesystems do:

  1. Add > Name: pool1
  2. Select all data drives
  3. Policy: Most free space
  4. Min. free space: 5% of smallest drive size (e.g. 4TB drive = 200G) (feel free to adjust)

Note: The default policy epmfs doesn’t work for me because, since v2.25, path-preserving policies no longer fall back to non-path-preserving policies. This means that once the drives containing the relative path run out of space, adding a new file will fail with an out-of-space error.
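OMV generates the mount entry for you, but for reference, a pool like the one above roughly corresponds to an fstab line of this shape (paths and values are illustrative):

```
# mergerfs pool: "most free space" create policy (category.create=mfs)
# and a 200G free-space floor per branch (minfreespace)
/srv/dev-disk-by-label-data* /srv/pool1 fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=200G 0 0
```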

4.6. Create shared folders

For device pool1 create all your shared folders. I use the following ACL:

Owner: root | Read/Write/Execute
Group: users | Read/Write/Execute
Others: None

4.7. Add users

Create your main user (e.g. michael), give Read/Write privileges to everything, and add the user to the following groups: users (default), ssh, and sudo.

Set up SSH as usual. OMV uses the RFC 4716 SSH format, so remember to export your public key with:

ssh-keygen -e -f ~/.ssh/

4.8. Enable Apple Filing, Samba, or NFS

Note: This step is optional. If you want easy access from other computers to your shared folders on the NAS, follow along to setup a network share.

I use Macs to access my NAS, so I added the Apple Filing plugin, as it’s more reliable on macOS than Samba or NFS. If you use a Windows or Linux computer, Samba and NFS are included in OMV, so you can follow the official documentation to set them up.

Install the following plugin through the Web UI:

openmediavault-netatalk # To add the Apple network share service.

Afterwards, enable Apple Filing, and add the shared folders you want to make available. On your Mac, open Finder and hit cmd + k, then enter afp://hostname.local to connect your network shares.

4.9. SFTP

Note: This step is optional. If you want SFTP access to your shared folders on the NAS, follow along to add the SFTP service.

OMV only includes a FTP service. To add the more secure SFTP, install the following plugin through the Web UI:

openmediavault-sftp # SFTP server.

Afterwards enable the SFTP service, disable password auth, and enable public key auth. I’ve added all my shared folders to the access list for my main user.

4.10. Other packages I use

Note: This step is optional.

As root install:

apt-get install git htop screen vim zsh

Set vim as default editor:

update-alternatives --config editor

With that, the install and initial setup are complete. To give you a better sense of day-to-day usage, I’ve included below my notes on usage, solutions to problems I ran into, my backup strategy, and how to add or replace drives.

5. Usage

5.1. General

After every reboot, all encrypted drives need to be unlocked through the Web UI at Storage > Encryption. Otherwise, shared folders won’t be accessible.

5.2. Updating packages

omv-update is a wrapper script for:

apt-get update && apt-get --yes --force-yes --fix-missing --auto-remove --allow-unauthenticated --show-upgraded --option DPkg::Options::="--force-confold" dist-upgrade

In general, I make updates through SSH as root (sudo su), because it gives me more control and feedback in case of issues.

5.3. SnapRAID

Note: I mainly manage my NAS through SSH, but you can also use the OMV Web UI for SnapRAID. The plugin is great.

Switch to root (to have all permissions), and, if working over SSH, run SnapRAID inside a screen session.

  • snapraid touch: Sets an arbitrary sub-second timestamp on all files that have it at zero. This improves SnapRAID’s ability to recognize moved and copied files, as it makes timestamps almost unique, removing possible duplicates.
  • snapraid sync: Build the parity information, to protect the data.
  • snapraid --force-empty sync: Force the sync even if a data drive appears unexpectedly empty, e.g. after replacing it.
  • snapraid scrub: To check the data and parity for errors.

Dumping data onto the NAS

I use rsync -av to dump data onto my NAS. For new data I run it twice, the second time with the --checksum flag to verify data integrity.

6. Problems and solutions

6.1. mdadm: no arrays found in config file or automatically

  1. Get the current drives with blkid, and look for the boot drive (e.g. /dev/sde1). It’s most likely at the bottom, as the SATA drives/arrays should be listed first.
  2. Reboot.
  3. Hit e on the grub screen.
  4. Use the identified boot drive from step 1, on line 15: For example, change linux /boot/...3-amd64 root=/dev/sdh1 ro quiet to linux /boot/...3-amd64 root=/dev/sde1 ro quiet
  5. F11 to reboot.
  6. Login and make the change permanent to grub: update-grub


6.2. A start job is running for dev-disk-by…

This is because the encrypted drives aren’t unlocked automatically on boot.

To resolve this delay, set x-systemd.device-timeout in /etc/fstab to 1 (one second timeout), e.g.:

/dev/disk/by-label/data1 /srv/dev-disk-by-label-data1 ext4 defaults,nofail,x-systemd.device-timeout=1,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2


6.3. python3.5/ error

It’s just a warning, but an annoying one. The final version of Python in Debian 9 doesn’t get the fix, so we have to manually add the fixed version (source):

wget -O /usr/lib/python3.5/

7. Backup

7.1. OS

After OMV config changes, I make a backup of /etc/openmediavault/config.xml:

cp /etc/openmediavault/config.xml /dest/20200725_omv_config.xml.bak

After major OS changes I shutdown the NAS, connect the system drive to a Mac, and use dd to create an image of the system drive:

  1. Get list of disks and volumes: diskutil list
  2. Create image and compress on the fly:
sudo dd if=/dev/rdisk1 bs=1m | gzip > 20190404_omv-1_OS.gz

Use ctrl + t to view the progress.

Unmount disk: diskutil unmountDisk /dev/disk1

It also doesn’t hurt to backup fstab from time to time:

cp /etc/fstab /dest/20200725_fstab.bak


To restore the image later:

  1. Unmount, but don’t eject, so the disk stays writable: diskutil unmountDisk /dev/disk1
  2. Restore: gzip -dc ~/<source>/<file>.gz | sudo dd of=/dev/rdisk1 bs=1m

Note: If dealing with a drive that has bad blocks, add the conv=noerror,sync argument, so that dd doesn’t stop. It will fill in the missing sectors with null bytes. Another option is to use ddrescue instead.
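The backup and restore pipelines can be tried end to end on a regular file instead of a disk; the /tmp paths below are stand-ins for /dev/rdisk1:

```shell
# Create a small stand-in "disk" (64 KiB of random data)
dd if=/dev/urandom of=/tmp/demo_disk.img bs=1024 count=64 2>/dev/null

# Backup: read the "disk" with dd and compress on the fly
dd if=/tmp/demo_disk.img bs=1024 2>/dev/null | gzip > /tmp/demo_disk.gz

# Restore: decompress and write back through dd
gzip -dc /tmp/demo_disk.gz | dd of=/tmp/demo_restored.img bs=1024 2>/dev/null

# No output from cmp means the round trip was lossless
cmp /tmp/demo_disk.img /tmp/demo_restored.img
```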

7.2. Data

I’ve been using BorgBackup (Borg for short) for over a year now. It has been rock solid, which is why I’m also using it for my NAS. Borg is a deduplicating archiver with encryption, compression, and data integrity checks. The latter is important to me, because if you blindly back up data, you won’t notice bit-flips and might overwrite a file with a corrupted version at some point. It’s worth mentioning that snapraid scrub can be used for this too, but SnapRAID isn’t a versioned backup, so it has its limits (e.g. if corrupted data has already been synchronized).

You can install Borg with apt install borgbackup. I don’t use the omv-extras plugin, because it can’t yet adopt existing borg repos, which is too restricting for me (e.g. when changing the OS). Also, the mount through the Web UI is missing the -o allow_other option, which makes it impossible to browse the backup mounts on the network share.

Once installed, add a dedicated drive just for Borg and label it borg1. As the Borg repositories will be encrypted, we won’t encrypt the drive itself. That way it can easily be mounted on another system for file recovery.

Note: I don’t expose my Borg drive through a shared folder. I created the shared folder TempMounts instead, to which I mount a borg repo if I need access to the backup.

Switch to root to create and manage backups:

sudo su

Initialize the first Borg repo:

borg init --encryption=repokey-blake2 /srv/<insert borg UUID>/Borg/Main

Note: Use multiple repos to keep backups isolated.

Create the backup:

Note: In this example we backup /Photos/.

borg create --list --stats --progress /srv/<insert borg UUID>/Borg/Main::Photos-{now} /sharedfolders/Photos

To get information about a repo or to list all archives in it run:

borg info /srv/<insert borg UUID>/Borg/Main
borg list /srv/<insert borg UUID>/Borg/Main

To mount a backup (for restoring) run:

# Mount to /sharedfolders/TempMounts/Borg
borg mount -o allow_other /srv/<insert borg UUID>/Borg/Main /sharedfolders/TempMounts/Borg

# Unmount
borg umount /sharedfolders/TempMounts/Borg

To verify the consistency of a repo and the corresponding archives run:

borg check -v /srv/<insert borg UUID>/Borg/Main

# Limit to repo checks
borg check -v --repository-only /srv/<insert borg UUID>/Borg/Main

# Limit to archive checks
borg check -v --archives-only /srv/<insert borg UUID>/Borg/Main

To perform a full data integrity verification run:

borg check -v --verify-data /srv/<insert borg UUID>/Borg/Main

Note: You can write a shell script or adapt the official examples to automate the backup process. There’s also a wrapper tool, borgmatic, that makes it easy to create, check, and prune backups.
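As a starting point, such a script could look like the sketch below. It assumes borg is installed; the repo path placeholder, share name, and retention policy are illustrative and need adjusting to your setup:

```shell
#!/bin/sh
# Sketch of a Borg backup wrapper -- adjust REPO and the share to your setup
set -eu
REPO="/srv/<insert borg UUID>/Borg/Main"

# Create a timestamped archive of the Photos share
borg create --stats "$REPO::Photos-{now}" /sharedfolders/Photos

# Verify repository and archive consistency
borg check -v "$REPO"

# Keep 4 weekly and 6 monthly Photos archives, prune the rest
borg prune --prefix 'Photos-' --keep-weekly 4 --keep-monthly 6 "$REPO"
```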

8. Hardware upgrades and replacements

8.1. Adding new drives

  1. Repeat the steps from create encrypted drives, create SnapRAID, and add the new drive to the MergerFS pool, if desired.
  2. Add x-systemd.device-timeout=1 in /etc/fstab for the new drives, to avoid a boot delay.
  3. Backup /etc/fstab.

8.2. Replacing failed drives

  1. Identify the failed drive in OMV. Power off the system and replace the failed drive with a new drive.
  2. Repeat the steps from create encrypted drives, create SnapRAID for the new drive, but use the failed drive’s label for the new drive!
  3. Run snapraid fix in OMV to fix the drive (regenerates the data from the failed drive, which will take a while).
  4. Run snapraid diff to check if all files are there.
  5. Run snapraid check in OMV to verify that the data is ok. You can limit it to the replaced drive with snapraid check -a -d DISK_NAME.
  6. Run snapraid sync to re-sync with the new drive.
  7. Recursively restore permissions of shared folders, as SnapRAID won’t.

You won’t need to adjust the union filesystem (MergerFS) as long as the same labels are used. If not, just edit the pool and select the new drive. (MergerFS is just a virtual layer.)
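On the command line, the recovery sequence above boils down to something like this (the drive name data2 is illustrative):

```
# Regenerate the contents of the replaced drive from parity
snapraid fix -d data2

# Compare the on-disk state against the content list
snapraid diff

# Audit file hashes on the replaced drive only (-a skips parity checks)
snapraid check -a -d data2

# Rebuild parity against the new drive
snapraid sync
```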

8.3. Replacing drives with larger drives

If the drive hasn’t failed yet, it can also be cloned, for example when replacing it with a larger one (source):

  1. Install the new drive.
  2. Boot Clonezilla.
  3. Clone the old drive to the new drive.
  4. Power off the system.
  5. Remove the old drive.
  6. Boot gparted-live.
  7. Expand the filesystem on the new drive, to use all of the space.
  8. Reboot into OMV.

You shouldn’t need to do anything further, because OMV and SnapRAID will think the same drive as before is there.

And lastly, you could also just use rsync or cp:

# rsync
rsync -av --hard-links --acls --xattrs /srv/dev-disk-by-label-data_old_drive/ /srv/dev-disk-by-label-data_new_drive/

# cp
cp -av /srv/dev-disk-by-label-data_old_drive/. /srv/dev-disk-by-label-data_new_drive

Then replace the old drive with the new drive in your SnapRAID config, using the same label. Afterwards, check that everything is fine with snapraid diff; only if the output looks as expected should you run snapraid sync to complete the migration.