When I was first evaluating DIY NAS options, I considered UnRAID, SnapRAID, ZFS, FlexRAID, FreeNAS, etc. I eventually went with UnRAID, but had difficulty finding answers to a number of questions, and ended up using the trial license to figure it all out.

Here are the questions from my original notes, with the answers that I found:

If a disk in the array dies, but there’s enough free space to copy the data elsewhere, can I move the data off the emulated disk, then drop it from the array and recalculate parity?

Yes, you can. I was curious about this because I had a ton of smaller disks, many of which were old or of questionable quality. Once a disk fails, its contents are emulated by the system. Accessing anything on the emulated disk requires a read from every other disk in the array, so it will be a bit slower than normal (roughly as fast as your slowest disk). Here’s a sample process to do that:

  1. SSH into the UnRAID system, use Midnight Commander (mc), and move files from /mnt/diskX (the failed, emulated disk) to /mnt/diskY (another disk with free space); a scriptable alternative is sketched after this list.
  2. Once the failed disk is empty, screenshot the “Main” WebUI page (listing the disks).
  3. Optional: Shutdown, remove the failed disk, and reboot.
  4. Stop the array.
  5. Under Tools, pick “New Config”. Preserve all disk assignments, and apply.
  6. On the main page, remove the disk that is failed. Refer to your screenshot to ensure you are removing the correct disk.
  7. Start the array.
    Parity will need to rebuild, because the missing disk wasn’t all zeroes: moving the files off only updated the filesystem metadata, so the old data blocks still contribute to parity.
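
A minimal, scriptable sketch of step 1, in case you’d rather not click through mc. The slot numbers (disk3 failed, disk4 healthy with free space) are hypothetical; the important part is to copy disk-to-disk via /mnt/diskX and never mix in the /mnt/user paths:

    # disk3 is the failed (emulated) disk, disk4 has free space.
    # Copy everything, deleting each source file once it transfers.
    rsync -avX --remove-source-files /mnt/disk3/ /mnt/disk4/

    # rsync leaves the now-empty directory tree behind; remove it.
    find /mnt/disk3/ -mindepth 1 -type d -empty -delete

    # Confirm the emulated disk is empty before doing New Config.
    du -sh /mnt/disk3/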

If I have an array with a cache, and I modify a file, what happens? Does the prior version of the file still exist on a disk until the cache mover moves it, overwriting the old version? Or is the old version deleted from the array, so that only the updated version exists on cache?

It depends on the application / client that makes the change. If it’s an office product, it’s likely to write a new file (which will be placed on cache), then delete the old file and rename the new one to the same name. Applications that open, modify, and close an existing file will update it where it lies (if it’s on the array, it’s updated there).
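
A rough shell analogue of the two behaviors, assuming a share named “Documents” set to Use Cache=Yes (the share and file names are made up):

    cd /mnt/user/Documents

    # Pattern 1: write a new file, then rename it over the original
    # (typical of office suites). The temp file is new, so it lands
    # on cache; the old array copy is deleted by the rename.
    cp report.docx report.docx.tmp   # stand-in for "app saves new version"
    mv report.docx.tmp report.docx

    # Pattern 2: open/modify/close in place. The file is updated
    # wherever it currently lives; nothing new lands on cache.
    echo "one more line" >> notes.txt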

The Mover will move any file that is not open/locked on any share marked “Use Cache=Yes” (Mover moves cache to array) or “Use Cache=Prefer” (Mover moves array to cache). In the other configurations (No, Only), the Mover does nothing, even if the share contains files where they shouldn’t be.
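
Two things I found handy while testing this, sketched below. The share and file names are hypothetical, and the mover path is what I see on my UnRAID 6 install, so verify it on yours:

    # See where a user-share file physically lives right now.
    ls -l /mnt/cache/Documents/report.docx 2>/dev/null
    ls -l /mnt/disk*/Documents/report.docx 2>/dev/null

    # Run the Mover immediately instead of waiting for its schedule.
    /usr/local/sbin/mover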

Are there any reasonable “snapshot” solutions for UnRAID? Coming from ZFS, snapshots are a nice safety net against ransomware: I can just clean up and restore an earlier snapshot. Likewise, I can go back in time and pull out a document that I messed up.

Btrfs, which supports snapshots, can be used for the array, but I chose against it; I personally think it’s a bit buggy, though I haven’t run into any problems with it on cache. If you are running a cache pool, you need to use btrfs anyway.
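
Since the cache pool is btrfs anyway, you can at least snapshot the cache by hand (or from cron). A minimal sketch, assuming /mnt/cache is the pool mount; the snapshot and file names are made up:

    # Create a read-only, timestamped snapshot of the pool.
    btrfs subvolume snapshot -r /mnt/cache "/mnt/cache/.snap-$(date +%Y%m%d-%H%M)"

    # List the snapshots (they show up as subvolumes).
    btrfs subvolume list /mnt/cache

    # Pull a single file back out of a snapshot.
    cp /mnt/cache/.snap-20190101-0000/Documents/report.docx /mnt/cache/Documents/

    # Delete a snapshot you no longer need.
    btrfs subvolume delete /mnt/cache/.snap-20190101-0000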

At what points in the following sequence is the array not fully online? A disk dies; it’s emulated, and the array is still online. I get a new, bigger disk and migrate parity onto it; is the array offline during that? I reassign the old parity disk to replace the failed disk; the array is online and the disk is emulated until the rebuild is complete?

  • The array stays online during the disk failure, and while the disk is emulated.
  • You can’t replace parity while a disk is failed, because parity is involved in emulating it (unless you have two parity disks).
    • In this case, you would need to move the data off of the failed disk first, either to other disks in the array (if you have space) or elsewhere.
  • Parity can’t be migrated: you pull the old parity disk, assign the new one, and parity rebuilds from the disks in the array. You need to stop the array for a few minutes to reassign disks, but the rebuild itself happens online.
  • Once the largest drive is parity, the old parity disk can be formatted and added to the array; again, a short downtime to reassign disks, then it’s all online.
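
If you want to watch a rebuild from the shell rather than the WebUI, UnRAID’s modified md driver exposes progress in /proc/mdstat. I’m going from memory on the field names (mdResync, mdResyncPos), so treat this as a sketch and verify on your version:

    # Show the resync/rebuild counters; 0 means no rebuild running.
    grep -i resync /proc/mdstat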

What happens if three SSDs are put into a cache pool? Can it be done? Will the third disk act as a hot spare, or actually have some use?

Yes, you can add three or more disks to the cache pool. However, with more than one disk, you must use btrfs as the filesystem. The actual pooling is done by btrfs, whose raid1 profile ensures that everything written to the cache pool is stored on two disks in the pool. So it’s active/active/active, not a hot spare. If you mix and match sizes, there are calculators online to figure out how much space you’ll get; for example, a 500 GB + 250 GB + 250 GB pool yields about 500 GB usable.
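
Once the pool exists, you can also skip the online calculators and ask btrfs directly. A short sketch, assuming the pool is mounted at /mnt/cache:

    # Overall and per-device allocation, plus a free-space estimate
    # that accounts for the raid1 duplication.
    btrfs filesystem usage /mnt/cache

    # Data/metadata profiles; a multi-disk UnRAID cache pool should
    # report RAID1 here.
    btrfs filesystem df /mnt/cache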