Hacker News
I've used ZFS dedupe for a personal archive since dedupe was first introduced.

Currently, it seems to be reducing on-disk footprint by a factor of 3.
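That ratio is reported directly by ZFS at the pool level. A minimal sketch of checking it, assuming a pool named `tank` (the pool name here is an assumption, not from the comment):

```shell
# Default "zpool list" output includes a DEDUP column
# showing the pool-wide deduplication ratio
zpool list tank

# Or query the read-only property explicitly
zpool get dedupratio tank
```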

When I first started this project, 2TB hard drives were the largest available.

My current setup uses slow 2.5-inch hard drives; I try to offset that somewhat with NVMe-based Optane drives as cache.
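For reference, attaching a fast NVMe device as an L2ARC read cache is a one-line pool change. A hedged sketch, assuming a pool named `tank` and a device at `/dev/nvme0n1` (both names are assumptions):

```shell
# Add the NVMe device to the pool as an L2ARC read cache vdev
zpool add tank cache /dev/nvme0n1

# Confirm the cache vdev is attached and online
zpool status tank
```

Cache vdevs hold no irreplaceable data, so losing one doesn't endanger the pool; that makes them a low-risk way to speed up reads from slow spinning disks.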

Every few years, I try to do a better job of things but at this point, the best improvement would be radical simplification.

ZFS has served very well in terms of reliability. I haven't lost data, and I've caught plenty of near misses: almost losing data, or writing the wrong data.

I'm not entirely sure how I'd replace it if I want something that can spot bit rot and correct it, the way a ZFS scrub does.
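The scrub workflow being referred to is roughly this, assuming a pool named `tank` (the name is an assumption):

```shell
# Walk every allocated block, verify its checksum, and repair
# any mismatch (bit rot) from redundant copies or parity
zpool scrub tank

# Check scrub progress and any read/write/checksum error counts
zpool status -v tank
```

This is the hard part to replicate elsewhere: checksumming alone can detect corruption, but self-healing during a scrub requires the filesystem to manage its own redundancy, which is why plain RAID plus a checksumming tool isn't an equivalent substitute.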
