XFS, ext4, btrfs, etc. all support sparse files, so any application can cause this problem. You can try it with:
dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024
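Comparing the apparent size against the allocated size (assuming GNU coreutils here) makes the mismatch obvious:

ls -lh sparse_file.img   # apparent size: 1.0G
du -h sparse_file.img    # allocated on disk: 0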
If you add conv=sparse to the dd command with a smaller block size, it will sparsify whatever you copy as well; use the wrong cp flags and sparse files will explode to their full size. That is a much harder problem to deal with than the filesystem layer, because the stat size will usually look smaller.
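A rough sketch of both directions, assuming GNU dd and cp (the filenames are just placeholders):

dd if=sparse_file.img of=resparsed.img bs=4K conv=sparse   # seeks over zero blocks instead of writing them, keeping the copy sparse
cp --sparse=never sparse_file.img exploded.img             # writes every zero block, allocating the full 1 GiB on disk
du -h resparsed.img exploded.img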
Creating sparse files requires the application to purposefully use special calls such as fallocate() or seeking beyond EOF, which is effectively what dd with conv=sparse does. You won't accidentally create a sparse file just by filling a file with zeros.
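Two shell equivalents of those calls, assuming GNU truncate and util-linux fallocate (holes.img is just a placeholder):

truncate -s 1G holes.img    # extends the file past EOF without writing anything, so it starts out sparse
fallocate -d holes.img      # --dig-holes: punches holes wherever the file already contains allocated runs of zeros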
It is an observability issue; even Zabbix was tracking reserved space and inodes 20 years ago.
With dedupe, compression, and sparse files you simply can't track utilization from the client's view, which is what du measures.
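The gap is easy to see with GNU du and df; --apparent-size reports the client's view, the default reports allocated blocks (/data is just a placeholder path):

du -sh --apparent-size /data   # what clients think they stored
du -sh /data                   # blocks actually allocated
df -h /data                    # what the filesystem itself has left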
The concrete implementation is what matters and, as this case demonstrates, is what you should alert on.
Inodes, blocks, extents, etc. are what matter, not the user's view of data size.
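For example, GNU stat and df expose exactly those numbers instead of the apparent size (sparse_file.img from above, /mountpoint is a placeholder):

stat -c '%s bytes apparent, %b blocks of %B bytes allocated' sparse_file.img
df -i /mountpoint    # inode utilization, which du never shows you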
Even with rrdtool you could set reasonable alerts, but someone exploding a sparse file with a non-sparse copy makes those heuristics harder.
Rsync, ssh, etc. will do that by default.
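If you want rsync to preserve the holes you have to ask for it explicitly; a quick sketch, assuming a reasonably modern rsync (backup:/data/ is a placeholder destination):

rsync -a  sparse_file.img backup:/data/    # default: the destination copy ends up fully allocated
rsync -aS sparse_file.img backup:/data/    # --sparse: zero runs become holes on the destination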