Better fill those files with random bytes, to ensure the filesystem doesn’t apply some “I don’t actually have to store all-zero blocks” sparse-file optimization. To my knowledge no non-compressing file system currently does this, but who knows about the future.
A good way to do this is to create a swap file: you get to use it as actual swap until you need to reclaim the space, and swap files are required not to be sparse (the kernel needs the blocks really allocated).
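A minimal sketch of that approach (the path and size here are just examples; on a real system you would run the commented mkswap/swapon steps as root):

```shell
# Write 64 MiB of real zero blocks; dd actually writes them, so the
# file is not sparse (a compressing filesystem could still compact it).
dd if=/dev/zero of=./swapfile.demo bs=1M count=64 status=none
chmod 600 ./swapfile.demo

# For an actual swap file you would then run, as root:
#   mkswap /swapfile && swapon /swapfile

# Allocated size and apparent size should match, i.e. no holes:
du -k ./swapfile.demo
du -k --apparent-size ./swapfile.demo
```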
XFS, ext4, Btrfs, etc. all support sparse files, so any application can create them and cause this problem. You can try it yourself with:
dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024
If you add conv=sparse to the dd command (with a smaller block size) it will sparsify whatever you copy, too, and with the wrong cp flags a sparse file will balloon to its full size on copy. This is a much harder problem to deal with than the filesystem layer, because the stat size will usually be larger than what is actually allocated on disk.
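Both effects are visible by comparing du (allocated size) against du --apparent-size; the file names below are just illustrative:

```shell
# 1 GiB apparent size, near-zero allocation: a sparse file.
dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024 status=none
du -k sparse_file.img                  # allocated: ~0
du -k --apparent-size sparse_file.img  # apparent: 1048576 KiB

# A densely written zero file, then re-copied with conv=sparse,
# which seeks over all-zero output blocks instead of writing them:
dd if=/dev/zero of=dense.img bs=1M count=4 status=none
dd if=dense.img of=resparsed.img bs=4K conv=sparse status=none
du -k dense.img      # ~4096 KiB allocated
du -k resparsed.img  # holes punched: ~0 allocated again
```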
No, just use LVM or other dynamic volume management.
Shit like that just wastes space that the SSD could use for wear levelling...
If I recall correctly:
dd if=/dev/urandom of=/home/myrandomfile bs=1 count=N
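Worth noting that bs=1 issues a syscall per byte and is painfully slow; a larger block size gives the same result far faster. A sketch with an example file name and size:

```shell
# 16 MiB of random data written in 1 MiB chunks; random bytes can't be
# stored as holes, so the file stays fully allocated.
dd if=/dev/urandom of=./myrandomfile bs=1M count=16 status=none
du -k --apparent-size ./myrandomfile  # 16384 KiB
```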