Hacker News new | past | comments | ask | show | jobs | submit
A neat trick I was told is to always have ballast files on your systems. Just a few GiB of zeros that you can delete in cases like this. This won't fix the problem, but will buy you time and free space for stuff like lock files so you can get a working system.
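As a rough sketch (path and size are arbitrary; scale the count up for a few GiB in practice):

```shell
# Create a 256 MiB ballast file of zeros (bump count for a few GiB).
dd if=/dev/zero of=/tmp/ballast bs=1M count=256 status=none

# In an emergency, delete it to free the space immediately:
# rm /tmp/ballast && sync
```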
This trick is actually used by some banking apps.

They pad their mobile apps with junk data just to make the APK/IPA bigger, so that if they need to push an urgent update, they won't have users who can't update because their phones are full to the brim.

I know two Italian banks that do it, Unicredit and Intesa. The latter was on the news when a user found out that one of the filler files was a burp recording [1].

[1] https://www.ilfattoquotidiano.it/2024/12/20/intesa-san-paolo... (in Italian)

Better to fill those files with random bytes, to ensure the filesystem doesn’t apply some “I don’t actually have to store all-zero blocks” sparse-file optimization. To my knowledge no non-compressing filesystem currently does this, but who knows about the future.
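A sketch of the random-fill variant (path and size arbitrary):

```shell
# Random bytes defeat both sparse-file optimizations and any
# filesystem-level compression (e.g. ZFS or btrfs with compression on).
dd if=/dev/urandom of=/tmp/ballast bs=1M count=256 status=none
```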
Similarly, I always leave some space unallocated in LVM volume groups. It means that I can temporarily expand a volume easily if needed.

It also leaves some space unused to help the wear-levelling on the SSDs behind the RAID array that is the PV [1] for LVM. I'm not 100% sure this is needed any more [2], but I've not looked into it sufficiently, so until I do I'll keep the habit.

--------

[1] If there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually skip a bit on each one, because LVM will naturally fill one before using the next. Just allocate a small LV on each and don't use it. You can remove one or all of them and add the extents to the LV that needs the space if/when needed. Giving it a useful name also reminds you why that bit of space is carved out.

[2] drives are over-provisioned (expose less usable capacity than the raw flash) by default, IIRC
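As a sketch of the habit above, assuming a VG named vg0 and an LV named root (both names hypothetical; needs root and a real volume group):

```shell
# Park 10 GiB in a clearly named placeholder LV so it can't be
# allocated by accident:
lvcreate -L 10G -n emergency_reserve vg0

# When a volume fills up, reclaim the placeholder and grow the
# starved LV and its filesystem in one step (-r resizes the fs too):
lvremove vg0/emergency_reserve
lvextend -L +10G -r vg0/root
```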

> A neat trick I was told is to always have ballast files on your systems.

ZFS has a "reservation" mechanism that's handy:

> The minimum amount of space guaranteed to a dataset, not including its descendants. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.

* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...

Quotas prevent users/groups/directories (ZFS datasets) from using too much space, but reservations ensure that particular areas always have a minimum amount set aside for them.
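For instance (pool and dataset names are hypothetical; needs an existing ZFS pool and root):

```shell
# Guarantee 10G to a dataset no matter what its siblings consume:
zfs set refreservation=10G tank/important

# Cap a noisy neighbour with a quota:
zfs set quota=50G tank/scratch

# Verify both properties:
zfs get refreservation,quota tank/important tank/scratch
```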

I always called it a “bit-mass”. Like a thermal mass used in freezers in places where the power is not very stable.

I knew I didn’t invent the concept, as there are so many systems that cannot recover if the disk is totally full (many systems may need to perform a write in order to remove things gracefully).

The latest thing I found with this issue is Unreal Engine’s Horde build system. It’s so tightly coupled with caches, object files, and database references that a manual clean-up is extremely difficult and likely to create an unstable system. But you can configure it to keep fewer build artefacts around, and then it will clear itself out gracefully; it just needs to be able to write to the disk to do it.

Now that I think about it, I don’t do this for inodes, but you can run out of those too and end up in a weird “out of disk” situation despite having lots of usable capacity left.
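Checking for that is quick (GNU coreutils assumed for du --inodes):

```shell
# IUse% at 100% means "No space left on device" even with free bytes:
df -i /

# Rank directories by file count to find the inode hog:
du --inodes -x /var 2>/dev/null | sort -rn | head
```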

This saved us a couple of times, at least until I had time to add monitoring to their old system to track disk usage. It was also helpful to use a tool called ncdu, which helps you visualize where most disk space is being used so you can track down the problem.
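A minimal cron-able check along those lines (threshold, mount point, and alert mechanism are all placeholders):

```shell
#!/bin/sh
# Warn when / crosses a usage threshold; swap logger for your alerting.
THRESHOLD=90
USED=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USED" -ge "$THRESHOLD" ]; then
    logger -p user.warning "disk usage on / is at ${USED}%"
fi
```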
I did this too, but I also zipped the file; turns out it had a great packing ratio!
This is why I never empty the Rubbish Bin/Trash Can on my Linux laptop until the disk fills.
Interesting strategy, can't believe I've never heard of this one before.

Would it be more pragmatic to allocate a swap file instead? Something that provides a theoretical benefit in the short term vs a static reservation.
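A sketch of that variant (size arbitrary; needs root, and note that swap files need special handling on btrfs and aren't supported on ZFS):

```shell
# Reserve the space as swap; it's useful memory headroom day-to-day:
fallocate -l 4G /swapfile   # use dd where fallocate'd swap isn't supported
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Emergency: hand the space back to the filesystem.
swapoff /swapfile
rm /swapfile
```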

This is a snippet I use a lot. If in doubt, when even rm won't work, just reboot.

Disc Space Insurance File

    fallocate -l 8G /tmp/DELETE_IF_OUT_OF_SPACE.img
https://gist.github.com/klaushardt/9a5f6b0b078d28a23fd968f75...
Surely a 50% warning alarm on disk usage covers this without manual intervention?
Sounds like something straight out of Dilbert
Similar to the old game development trick of hiding some memory away and then freeing it up near the end of development when the budget starts getting tight.
I did this recently, aka "docker image prune". Can confirm, it saved the day.
Love the simplicity and pragmatism of this solution
Some filesystems can be unable to delete a file when full. Something to be a bit worried about.
> A neat trick I was told is to always have sleep statements in your code. Just a few sleep statements that you can delete in cases like this. This won't fix the problem, but will buy you time and free up latency for stuff like slow algorithms so you can get faster code.

FTFY ;)

Would another way be to drop the reserved space (typically 1% to 5% on an ext file system)?
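That reserve can be inspected and adjusted with tune2fs (device name is hypothetical; needs root). The ext2/3/4 default is 5%, and the reserve is normally usable only by root, which is what keeps the system limping along when unprivileged users fill the disk:

```shell
tune2fs -l /dev/sda1 | grep -i 'reserved block'   # inspect current reserve
tune2fs -m 1 /dev/sda1                            # shrink reserve to 1% of blocks
```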