Aren't LLM outputs deterministic given the same inputs?
Not at all. Even the ones that provide a "seed" parameter don't generally 100% guarantee you'll get back the same result.

My understanding is that this is mainly down to how floating point arithmetic works. Any performant LLM will be executing a whole bunch of floating point operations in parallel (usually on a GPU), and floating point addition isn't associative - so the order in which those parallel operations finish and get combined can very slightly affect the result.
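You can see the non-associativity without a GPU - a minimal sketch in plain Python:

```python
# Floating-point addition is not associative: grouping the same three
# numbers differently produces different results, because each addition
# rounds to the nearest representable double.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # sum in one order
right = a + (b + c)  # sum in another order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

A parallel reduction on a GPU is effectively choosing one of many possible groupings at runtime, so the same inputs can yield slightly different sums from run to run, and those tiny differences can flip which token gets sampled.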

They are not, necessarily. Especially when using commercial providers, who may change models, fine-tunes, privacy layers, and all kinds of other non-foundational-model components without notice.