Interesting -- is there any impact from LLM outputs not being deterministic?
SQL functions can be non-deterministic just fine. The SQL:2003 grammar defines a DETERMINISTIC | NOT DETERMINISTIC characteristic for CREATE FUNCTION, and PostgreSQL has the IMMUTABLE | STABLE | VOLATILE volatility categories for the same purpose.
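The same idea shows up in smaller engines too. For instance, Python's sqlite3 module takes a deterministic flag when you register a user-defined function (a toy sketch, not tied to the article's setup):

    import random
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # A non-deterministic UDF: same input, potentially different output.
    # deterministic=False (the default) tells SQLite not to assume it can
    # reuse or fold the result.
    conn.create_function("noisy", 1, lambda x: x + random.random(),
                         deterministic=False)

    # A deterministic UDF can be marked as such, which lets SQLite apply
    # extra optimizations (e.g. use it in expression indexes).
    conn.create_function("doubled", 1, lambda x: x * 2, deterministic=True)

    print(conn.execute("SELECT noisy(1), noisy(1)").fetchone())  # two different values
    print(conn.execute("SELECT doubled(21)").fetchone())         # (42,)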
Aren't LLM outputs deterministic given the same inputs?
Not at all. Even the ones that provide a "seed" parameter generally don't fully guarantee you'll get back the same result.

My understanding is that this is mainly down to how floating point arithmetic works. Any performant LLM will be executing a whole bunch of floating point operations in parallel (usually on a GPU), and because floating point addition isn't associative, the order in which those partial results get combined can very slightly affect the output.
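You can see the effect in pure Python, nothing GPU-specific about it:

    # Floating point addition is not associative, so summing the same
    # numbers in a different order can change the low bits of the result.
    import random

    random.seed(0)
    values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

    forward = sum(values)
    backward = sum(reversed(values))

    print(forward, backward)    # typically differ in the last few digits
    print(forward == backward)  # typically False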

Funny wrinkle here: unless I’ve misread the OpenAI API docs[1], the recently added prompt caching feature cannot be explicitly disabled and automatically applies to all input prompts over 1024 tokens for a few minutes.

It seems to be possible to work around it by mixing up the very start of your prompt (e.g., with an iteration number), but it’s messed up some of our workflows that rely on sending the same prompt multiple times and gathering a consensus output.

Would be great if they let us disable it.
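Roughly what the workaround looks like, using the standard OpenAI Python SDK; the model name and prompt here are just placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def sample_consensus(prompt: str, n: int = 5, model: str = "gpt-4o-mini") -> list[str]:
        """Run the same prompt n times, varying the prefix so later calls
        don't silently hit the cached prefix from earlier ones."""
        outputs = []
        for i in range(n):
            # Caching keys off the start of the prompt, so a changing
            # prefix should prevent cache hits across iterations.
            busted = f"[run {i}] {prompt}"
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": busted}],
            )
            outputs.append(resp.choices[0].message.content)
        return outputs

(Only matters for prompts past the 1024-token threshold, of course.)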

[1]: https://platform.openai.com/docs/guides/prompt-caching

They are not, necessarily. Especially when using commercial providers who may change models, finetunes, privacy layers, and all kinds of other non-foundational-model things without notice.