> Agents can be used for open-ended problems where it’s difficult or impossible to predict the required number of steps, and where you can’t hardcode a fixed path. The LLM will potentially operate for many turns, and you must have some level of trust in its decision-making. Agents' autonomy makes them ideal for scaling tasks in trusted environments.

The questions then become:

1. When can you (i.e., someone building systems with these agents) trust them to make decisions on their own?

2. What kinds of trusted environments are we talking about? (Sandboxing? A rough sketch of one interpretation follows this list.)
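
Taking question 2 first: one concrete reading of a "trusted environment" is a gate between the agent's proposed actions and the real system, so its autonomy is bounded by an allow-list and a sandboxed workspace rather than by blind trust. Here is a minimal Python sketch of that idea; the tool names, the /tmp/agent_workspace path, and execute_tool_call are all illustrative assumptions, not anything from the quoted post:

    import subprocess

    # Hypothetical allow-list: the agent may only invoke these tools.
    ALLOWED_TOOLS = {"read_file", "run_tests"}

    def execute_tool_call(name: str, args: list[str]) -> str:
        """Gate an agent-proposed tool call before it touches the real system."""
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"Tool {name!r} is not permitted in this environment")

        if name == "read_file":
            # Restrict reads to a scratch workspace rather than the whole filesystem.
            path = args[0]
            if not path.startswith("/tmp/agent_workspace/"):
                raise PermissionError("Reads are limited to the agent workspace")
            with open(path) as f:
                return f.read()

        if name == "run_tests":
            # Run in a subprocess with a timeout so a runaway command
            # can't hang the agent loop.
            result = subprocess.run(
                ["pytest", "-q"],
                cwd="/tmp/agent_workspace",
                capture_output=True,
                text=True,
                timeout=120,
            )
            return result.stdout + result.stderr

        raise ValueError(f"Unhandled tool: {name}")

The same gate is one possible answer to question 1: the agent is trusted to act autonomously only over the operations the gate will actually let through; anything outside that set is rejected or escalated to a human.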

So, that all requires more thought -- perhaps by some folks who hang out at this site. :)

I suspect that someone will come up with a "real-world" application at a non-tech-first enterprise company and let us know.
