If I have time I want to try this today, because it matches my LLM-based work style, especially when I am using local models: I have command line tools that help me generate large one-shot prompts that I just paste into an Ollama REPL - then I check back in a while.
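That workflow might look roughly like this in miniature (a hypothetical sketch, not the commenter's actual tooling; the helper names and the `ollama` CLI invocation are my assumptions, and the model name is a placeholder):

```python
import subprocess
from pathlib import Path

def build_one_shot_prompt(task: str, context_dir: str) -> str:
    """Concatenate a task description and a directory of context
    files into one large one-shot prompt."""
    parts = [task]
    for path in sorted(Path(context_dir).glob("*.md")):
        parts.append(f"\n--- {path.name} ---\n{path.read_text()}")
    return "\n".join(parts)

def fire_and_forget(prompt: str, model: str = "llama3") -> subprocess.Popen:
    """Hand the prompt to a local model and return immediately;
    check out.txt later. Assumes the `ollama` CLI is installed
    and the model has been pulled."""
    proc = subprocess.Popen(
        ["ollama", "run", model],
        stdin=subprocess.PIPE,
        stdout=open("out.txt", "w"),
        text=True,
    )
    proc.stdin.write(prompt)
    proc.stdin.close()
    return proc
```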

It looks like Axe works the same way: fire off a request and later look at the results.

Exactly! I also built it so you can chain them together, with each agent getting only what it needs to complete its one specific job.
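That chaining idea - each stage seeing only the fields it needs rather than the full accumulated state - could be sketched like this (my own toy illustration, not Axe's actual API; the stage functions stand in for LLM calls):

```python
from typing import Callable

# An "agent" is a function from a small input dict to an output dict.
Agent = Callable[[dict], dict]

def run_pipeline(stages: list[tuple[Agent, list[str]]], state: dict) -> dict:
    """Run each agent on a minimal view of the state: only the keys
    it declares, never the whole pipeline history."""
    for agent, needed_keys in stages:
        view = {k: state[k] for k in needed_keys}  # minimal context
        state.update(agent(view))
    return state

# Toy stages standing in for model calls:
def summarize(inp: dict) -> dict:
    return {"summary": inp["text"][:20]}

def make_title(inp: dict) -> dict:
    return {"title": inp["summary"].upper()}

result = run_pipeline(
    [(summarize, ["text"]), (make_title, ["summary"])],
    {"text": "a long document about agents"},
)
```

Here `make_title` never sees the original text, only the summary produced by the previous stage.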