Hacker News
Can you provide an example of how you actually prompt AI models? I get the feeling the differences in everyone's experiences come down to prompting and expectations.
Biggest difference I've noticed is giving the model constraints upfront rather than letting it freestyle. Something like "use only the standard library, no new files, keep it under 50 lines" produces dramatically better results than open-ended "build me X." It's less about clever prompting and more about narrowing the solution space so it can't wander off.
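To make this concrete, here is a minimal sketch contrasting an open-ended prompt with a constrained one. The task and the file name (`ratelimit.py`) are hypothetical, invented for illustration; the constraints echo the ones mentioned above.

```python
# Hypothetical example: the same task phrased open-ended vs. constrained.
open_ended = "Build me a rate limiter."

constrained = (
    "Build a rate limiter.\n"
    "Constraints:\n"
    "- Use only the standard library; add no new dependencies.\n"
    "- Modify only ratelimit.py; do not create new files.\n"
    "- Keep the implementation under 50 lines.\n"
)
```

The constrained version narrows the solution space before generation starts, so the model has far fewer ways to wander off.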
I find that the default Claude Code harness currently handles the ambiguity best with its questionnaire system: you can pose the core of the problem first and then specify only the implementation details that matter.
I wasn't implying that clever prompting needed to be used. I'm just trying to confirm that the person I was replying to isn't just saying what essentially amounts to "build me X".

When I write my prompts, I literally write an essay. I lay out constraints, design choices, examples, etc. If I already have a ticket that lays out the introduction, design considerations, acceptance criteria, and other important information, then I'll include that as well. I then take the prompt I've written and ask the model to improve it. I also try to put the most important bits at the end, since right now models seem to focus more on material at the end of a prompt than at the beginning.
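The assembly order described above can be sketched as a small helper. This is a hypothetical function, not anything from an actual tool: it just orders the sections so the constraints and acceptance criteria, the most important material, land at the end of the prompt.

```python
def build_prompt(ticket: str, examples: list[str], constraints: list[str]) -> str:
    """Assemble an essay-style prompt, placing the most important
    material (constraints / acceptance criteria) last, where current
    models tend to weight it more heavily. Hypothetical sketch."""
    parts = []
    if ticket:
        parts.append("## Ticket\n" + ticket)
    if examples:
        parts.append("## Examples\n" + "\n".join(f"- {e}" for e in examples))
    if constraints:
        # Deliberately last: the key requirements go at the end.
        parts.append("## Constraints (most important)\n"
                     + "\n".join(f"- {c}" for c in constraints))
    return "\n\n".join(parts)
```

A second pass, asking the model itself to improve this assembled prompt before using it, fits naturally on top of this.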

Once I get output, I review each piece of generated code as if I were doing an in-depth code review.
