That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), random bolding. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, being regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy not to hand-type the preferred usage and human-learned tips.
No, it’s two screenshots from Anthropic documentation, stitched together: https://platform.claude.com/docs/en/build-with-claude/prompt...

The post even links to that page, although there’s a typo in the link.

Author here: I have just fixed the typo. Thank you.

And yes, these are screenshots from Anthropic’s documentation.

They're not even stitched together; there's just no padding between the two images.
It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.
You just hallucinated that the content is AI-generated.
"This is AI" is the new "This is 'shopped, I can tell by the pixels."
I can tell by the em dashes
There must be an OpenClaw YouTube video helping people post to Hacker News, or something, because the front page is overrun with AI slop like this article, which makes no sense anyway. The author literally has no idea what any of this stuff means.