The objections I heard, which seemed solid, are (1) there's no single input to the AI (i.e. no single session or prompt) from which such a project is generated,
(2) the back-and-forth between human and AI isn't exactly like working with a compiler (the loop of source code -> object code) - it's also like a conversation between two engineers [1]. In the former case, you can make the source code into an artifact and treat that as "the project", but you can't really do that in the latter case, and
(3) even if you could, the resulting artifact would be so noisy and complicated that saving it as part of the project wouldn't add much value.
At the same time, people have been submitting so many Show HNs of generated projects, often with nothing more than a generated repo with a generated readme. We need a better way of processing these because treating them like old-fashioned Show HNs is overwhelming the system with noise right now [2].
I don't want to exclude these projects, because (1) some of them are good, (2) there's nothing wrong with more people being able to create and share things, (3) it's foolish to fight the future, and (4) there's no obvious way to exclude them anyhow.
But the status quo isn't great because these projects, at the moment, are mostly not that interesting. What's needed is some kind of support to make them more interesting.
So, community: what should we do?
[1] this point came from seldrige at https://news.ycombinator.com/item?id=47096903 and https://news.ycombinator.com/item?id=47108653.
YoumuChan makes a similar point at https://news.ycombinator.com/item?id=47213296, comparing it to Google search history. The analogy is different but the issue (signal/noise ratio) is the same.
[2] Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
IMO it's not the lack of context that makes them uninteresting. It's the fact that the bar for "this took effort and thought to make" has moved, so it's just a lot easier to make things that we would've considered interesting two years ago.
If you're asking HN readers to sift through additional commit history or "session transcripts" in order to decide if it's interesting, because there's a lot of noise, you've already failed. There's gonna be too much noise to make it worth that sifting. The elevator pitch is just gonna need to be that much different from "vibe coded thing X" in order for a project to be worth much.
I don’t have anything against vibe coded apps, but what makes them interesting is to see the vibe coding session and all the false starts along the way. You learn with them as they explore the problem space.
Perhaps [Show HN] for things that have commentary or highlight a particular thing. It's a bit nebulous because it gets to be like Wikipedia's notability and is more of a judgement call.
That could be backed up with a [Creations] tag, simply for things that have been made that people might like, or that you're proud of having achieved.
So if you write a little Chess engine, it goes under [Creations]. If it is a Chess engine in 1k, or written in BrainFuck, or has a discussion of how you did it, it goes under [Show HN].
[Creations] would be much less likely to hit the front page of course, but I think there might need to be a nudge to push the culture towards recognising that being on the front page should not be the goal.
For reference here are the two things, coming to a [Show HN] near you (maybe).
https://fingswotidun.com/PerfBoard/ (Just an app, Commentary would be the value.)
https://lerc.neocities.org/ (this is just neat (to a certain mind anyway), awaiting some more polish)
2. Then that separate group, call it "Vibe HN", gets to decide what they find valuable through their own voting and flagging.
Some guidelines on what makes a good "Vibe HN" post would be helpful to nudge the community towards the things you're suggesting, but I think (1) cutting off self-promotion incentives, given the low cost of creating software now, and (2) allowing for self-moderation, given the sheer number of submissions, are the only tenable path.
My diagnosis is that the friction that existed before (the effort to create a project) was filtering out low-effort projects and keeping the number of submissions within the community's capacity to handle them. Now that the friction is greatly reduced, there's more low-effort content than the community has capacity for (which is the real problem).
So there are two options: increase the friction or increase the capacity. I don't think the capacity options are very attractive. You could add tags/categories to create different niches/queues. The most popular tags would still be overwhelmed, but the more niche ones would prosper. I wouldn't mind that, but I think it goes against the site's philosophy, so I doubt you'll be interested.
So what I would propose is to create a heavier submission process.
- Make it so you may only submit 1 Show HN per week.
- Put it into a review queue so that it isn't immediately visible to everyone.
- Users who are eligible to be reviewers (maybe their account is at least a year old, maybe they've posted to Show HN at least once) can volunteer to provide feedback (as comments) and can approve the submission.
- If it gets approved by N people, it gets posted.
- If the submitter can't get the approvals they need, they can review the feedback and submit again next week.
High-effort projects should sail through. Projects that aren't sufficiently effortful or don't follow the Show HN guidelines (e.g. it's account-walled) get the opportunity to apply more polish and try again.
A note on requirements for reviewers: A lot of the best comments come from people with old accounts who almost never post and so may have less than 100 karma. My interpretation is that these people have a lot of experience but only comment when they have an especially meaningful contribution. So I would suggest having requirements for account age (to make it more difficult to approve yourself from a sockpuppet) but being very flexible with karma.
2. Require submissions which use GAI to have a text tag in the title ("Show HN GAI" would be fine, for example). This would be a good first step and can be policed mostly by readers.
I do think point 1 is important to prevent fully automated voting rings etc.
Point 2 is preparation for some other treatment later - perhaps you could ask for a human written explanation on these ones?
I don't think any complex or automated requirements are going to be enforceable, so keep it simple. I also wonder whether Show HN posts are the whole problem; I've noticed a fair few blogspam posts using AI to write huge meandering articles.
> It runs a commit and then stores a cleaned markdown conversation as a git note on the new commit.
So it doesn't seem that normal commit history is affected - git stores notes specially, outside of the commit (https://git-scm.com/docs/git-notes).
In fact GitHub doesn't even display them, according to some (two-year-old) blog posts I'm seeing. I'm not sure about other interfaces to git (Magit, other forges), but git log is definitely able to ignore them (https://git-scm.com/docs/git-log#Documentation/git-log.txt--...).
This doesn't mean the saved artifacts would necessarily be valuable - just that, unlike a more naive solution (saving in commit messages or in some directory of tracked files) they may not get in the way of ordinary workflows aside from maybe bloating the repo to some degree.
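For anyone unfamiliar with the mechanism, here's a minimal sketch in a throwaway repo (the note text is a placeholder standing in for the cleaned transcript, not the tool's actual output):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# An ordinary commit; --allow-empty just avoids needing files for the demo.
git commit -q --allow-empty -m "feat: add parser"

# Attach the (hypothetical) transcript as a note on the new commit.
# Notes live under refs/notes/commits, not inside the commit object.
git notes add -m "session transcript placeholder"

git log --format=%s -1   # prints only "feat: add parser"; the note isn't part of history
git notes show HEAD      # prints the note, only when explicitly requested
```

Because the note is a separate ref, the commit hash is unchanged by adding or deleting it, which is why ordinary workflows shouldn't notice.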
Unlike many people, I'm on the trailing edge of this. My company is conservative about AI (still concerned about the three different aspects of IP risk) and we've found it not very good at embedded firmware. I'm also in the set of people who've been negatively polarized by the hype. I might be willing to give it another go, but what I don't see from the impressive Show HN projects (e.g. the WINE clone from last week) is: how do you get those results?
This is the major blocker for me. However, there might be value in saving a summary - basically the same as what you would get from taking meeting notes and then summarizing the important points.
> Is Show HN dead? No, but it's drowning
Is spam on topic? And are AI codegen bots part of the community?
To me, the value of Show HN was rarely the thing; it was the work and attention that someone put into it. AI bots don't do work. (What they do is worth its own word, but it's not the same as work.)
> I don't want to exclude these projects, because (1) some of them are good,
Most of them are barely passable at best, but I say that as a very biased person. Still, I'll reiterate my previous point: I'm willing to share my attention with people who've invested significant amounts of their own time. SIGNIFICANT amounts of their time, not their tokens.
> (2) there's nothing wrong with more people being able to create and share things
This is true only in isolation. Here, the topic is more what to do about all this new noise (not: should people share things they think are cool). If the noise drowns out the signal, you've allowed that noise to ruin something that was useful.
> (3) it's foolish to fight the future
coward!
I do hope you take that in the tongue-in-cheek way I meant it, because I say it as a friend would; but I refuse to resign myself completely to fatalism. Fighting the future is different from letting people who are doing something different ruin the good thing you currently have. Sure, electric cars are the future, but that's no reason to welcome them into a group that loves rebuilding classic hot rods.
> (4) there's no obvious way to exclude them anyhow.
You got me there. But then, I just have to take your word for it, because it's not a problem I've spent a lot of time figuring out. Even then, I'd say it's a cultural problem. If people, ahem, in a leadership position, commented that Show HN is reserved for projects that took a lot of time investment, and not just ideas with code... eventually the problem would solve itself, no? The inertia may take some time, but then this whole comment is about time...
I know it isn't anymore, but to me, HN still somehow feels like a niche community. Given that, I'd like to encourage you to optimize for the people who want to invest time into getting good at something. A very small number of these projects could become those, but trying to optimize for the best fairness to everyone, time spent be damned... I believe that will turn away the people who lift the quality of HN.
In this case, it was more of "write the X language compiler using X". I had to prove to myself whether keeping the session made sense, and what better way to do it than to vibe code the tool to audit vibe code.
I do get your point though
So you could treat Show HN the same way. What gets floated on /show is only a small sample of the good stuff in /shownew, and you're fine with the idea that a lot of the good Show HNs just slip through the cracks. That seems to me like the best alternative. Possibly with a /showpool?
You could split Show HN into categories, but you'd have done it by now if you thought it a good idea.
You could also rate Show HN submissions algorithmically trying to push for those projects that have been around longer and that look like more effort has been put into them, but I guess that's kind of hard.
Or you'd have to hire actual people to pre-sort the submissions and cut all the ones that are not up to par. In fact, if there were a human-based approval system for new Show HNs, you'd possibly get a lot fewer submissions and higher-quality ones, which in itself would make the work of sorting through them simpler.
And yet, the premise of the question assumes that it's possible in this case.
Historically having produced a piece of software to accomplish some non-trivial task implied weeks, months, or more of developing expertise and painstakingly converting that expertise into a formulation of the problem precise enough to run on a computer.
One could reasonably assume that any reasonable-looking submission was in fact the result of someone putting in the time to refine their understanding of the problem, and express it in code. By discussing the project one could reasonably hope to learn more about their understanding of the problem domain, or about the choices they made when reifying that understanding into an artifact useful for computation.
Now that no longer appears to be the case.
Which isn't to say there's no longer any skill involved in producing well-engineered software that continues to function over time. Or indeed that there aren't classes of software that require interesting novel approaches that AI tooling can't generate. But now anyone with an idea, some high-level understanding of the domain, and a few hundred dollars a month to spend, can write out a plan and ask an AI provider to generate them software to implement that plan. That software may or may not be good, but determining that requires a significant investment of time.
That change fundamentally changes the dynamics of "Show HN" (and probably much else besides).
It's essentially the same problem that art forums had with AI-generated work. Except they have an advantage: people generally agree that there's some value to art being artisan; the skill and effort that went into producing it are — in most cases — part of the reason people enjoy consuming it. That makes it rather easy to at least develop a policy to exclude AI, even if it's hard to implement in practice.
But the most common position here is that the value of software is what it does. Whilst people might intellectually prefer 100 lines of elegant lisp to 10,000 lines of spaghetti PHP to solve a problem, the majority view here is that if the latter provides more economic value — e.g. as the basis of a successful business — then it's better.
So now the cost of verifying things for interestingness is higher than the cost of generating plausibly-interesting things, and you can't even have a blanket policy that tries to enforce a minimum level of effort on the submitter.
To engage with the original question: if one was serious about extracting the human understanding from the generated code, one would probably take a leaf from the standards world where the important artifact is a specification that allows multiple parties to generate unique, but functionally equivalent, implementations of an idea. In the LLM case, that would presumably be a plan detailed enough to reliably one-shot an implementation across several models.
However I can't see any incentive structure that might cause that to become a common practice.
There are very clearly many things wrong with this when the things being shown require very little skill or effort.