Hacker News

The 100 hour gap between a vibecoded prototype and a working product

https://kanfa.macbudkowski.com/vibecoding-cryptosaurus
I work as a DevOps/SRE and have been doing it in FinTech (banks, hedge funds, startups) and crypto (an L1 chain) for almost 20 years.

My thoughts on vibe coding vs production code:

- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre LLMs

- This is partly b/c it is good at things I'm not good at (e.g. front end design)

- But then I need to go in and double check performance, correctness, information flow, security etc

- The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this but then that needs to get setup correctly etc)

- The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs

- Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
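The deterministic-check idea above can be sketched in a few lines of Python. This is a hedged illustration: the pipeline step `normalize_trades` and its golden output are hypothetical stand-ins for whatever the agent's output needs to be verified against.

```python
# Minimal sketch of a deterministic output check: run the code under
# test against known inputs and compare against a golden result, so
# the agent (or you) gets a crisp pass/fail instead of "looks right".
import json

def normalize_trades(raw):
    # Stand-in for the pipeline step being verified.
    return sorted(
        ({"sym": t["symbol"].upper(), "qty": int(t["qty"])} for t in raw),
        key=lambda t: t["sym"],
    )

GOLDEN = [
    {"sym": "AAPL", "qty": 100},
    {"sym": "MSFT", "qty": 50},
]

def check():
    raw = [{"symbol": "msft", "qty": "50"}, {"symbol": "aapl", "qty": "100"}]
    got = normalize_trades(raw)
    assert got == GOLDEN, f"mismatch:\n{json.dumps(got, indent=2)}"
    return "ok"
```

A script like this closes the back-and-forth loop: the LLM can be told to keep iterating until `check()` passes, with no human judgment in the hot path.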

So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and a LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines you realize that the LLM is saving you some time but not 10x time.

I'm building a Java HFT engine and the amount of things AI gets wrong is eye-opening. If I didn't benchmark everything I'd end up with a much less optimized solution.

Examples: AI really wants to use Project Panama (FFM), and while that can be significantly faster than traditional OO approaches, it is almost never the best choice. And I'm not talking about using deprecated Unsafe calls; I'm talking about primitive arrays being better for Vector/SIMD operations on large data sets, or NIO being better than FFM + mmap for file reading.

You can use AI to build something that is sometimes better than what someone without domain specific knowledge would develop but the gap between that and the industry expected solution is much more than 100 hours.

AI is extremely good at the things that it has many examples for. If what you are doing is novel then it is much less of a help, and it is far more likely to start hallucinating because 'I don't know' is not in the vocabulary of any AI.
I think the main issue is treating the LLM as an unrestrained black box; there's a reason nobody outside tech trusts LLMs so blindly.

The only way to make LLMs useful for now is to restrain their hallucinations as much as possible with evals, and those evals need to be very clear about what goals you're optimizing for.

See Karpathy's work on the autoresearch agent and how it carries out experiments; it might be useful for what you're doing.

> there's a reason nobody outside tech trusts LLMs so blindly.

Man, I wish this were true. I know a bunch of non-tech people who just trust random shit that ChatGPT made up.

I had an architect tell me "ask ChatGPT" when I asked her the difference between two industry-standard measures :)

We've had politicians sharing LLM crap, researchers publishing papers with hallucinated citations...

It's not just tech people.

My treating doctor (EU country) asked ChatGPT for a recommendation for my case, as she didn't have much experience with my particular condition. Guess what? The recommendation was wrong, based on a review by a real doctor from a related discipline.

Well, it was not completely wrong — it was incompetent, because my condition required additional aspects to consider, and ChatGPT provided a general recommendation, ignoring context.

My bet is that in a couple of years we will face a reality where misleading information will exceed any imaginable level, and we will need to turn back, either to books or to archived copies of old websites, to gather more reliable information.

I've seen SQL injection, and API tokens leaked to all visitors of a website :)
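For anyone unfamiliar, the SQL injection class of bug is exactly the kind of thing a vibecoded backend can ship. A sketch in Python with an in-memory SQLite table (the table, rows, and function names are illustrative):

```python
# Demonstrates why string-built SQL is dangerous and how parameter
# binding fixes it. Uses an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Safe: the driver binds the value; it can never change the query shape.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The unsafe version leaks every row; the safe one matches nothing.
```

Running `find_user_unsafe(payload)` returns all rows because the payload rewrites the WHERE clause; `find_user_safe(payload)` treats it as a literal name and matches nothing.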
Wouldn't Java always lose on latency to similarly optimized native code in, say, C(++)?
As long as you tune the JVM right it can be faster. But it's a big "if" with the tuning, and you need to write performant code.
Not necessarily. Java can be insanely performant, far more than I ever gave it credit for in the first decade of its existence. There has been a ton of optimization and you can now saturate your links even if you do fairly heavy processing. I'm still not a fan of the language but performance issues seem to be 'mostly solved'.
"Saturating your links" is rarely the goal in HFT.

You want low deterministic latency with sharp tails.

If all you care about is throughput then deep pipelines + lots of threads will get you there at the cost of latency.

Depends. Many reasons, but one is that Java has a much richer set of third-party libraries versus rolling your own. And often (not always) those libraries have been extensively optimized, proven in the real world, etc.

Then there are things like the JIT doing runtime profiling and adaptation by default.

I am curious what causes some to choose Java for HFT. From what I remember, the amount of virgin sacrifices and dances with wolves required to approach native speed in this particular area is just way too much development-time overhead.
"HFT" means different things to different people.

I've worked at places where ~5us was considered the fast path and tails were acceptable.

In my current role it's less than a microsecond packet in, packet out (excluding time to cross the bus to the NIC).

But arguably it's not true HFT today unless you're using FPGA or ASIC somewhere in your stack.

The one person who understands HFT, yeah. "True" HFT is FPGA now, and those trades are basically dead because nobody has such stupid order execution anymore, either from getting better themselves or from using former HFTs' (Virtu) new order-execution services.

So yeah, there's really no HFT anymore; it's just order execution, and some algo trades want more or less latency, which merits varying levels of squeezing latency out of systems.

There’s a big gap between reality and the influencer posts about LLMs. I agree with you that LLMs do provide some significant acceleration, but the influencers have tried to exaggerate this into unbelievable numbers.

Even non-influencers are trying to exaggerate their LLM skills as a way to get hired or raise their status on LinkedIn. I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company). Many of these posts come from people who were all in on crypto companies a few years ago.

The world really is changing but there’s a wave of influencers and trend followers trying to stake out their claims as leaders on this new frontier. They should be ignored if you want any realistic information.

I also think these exaggerated posts are causing a lot of people to miss out on the real progress that is happening. They see these obviously false exaggerations and think the opposite must be true, that LLMs don’t provide any benefit at all. This is creating a counter-wave of LLM deniers who think it’s just a fad that will be going away shortly. They’re diminishing in numbers but every LLM thread on HN attracts a few people who want to believe it’s all just temporary and we’re going back to the old ways in a couple years.

> I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company).

This always seems to be the pattern. "I vibe coded my product and shipped it in 96 hours!" OK, what's the product? Why haven't I heard of it? Why can't it replace the current software I'm using? So, you're looking for work? Why is nobody buying it?

Where is the Quicken replacement that was vibecoded and shipping today? Where are the vibecoded AAA games that are going to kill Fortnite? Where is the vibecoded Photoshop alternative? Heck, where is the vibecoded replacement for exim3 that I can deploy on my self hosted E-mail server? Where are all of the actual shipping vibecoded products that millions of users are using?

> Where are all of the actual shipping vibecoded products that millions of users are using?

Claude Code and OpenClaw; they are vibecoded. And I believe more are coming.

Yeah, I really wonder whether anyone would trust a vibe-coded version of TurboTax to do their taxes...
Day 7 of using Claude Code, here are my takes...
The “store on the chain” thing turned out to be a fad in terms of technology, even though it made a lot of money (billions and more) for some people via the crypto thing. That was less than 10 years ago, so many of us remember how similar the discourse then was to what's happening now.

With all that said, today's LLMs do seem to provide a bit more value compared to the blockchain thing; for example, OCR/PDF parsing is, I'd say, a solved problem right now thanks to LLMs, which is nice.

The magic is testing. Having locally available, high-throughput testing with a large number of test cases now unlocks more speed.

The test cases themselves become the focus; the LLM usually can't get them right.
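One cheap way to get a "large number of test cases" without writing them by hand is a property check over randomized inputs. A hedged sketch (the encode/decode pair is illustrative, not from the thread; the property is that decoding undoes encoding):

```python
# Property-based check with stdlib only: generate many random inputs
# and verify a round-trip invariant instead of enumerating cases.
import random

def encode(nums):
    return ",".join(str(n) for n in nums)

def decode(s):
    return [int(x) for x in s.split(",")] if s else []

def roundtrip_holds(trials=1000, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the check deterministic
    for _ in range(trials):
        nums = [rng.randint(-10**6, 10**6) for _ in range(rng.randint(0, 20))]
        if decode(encode(nums)) != nums:
            return False
    return True
```

A thousand generated cases like this will catch edge conditions (empty lists, negatives) that a handful of hand-written examples, whether human- or LLM-authored, often miss.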

The word "testing" is a very loaded term. Few non-professionals, and not even all professionals, fully understand what is meant by it.

Consider the following: Unit, Integration, System, UAT, Smoke, Sanity, Regression, API Testing, Performance, Load, Stress, Soak, Scalability, Reliability, Recovery, Volume Testing, White Box Testing, Mutation Testing, SAST, Code Coverage, Control Flow, Penetration Testing, Vulnerability Scanning, DAST, Compliance (GDPR/HIPAA), Usability, Accessibility (a11y), Localization (L10n), Internationalization (i18n), A/B Testing, Chaos Engineering, Fault Injection, Disaster Recovery, Negative Testing, Fuzzing, Monkey Testing, Ad-hoc, Guerilla Testing, Error Guessing, Snapshot Testing, Pixel-Perfect Testing, Compatibility Testing, Canary Testing, Installation Testing, Alpha/Beta Testing...

...and I'm certain I've missed dozens of other test approaches.

What I do now is make an MVP with the AI and get it working. Then I tear it all down and start over, but go a little slower. Maybe tear down again and go even more slowly, until I get to the point where I'm looking at everything the AI does and every line of code goes through me.
> - This is partly b/c it is good at things I'm not good at (e.g. front end design)

Everyone thinks LLMs are good at the things they are bad at. In many cases they are still just giving “plausible” code that you don’t have the experience to accurately judge.

I have a lot of frontend app dev experience. Even modern tools (Claude with Opus 4.6 and a decent Claude.md) will slip unmaintainable slop into frontend changes. I catch cases multiple times a day in code review.

Not contradicting your broader point. Indeed, I think if you’ve spent years working on any topic, you quickly realize Claude needs human guidance for production quality code in that domain.

>Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)

Absolutely. Tight feedback loops are essential to coding agents and you can’t run pipelines locally.

Isn’t that the reason why people advocate for spec-driven development instead of vibe coding?
At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code.

- So, you want to tell me that you don't review the code you write? Or that others don't review it?

- You bring up ONE example with a bottleneck that has nothing to do with programming.

Again, if you claim it doesn't make you 10x more productive, you don't know how to use AI; it is that simple. I spin up 10 agents: while 5 are working on apps, 5 do reviews and testing. I am at the end of that workflow and review the code WHILE the 10 agents keep working.

For me it is far more than 10x, but I go easy on the noobs by saying 10x instead of 20x or more.

I can't tell if this is real or a joke.
The gap is definitely real. But I think most of this thread is misdiagnosing why it exists. It's not that AI cannot produce production-quality code; it's that the mental model most people have of AI leads them to use the wrong interaction model for closing that last 20% of complexity in production codebases.

The author accidentally proved it: the moment they stopped prompting and opened Figma to actually design what they wanted, Claude nailed the implementation. The bottleneck was NEVER the code generation; it was the thinking that had to happen BEFORE generating that code. It sounds like most of you offload the thinking to AFTER the complexity has arisen, when the real pattern is frontloading the architectural thinking BEFORE a single line of code is generated.

Most of the 100-hour gap is architecture and design work that was always going to take time. AI is never going to eliminate that work if you want production grade software. But when harnessed correctly it can make you dramatically faster at the thinking itself, you just have to actually use it as a thinking partner and not just a code monkey.

They're... launching an NFT product in 2026...

I know it's not the point of this article, but really?

The more I evaluate Claude Code, the more it feels like the world's most inconsistent golfer. It can get within a few paces of the hole in often a single strike, and then it'll spend hours, days, weeks trying to nail the putt.

There's some 80/20-ness to all programming, but with the current state-of-the-art coding models, the distribution is the most extreme it's ever been.

With sufficiently advanced vibe coding, the need for a certain type of product just vanishes.

I needed it, I quickly built it myself, for myself, and for myself only.

"working" != "shipping."

When we start selling the software, asking people to pay for and depend upon our product, the rules change, substantially.

Whenever we take a class, they always use carefully curated examples to make whatever they're teaching seem absurdly simple. That's what you're seeing when folks demonstrate how "easy" some new tech is.

A couple of days ago, I visited a friend's office. He runs an Internet Tech company, that builds sites, does SEO, does hosting, provides miscellaneous tech services, etc.

He was going absolutely nuts with OpenClaw. He was demonstrating basically rewiring his entire company, with it. He was really excited.

On my way out, I quietly dropped by the desk of his #2; a competent, sober young lady that I respect a lot, and whispered "Make sure you back things up."

I think there's a lot to pick apart here, but the core premise is full of truth. This gap is real, contrary to what influencers say, and I think it comes from a lot of places, but the biggest one is that writing code is very different from architecting a product.

I've always said, the easiest part of building software is "making something work." The hardest part is building software that can sustain many iterations of development. This requires abstracting things out appropriately, which LLMs are only moderately decent at and most vibe coders are horrible at. Great software engineers can architect a system, then prompt an LLM to build out its components and create a sustainable codebase. This takes time and attention, in a world of vibe coders less and less inclined to give their vibe-coded products the attention they deserve.

I’ve had a similar experience. I’ve been vibecoding a personal kanban app for myself. Claude practically one-shotted 90% of the core functionality (create boards, lanes, cards, etc.) in a single session. But after that I’ve now spent close to 30 hours planning and iterating on the remaining features and UI/UX tweaks to make the app actually work for me, and still, it doesn’t feel "ready" yet. That’s not to say it hasn’t sped up the process considerably; it would’ve taken me hours to achieve what Claude did in the first 10 minutes.
I'm having somewhat good experiences with AI but I think that's because I'm only half-adopting it: instead of the full agentic / Ralphing / the-AI-can-do-anything way, I still do work in very small increments and review each commit. I'm not as fast as others, but I can catch issues earlier. I also can see when code is becoming a mess and stop to fix things. I mean, I don't fix them manually, I point Claude at the messy code and ask it to refactor it appropriately, but I do keep an eye to make sure Claude doesn't stray off course.

Honestly, seeing all the dumb code that it produces, calling this thing "intelligent" is rather generous...

> Late in the night most problems were fixed and I wrote a script that found everyone whose payment got stuck. I sent them money back (+ extra $1 as a ‘thank you for your patience’ note), and let them know via DMs.

(emphasis added)

Not sure if it was actually written by hand or whether AI involvement was glossed over, but as soon as giving away money was on the table, the author seems to have ditched the AI.

I’m sure someone else has probably coined the term before me (or it’s just me being dumb, often the case) but I’ve started calling this phase of SWE ‘Ricky Bobby Development’.

So many people are just shouting ‘I wanna go fast’ and completely forgetting the lessons learned over the past few decades. Something is going to crash and burn, eventually.

I say this as a daily LLM user, albeit a user with a very skeptical view of anything the LLM puts in front of me.

If you ask for something complicated this headline is more than true. But why complicate things? Keep it simple and keep it fast.

Also this article uses 'pfp' like it's a word, I can't figure out what it means.

I'm able to vibe code a simple app in 30 minutes, polish it in four hours, and I've now been enjoying it for 2 months.

I started working on one of my apps around a year ago. There was no ai CLI back then. My first prototype was done in Gemini chat. It took a week copy and pasting text between windows. But I was obsessed.

The result worked but that's just a hacked together prototype. I showed it to a few people back then and they said I should turn it into a real app.

To turn it into a full multi-user, scalable product... I'm still at it a year later. Turns out it's really hard!

I look at the comments about weekend apps, and I have some of those too, but creating a real, actually valuable, bug-free MVP takes work no matter what you do.

Sure, I can build apps way faster now. I spent months learning how to use AI. I did a refactor back in May that was a disaster. The models back then were markedly worse, and it rewrote my app, effectively destroying it. I sat at my desk for 12 hours a day for 2 weeks trying to unpick that mess.

Since December things have definitely gotten better. I can run an agent up to 8 hours unattended, testing every little thing and produce working code quite often.

But there is still a long way to go to produce quality.

Most of the reason it's taking this long is that the agent can't solve the design and infra problems on its own. I end up going down one path, realising there is another way and backtracking. If I accepted everything the ai wanted, then finishing would be impossible.

This seems more like he is bad at describing what he wants and is prompting for “a UI” and then iterating “no, not like that” for 99 hours.
I had a similar experience creating https://swiftbook.dev/learn

Used Codex for the whole project. At first I used Claude for the architecture of the backend, since that's where I usually work and have experience. The code runner and API endpoints were easy to create for the first prototype. But then it got to the UI, and here's where sh1t got real. The first UI was in React even though I had specifically told it to use Vue. The code editor and output window were a mess in terms of height, with too much space between the editor and the output window, and no matter how much time I spent prompting and explaining, it just never got it right. I got tired and opened Figma, used it to refine the design to what I wanted, shared the code it generated to GitHub, cloned it locally, then told Codex to copy the design, and finally it got it right.

Then came the hosting, where I wanted the code runner endpoint in a Docker container for security purposes, since someone could execute malicious code and take over the server if I hosted it without protection, and here it kept selecting out-of-date Docker images. I had to manually guide it again on what I needed. Finally deployed and got it working, complete with a domain name. Shared it with a few friends and they suggested some UI fixes, which took some time.

For the runner security hardening I used Deepseek and Claude to generate a list of code I could run to expose potential issues, and despite Codex saying all was fine, I was able to uncover a number of problems. Here is where it got weird: Codex started arguing with me despite being shown all the issues present. So I compiled all the issues into one document, and shared the Dockerfile and Linux seccomp config file with Claude along with that issues document. It gave me a list of fixes for the Dockerfile to help with security hardening, which I shared back with Codex, and that's when it fixed them.

Currently most of the issues are resolved, but the whole process took me a whole week and I am still not done; I was working most evenings. So I agree that you cannot create a usable product used by lots of users in 30 minutes, not unless it's a static website. It's too much constant testing and iteration.
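A hedged sketch of one extra layer a code-runner endpoint can add on top of the container itself: run submitted code in a separate, isolated interpreter process with a hard timeout. The function name and limits are illustrative; this is defense in depth, not a complete sandbox.

```python
# Run untrusted code in a child interpreter with a wall-clock cap.
# -I puts CPython in isolated mode: no user site-packages, and
# PYTHON* environment variables are ignored.
import subprocess
import sys

def run_untrusted(code: str, timeout_s: int = 5) -> str:
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,   # keep output off the server's stdout
        text=True,
        timeout=timeout_s,     # raises TimeoutExpired on runaway code
    )
    return proc.stdout
```

In practice you would layer this under the container plus a seccomp profile, rlimits, and no network, as the parent's hardening pass suggests; any one layer alone is easy to escape.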

What I really want to know is: as a software developer of 25+ years, when I use these AI tools, is it still called "vibecoding"? Or is "vibecoding" reserved for people with little or no software development background who are building apps? Genuine question.
I came across the following yesterday: "The Great Way is not difficult for those who have no preferences," a famous Zen teaching from the Hsin Hsin Ming by Sengstan

As we move from tailors to big box stores I think we have to get used to getting what we get, rather than feeling we can nitpick every single detail.

I'd also be more interested in how his 3rd, 4th or 5th vibe coded app goes.

I have not been coding for a few years now, and I was wondering if vibe coding could unstick some of my ideas. Here is my question: can I use TDD to write tests specifying what I want, and then get the LLM to write code to pass those tests?
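That workflow is viable: write the spec as executable tests first, then hand them to the LLM as the contract. A minimal Python sketch of the shape (`slugify` is a hypothetical function being specified, not anything from the thread):

```python
import re

# The spec, written first: these assertions define "done" before any
# implementation exists. Hand the failing tests to the LLM.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"
    assert slugify("Already-Slugged!") == "already-slugged"

# An implementation that satisfies the spec (the part you would ask
# the LLM to generate and iterate on until the tests pass):
def slugify(text: str) -> str:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

The catch, echoed elsewhere in this thread, is that the tests are the part the LLM is weakest at, so writing them yourself is exactly where your effort pays off.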
The 80/20 rule doesn’t go away. I am an AI true believer and I appreciate how fast we can get from nothing to 80% but the last “20%” still takes 80%+ of the time.

The old rules still mostly apply.

>> people who say they "vibecoded an app in 30 minutes" are either building simple copies of existing projects,

those are not copies; they aren't even features. Usually it's part of a tiny feature that barely works, and only in a demo.

With all the vibe coding in the world today, you still need at least 6 months full time to build a nice note-taking app.

If we are talking about something more difficult, it will take years, or you will need a team and it will still take a long time.

Anything less will result in an unusable product that works only in a demo and has 80% churn.

The speed of prototyping right now is wild.

The interesting shift seems to be that building the first version is no longer the bottleneck — distribution, UX polish and reliability are.

It already starts with BS. Yes, there are apps you can build in 30 minutes, and they are great, not buggy or crap as he says. And there are apps that need an hour, or even weeks; it depends on what you want to build. Starting off by saying that every app built in 30 minutes is crap simply shows that he did not want to think about it, is ignorant, or just wanted to push himself up by putting others down. At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code.
Look at the screenshots to understand what the author means by 'product'.
Woodworking is an analogy that I like to use in deciding how to apply coding agents. The finished product needs to be built by me, but now I can make more, and more sophisticated, jigs with the coding agents, and that in turn lets me improve both quality and quantity.
This is why I use AI on just one file at a time, as an extension of my own programming. Not as fast, but it keeps me in control.
> With AI, it’s easier to get the first 90 percent out there. This means we can spend more time on the remaining 10 percent, which means more time for craftsmanship and figuring out how to make your users happy.

EXCEPT... you've just vibe coded the first 90 percent of the product, so completing the remaining 10 percent will take WAY longer than normal because the developers have to work with a spaghetti mess.

And right there this guy has shown exactly how little people who are not software developers with experience understand about building software.

I keep seeing things that were vibe coded and thinking, "That's really impressive for something that you only spent that much time on".

To have a polished software project, you must spend time somewhat menially iterating and refining (as each type of user).

To have a polished software project, you need to have started with tests and test coverage from the start for the UI, too.

Writing tests later is not as good.

I have taken a number of projects from sloppy vibe-coded prototype to 100% test coverage. Modern LLM coding agents are good at writing just enough tests for 100% coverage.

But 100% test coverage doesn't mean that it's quality software, that it's fuzzed, or that it's formally verified.
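A tiny illustration of that gap, in Python: the test below executes every line of the function (100% coverage) yet blesses wrong behavior. The function and values are contrived for the example.

```python
def safe_div(a, b):
    if b == 0:
        return 0  # bug: silently turns division-by-zero into a value
    return a / b

# This test reaches every line of safe_div, so a coverage tool reports
# 100%, but it never questions whether returning 0 for b == 0 is
# actually the right contract for callers.
def test_safe_div():
    assert safe_div(10, 2) == 5   # happy path
    assert safe_div(1, 0) == 0    # guard path: covered, bug and all
    return True
```

Coverage proves the code ran, not that the assertions encode the right behavior; that is exactly where fuzzing, review, and manual testing still earn their keep.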

Quality software requires extensive manual testing, iteration, and revision.

I haven't even reviewed this specific project; is it possible the author developed a quality (CLI?) UI without e2e tests in that much time?

Was the process for this more like "vibe coding" or "pair programming with an LLM"?

> The "remaining 10 percent" is a difference between slop and something people enjoy.

I would say the remaining 10% is about how robust your solution is; anything associated with 'vibe' feels inherently insecure. If you can objectively prove it is not, that's 10% of time well spent.

I can't say I'm impressed by this at all. 100+ hours to build a shitty NFT app that takes one picture and a predefined prompt, then mints you a dinosaur NFT. This is the kind of thing I would've seen college students with no experience and a few cans of Red Bull slam out over a weekend for a coding jam, with more quality and effort. Have our standards really gotten so low? I don't see any craftsmanship at play here.
Instead of 10x devs you now have the super rare 100x devs. They are using AI how it should be used.
If you hear someone spouting off about how vibe coding allows for creation of killer apps in a fraction of the time/cost, just ask them if you can see what successful killer apps they’ve created with it. It’s always crickets at that point because it’s somewhere between wishful thinking and an outright lie.
Of course vibe coding is going to be a headache if you have very particular aesthetic constraints around both the code and UX, and you aren't capable of clearly and explicitly explaining those constraints (which is often hard to do for aesthetics).

There are some good points here to improve harnesses around development and deployment though, like a deployment agent should ask if there is an existing S3 bucket instead of assuming it has to set everything up. Deployment these days is unnecessarily complicated in general, IMO.

Why did this crypto grifter AI app get traction on this site?
I'm a 20-year veteran of application development consulting. Contributor level... not a talking head. I do more estimating than anyone you likely know. Consulting is cooked. I just AI-native built (not vibe coding...) an application with a buddy, another Principal-level engineer: what would cost a client $500-750k and 8-12 weeks, we did for $200 and one sprint. It's a passion project, but a highly complex mapping and navigation app with host/client multi-user synced state. Cooked.
>highly complex mapping

Curious. Can you elaborate on this a bit?

Do you have a race car or race team? Happy to onboard you, otherwise, not here.
I realize this sounds one-sided. I've also founded companies and worked across the range from startup to FAANG. Everything has changed... for the better, if you ask me.
I mean, the worst part about this is that the author also vibe coded their security. It could have been much more catastrophic if they had built a crypto wallet or trading system. But because it was NFTs, I guess the max damage was limited.

I have to say it's a little sad that so many devs think of security and cryptography the same way as library frameworks: they see it as just some black-box API to use in their projects, rather than respecting that it's a fully developed, complex field that demands expertise to avoid mistakes.

Wow. First realistic post about coding assistants that I've read on HN, I think.

[Disclaimer: that I have read. Doesn't mean there weren't others.]

Too bad it's about NFTs but we can't have everything, can we?