For me, this is a bit different. Writing code has always been the bottleneck. I get most of my joy out of solving edge cases and finding optimizations. My favorite projects are the ones where I’m handed an existing codebase with the task, “When Mars and Venus are opposite each other, the code gets this weird bug that we can’t reproduce.”
When a project requires me to start from scratch, it takes me a lot longer than most other people. Once I’ve thought of the architecture, I get bored with writing the implementation.
AI has made this _a lot_ easier for me.
I think the engineers who thrive will be the ones who know when to use which tool. This was true before AI; AI is just another tool allowing more people to thrive.
I'm not afraid of breaking stuff because it only affects a small set of users. For the code I write at my professional job, though, there's no way I would go that fast, because I would impact millions of users.
It is insane that companies think they can replace teams wholesale while maintaining quality.
Dunno man. Ideas alone aren't worth anything [0] and execution is everything [1], but good ideas and great execution will never go out of style regardless of how much competition is out there. I'm of the opinion that even if 10% of the population is now capable of creating a side project, there's still the same relatively fixed number of people capable of making a good side project, and even fewer who will see it through to a real product. Nothing has really changed in the aggregate. It's like architecture: there are always improvements in materials, tools, and processes, and Claude and Codex can provide more laborers for almost free, but most people are still gonna be building uninspired McMansions instead of the Guggenheim.
So really, they are comparatively cheap. I, for one, have hundreds of ideas, but always lacked the time to execute on 5% of them.
- A todo app better than the existing ones
- A todo app with these 3 features
- A todo app with these 3 features, here's how the UI would look
I have tens of ideas, but maybe 1-3 that I believe have a meaningful chance of becoming successful and generating income ($20k annually or more) with great execution. I find it hard to come up with ideas that have a fairly clear path to that kind of success.
Why do you look at it that way? Why does anyone besides you have to care about what you do?
Just build something for yourself. You will always have things you'd like to build for yourself. You will be in competition with yourself only and your target audience will be yourself.
Market forces do not apply to side-projects, because that's what people do for fun.
Just because there are chess computers doesn't mean that no one plays chess at home anymore.
This is just a correction of something that managed to remain in an invalid state for an impressively long time.
> Why? Because the bottleneck was never typing code.
Were you also shipping side projects every 2 months before AI?
If not, this comment just reads like cognitive dissonance. Your core claim is that AI has enabled you to ship 7 projects in 12 months, which presumably was not something you did pre-AI, right? So the AI is helping ship projects faster?
I agree that AI is not a panacea and a skilled developer is required. I also agree that it can become a trap to produce a lot of bad code if you’re not paying attention (something a lot of companies are going to discover in 2026, IMO).
But I don’t know how you can claim AI isn’t helping you ship faster right after telling us AI is helping you ship faster.
I can guarantee you this... the story is not absolute. Depending on who you are and what you need to work on, dev time could be slower, the same, or faster for you. But what we don't know is the proportion. Is it faster for 60% of people? 70%? 80%?
This is something we don't know for sure yet. But I suspect your instinct is completely wrong and that 90% of people are overall faster... much faster. I do agree that it produces more bugs and more maintenance hurdles, but it is that much faster.
The thing is, LLMs can bug squash too. And they are often much faster at it than humans. My agentic setup just reads the incoming Slack messages on the issue, makes a ticket, fixes the code, and creates a PR in one shot.
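A minimal sketch of what such a pipeline might look like, assuming everything here is a hypothetical stand-in: none of these helpers are real Slack, LLM, or git-host APIs; each would wrap an actual client in practice.

```python
# Hypothetical sketch of an agentic bug-squash pipeline.
# Every function is a stub standing in for a real integration
# (Slack client, issue tracker, LLM call, git host).

def read_slack_issue(message: str) -> dict:
    """Extract a bug report from a Slack message (stubbed)."""
    return {"title": message.splitlines()[0], "body": message}

def create_ticket(issue: dict) -> str:
    """File a ticket and return its ID (stubbed)."""
    return f"TICKET-{abs(hash(issue['title'])) % 1000}"

def propose_fix(issue: dict) -> str:
    """Ask an LLM for a patch (stubbed)."""
    return f"# patch addressing: {issue['title']}"

def open_pr(ticket_id: str, patch: str) -> dict:
    """Push a branch and open a pull request (stubbed)."""
    return {"ticket": ticket_id, "patch": patch, "status": "open"}

def bug_squash(message: str) -> dict:
    """The one-shot flow: message -> ticket -> fix -> PR."""
    issue = read_slack_issue(message)
    ticket = create_ticket(issue)
    patch = propose_fix(issue)
    return open_pr(ticket, patch)

pr = bug_squash("Login fails when username contains a '+'\nSeen on prod since Tuesday.")
print(pr["status"])  # the PR still waits for human review
```

The important design point is the last step: the pipeline opens a PR rather than merging, so a human stays in the loop.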
Since managing dependencies is one of the major maintenance burdens in some of my projects (updating them, keeping their APIs in mind, complexity due to overgeneralization), this can help quite a lot.
See also https://www.karl.berlin/simplicity-by-llm.html for some of my thoughts regarding this.
Anything else? I'll struggle and grow as a developer, thanks. And before anyone says "but there are architecture decisions etc. so you still grow"... those existed anyways. If I have to practice, I'll practice micro AND macro skills.
I think it's better to ask the AI from time to time whether the codebase could be cleaned up and simplified. Even better if you use a different AI from the one you used to build the project.
As you mentioned, scope definition and constraints play a major role, but it also pays off to refine the output rather than settle for the first slop result. It helps to have a very clear mental model of feature constraints that doesn't fall prey to scope creep.
Were you able to fairly split test?
Why will the 8th project still have those things as the bottleneck given your experience?
Also if you're not seeing any real gains in productivity, why are you using AI for your side projects and wasting tokens/money?
One area where it can help greatly (and many may not like this fact) is that the cost of adding tests also drops to near zero, and that doesn't work against us, because tests are typically far more localized and aren't the maintenance burden that production code is. And some of us were lazy and didn't like writing many tests. Or take generative testing / fuzz testing: writing the proper generators or fuzzers wasn't always trivial. Now it could become much easier.
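For instance, a generative round-trip test used to take real effort to write by hand. A minimal sketch with only the standard library (the run-length codec here is just a stand-in for whatever code you'd actually want to fuzz):

```python
import random
from itertools import groupby

def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (char, count) pairs."""
    return [(ch, len(list(run))) for ch, run in groupby(s)]

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Expand (char, count) pairs back into a string."""
    return "".join(ch * n for ch, n in pairs)

def random_string(rng: random.Random, max_len: int = 50) -> str:
    # Small alphabet biases toward repeated runs, so the
    # encoder actually gets exercised on interesting inputs.
    return "".join(rng.choice("abc") for _ in range(rng.randrange(max_len)))

rng = random.Random(0)  # fixed seed for reproducibility
for _ in range(1000):
    s = random_string(rng)
    assert rle_decode(rle_encode(s)) == s, f"round-trip failed for {s!r}"
print("1000 generated cases passed")
```

The property (decode of encode is the identity) is trivial to state; the tedious part was always the generator, which is exactly the kind of boilerplate a model can draft for you.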
So we may be able to use the AI slop to help us have more correct code. Same for debugging edge cases: models can totally help (I've had cases as simple as a cryptic error message I didn't recognize: I passed it plus the code to an LLM and it could tell me what the error was).
But yup it's a given that, as you put it, when the marginal cost of adding complexity drops to near zero, we're opening a whole new can of worms.
TFA is AI slop but fundamentally it may not be incorrect: the gigantic amount of generated sloppy code needs to be kept in check and that's where engineering is going to kick in.