Honestly, I think it will become a better IntelliSense but not much more. I'm a little excited, though: so many people are buying into this and generating so much bad code and bad architecture that someone will inevitably have to fix it all after the hype dies down and the rug is pulled. I think there will continue to be employment opportunities.
We also have the ceremonial layers of certain kinds of corporate architecture, where nothing actually happens but the steps must exist to match the holy box-box-cylinder diagram. Ceremonial input massaging here, ceremonial data transformation over there, duplicated error checking... if it's easy for the LLM to do, maybe we shouldn't have been doing it everywhere in the first place.
I don't know that I've ever met a developer who wants to be writing endless pools of trivial boilerplate instead of meaningful code. Even the people at work who say they don't want to deal with ambiguity and high-level design, and just want to be told what to do, pretty clearly don't want endless drudgery either.
When I hear that most code is trivial, I read that as a language-design or framework issue: the tools are making things harder than they should be.
Throwing AI or code generators at the problem just to claim it's fixed is frustrating.
This was one of my thoughts too. If the pain of using bad frameworks and clunky languages can be mitigated by AI, it seems like the popular but ugly/verbose languages will win out, since there's almost no point to better-designed languages and frameworks. I would rather have a good language/framework/etc. where it's just as easy to write the code directly: similar implementation time to an LLM prompt, but more deterministic.
If people don't feel the pain of AI slop, why move to greener pastures? It almost encourages things not to improve at the code level.
Just as an example, I have "service" functions. They're incredibly simple: a higher-order function where I can inject the DB handler, user permissions, config, etc. Every time I write one of these, I have to import the ServiceDependencies type and declare which dependencies the service needs. I now spend close to zero time on that and all my time on the service logic. I don't see a downside to this.
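For anyone who hasn't seen the pattern, here's a minimal sketch of what such a service function might look like. TypeScript is assumed; only the ServiceDependencies name comes from the comment above, and the dependency shapes and the example service are hypothetical:

```typescript
// Hypothetical placeholder dependency types for the sketch.
interface Db { query(sql: string, params?: unknown[]): Promise<unknown[]> }
interface Permissions { can(action: string): boolean }
interface Config { [key: string]: string }

// The type the commenter imports in every service file.
interface ServiceDependencies {
  db: Db;
  permissions: Permissions;
  config: Config;
}

// A service is a higher-order function: inject the dependencies once,
// get back the function that does the real work. The destructuring of
// ServiceDependencies is the repetitive part a completion engine fills in.
const deactivateUser =
  ({ db, permissions }: ServiceDependencies) =>
  async (userId: string): Promise<void> => {
    if (!permissions.can("deactivate-user")) {
      throw new Error("forbidden");
    }
    await db.query("UPDATE users SET active = false WHERE id = $1", [userId]);
  };
```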
Most of my business logic is done in raw SQL, which can be complex, but the autocomplete often helps there too. It's not helping me figure out the logic, it's simply cutting down on my typing. I don't know how anyone could be offered "do you want to type significantly fewer characters to get the same thing done?" and say "no thanks". The AI is almost NEVER coding for me, it's just typing for me, and it's awesome.
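A hypothetical illustration of that "typing, not coding" distinction, with an invented schema:

```typescript
interface Db { query(sql: string, params?: unknown[]): Promise<unknown[]> }

// The JOIN plumbing, the column list, and the GROUP BY echoing the SELECT
// are mechanical to type and are what completion tends to fill in; the
// WHERE clause is the actual business decision, and that stays human.
const overdueTotals = (db: Db, cutoff: string) =>
  db.query(
    `SELECT c.id, c.name, SUM(i.amount) AS total_due
       FROM customers c
       JOIN invoices i ON i.customer_id = c.id
      WHERE i.due_date < $1 AND NOT i.paid
      GROUP BY c.id, c.name
      ORDER BY total_due DESC`,
    [cutoff],
  );
```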
I don't care how lean your system is; there will at least be repetition in how you declare things. There will be imports, there will be dependencies. You can remove 90% of that repetitive work for almost no cost...
I've tried to use ChatGPT to "code for me", and I agree with you that it's not a good option if you're trying to do anything remotely complex and want to avoid bugs. I never do this. But integrated code suggestions (with Supermaven, NOT Copilot) are incredibly beneficial, and maybe you should just try it instead of coming up with theoretical arguments. I was a non-believer once too.
Regardless, I do wonder how accurate those success reports are. Do people take LLM output, use it verbatim, not notice subtle bugs, and report that as success?