At work we use one of the less popular solutions, available both as a VS Code plugin and as a Claude Code-like terminal tool. The code I work on is mostly Golang, plus some older C++ that uses a lot of custom libraries. For Golang, the AI does pretty well, especially on simple tasks like implementing a REST API, so I'd estimate the upper bound of the productivity gain at maybe 3x for the trivial code.
Since I'm still responsible for the result, I can't just YOLO and commit the code, so whenever I work on simple things, I'm effectively a code reviewer for the majority of my time. That's probably what keeps me from going above 3x: after each review session I still need a break, so I go get coffee or something. It's still much faster than writing all the code manually, but the mental load is higher, which means more breaks.
One nontrivial consequence is that expectations adapt to the new performance, so it's not like we get more free time because we produce code faster. Not at all.
For the C++ codebase, though, in the rare cases when I need to change something there, it's pretty much business as usual: I don't trust the code the AI generates and would rather write what I need myself.
Now, for personal projects, it's a completely different story. For the past few months, I haven't written any code for my personal projects manually, except for maybe a few trivial changes. I don't review the generated code either; I just make sure it works as I expect. Since I'm probably too lazy to configure a proper multi-agent workflow, what works great for me is this: first ask Claude for a plan, then copy-paste the plan to Codex, feed its feedback back to Claude, and repeat until they agree; this process also helps me stay in the loop. Then, once Claude implements the plan and makes a commit, I copy-paste the commit sha to Codex and ask it to review, and it very often finds real issues I probably would've missed.
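For anyone curious what that loop looks like if you script it instead of copy-pasting by hand, here's a minimal sketch. The two agent functions are hypothetical placeholders, not real tool interfaces: you'd swap in whatever CLI or API calls your setup actually uses.

```python
# Sketch of the plan/critique loop described above.
# ask_claude and ask_codex are hypothetical stand-ins (any callable
# that takes a prompt string and returns the agent's reply).

def refine_plan(ask_claude, ask_codex, task, max_rounds=5):
    """Iterate plan -> critique -> revision until the reviewer approves."""
    plan = ask_claude(f"Write an implementation plan for: {task}")
    for _ in range(max_rounds):
        feedback = ask_codex(
            f"Review this plan. Reply APPROVED if it is sound:\n{plan}"
        )
        if "APPROVED" in feedback:
            return plan
        plan = ask_claude(
            f"Revise the plan to address this feedback:\n{feedback}\n\nPlan:\n{plan}"
        )
    return plan  # no agreement after max_rounds; review it manually
```

The `max_rounds` cap matters: without it, two models that keep nitpicking each other's output would loop forever, and in practice you want to step in yourself at that point anyway.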
It's hard to estimate the productivity gain of this new process, mostly because most of the projects I've worked on these past few months are ones I would never have started without Claude. But for those I would have started anyway, I think I'm somewhere around 4-5x compared to writing the code manually.
One important point here is that, both at work and at home, it's never a "single prompt" result. I think about the high level design and have an understanding of how things will work before I start talking to the agent. I don't think the current state of technology allows developing things in one shot, and I'm not sure this will change soon.
My overall attitude towards AI code generation is quite positive so far: I think, for me, the joy of having something working so quickly, and the fact that it follows my design, outweigh the fact that I didn't actually write the code.
One very real consequence of that is that I miss writing code by hand. I've started going through older Advent of Code years where I still have some unsolved days, and even solving some LeetCode problems (only interesting ones!), just for the feeling of writing code the way we all did before.