AI Made Writing Code Easier. It Made Being an Engineer Harder
https://www.ivanturkovic.com/2026/02/25/ai-made-writing-code-easier-engineering-harder/

Looks like something AI would say, regardless of how it was actually written.
Admittedly, it was so long and basic that I stopped halfway.
That's probably just default settings though - I asked it to rewrite, and as far as I can see most of the tell-tale signs are gone (apart from the em-dash).
A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP LAMP stack). He wanted to share it with other doctors, but wasn't sure if it was 'secure' or not. He asked me to read the code, visit the site, and provide my opinion.
The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root, and he had no index file. So anyone who navigated to the website saw the backups, named Jan-2026.backup, etc., and could download them.
The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.
He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
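For what it's worth, on an Apache/LAMP host like that one, a couple of lines of .htaccess would have closed both holes: the directory listing that exposed the backups, and the backup files being servable at all. A minimal sketch, assuming a stock Apache 2.4 where .htaccess overrides are allowed (the .backup extension comes from the story above):

    # Don't auto-generate a file listing when no index file exists
    Options -Indexes

    # Refuse to serve backup archives even if someone guesses the name
    <FilesMatch "\.(backup|zip|sql)$">
        Require all denied
    </FilesMatch>

(The real fix, of course, is to keep backups out of the web root entirely.)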
1) I guess I am not included in the set named "most software engineers." 2) If the title is "Software Engineer," I think I should be engineering, not coding.
This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"
Of course I don't think it's a binary decision.
Although I am more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.
One concrete example of this realization was when I was researching how to optimize my Claude Code environment with agents, skills, etc. I read a lot of technical documents on how these supplemental plugins work and how to create them. After an hour of reading through all this, I realized I could just ask Claude to optimize the environment for me, given the project context. So I did, and it was able to point out plugins, skills, and agents that I could install or create. I gave it permission to create them and it all worked out.
This was a case where I should not think at a technically deeper level, but at a more "meta" level: define the project well enough for Claude to figure out how to optimize the environment. Whether that gave real gains is another question, of course. But I have anecdotally observed faster results and less token usage due to context caching and slightly more tools-directed prompts.
No jobs get easier with automation - they always move a step up in abstraction level.
An accountant who was super proficient at adding numbers could no longer rely on those skills once the calculator was invented.
These, surely, are the skills they always needed? Anyone who didn't have these skills was little more than a human chatgpt already, receiving prompts and simply presenting the results to someone for evaluation.
What I never enjoyed was looking up the cumbersome details of a framework, a programming language, or an API. It's really BORING to figure out that tool X calls its paging params page and pageSize while tool Y calls them offset and limit. Many other examples could be added. Now I feel at home in so many new programming languages and frameworks that I can really ship ideas. AI really helps with all the boring stuff.
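To make the pagination example concrete, the boring work is usually a translation as trivial as this Python sketch (both parameter conventions are hypothetical stand-ins for tools X and Y):

    # Tool X pages with page/pageSize (1-based); tool Y wants offset/limit.
    def to_offset_limit(page: int, page_size: int) -> dict:
        """Translate 1-based page/pageSize params into offset/limit params."""
        return {"offset": (page - 1) * page_size, "limit": page_size}

    assert to_offset_limit(page=3, page_size=20) == {"offset": 40, "limit": 20}

Trivial, but multiply it by every tool in a stack and it's exactly the tedium an LLM can absorb.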
The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.
Maybe this is not entirely true yet, but it most likely will be in the near future.
In the past, I would give them an assignment and they would take a few days to return with the implementation. I was able to see them struggling; they would learn, communicate, get frustrated by their own solution, then iterate.
Today, there are two kinds: 1) the ones who take a marginally smaller amount of time because they're busy learning, testing, and self-reviewing, and 2) the ones who watch Twitch or YouTube videos while Claude does the job and come to me after two hours with "done, what's next" while someone has to comb through the mess.
Leadership might see #2 and think they’re better, faster. But they are just a fucking boat anchor that drags down the whole team while providing nothing more than a shitty interface to an LLM in return.
Interestingly, most jobs don't incentivize working harder or smarter, because it just leads to more work, and then burn-out.
[1] https://en.wikipedia.org/wiki/Automation#Paradox_of_automati...
I stopped here. Was this written by an LLM? This sentence in particular reads exactly like the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original article linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
So for me, being able to have AI write certain things extremely fast, with me just doing voice-to-text with my specific approach, is amazing.
I am all in on everything AI and have a Discord server just for openclaw and specialized per-repo assistants. It really feels like, when I'm busy, I can just throw it an issue tracker number for things.
Then I will SSH in via VS Code or regular ssh, which forwards my SSH key from 1Password. My agents have read-only repo access, and I can push only when I SSH in. Super secure. Sorry for the tangent from the article, but I have always loved coding, and now I love it even more.
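(For anyone curious, the 1Password part is just OpenSSH's IdentityAgent option plus agent forwarding. A sketch of the relevant ~/.ssh/config, with the host made up and the socket path assuming a default Linux install:)

    Host devbox
        # Hypothetical host; substitute your own
        HostName devbox.example.com
        # Forward the local agent so keys never leave the laptop
        ForwardAgent yes
        # Point OpenSSH at 1Password's SSH agent socket
        IdentityAgent ~/.1password/agent.sock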
> That is not an upgrade. That is a career identity crisis.
This is not X. It is Y.
> The trap is ...
> This gap matters ...
> This is not empowerment ...
> This is not a minor adjustment...
Your typical AI slop rhetorical phrasing.
Phrases like: "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", "workload creep"
These sound analytical but are lightly defined. They function as named concepts without rigorous definition or empirical grounding.
There might be some good arguments in the article, but AI slop remains AI slop.
In any case, I think we should start treating the majority of code as a commodity that will be thrown away sooner or later.
I wrote something about this here: https://chatbotkit.com/reflections/most-code-deserves-to-die - it was inspired by another conversation on HN.
It never was
LLMs can accelerate you if you use best practices and focus on provability and quality, but if you produce slop, LLMs will help you produce slop faster.
> ... most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Actually surprised none of the other comments have picked up on this, as I don't think it's especially about AI. The periods of my career when I've been actually writing code and solving complicated technical problems have been the most rewarding times in my life, and I'd frequently work on stuff outside work time just because I enjoyed it so much. But at the other times, when I was just maintaining other people's code or working on really simple problems with cookie-cutter solutions, I got so demotivated that it was hard to even get started each day. 100%, I do this job for the challenges, not to just spend my days babysitting a fancy code generation tool.
A SWE who bases their entire identity and career around only writing code is not an engineer - they are a code monkey.
The entire point of hiring a Software ENGINEER is to help translate business requirements into technical requirements, and then implement the technical requirements into a tangible feature or product.
The only reason companies buy software is that the alternative means building in-house, and for most industries software is a cost center, not a revenue generator.
I don't pay (US-specific) 200K-400K TC for code monkeys; I pay that TC for Engineers.
And this does a disservice to the large portion of SWEs and former SWEs (like me) who have been in the industry because we are customer-outcome driven (how do we use code to solve a tangible customer need) and not here to write pretty code.
it's all so fucking tiresome
THE MARKET WILL FILL THAT VOID
IT DOES NOT MAKE IT TRUE
Also, check out the dude's linkedin: https://www.linkedin.com/in/ivanturkovic/
Why? Because the bottleneck was never typing code. It was always understanding the problem, making architectural decisions, debugging edge cases, and most importantly - knowing what NOT to build.
AI made me faster at producing code, but it also made me produce MORE code, which means more surface area for bugs, more maintenance burden, more complexity to reason about. The discipline of "write less code" is harder now because writing code costs almost nothing.
The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.
I think that from time to time it's better to ask the AI whether the codebase could be cleaned up and simplified. Much better if you use a different AI than the one you used to build the project.
One area where it can help greatly (and many may not like that fact) is that the cost of adding tests also drops to near zero, and that doesn't work against us, because tests are typically far more localized and aren't the maintenance burden production code is. And some of us were lazy and didn't like writing too many tests. Or take generative testing / fuzz testing: writing the proper generators or fuzzers wasn't always trivial. Now it could become much easier.
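As a small illustration of how cheap that has become, here is the shape of a property-based test using Python's hypothesis library; the run-length codec is a toy stand-in for whatever code a model might generate:

    # pip install hypothesis
    from hypothesis import given, strategies as st

    def encode(s: str) -> list[tuple[str, int]]:
        """Toy run-length encoder standing in for generated code under test."""
        out: list[tuple[str, int]] = []
        for ch in s:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    def decode(pairs: list[tuple[str, int]]) -> str:
        return "".join(ch * n for ch, n in pairs)

    @given(st.text())  # hypothesis generates the inputs; we only state the invariant
    def test_roundtrip(s: str) -> None:
        assert decode(encode(s)) == s

Writing `encode` by hand is work; stating the round-trip invariant is one line, and the generator comes for free.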
So we may be able to use the AI slop to help us get more correct code. Same for debugging edge cases: models can totally help (I've had a case as simple as a cryptic error message I didn't recognize: I passed it plus the code to an LLM and it could tell me what the error was).
But yup it's a given that, as you put it, when the marginal cost of adding complexity drops to near zero, we're opening a whole new can of worms.
TFA is AI slop but fundamentally it may not be incorrect: the gigantic amount of generated sloppy code needs to be kept in check and that's where engineering is going to kick in.
Another little thing that resonated was a tweet that said "some will use it to learn everything and some so that they don't have to learn anything". Of course it's not really a hard truth. It's questionable how much you can learn without really getting your hands dirty. But I do think people treating it as a tool that helps them and/or makes them better will profit more than people looking to cut corners.