Hacker News new | past | comments | ask | show | jobs | submit

AI Made Writing Code Easier. It Made Being an Engineer Harder

https://www.ivanturkovic.com/2026/02/25/ai-made-writing-code-easier-engineering-harder/
It's worth mentioning that this essay shows signs of being either partially AI-generated or heavily edited through an LLM. Some of the telltale patterns are there ("It's not X, it's Y"), and the blog going from nearly zero activity between 2015 and 2025 to an explosion of posts and text output since then also raises an eyebrow.
> you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.

Looks like something AI would say, regardless of how it was actually written.

It's really long-winded. The entire thing could have been a couple of bullet points.

Admittedly, it was so long and basic that I stopped halfway.

Article definitely has an AI writing style
Why is AI such a bad writer? Phrasing like this feels like reading Fox News.
I saw someone point out something like: AI makes every sentence count. There's no building up, no allowing a point to breathe. Every sentence is an axiom to get the meaning across, and it's so grating.
It's an interesting way to view it, because what happens is in fact likely the opposite: the AI is asked to expand a few bullet points into a blog post.
Maybe that's why the writing feels so terrible. The AI is attempting to maximize every sentence while simultaneously expanding on just a few actually meaningful points. And the net result of that dissonance is this rage-inducing vapidity. It's the written equivalent of the Uncanny Valley.
I think it has gotten past the uncanny valley, really - it does read like a human, just a very attention-seeking one, like your typical LinkedIn salesman.

That's probably just default settings though - I asked it to rewrite, and most of the tell-tale signs are gone as far as I can see (apart from the em-dashes)

https://chatgpt.com/s/t_69a46b290fb08191ad3bd93066b8cad4

Making fluff sound grandiose is probably what makes it so grating.
To be honest, it still feels crazy that AI is a writer at all. But yeah, not a good one.
One problem I have seen IRL is AI deployment mistakes, and IMO vibe coders need an IT/dev father-figure type to avoid these simple mistakes. Here is one example:

A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP LAMP stack). He wanted to share it with other doctors, but wasn't sure if it was 'secure' or not. He asked me to read the code, visit the site, and provide my opinion.

The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root and he had no index file. So anyone who navigated to the website saw the backups named Jan-2026.backup, etc. and could download them.

The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.

He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
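For what it's worth, the specific failure in this story (backup archives sitting in a public web root) is mechanically easy to scan for. Here is a minimal sketch, with a hypothetical `find_exposed_files` helper and an extension list of my own choosing; the real fix, of course, is to keep backups outside the web root entirely and make sure an index file exists (or directory listing is disabled):

```python
import os

# Extensions that should never be downloadable from a public web root.
# This list is my own guess, not exhaustive.
RISKY_EXTENSIONS = (".zip", ".tar", ".gz", ".sql", ".backup", ".bak", ".env")

def find_exposed_files(web_root):
    """Return files under web_root whose names suggest backups or secrets."""
    exposed = []
    for dirpath, _dirnames, filenames in os.walk(web_root):
        for name in filenames:
            if name.lower().endswith(RISKY_EXTENSIONS):
                exposed.append(os.path.join(dirpath, name))
    return sorted(exposed)
```

Running something like this against the document root before each deploy would have flagged `Jan-2026.backup` immediately.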

> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

1) I guess I am not included in the set named "most software engineers." 2) If the title is "Software Engineer," I think I should be engineering, not coding.

This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"

Of course I don't think it's a binary decision.

Although I am more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.

It's a different skillset and way of thinking. Engineers tend to think vertically deep on technical problems. With AI, you have to think horizontally broad and vertically up on the architectural problem. The trick is to be comfortable relegating the details to AI.

One concrete example of this realization was when I was researching how to optimize my Claude Code environment with agents, skills, etc. I read a lot of technical documentation on how these supplemental plugins work and how to create them. After an hour of reading, I realized I could just ask Claude to optimize the environment for me given the project context. So I did, and it was able to point out plugins, skills, and agents that I could install or create. I gave it permission to create them, and it all worked out.

This was a case where I should not think technically deeper, but at a more "meta" level: define the project well enough for Claude to figure out how to optimize the environment. Whether that gave real gains is another question, of course. But I have anecdotally observed faster results and less token usage due to context caching and slightly more tools-directed prompts.

The post itself is 100% AI written https://www.pangram.com/history/6572a5c4-f548-46e2-977d-9813...
The post is superficially right. It made being an engineer harder because it took away the easy parts that anyone can do and forces engineers to think about the hard ones.

No jobs get easier with automation - they always move a step up in abstraction level.

An accountant who was super proficient at adding numbers could no longer rely on those skills once the calculator was invented.

It might be worth mentioning studies that show the lack of productivity gains from LLM usage. These posts take it as an unequivocal given. Management might still have the expectations that certain tasks are faster. But they aren’t always connected to reality because they’re not thinking as engineers.
There's nothing new about this pattern. When the tractor was invented, the farmer didn't get to knock off early. He just started producing 10x more. Then the tractors got bigger and more powerful, and the things you used them with got more sophisticated too and suddenly you're producing 100x more.
"the skills that the new engineering landscape actually requires: system design, architectural thinking, product reasoning, security awareness, and the ability to critically evaluate code they did not write."

These, surely, are the skills they always needed? Anyone who didn't have these skills was little more than a human chatgpt already, receiving prompts and simply presenting the results to someone for evaluation.

AI made programming A LOT MORE FUN for me.

What I never enjoyed was looking up the cumbersome details of a framework, a programming language, or an API. It's really BORING to figure out that tool X calls its paging params page and pageSize while Y uses offset and limit. Many other examples could be added. For me, I feel at home in so many new programming languages and frameworks that I can really ship ideas. AI really helps with all the boring stuff.
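The page/pageSize vs. offset/limit mismatch is exactly the kind of thin adapter that used to require a documentation dig. A hypothetical sketch of such a normalizer, where the names, defaults, and the 1-based page convention are my own assumptions for illustration, not any particular tool's API:

```python
def normalize_paging(params):
    """Map page/pageSize ("tool X" style) or offset/limit ("tool Y" style)
    onto a single (offset, limit) pair.

    Assumes 1-based page numbering and a default page size of 20.
    """
    if "page" in params:
        size = params.get("pageSize", 20)
        return ((params["page"] - 1) * size, size)
    return (params.get("offset", 0), params.get("limit", 20))
```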

This article is obviously written by AI, and it's just painful for me to read ChatGPT's writing style day in and day out.
Regarding expanding role:

The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.

AI may have sped up coding, but it also exposed that real engineering is about judgment, trade-offs, and responsibility—not just producing code.
Yeah but a manager can do those things. You don't need an engineer for that.

Maybe this is not entirely true yet, but it most likely will be in the near future.

For me, one thing that completely changed almost overnight was dealing with junior developers.

In the past, I would give them an assignment and they would take a few days to return with the implementation. I was able to see them struggling, they would learn, they would communicate and get frustrated by their own solution, then iterate.

Today, there are two kinds: 1) the ones who take a marginally smaller amount of time because they’re busy learning, testing and self reviewing, and 2) the ones who watch Twitch or Youtube videos while Claude does the job and come to me after two hours with “done, what’s next” while someone has to comb through the mess.

Leadership might see #2 and think they’re better, faster. But they are just a fucking boat anchor that drags down the whole team while providing nothing more than a shitty interface to an LLM in return.

The author introduces the term "Supervision Paradox", but IMHO this is simply one instance of the "Automation Paradox" [1], which has been haunting me since I started working in IT.

Interestingly, most jobs don't incentivize working harder or smarter, because it just leads to more work, and then burn-out.

[1] https://en.wikipedia.org/wiki/Automation#Paradox_of_automati...

Phrases like: "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", "workload creep" are just AI slop.
You seem to be right. The author is pumping out one such article per day. I think I've spent more time in forming my comment than they did in generating the article. Oh well :)
> One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer.

I stopped here. Was this written by an LLM? This sentence in particular reads exactly as if the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original essay linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.

My trust is vacated.

This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.

In writing code, as in writing poetry, the mechanical labor is 5% writing, 45% editing, and 50% reading. But the only thing that makes it yours is you.
I've always been motivated by making simple solid foundations in my code the fastest way possible.

So for me, being able to have AI write certain things extremely fast, with me just doing voice-to-text with my specific approach, is amazing.

I am all in on everything AI and have a Discord server just for openclaw and specialized per-repo assistants. When I'm busy, it really feels like I can just throw it an issue-tracker number.

Then I will SSH in via VS Code or regular SSH, which forwards my SSH key from 1Password. My agents have read-only repo access, and I can push only when I SSH in. Super secure. Sorry for the tangent from the article, but I have always loved coding, and now I love it even more.

> This is not a contradiction. It is the reality ...

> That is not an upgrade. That is a career identity crisis.

This is not X. It is Y.

> The trap is ...

> This gap matters ...

> This is not empowerment ...

> This is not a minor adjustment...

Your typical AI slop rhetorical phrasing.

Phrases like: "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", "workload creep"

These sound analytical but are lightly defined. They function as named concepts without rigorous definition or empirical grounding.

There might be some good arguments in the article, but AI slop remains AI slop.

N=1. I'm not convinced yet.
AI made it so individual developers can outsource their work, not just companies. Maybe there are some lessons to be learned from companies that manage outsourced work successfully.
I'm not sure if it's made engineering harder, but it's certainly changing what it means to be a good engineer. It's no longer just about writing code. Now it's increasingly about having good taste, making the right decisions, and sometimes just being blessed with the Midas touch.

In any case, I think we should start treating the majority of code as a commodity that will be thrown away sooner or later.

I wrote something about this here: https://chatbotkit.com/reflections/most-code-deserves-to-die - it was inspired by another conversation on HN.

> It's no longer just about writing code

It never was

But that was a large part of it. When it was difficult to write correct, well-structured code, that was a major determinant in who would get a job as a developer - ability to design and test came second. Now that generating code is automatic, it's the rest that becomes important. That works well for those of us who could do all of those things, but hurts those whose only ability was to generate code.
And now even more so.
The role of an engineer is to produce software that provably works. If you are producing more bugs, it's because you're skipping provability. AI is also really good at writing tests and doing test-driven development; you can get 100% branch coverage. You can use a secondary LLM to review the work and make sure everything follows best practices.

LLMs can accelerate you if you use best practices and focus on provability and quality, but if you produce slop, LLMs will help you produce slop faster.
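To make the branch-coverage point concrete, here is a toy sketch (my own example, not the commenter's): a single `if` with a compound condition needs several test cases before every branch is exercised, which is exactly the kind of mechanical case generation an LLM handles well and a human can still review. With the pytest-cov plugin, this is what `pytest --cov --cov-branch` measures:

```python
def apply_discount(total, is_member):
    """Members get 10% off orders over 100; everyone else pays full price."""
    if is_member and total > 100:
        return round(total * 0.9, 2)
    return total

# Line coverage alone is satisfied by the first two tests;
# branch coverage also demands the member-with-small-order case.
def test_member_discount():
    assert apply_discount(200, True) == 180.0

def test_non_member_full_price():
    assert apply_discount(200, False) == 200

def test_member_small_order():
    assert apply_discount(50, True) == 50
```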

This section very much resonated with me, even though I still haven't tried any of the AI tools:

... most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

Actually surprised none of the other comments have picked up on this, as I don't think it's especially about AI. But the periods of my career when I've been actually writing code and solving complicated technical problems have been the most rewarding times in my life, and I'd frequently work on stuff outside work time just because I enjoyed it so much. But the other times when I was just maintaining other people's code, or working on really simple problems with cookie-cutter solutions, I get so demotivated that it's hard to even get started each day. 100%, I do this job for the challenges, not to just spend my days babysitting a fancy code generation tool.

I feel like there's a market out there for a weekly newsletter that summarises all the AI takes like this and collects the one meaningful snippet of insight (if any)
> ...most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it...

A SWE who bases their entire identity and career around only writing code is not an engineer - they are a code monkey.

The entire point of hiring a Software ENGINEER is to help translate business requirements into technical requirements, and then implement the technical requirements into a tangible feature or product.

The only reason companies buy software is because the alternative means building in-house, and for most industries software is a cost-center not a revenue generator.

I don't pay (US specific) 200K-400K TCs for code monkeys, I pay that TC for Engineers.

And this does a disservice to the large portion of SWEs and former SWEs (like me) who have been in the industry because we are customer-outcome driven (how do we use code to solve a tangible customer need) and not here to write pretty code.

fuck me another ai written post

it's all so fucking tiresome

THERE IS A NEARLY INFINITE DEMAND FOR SKEPTICAL AND COMFORTING TAKES ABOUT AI CODING

THE MARKET WILL FILL THAT VOID

IT DOES NOT MAKE IT TRUE

I still feel like I'm writing code. I tell Claude what to write, and I am very specific about it. There are still tons of problems for which Claude has no particular solution, and it's on me and other humans to figure out what to do. For the cases where I tell it to just go off and write a whole script that I'm not even looking at, those are throwaway, low-value cases I don't care about, where previously I wouldn't even have taken on that particular job.
Pangram detects this as a 100% AI generated article. Downvote this hustling slop to oblivion.

Also, check out the dude's linkedin: https://www.linkedin.com/in/ivanturkovic/

Wish we could downvote articles. Is it legitimate to flag AI slop?
Developers will become admins. Responsible for supervising and owning the outcomes of increasingly agentic engineering outputs. Trust is the most important thing in business and it’s worth more than ever.
This resonates. I've shipped 7 side projects in the past year using AI heavily. The code writing part got 10x faster. But I've noticed something counterintuitive: the total time from idea to shipped product barely decreased.

Why? Because the bottleneck was never typing code. It was always understanding the problem, making architectural decisions, debugging edge cases, and most importantly - knowing what NOT to build.

AI made me faster at producing code, but it also made me produce MORE code, which means more surface area for bugs, more maintenance burden, more complexity to reason about. The discipline of "write less code" is harder now because writing code costs almost nothing.

The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.

> AI made me faster at producing code, but it also made me produce MORE code, which means more surface area for bugs, more maintenance burden, more complexity to reason about

I think that from time to time it's better to ask the AI whether the codebase could be cleaned up and simplified. It's even better if you use a different AI than the one you used to build the project.

> The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.

One area --and many may not like that fact-- where it can help greatly is that the cost of adding tests also drops to near zero, and that doesn't work against us (because tests are typically way more localized and aren't the maintenance burden production code is). And some of us were lazy and didn't like to write too many tests. Or take generative testing / fuzz testing: writing the proper generators or fuzzers wasn't always that trivial. Now it could become much easier.

So we may be able to use the AI slop to help us have more correct code. Same for debugging edge cases: models can totally help (I've had a case as simple as a cryptic error message I didn't recognize: I passed it plus the code to an LLM and it could tell me what the error was).

But yup it's a given that, as you put it, when the marginal cost of adding complexity drops to near zero, we're opening a whole new can of worms.

TFA is AI slop but fundamentally it may not be incorrect: the gigantic amount of generated sloppy code needs to be kept in check and that's where engineering is going to kick in.
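To illustrate the generative-testing point, a minimal stdlib-only sketch (my own, with a hypothetical `fuzz_roundtrip` helper): randomly generated inputs checked against a round-trip invariant, the kind of generator that used to be tedious to write by hand:

```python
import random

def fuzz_roundtrip(encode, decode, trials=500, seed=0):
    """Generative test: random byte strings must survive an
    encode/decode round trip unchanged."""
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        assert decode(encode(data)) == data, repr(data)
```

For example, `fuzz_roundtrip(base64.b64encode, base64.b64decode)` passes, while a codec that silently drops a byte trips the assertion within a handful of trials.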

it didn't do shit
There's always a grain of truth in everything, but the recent article by the Redis guy (sorry for the lack of a name) resonated more with me. It's correct that the load in other areas is increasing, partly because these tools are not there yet when it comes to, for lack of a better word, "good taste". I work with someone who hasn't written a line of code in a year, and it shows; I'm about tired of dealing with the slop. But there are also a bunch of things at work that you have either done a million times already, or that aren't really challenging problems, just annoying ones that are hard because of all the cruft, plus a lot of boring manual work. For all of that, it's just an amazing help, to the point that I am more relaxed at work than I was previously. And when it does something that is not quite there, I can either fix it manually or tell it to fix it, and it usually "gets it". Of course, if it ultimately replaces me I will not be relaxed, but that's a different topic.

Another little thing that resonated was a tweet that said "some will use it to learn everything and some so that they don't have to learn anything". Of course it's not really a hard truth. It's questionable how much you can learn without really getting your hands dirty. But I do think people looking at it as a tool that helps them and/or makes them better will profit more than people looking to cut corners.

Not really; I disagree. The article did touch slightly on the real issue of why people enjoy writing code: "craftsmanship". Yes, coding is NOT engineering; it is writing, and the people who enjoy doing it are actually writers, not engineers, as I always keep mentioning. With AI, however, those writers have to do the engineering work: the goals, architecture design, managing blueprints, process design and refinement, among many other things. That job is not easy, which is why engineers are "supposedly" paid well. AI has now taken the writing role, and you have to do the engineering one.