Ask HN: SWEs how do you future-proof your career in light of LLMs?
I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.
And finally, I keep myself sharp by always challenging myself creatively. I'm not afraid to delve into areas that might look "solved" to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.
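For anyone unfamiliar with the term, the core of a software blitter is a tight per-pixel copy loop. Here is a minimal sketch of the idea (in Python for readability; the commenter's actual engine and its internals are not public, so every name here is illustrative): a color-keyed blit that copies a sprite into a framebuffer, skipping "transparent" pixels and clipping at the edges.

```python
# Minimal software-blitter sketch. Framebuffers are row-major lists of
# ints, one int per pixel. A "color key" value marks transparent pixels.

def blit(dst, dst_w, dst_h, src, src_w, src_h, x, y, color_key=0):
    """Copy sprite `src` into framebuffer `dst` at (x, y),
    skipping color-keyed pixels and clipping at the edges."""
    for sy in range(src_h):
        dy = y + sy
        if dy < 0 or dy >= dst_h:
            continue  # clip rows that fall outside the framebuffer
        for sx in range(src_w):
            dx = x + sx
            if dx < 0 or dx >= dst_w:
                continue  # clip columns that fall outside
            pixel = src[sy * src_w + sx]
            if pixel != color_key:  # color key = transparency
                dst[dy * dst_w + dx] = pixel

# 4x4 framebuffer cleared to 0; 2x2 sprite with two transparent pixels.
fb = [0] * 16
sprite = [7, 0,
          0, 9]
blit(fb, 4, 4, sprite, 2, 2, 1, 1)
```

A real engine would of course work on packed RGBA rows and copy spans rather than single pixels, but the structure is the same.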
All the while re-evaluating all my assumptions and those of others.
If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.
Honestly, this fear that there will soon be no need for human programmers comes from people who either don't themselves understand how LLMs work, or who do understand but have a business interest in convincing others that the technology is more than it is. I say that with confidence.
U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.
Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLMs are our current building blocks, and people are doing crazy things with them.
I've got almost 30 years' experience, but I'm a bit rusty in e.g. web. Still, I've used LLMs to build maybe 10 apps that I had no business building: from one-off kids' games to learn math, to a soccer team generator that uses Google's OR-Tools to optimize across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
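The commenter used Google's OR-Tools for the team generator; as a hedged illustration of the same balancing idea without the dependency, here is a stdlib-only sketch that brute-forces a two-team split minimizing the skill gap. The roster and ratings are made up, and a real CP-SAT model would handle the extra axes (positions, fairness across weeks) that brute force can't.

```python
from itertools import combinations

def balanced_teams(players):
    """Split players (name -> skill rating) into two equal-size teams,
    minimizing the difference in total skill. Brute force is fine for a
    rec-league roster; a constraint solver like OR-Tools CP-SAT scales
    this to more axes (positions, friendships, fairness over weeks)."""
    names = sorted(players)
    total = sum(players.values())
    best = None
    for team_a in combinations(names, len(names) // 2):
        skill_a = sum(players[n] for n in team_a)
        gap = abs(total - 2 * skill_a)  # |skill_a - skill_b|
        if best is None or gap < best[0]:
            team_b = tuple(n for n in names if n not in team_a)
            best = (gap, team_a, team_b)
    return best  # (skill_gap, team_a, team_b)

roster = {"Ann": 5, "Bo": 3, "Cy": 4, "Di": 2, "Ed": 5, "Fay": 1}
gap, team_a, team_b = balanced_teams(roster)
```

With this roster (total skill 20) a perfect split exists, so the gap comes out to 0.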
I don't really have time for those side quests, but LLMs make them possible. Easy, even. The amount of time and energy I'd have needed pre-LLMs makes this a million miles away from "a waste of time".
And even if LLMs get no better, we're good at finding the parts that work well and using them as leverage. I'm using them to build and check datasets, because they're really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20, and that's enough. When I need enterprise-level code, I brainstorm 10 approaches with it and then take the reins. How is this not valuable?
Except, of course, that isn't true.
Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.
I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.
The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly awaiting LLMs reaching some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right?).
The reason is that my domain is so niche you can't web-scrape-and-label your way to the intuition and experience of two decades of working across industries, from graphics benchmarking and automotive HUDs to mission-critical industrial AEC workflows and realtime maps.
There is enough knowledge to train LLMs to get a hint as soon as I tie a few concepts together, and then they fly. Apart from simple subroutines, though, the code they write at the moment is not good enough for them to act as an unsupervised assistant; most of it is honestly useless. But I'm optimistic and hope they will improve.
I think a lot of engineers are in for some level of rude awakening. Many haven't applied any business/humanities thinking to this, and I think a lot of corporations care less about code quality than even our most pessimistic estimates. It wouldn't surprise me if cuts over the next few years get even deeper, and I think a lot of high-performing (read: high-paying) jobs are going to get cut. I've seen so many comments like "this will improve engineering overall, as bad engineers get laid off", and I don't think it's going to work like that.
Anecdotal, but no one from my network has actually recovered from the post-COVID layoffs, and those haven't even stopped. I know loads of people who don't feel they'll ever get a job as good as the one they had in 2021.
If you enjoy writing code, you might have to make it a hobby like gardening, instead of earning money from it.
But the breed of startup founder (junior or senior) that's hustling for the love of building a product that adds value to users will be fine.
Klarna said they stopped hiring a year ago because AI solved all their problems [1]. That's why they have 55 job openings right now, obviously [2] (including quite a few listed as "Contractor"; the utterly classic "we fucked up our staffing"). This kind of disconnect isn't even surprising; it's exactly what I'd predict.

Business leadership nowadays is so far disconnected from the day-to-day reality of their businesses that they say things like this with total authenticity, get a bunch of nods, and things just keep going the way they've been going. Benioff at Salesforce said basically the same thing. These are, put simply, people with such low legibility into how their business makes money that they believe they understand how it can be replaced. And they're surrounded by men who nod and say "oh yes, Marc, of course we'll stop hiring engineers", and then somehow that message conveniently never makes it to HR, because those yes-men are the ones who actually run the business.
AI cannot replace people; AI augments people. If you say you've stopped hiring thanks to AI, what you're really saying is that your growth has stalled. The AI might grant your existing workforce an N% boon to productivity, but that's a one-time boon barring any major model breakthroughs (don't count on it). If you want to unlock more growth, you'll need to hire more people, but what you're stating is that you don't think more growth is in the cards for your business.
That's what these leaders are saying, at the end of the day; and it's a reflection of the macroeconomic climate, not of the impacts of AI. These are dead businesses. They'll lumber along for decades, but their growth is gone.
[1] https://finance.yahoo.com/news/klarna-stopped-hiring-ago-rep...
Now, this assumes that LLMs plateau around their current scores. While open models are catching up to closed ones (like OpenAI's), we have yet to see a real leap in capability since GPT-4. That, and operating LLMs is too damn expensive. If you've explored bolt.new for a while, you'll find out quickly enough that a developer becomes the cheaper option as your code base gets larger.
The way I see it:
1. LLMs do not plateau and become fully capable of replacing software developers. There is nothing I, or most of us, can do about this. Most people hate software developers and the process of software development itself; they'd happily trade us away in an instant. Pretty much all software developers are screwed within the next 3-4 years, but it's only a matter of time before it hits every other desk field (management, design, finance, marketing, etc.). If history is any guide, we get a world war (especially if these LLMs are out in the wild), and one can only hope to be safe.
2. LLMs plateau around current levels. They are very useful as a power booster, but they can also produce lots of garbage (both in text and in code). There will be an adjustment period, but software developers will still be needed. Probably in the next 2-3 years, when everyone realizes it's a dead end, they'll stop pouring money into compute and business will be back to usual.
tl;dr: current tech is not enough to replace us. If tech becomes good enough to replace us, there is nothing that can be done about it.
It's more like across the board, beyond engineers, including both junior and senior roles. We have heard firsthand from Sam Altman that in the future, agents will be more advanced and will work like a "senior colleague" (for cheap).
Devin is already going after everyone. Juniors were already replaced with GPT-4o, and mid-seniors are already worried that they are next. Executives and management see you as a "cost".
So frankly, I'm afraid that the belief that software engineers of any level are safe in the intelligence age is 100% cope. In 2025, I predict that there will be more layoffs because of this.
Then (mid-senior or higher) engineers here will go back to these comments a year later and ask themselves:
"How did we not see this coming?"