> The truth is we are constantly moving further and further away from the silicon.

Are we? We're constantly changing abstractions, but we don't add new ones all that often. Operating systems and high-level programming languages emerged in the 1960s. Since then, the only fundamentally new layer of abstraction has been virtual machines (the JVM, browser JS, hardware virtualization, etc.). There's still plenty of hardware-specific APIs, you still debug assembly when something crashes, you still optimize databases for specific storage technologies and multimedia transcoders for specific CPU architectures...

Maybe "fundamentally" is an extremely load-bearing word here, but just in the hardware itself we see far more abstraction than we saw in the 60s. The difference between what we called microcode in an 8086 and what is running in any processor you buy in 2025 is an abyss; it almost looks like hardware emulation. I could argue that the layers of memory caching in modern hardware are themselves another layer, compared with the days when we sent instructions to change which memory banks to read. The fact that some addresses are very cheap and others are not, and that the complexity is handled in hardware, is very different from stashing data in extra registers we didn't need this loop. The virtualization any OS does for us is much deeper than even a big mainframe that was really running a dozen things at once. It only doesn't look like additional layers if you look from a mile away.
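
To make the "some addresses are very cheap and others are not" point concrete, here is a rough, illustrative C++ sketch (my toy example, not anything from the comment above): two loops doing exactly the same additions, where only the access pattern changes and the cache hierarchy decides the cost.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Same number of additions in both loops; the only difference is
    // whether the accesses walk memory sequentially or with a large stride.
    int main() {
        const int N = 4096;
        std::vector<int> m(N * N, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)        // row-major: sequential, cache-friendly
            for (int j = 0; j < N; ++j)
                sum += m[i * N + j];
        auto t1 = std::chrono::steady_clock::now();
        for (int j = 0; j < N; ++j)        // column-major: strided, cache-hostile
            for (int i = 0; i < N; ++i)
                sum += m[i * N + j];
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("sum=%lld row=%lldms col=%lldms\n", sum,
            (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
            (long long)std::chrono::duration_cast<ms>(t2 - t1).count());
    }

On typical desktop hardware the second loop is usually several times slower, even though a 60s-era mental model says the two are identical.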

The majority of software today is written without knowing which architecture the processor will be, how much of the processor we are going to get, or whether anything will ever fit in memory... hell, we can write code that doesn't know not just which virtual machine it's going to run in, but even which family of virtual machine. I have written code that had no idea whether it was running on a JVM, on LLVM, or in a browser!

So when I compare my code from the 80s to what I wrote this morning, the distance from the hardware doesn't seem even remotely similar. I bet someone is writing hardware-specific bits somewhere, and maybe the assembly someone is debugging actually resembles what the hardware runs. But the vast majority of code is completely detached from any of it.

At the company I work for, I routinely mock the software devs for solving every problem by adding yet another layer of abstraction. The piles of abstraction these people stack up are mind-numbingly absurd. Half the things they are fixing, if not more, are created by the abstractions in the first place.

Yeah, I remember watching a video of (I think?) a European professor who helped with an issue the devs were having while developing The Witness. Turns out they had a large algorithm written in high-level code (~2000 lines of code? can't remember) to place flora in the game world, which took minutes to process and was hampering productivity. He looked at it all and redid almost all of it in something like <20 lines of assembly code, and it achieved the same result in microseconds. Unfortunately, I can't seem to find that video anymore...

Frankly though, when I bring stuff like this up, it feels like I'm the one being mocked rather than the other way around - like we're the minority. And sadly, I'm not sure anything can ultimately be done about it. People just don't know what they don't know. Some things you can't tell people no matter how hard you try; they just won't get it.

It's Casey Muratori, he's an American gamedev, not a professor. The video was a recorded guest lecture for a uni in the Netherlands though.

And it wasn't redone in assembly, it was C++ with SIMD intrinsics, which might as well just be assembly.

https://www.youtube.com/watch?v=Ge3aKEmZcqY&list=PLEMXAbCVnm...
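
For a flavor of what "C++ with SIMD intrinsics" means in practice, here is a minimal illustrative sketch (my own toy example, not code from the lecture): a plain scalar loop next to an SSE version that handles four floats per iteration.

    #include <immintrin.h>  // x86 SIMD intrinsics (SSE and friends)
    #include <cstddef>

    // Scalar version: one addition per iteration.
    void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = a[i] + b[i];
    }

    // SSE version: four additions per iteration using 128-bit registers.
    void add_sse(const float* a, const float* b, float* out, std::size_t n) {
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
        for (; i < n; ++i)  // scalar tail for the leftover elements
            out[i] = a[i] + b[i];
    }

The win comes from doing four lanes of work per instruction; laying data out so loops like the second one are even possible is usually the harder part.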

See Plato's Cave. As an experienced dev, I have seen sunlight and the outside world, while so many devs think shadow puppets in a cave are life.

https://en.m.wikipedia.org/wiki/Allegory_of_the_cave

That really sounds like no one bothered profiling the code. Which I'd say is underengineered, not over.

what's your point exactly? what do you hope to achieve by "bringing it up" (I assume in your workplace)?

most programmers are not able to solve a problem like that in 20 lines of assembly or whatever, and no amount of education or awareness is going to change that. acting as if they can is just going to come across as arrogant.

The point is exactly as the above post mentioned:

> Half the things they are fixing, if not more, are created by the abstractions in the first place

Unlike the above post though, in my experience it's less often the devs (at least the best ones) who want to keep moving away from the silicon, and more often management. Everywhere I have worked, management wants to avoid having control over the lower-level workings of things and to outsource or abstract them away. They then proceed to wonder why we struggle with the issues we have, despite the people who deal with these things trying to explain it to them. They seem to automatically assume that higher-level abstractions are inherently better and will lead to productivity gains, simply because you don't have to deal with the underlying workings of things. But the example I gave is a reason why that isn't always the case. Fact is, sometimes problems are better and more easily solved at a lower level of abstraction.

But as I said, in my experience management often wants to go the opposite way and disallows us control over these things. So, as an engineer who wants to solve problems as much as management or customers want their problems solved, what I hope to achieve by "bringing it up", in cases where it seems appropriate, is a change that empowers us to actually solve such problems.

Don't get me wrong though, I'm not saying lower-level is always the way to go. It always depends on the circumstances.

> acting as if they can is just going to come across as arrogant.

Hold on there a sec: WHAT?!

Engineers tend to solve their problems differently, and the circumstances behind those differences are not always clear. I'm in this field because I want to learn as many different approaches as possible. Did you never experience a moment when you could share a simpler solution to a problem with someone and watch first-hand as they became one of today's lucky 10,000[0]? That's anything but arrogant in my book.

Sadly, what I increasingly observe is the complete opposite. Nobody wants to talk about their solutions, everyone wants to gatekeep and become indispensable, and criticism isn't seen as part of a productive environment because "we just need to ship that damn feature!". Team members should be aware when decisions have been made out of laziness, in good faith, out of experience, under pressure, etc.

[0]: https://xkcd.com/1053/

> There's still plenty of hardware-specific APIs, you still debug assembly when something crashes, you still optimize databases for specific storage technologies and multimedia transcoders for specific CPU architectures...

You might, maybe, but an increasing proportion of developers:

- Don't have access to the assembly to debug it

- Don't even know what storage tech their database is sitting on

- Don't know or even control what CPU architecture their code is running on.

My job is debugging and performance profiling other people's code, but the vast majority of that is looking at query plans. If I'm really stumped, I'll look at the C++, but I've not yet once looked at assembly for it.

This makes sense to me. When I optimize, the most significant gains I find are algorithmic: an extra call, a data structure that needs to be tweaked, or just using a library that operates closer to the silicon. I rarely need to go to assembly or even a lower-level language to get acceptable performance. The only exception is occasionally getting into the architecture specifics of a GPU. At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know. Thank you, compiler programmers!
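
As a toy example of the kind of algorithmic win meant here (my own sketch, nothing from the parent's codebase): replacing a nested-loop membership test with a hash set.

    #include <string>
    #include <unordered_set>
    #include <vector>

    // O(n*m): scans the whole allow-list for every item.
    int count_allowed_slow(const std::vector<std::string>& items,
                           const std::vector<std::string>& allowed) {
        int count = 0;
        for (const auto& item : items)
            for (const auto& a : allowed)
                if (item == a) { ++count; break; }
        return count;
    }

    // Roughly O(n + m): build a hash set once, then each lookup is ~constant time.
    int count_allowed_fast(const std::vector<std::string>& items,
                           const std::vector<std::string>& allowed) {
        std::unordered_set<std::string> allowed_set(allowed.begin(), allowed.end());
        int count = 0;
        for (const auto& item : items)
            if (allowed_set.count(item)) ++count;
        return count;
    }

No assembly involved, and on large inputs this kind of change dwarfs anything an optimizing compiler could have done to the first version.
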
> At this point, optimizing compilers are excellent

the only people who say this are people who don't work on compilers. ask anyone who actually does and they'll tell you most compilers are pretty mediocre (they tend to miss a lot of optimization opportunities), some compilers are horrendous, and a few are good in a small domain (matmul).

It's more that the God of Moore's Law has given us so many transistors that we are essentially always I/O blocked, so it effectively doesn't matter how good our assembly is for all but the most specialized of applications. Good assembly, bad assembly, whatever, the point is that your thread is almost always going to be blocked waiting for I/O (disk, network, human input) rather than something that a fancy optimization of the loop that enables better branch prediction can fix.

> It's more that the God of Moore's Law has given us so many transistors that we are essentially always I/O blocked

this is again just more brash confidence without experience. you're wrong. this is a post about GPUs and so i'll tell you that as a GPU compiler engineer i spend my entire day (work day) staring/thinking about asm in order to affect register pressure and ilp and load/store efficiency etc.

> rather than something that a fancy optimization of the loop

a fancy loop optimization (pipelining) can fix some problems (load/store efficiency) but create other problems (register pressure). the fundamental fact is that the NFL (no free lunch) theorem applies here fully: you cannot optimize for all programs uniformly.

https://en.wikipedia.org/wiki/No_free_lunch_theorem
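
a rough CPU-side analogue of that trade-off, in plain C++ (just an illustrative sketch of mine; on a GPU the same tension shows up as occupancy vs. ILP): unrolling a reduction with independent accumulators buys instruction-level parallelism but keeps more values live, i.e. more register pressure.

    #include <cstddef>

    // One accumulator: every add depends on the previous one,
    // so the chain is serial (low ILP, minimal register use).
    double sum_simple(const double* x, std::size_t n) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            s += x[i];
        return s;
    }

    // Four independent accumulators: more adds can be in flight at once
    // (better ILP), at the cost of more live values (higher register pressure).
    // Note: floating-point rounding may differ slightly from sum_simple,
    // because the additions happen in a different order.
    double sum_unrolled(const double* x, std::size_t n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += x[i];
            s1 += x[i + 1];
            s2 += x[i + 2];
            s3 += x[i + 3];
        }
        for (; i < n; ++i)  // tail
            s0 += x[i];
        return (s0 + s1) + (s2 + s3);
    }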

> At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know.

While modern compilers are great, you’d be surprised by the seemingly obvious optimizations compilers can’t do, either because of language semantics or because the code transformations would be infeasible to detect.

I type versions of functions into godbolt all the time and it’s very interesting to see what code is/isn’t equivalent after O3 passes
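
A godbolt-friendly example of the language-semantics part (my example, not the parent's): IEEE float addition is not associative, so even at -O3 the compiler has to keep this reduction as a serial chain of adds, and only something like -ffast-math (which relaxes the semantics) lets it reorder and vectorize.

    #include <cstddef>

    // Looks trivially vectorizable, but the language requires the sums to be
    // performed in source order, so the compiler cannot use SIMD here on its own.
    float sum(const float* x, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += x[i];
        return s;
    }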

The need to expose SSE instructions to systems languages shows that compilers are not good at translating straightforward code into optimal machine code. And using SSE properly often allows you to speed up the code by several times.

I don’t understand how you could say something like HTTP or Cloud Functions or React aren’t abstractions that software developers take for granted.

These days, even if one writes in machine code it will be quite far away from the real silicon, as that code has little to do with what the CPU is actually doing. I suspect that C source code from, say, the nineties was closer to the truth than modern machine code is.

Could you elaborate? I may very well just be ignorant on the topic.

I understand that if you write machine code and run it in your operating system, your operating system actually handles its execution (at least, I _think_ I understand that), but in what way does it have little to do with what the CPU is doing?

For instance, couldn't you still run that same code on bare metal?

Again, sorry if I'm misunderstanding something fundamental here, I'm still learning lol

The abstraction manifests more at the language level: no memory management, simplified synchronization primitives, no need for compilation.

Not sure virtual machines are fundamentally different. In the end, if you have 3 virtual or 3 physical machines, the most important difference is how fast you can change their configuration. They will still have all the other concepts (network, storage, etc.). The automation that comes with VMs is better than it was for physical machines (probably), but then automation for everything got better (not only for machines).