These days, even if one writes machine code directly, it ends up quite far from the real silicon, because that code has little to do with what the CPU is actually doing. I suspect that C source code from, say, the nineties was closer to the truth than modern machine code is.
Could you elaborate? I may very well just be ignorant on the topic.
I understand that if you write machine code and run it under an operating system, the OS handles its execution (at least, I _think_ I understand that), but in what way does it have little to do with what the CPU is doing?
For instance, couldn't you still run that same code on bare metal?
Again, sorry if I'm misunderstanding something fundamental here; I'm still learning lol
The quest for performance has turned modern CPUs into something that looks a lot more like a JITed bytecode interpreter than the straightforward “this opcode activates this bit of the hardware” model they once were. Things like branch prediction, speculative execution, out-of-order execution, hyperthreading, register renaming, L1/2/3 caches, µOp caches, TLBs... all mean that even looking at the raw machine code tells you relatively little about what parts of the hardware the code will activate or how it will perform in practice.
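To make that concrete, here's a minimal C sketch (my own illustration, not from the comment above): both timing passes run the exact same machine instructions over the same values, but the branch predictor makes the sorted pass far faster. The threshold, array size, and repeat count are arbitrary, and you need a low optimization level (e.g. gcc -O1) so the compiler keeps the branch; at higher levels it may emit a conditional move or vectorize, erasing the gap, which itself illustrates how far the instruction stream is from what the hardware does.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)   /* 1M elements; size is arbitrary */

/* Sum every element >= 128. The comparison compiles to one branch
   instruction; its cost depends on whether the predictor can guess it. */
static long sum_above(const int *a, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128)  /* ~random on shuffled data, trivial on sorted */
            sum += a[i];
    return sum;
}

static int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(void) {
    static int data[N];
    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    /* Pass 1: unsorted, so the branch is taken ~half the time at random
       and the predictor misses constantly. */
    clock_t t0 = clock();
    long s1 = 0;
    for (int r = 0; r < 100; r++)
        s1 += sum_above(data, N);
    double unsorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Pass 2: identical instructions, identical work, but sorting makes
       the branch perfectly predictable. */
    qsort(data, N, sizeof *data, cmp);
    t0 = clock();
    long s2 = 0;
    for (int r = 0; r < 100; r++)
        s2 += sum_above(data, N);
    double sorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("unsorted: %.3fs  sorted: %.3fs  (sums %ld / %ld)\n",
           unsorted, sorted, s1, s2);
    return 0;
}
```

On a typical x86 machine the sorted pass runs several times faster even though the hot loop is byte-for-byte the same code, and how big the gap is depends on the specific CPU's predictor and the compiler's choices, which is exactly the point: the machine code alone doesn't tell you what the hardware will do with it.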