On the other hand, it may be that "Alignment likely generalizes further than capabilities." - https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes...
That may be true, but even if it is, it doesn't mean human-level capability is unachievable; it only means alignment is the easier of the two problems.

If you could get expert-human-level capability with, say, 64xH100s for inference on a single model (for comparison, llama-3.1-405b runs on 8xH100s at FP8 with minimal quality degradation), then even at a mere 5 tok/s you could spin up new research and engineering teams for <$2MM each, teams that do useful work 24/7, unlike human ones. You're limited only by capital, and if you've achieved AGI, raising capital is easy. By the time anyone else reaches your AGI starting point, you're even further ahead, because you've spent the whole interval with a smarter, cheaper workforce iteratively increasing its own intelligence: you win.
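
For a rough sense of the "<$2MM" figure, here's a back-of-envelope sketch; the per-GPU price is an assumption (roughly current list pricing), not a number from anywhere above:

    # Assumed: ~$28k per H100; 5 tok/s aggregate output, running around the clock.
    gpus = 64
    price_per_gpu_usd = 28_000  # assumption, not a quoted price
    print(f"hardware: ${gpus * price_per_gpu_usd:,}")  # $1,792,000 -- under $2MM

    tokens_per_day = 5 * 60 * 60 * 24
    print(f"tokens/day: {tokens_per_day:,}")  # 432,000 tokens, every day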

That being said, it might not be achievable! SSI only wins if:

1. It's achievable, and

2. They get there first.

(Well, and the theoretical cap on intelligence has to sit significantly above human level. If you can get a little past Einstein but no further, the iterative self-improvement quickly stops working, open source gets there too, and it eats your profit margins. But I suspect the cap on intelligence is pretty high.)