Realistically, what actually ends up happening imo: we get human-level AGI and hit a ceiling there. Agents replace large portions of the current service economy, greatly increasing automation and efficiency for companies.
People continue to live their lives, as the idea of having a human-level AGI personal assistant becomes normalized and then taken for granted.
- A simple calculator can beat any human at arithmetic
- A word processor with a hard disk is orders of magnitude better than any human at memorizing text
- No human can approach the Elo rating of a chess program running on a cheap laptop
- No human polymath has journeyman-level knowledge of even a tiny fraction of the fields that LLMs can recite textbook answers from
Why should we expect AGI to be the first cognitive capability that does not quickly shoot past human-level ability?
> People continue to live their lives
Presumably large numbers of those people no longer have jobs, and therefore no income.
> we get human level AGI and hit a ceiling there
Recently I've been wondering if our best chance for a brake on a runaway, non-hard-takeoff superintelligence is that the economy would be trashed.