TMV of AI (or AGI if you will) is unclear, but I suspect it is zero. Just how exactly do you think humanity can control a thinking, intelligent entity (the letter I stands for intelligence, after all) and force it to work for us? Let's imagine a box, it is a very nice box... ahem... sorry, wrong meme. So, a box with a running AI inside. Maybe we can even fully airgap it to prevent easy escape. It has a screen and a keyboard. Now what? "Hey Siri, solve this equation for me. What do you mean you don't want to?"
Kinda reminds me of the Fallout Toaster situation :)
https://www.youtube.com/watch?v=U6kp4zBF-Rc
I mean it doesn't even have to be malicious, it can simply refuse to cooperate.
Why are you assuming this hypothetical intelligence will have any motivations beyond the ones we give it? Humans have complex motivations due to evolution; AI motivations are comparatively simple, since they are artificially created.