No.
The ONLY way for humans to maintain dominion over superintelligent AI in this scenario is if alignment had been solved long before any superintelligent AI existed. And only then if this alignment solution were tailored specifically to produce robustly submissive motivational schemas for AGI. And only then if this solution were provably scalable to an arbitrary degree. And only then if this solution were universally enforced.
Even then, though, it’s not really dominion. It’s more like having gods who treat the universe as their playground but who also feel compelled to make sure their pet ants feel happy and important.