I am really sorry if this is a very entry-level question, but it has been on my mind a lot recently and I haven’t found any satisfying answers. It is addressed only to those who expect our future not to be totally terrible.
Let’s assume for a second that we manage to create Artificial Superintelligence (ASI). Let’s also assume that the ASI takes over our planet. In this scenario, why would the ASI not do one of the following: 1) exploit humans in pursuit of its own goals, while giving us the bare minimum to survive (effectively making us slaves), or 2) take the resources of the entire solar system for itself and leave us to starve with nothing?
Under such a scenario, why would we expect human lives to be any good (much less a utopia)?
The best argument here probably comes from Paul Christiano. To summarize: even in a situation where we messed up pretty badly in aligning the AI, so long as the failure mode isn’t deceptive alignment but rather misgeneralization of human preferences (a non-deceptive alignment failure), it’s pretty likely that the AI will end up with at least some human-regarding preferences. That means it will do some acts of niceness when they are cheap for it, and preserving humans is very cheap for a superintelligent AI.
More answers can be found here:
https://www.lesswrong.com/posts/xvBZPEccSfM8Fsobt/what-are-the-best-arguments-for-against-ais-being-slightly#qsmA3GBJMrkFQM5Rn
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us?commentId=sEzzJ8bjCQ7aKLSJo
https://www.lesswrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free?commentId=ofPTrG6wsq7CxuTXk
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us?commentId=xK2iHGJfHvmyCCZsh
The ASI will do what it is programmed to do. If that means helping humans, it will help humans. If there is a bug in the program, it will… do something that is difficult to predict (and that sounds scary, because most random outcomes are not good for us).
Make us slaves? We probably wouldn’t be useful slaves compared to the alternatives, such as robots, or human bodies with their brains replaced by computers.
Taking over the resources would probably mean killing us in the process, if those resources include, for example, Earth’s water or oxygen.