This is an obvious point, but: any goal is likely to include some variance minimization as a subgoal, if only because another entity with different goals (a rival AI, a nation state, a company) could take over the world. If an AI has the means to take over the world, then it probably takes seriously the scenario in which a rival does so instead. Could it prevent that scenario without taking over itself?
This is a variant of my old question:
There is a button at your table. If you press it, it will give you absolute power. Do you press it?
Most people answer no. Followed by:
Hitler is sitting at the same table, looking at the button. Now do you press it?