One element that needs to be remembered here is that each major participant in this situation will have superhuman advice. Even if these are “do what I mean and check” order-following AIs, if they can foresee that an order will lead to disaster they will presumably be programmed to say so (not doing so is possible, but it is clearly a flawed design). So if it is reasonably obvious to anything superintelligent that both:
a) treating this as a zero-sum, winner-take-all game is likely to lead to disaster, and
b) there is a cooperative, non-zero-sum approach whose outcome is likely to be better for the median participant,
then we can reasonably expect that all the humans involved will be getting that advice from their AIs, unless and until they order them to shut up.
This of course does not prove that both a) and b) are true, merely that, were that the case, we could be optimistic of an outcome better than the usual results of human short-sightedness.
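To make claim b) concrete, here is a minimal toy model; all of the numbers (number of participants, prize sizes, disaster probability) are invented for illustration and are not taken from the discussion. The point it sketches: in a winner-take-all race the median participant ends up with nothing, while an even split of even a smaller cooperative surplus leaves the median participant strictly better off.

```python
# Toy sketch (hypothetical numbers): median payoff for N participants under
# a winner-take-all race vs. an even split of a cooperative surplus.
import random
import statistics

N = 10            # number of major participants
PRIZE = 100.0     # total value at stake in the winner-take-all race
COOP_TOTAL = 80.0 # assume cooperation yields a smaller total pie
P_DISASTER = 0.5  # assumed chance the race ends in disaster (everyone gets 0)

random.seed(0)

def race_payoffs():
    # Winner-take-all: with probability P_DISASTER everyone loses;
    # otherwise one random participant gets the entire prize.
    if random.random() < P_DISASTER:
        return [0.0] * N
    winner = random.randrange(N)
    return [PRIZE if i == winner else 0.0 for i in range(N)]

def coop_payoffs():
    # Cooperative: the (smaller) surplus is shared evenly, no disaster risk assumed.
    return [COOP_TOTAL / N] * N

trials = [race_payoffs() for _ in range(10_000)]
race_median = statistics.median(p for t in trials for p in t)
coop_median = statistics.median(coop_payoffs())

print(f"median payoff, winner-take-all race: {race_median}")  # 0.0
print(f"median payoff, cooperation:          {coop_median}")  # 8.0
```

Even with the cooperative pie assumed 20% smaller, the median participant does far better cooperating, since under winner-take-all most participants (and everyone, in the disaster case) receive nothing.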
The potential benefits of cheap superintelligence certainly provide some opportunity for this to be a non-zero-sum game; what’s less clear is whether having multiple groups of humans controlling multiple cooperating order-following AIs improves that. The usual answer is that, in research and in the economy, a diversity of approaches and competition increase the chances of success and the opportunities for cross-pollination; whether that necessarily applies in this situation is less clear.
Absolutely. I mentioned getting advice briefly in this short article and a little more in Instruction-following AGI is easier...
The problem in that case is that I’m not sure your b) is true. I certainly hope it is. I agree that it’s unclear. That’s why I’d like to get more analysis of a multipolar human-controlled ASI scenario. I don’t think people have thought about this very seriously yet.