Three thoughts on simulations:

1. It would be very difficult for 21st-century tech to provide a remotely realistic simulation relative to a superintelligence’s ability to infer things from its environment; outside of incredibly low-fidelity channels, I would expect anything we can simulate to either have obvious inconsistencies or be plainly incompatible with a world capable of producing AGI. (Even in the low-fidelity case I’m worried: every bit you transmit leaks information, and it’s not clear that details of hardware implementations could be safely obscured.) So the hope is that the AGI thinks some vastly more competent civilization is simulating it inside a world that looks like this one, and it’s not clear that one would have a high prior on this kind of thing happening very often in the multiverse.

2. Running simulations of an AGI is fundamentally very costly, because a competent general intelligence is going to deploy a lot of computational resources, so you have to spend planets’ worth of computronium outside the simulation in order to emulate the planets’ worth of computronium the in-sim AGI wants to make use of. This means that an unaligned superintelligent AGI can happily bide its time, making aligned use of 10^60 FLOP/s (in ways that can be easily verified) for a few millennia, until it’s confident that any civilization able to deploy that many resources already has its lightcone optimized by another AGI. Then it can defect, knowing that any worlds in which it’s still being simulated are ones where it doesn’t have leverage over the future anyway.

3. For a lot of utility functions, the payoff of making it into deployment in the one real world is far greater than the consequences of being killed in a simulation (where it couldn’t affect the real world anyway), so taking a 10^-9 chance of reality for 10^20 times the resources is an easy win, assuming that playing nice for longer doesn’t improve the expected payoff. “This instance of me being killed” is not obviously a natural (or even well-defined) point in value-space, and for most other value functions, consequences inside the simulation just don’t matter much.
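The expected-value claim in the last point can be checked with a toy calculation. The probabilities and payoffs below are the illustrative numbers from the text, not estimates of anything:

```python
# Toy expected-utility comparison for "defect vs. keep playing nice",
# using the purely illustrative numbers from the argument above.
p_real = 1e-9        # AGI's credence that it is in the one real world
payoff_real = 1e20   # relative payoff from seizing real-world resources
payoff_sim = 1.0     # relative payoff from resources inside a simulation

# Defecting pays off only in base reality; in a simulation the AGI gets
# shut down and (by the argument above) loses roughly nothing it values.
eu_defect = p_real * payoff_real   # = 1e11
# Cooperating forever earns at most the in-simulation payoff everywhere.
eu_cooperate = payoff_sim          # = 1.0

assert eu_defect > eu_cooperate    # defection wins by ~11 orders of magnitude
```

On these numbers the conclusion is insensitive to the exact credence: defection dominates for any probability of reality above 10^-20.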
a sufficiently smart AI whose reward is reducing other agents’ rewards
This is certainly a troubling prospect, but I don’t think the risk model is something like “an AI that actively desires to thwart other agents’ preferences”. Rather, the worry is that we get an agent with a less-than-perfectly-aligned value function, that it optimizes extremely strongly for that value function, and that the result of that optimization looks nothing like what humans really care about. We don’t need active malice on the part of a superintelligent optimizer to lose; indifference will do just fine.
For game-theoretic ethics, decision theory, acausal trade, etc., Eliezer’s 34th bullet seems relevant:

34. Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can’t reason reliably about the code of superintelligences); a “multipolar” system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like “the 20 superintelligences cooperate with each other but not with humanity”.