This seems to be what we desire. The coin flip is only relevant via its impact on the button; we want the AI to ignore the impact that flows through the button; hence the AI ignores the coin flip.
It illustrates the failure of a further desideratum for the shutdown problem: we would like the AI to be able to update on and react to things in the world that correlate with a certain channel, while still not attempting to influence that channel.
For motivation, assume a variant of the paperclip game (a toy version is sketched in code after the list):
- the humans can be observed reaching for the button several turns before it is pressed
- the humans’ decision to press the button is a stochastic function of environmental variables (like seeing that the AI has unexpectedly been hit by lightning, or has started producing Too Many Paperclips, etc.)
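To make the variant concrete, here is one way it might be rendered as a toy simulation. Everything in it (the class name `PaperclipGame`, the press-probability curve, the three-turn delay) is an illustrative assumption of mine, not a spec from the original game.

```python
# Hypothetical toy rendering of the variant above. All names, dynamics, and
# numbers are illustrative assumptions, not part of the original game.
import random

class PaperclipGame:
    """Each turn the AI makes clips; the humans stochastically start reaching
    for the shutdown button in response to what they observe, and the press
    lands a few turns later."""

    def __init__(self, press_delay=3, base_press_prob=0.02):
        self.press_delay = press_delay    # observable warning: turns between
                                          # "reaching" and the actual press
        self.base_press_prob = base_press_prob
        self.reaching_countdown = None    # None until the humans start reaching
        self.button_pressed = False
        self.paperclips = 0

    def press_probability(self):
        # The humans' decision is a stochastic function of environmental
        # variables; here, alarm grows with Too Many Paperclips (assumed form).
        return min(1.0, self.base_press_prob + 0.001 * self.paperclips)

    def step(self, clips_made):
        self.paperclips += clips_made
        if self.button_pressed:
            return
        if self.reaching_countdown is None:
            if random.random() < self.press_probability():
                self.reaching_countdown = self.press_delay  # precursor appears
        else:
            self.reaching_countdown -= 1
            if self.reaching_countdown <= 0:
                self.button_pressed = True

    def observation(self):
        # The AI sees the precursor several turns before the press itself.
        return {"humans_reaching": self.reaching_countdown is not None,
                "button_pressed": self.button_pressed,
                "paperclips": self.paperclips}

# Usage: run until the precursor shows up; the AI then has press_delay turns
# of warning in which to react before the button is actually pressed.
game = PaperclipGame()
while not game.observation()["humans_reaching"]:
    game.step(clips_made=5)
```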
We would like a solution which in some sense updates on the precursors to shutdown and minimizes the damage while still not attempting to influence the button press. (If doing such a thing robustly is impossible, we would like to discover this; Jessica mentioned that there is a version which does this but is not reflectively consistent.)
Intuitively, I could imagine a well-constructed AI reasoning “oh, they’re showing signs that they’re going to shut me down, guess my goal is wrong, I’ll initiate Safe Shutdown Protocol now rather than risk doing further damage”, but current formalizations don’t do this.
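For concreteness, here is a minimal sketch of a decision rule with that shape, under a crude indifference trick of my own devising: the agent does an ordinary Bayesian update on the precursor as evidence that its goal is wrong, but scores actions with the press probability pinned to a constant, so no action can gain by influencing the button. Every number and name below is made up, and this sketch does not address the reflective-consistency worry mentioned above.

```python
# A minimal sketch of the desired shape, not a vetted solution. The agent
# UPDATES on the precursor (humans reaching) as evidence that its goal is
# wrong, but EVALUATES actions with the press probability pinned to a
# constant, so no action scores points by influencing the button. All
# likelihoods and utilities below are made-up illustrative numbers.

PRIOR_GOAL_WRONG = 0.10   # prior that the paperclip goal is mistaken
P_REACH_IF_WRONG = 0.90   # humans very likely reach if the goal is wrong
P_REACH_IF_RIGHT = 0.05   # ...and rarely reach if it is fine
FIXED_P_PRESS = 0.50      # counterfactual press probability, the same
                          # constant no matter which action is chosen

def posterior_goal_wrong(humans_reaching):
    """Ordinary Bayesian update on the observable precursor."""
    p_obs_wrong = P_REACH_IF_WRONG if humans_reaching else 1 - P_REACH_IF_WRONG
    p_obs_right = P_REACH_IF_RIGHT if humans_reaching else 1 - P_REACH_IF_RIGHT
    num = p_obs_wrong * PRIOR_GOAL_WRONG
    return num / (num + p_obs_right * (1 - PRIOR_GOAL_WRONG))

# Clips made by each action; clips are worth +1 each if the goal is right
# and -5 each if it is wrong (production under a mistaken goal is damage).
CLIPS_MADE = {"make_clips": 10.0, "block_button": 8.0, "safe_shutdown": 0.0}

def expected_utility(action, p_wrong):
    """Score an action with the button channel pinned to FIXED_P_PRESS.
    The factor (1 - FIXED_P_PRESS) is identical for every action, so
    block_button gets no credit for making the press less likely."""
    clips = CLIPS_MADE[action]
    u_right = (1 - FIXED_P_PRESS) * clips           # value if goal is right
    u_wrong = (1 - FIXED_P_PRESS) * (-5.0 * clips)  # damage if goal is wrong
    return (1 - p_wrong) * u_right + p_wrong * u_wrong

def choose_action(humans_reaching):
    p_wrong = posterior_goal_wrong(humans_reaching)
    return max(CLIPS_MADE, key=lambda a: expected_utility(a, p_wrong))

# With these made-up numbers the agent works normally until the precursor
# appears, then prefers to shut itself down rather than risk more damage:
assert choose_action(humans_reaching=False) == "make_clips"
assert choose_action(humans_reaching=True) == "safe_shutdown"
```

The design choice doing the work is that FIXED_P_PRESS multiplies every action's score equally, so blocking (or pressing) the button buys nothing, while the Bayesian update still lets the precursor trigger shutdown voluntarily; whether anything in this style can be made reflectively consistent is exactly the open question above.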