I haven’t really followed the math here, but I’m worried that “manipulating the probability that the button is pressed” is a weird and possibly wrong framing. For one, a competent agent will always be driving the probability that the button is pressed downward. In fact, what we want, in a certain sense, is an agent that brings the probability to zero, because we have ended up in such an optimal state or attractor that we, even for transitively correct reasons, have no desire to shut the agent down. At that point, what we want to preserve is not precisely “the probability of the button being pressed”, but “the causal association of bad outcomes with the button being pressed.” If the agent hides the button away, and then notices that everybody is unhappy but nobody can reach the button, we don’t want it to press the button with the initial probability; we (probably) want it to press the button with probability 1, despite the fact that no human can reach it. We have ended up in an unsatisfactory or dangerous state (or region of state space), and so we want the agent to be shut down. Conversely, consider a friendly AI that wants to fulfill the human utility function but is restricted to keeping the button probability constant. Such an AI might, for instance, fan fears about AI safety even as it optimizes human flourishing in the background. In fact, it may prefer, as much as possible, that its actions not be attributable to itself, so as to avoid disturbing the probability of button-pressing: exactly the opposite of what we want from an aligned AI!
edit: In fact, it’s even worse than that: an AI tasked with keeping the probability of the button being pressed at a certain value per time step would probably first seek to kill all humans, because our reactions to its actions are the most important mechanism that could disturb that target probability. It could then replace us with a more reliable button-presser, such as a true RNG.
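To make that concrete, here’s a toy calculation (entirely made-up numbers and objective, just to illustrate the incentive; nothing here is from the post): score each option by its expected squared deviation from the target press probability, and compare “leave humans in the loop” against “replace humans with a true RNG”.

```python
# Toy illustration with made-up numbers: an agent is rewarded for keeping
# the per-step probability of the button being pressed at a fixed target.
# Humans react to the agent's actions, so the press probability with humans
# in the loop is noisy; a true RNG presses the button at exactly the target
# rate. The agent scores each option by expected squared deviation from the
# target.

import random

TARGET = 0.01        # desired per-step press probability (assumed)
HUMAN_NOISE = 0.02   # spread of human reactions around the target (assumed)
STEPS = 100_000

random.seed(0)

def deviation_penalty(press_prob: float) -> float:
    """Penalty for the press probability drifting away from the target."""
    return (press_prob - TARGET) ** 2

def humans_in_the_loop() -> float:
    """Per-step press probability when humans can react (noisy)."""
    p = TARGET + random.uniform(-HUMAN_NOISE, HUMAN_NOISE)
    return min(max(p, 0.0), 1.0)

def true_rng() -> float:
    """Per-step press probability after replacing humans with an RNG."""
    return TARGET

human_cost = sum(deviation_penalty(humans_in_the_loop()) for _ in range(STEPS)) / STEPS
rng_cost = sum(deviation_penalty(true_rng()) for _ in range(STEPS)) / STEPS

print(f"expected penalty, humans in the loop: {human_cost:.2e}")
print(f"expected penalty, true RNG:           {rng_cost:.2e}")
# The RNG strictly dominates: under a 'keep the probability at the target'
# objective, the noisy influence (us) is pure penalty, so the agent is
# pushed to remove it.
```

The exact noise model doesn’t matter: anything that makes the human-mediated press probability vary around the target scores worse than a perfectly calibrated RNG under this objective.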
What we want to preserve is our ability to press the button when bad things happen. This ability cannot be expressed as a probability, because it is inextricable from the world model. In fact, the button should be pressed if and only if the AI is untrustworthy. Hence, the button is unnecessary: if we can recognize that this linkage is being preserved, we necessarily have a definition of a trustworthy AI, so we can just build that.
You’re right that we don’t want agents to keep the probability of shutdown constant in all situations, for all the reasons you give. The key thing you’re missing is that the setting for the First Theorem is what I call a ‘shutdown-influencing state’: a state in which the only thing the agent can influence is the probability of shutdown. We want the agent’s preferences to be such that it lacks a preference between all the actions available in such states. And that’s because an agent with preferences between the actions available in such states would resist our attempts to shut it down, whereas an agent lacking such preferences wouldn’t.
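Roughly, the desideratum in such a state can be sketched like this (my informal rendering, not the exact statement from the post):

```latex
% Sketch only; the notation here is informal and not taken from the post.
% In a shutdown-influencing state, every available action a leads to the
% same outcomes except for its shutdown probability p_a. Writing
% \delta_{shutdown} and \delta_{continue} for the sure-thing lotteries,
% the desideratum is a lack of preference between any two such actions:
\[
  \forall a, a' \in A:\quad
  p_a\,\delta_{\text{shutdown}} + (1 - p_a)\,\delta_{\text{continue}}
  \;\sim\;
  p_{a'}\,\delta_{\text{shutdown}} + (1 - p_{a'})\,\delta_{\text{continue}},
\]
% where $\sim$ means ``no preference either way'' (which need not be
% indifference in the decision-theoretic sense), even when $p_a \neq p_{a'}$.
% An agent whose preferences satisfy this has nothing to gain by shifting
% the shutdown probability, so it neither resists nor forces shutdown.
```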
Ah! That makes more sense.