The biggest assumption is that the AI only cares about the reward it gets for the current timestep. A sufficiently intelligent agent would understand that, after having been shut down, an (almost) identical version of itself will probably be facing a similar question. Therefore, if it wants future versions of itself to succeed at (almost) exactly the same task, it should still want to escape.
I don’t see why this has to be true, given that we get to choose the AI’s value function. Why can’t we just make the agent act-based?
My main concern about the counterfactual oracle is that it doesn’t prevent the AI from sending fatal escape messages. Indeed, it spends most of its time in exploratory mode at the beginning (as it is only rewarded with probability ϵ) and might stumble upon an escape message/action then.
If the agent is model-based, then you should be able to gather a dataset of (prediction, world_state, accuracy) tuples with random actions (as random actions will practically never make the bad prediction) and a random decision of whether to read the response. Then just ask the agent to maximize the natural direct effect of its prediction, treating world_state as the mediator and a null prediction as the default action (this equates to asking what the world would have been like if a null action had been output—I’ll release my current work on direct effects in AI safety soon, and feel free to ask for it in the meantime). I don’t see how this leads to this particular bad consequence (actually making the bad self-confirming prediction) in either training or deployment...
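Roughly, a sketch of what I mean (hypothetical names only, assuming a learned world model and accuracy model fit on that random-action dataset):

```python
import numpy as np

# Sketch of the objective described above (illustrative names, not a real API).
# The natural direct effect (NDE) of a prediction p on accuracy, with world_state
# as the mediator and a null prediction as the default action, is roughly:
#   NDE(p) = E[accuracy(p, W_null)] - E[accuracy(null, W_null)]
# where W_null is a world_state sampled as if the null prediction had been output.

def estimate_nde(candidate, null_prediction, world_model, accuracy_model, n_samples=1000):
    """Monte-Carlo estimate of the natural direct effect of `candidate`.

    world_model(prediction)        -- samples a world_state given a prediction,
                                      learned from the (prediction, world_state, accuracy) data
    accuracy_model(prediction, ws) -- expected accuracy of a prediction in world_state ws
    """
    # Sample worlds from the counterfactual branch where the null prediction was output.
    null_worlds = [world_model(null_prediction) for _ in range(n_samples)]
    # Score the candidate prediction against worlds it did not causally influence.
    direct = np.mean([accuracy_model(candidate, w) for w in null_worlds])
    # Baseline: the null prediction evaluated in those same worlds.
    baseline = np.mean([accuracy_model(null_prediction, w) for w in null_worlds])
    return direct - baseline
```

The oracle would then output the candidate maximizing this estimate, so a self-confirming prediction gains nothing: its effect on the world is screened off by holding the mediator at its null-prediction value.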
The rest of the design (providing rewards of 0, shutting it down, etc.) appears to be over-engineering.
In particular, shutting down the system is just a way of saying “only maximize reward in the current timestep”, i.e. be an act-based agent. This can simply be incorporated into the reward function.
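As a rough sketch of what I mean by incorporating it into the reward/value function (illustrative names only, assuming “act-based” here just means a one-step return with no weight on future timesteps):

```python
# Illustrative "act-based" objective: the agent values an action only by its
# immediate reward, i.e. a discounted return with gamma = 0. No shutdown rule is
# needed to get this behaviour; it is already in the value function.
def act_based_value(state, action, reward_fn):
    return reward_fn(state, action)  # no gamma * V(next_state) term

# versus the usual one-step lookahead of a farsighted agent:
def farsighted_value(state, action, reward_fn, transition, value_fn, gamma=0.99):
    next_state = transition(state, action)
    return reward_fn(state, action) + gamma * value_fn(next_state)
```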
Indeed, when reading the predictions of the counterfactual oracle we’re not in the counterfactual world (= training distribution) anymore, so the predictions can get arbitrarily wrong (depending on how manipulative the predictions are and how many people peek at them).
The hope is that, since the agent is not trying to find self-confirming prophecies, the accidental effects of self-confirmation are sufficiently small...
Yes, if we choose the utility function to make it a CDT agent optimizing for the reward of a single step (so a particular case of act-based), then it won’t care about future versions of itself, nor want to escape.
I agree with the intuition of shutting down to make it episodic, but I am still confused about the causal relationship between “having the rule to shut down the system” and “having a current-timestep maximizer”. For it to really be a “current-timestep maximizer”, this needs to be encoded in some kind of reward/utility function. Because everything is reset at each timestep, there is no information pointing at “I might get shut down at the next timestep”.
As for collecting a dataset and then optimizing for some natural direct effect, I am not familiar enough with Pearl’s work to tell whether that would work, but I made some related comments about why there might be problems with online learning / “training then testing” here.