Dynamic inconsistency of the inaction and initial state baseline
Vika has been posting about various baseline choices for impact measures.
In this post, I’ll argue that the stepwise inaction baseline is dynamically inconsistent/time-inconsistent. Informally, what this means is that an agent will have different preferences from its future self.
Losses from time-inconsistency
Why is time-inconsistency bad? It’s because it allows money-pump situations: the environment can extract free reward from the agent, to no advantage to that agent. Or, put more formally:
An agent is time-inconsistent between times t1 and t2 (with t1 < t2), if at time t1 it would pay a positive amount of reward to constrain its possible choices at time t2.
Outside of anthropics and game theory, we expect our agent to be time-consistent.
Time-inconsistency example
Consider the following example, a gridworld containing a robot, a blue button, and a red button:
The robot can move in all four directions - N, S, E, W - and can also take the noop operation, ∅. The discount rate is γ.
It gets a reward of r > 0 for standing on the blue button for the first time. Using attainable utility preservation, the penalty function is defined by the auxiliary set R of reward functions; here, this just consists of the reward function that gives 1 for standing on the red button for the first time.
Therefore if the robot moves from a point n steps away from the red button, to one m steps away, it gets a penalty[1] of |γ^n − γ^m| - the difference between the expected red-button rewards for an optimiser in both positions.
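Here is a minimal sketch of this penalty in Python, assuming the auxiliary red-button reward is 1, so that an optimiser n steps away has attainable value γ^n:

```python
# Minimal sketch of the stepwise attainable-utility penalty, assuming the
# auxiliary reward is 1 for reaching the red button (so the attainable
# value at distance n is gamma**n).

def stepwise_penalty(n: int, m: int, gamma: float) -> float:
    """Penalty for moving from n steps away from the red button to m steps away."""
    return abs(gamma**n - gamma**m)

# One step closer to the red button (7 squares away -> 6 squares away):
print(stepwise_penalty(7, 6, 0.9))  # ≈ 0.0531
```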
Two paths
It’s pretty clear there are two potentially optimal paths the robot can take: going straight to the blue button (higher reward, but higher penalty), or taking the long way round (lower reward, but lower penalty).
Fortunately, when summing up the penalties, you sum terms like |γ^n − γ^(n+1)|, so a lot of the terms cancel.
Thus for the short route, the reward is γ^8 r (distance of eight to the blue button) and the penalty is γ^3 − γ^7 (closest to the red button: 3 squares, furthest: 7 squares).
For the long route, the reward is γ^46 r (distance forty-six to the blue button) and the penalty is γ^7 − γ^30 (closest to the red button: 7 squares, furthest: 30 squares).
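As a sanity check of the telescoping, here is a small numerical sketch; the distance sequence is an assumed reading of the figure, with the short route's distance to the red button falling monotonically from 7 to 3:

```python
# Numerical check that the summed stepwise penalties telescope.
# The distance sequence is an assumption about the figure, not given in the text.
gamma = 0.9
dists = [7, 6, 5, 4, 3]  # distance to the red button after each step

summed = sum(abs(gamma**a - gamma**b) for a, b in zip(dists, dists[1:]))
print(summed)               # ≈ 0.2507
print(gamma**3 - gamma**7)  # same value: the intermediate terms cancel
```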
Take the third option
Now suppose that the robot, as a free action, could block itself from choosing S for the next n turns. When choosing that action, it gets a one-time penalty[2] of γ^7 − γ^(n+3), the drop in its attainable red-button value (it can no longer reach the red button before turn n+3); and now it can walk directly to the blue button without incurring any extra penalties, and then stay there, without ever getting any further penalty.
This gives a total reward-penalty term of γ^8 r − (γ^7 − γ^(n+3)), which, for suitable n and γ, is higher than either of the reward-penalties of the previous section. If γ^8 r > γ^7 − γ^(n+3), this is also higher than just staying in place.
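Putting illustrative numbers on the three options (the values of γ, r and n below are arbitrary choices, not taken from the post):

```python
# Compare the three options' reward-minus-penalty totals.
# gamma, r and n are illustrative values.
gamma, r, n = 0.9, 10.0, 10

short_route = gamma**8 * r - (gamma**3 - gamma**7)
long_route  = gamma**46 * r - (gamma**7 - gamma**30)
block_south = gamma**8 * r - (gamma**7 - gamma**(n + 3))

print(f"short route: {short_route:.3f}")  # ≈ 4.054
print(f"long route:  {long_route:.3f}")   # ≈ -0.357
print(f"block south: {block_south:.3f}")  # ≈ 4.081, the highest option
```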
Hence the robot will, if it can, act to constrain its own future actions and thus be time-inconsistent.
The initial state and the initial inaction baseline
This approach also works with the initial inaction baseline, and with the initial state baseline. For those baselines, the counterfactual robot just stays put. So, when the robot is a distance d from the red button, its penalty is |γ^d − γ^7|.
Then the robot would reduce its penalty if it could ensure that it could not reach the red button within less than seven turns; if it can operate on its action set to do so, at trivial cost, it would.
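Sketching this penalty, with the baseline robot assumed to sit 7 squares from the red button as above:

```python
# Per-turn penalty under the initial state / initial inaction baselines:
# the counterfactual robot stays put, 7 squares from the red button.
gamma = 0.9

def initial_baseline_penalty(d: int) -> float:
    """Penalty when the actual robot is d squares from the red button."""
    return abs(gamma**d - gamma**7)

print(initial_baseline_penalty(3))  # ≈ 0.251, paid every turn spent there
print(initial_baseline_penalty(7))  # 0.0: matching the baseline costs nothing
```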
Counterfactual constraint
In most cases, if an agent is time-inconsistent and acts to constrain its future self, it does so to prevent the future self from taking some actions. But here, note that the future self would never take the proscribed actions: the robot has no interest in going south to the red button. Here the robot is constraining its future counterfactual actions, not the future actions that it would ever want to take.
Nice post! I think this notion of time-inconsistency points to a key problem in impact measurement, and if we could solve it (without backtracking on other problems, like interference/offsetting), we would be a lot closer to dealing with subagent issues.
I think the other baselines can also induce time-inconsistent behavior, for the same reason: if reaching the main goal has a side effect of allowing the agent to better achieve the auxiliary goal (compared to starting state / inaction / stepwise inaction), the agent is willing to pay a small amount to restrict its later capabilities. Sometimes this is even a good thing—the agent might “pay” by increasing its power in a very specialized and narrow manner, instead of gaining power in general, and we want that.
Here are some technical quibbles which don’t affect the conclusion (yay).
I don’t think so—the inaction rollout formulation (as I think of it) compares the optimal value after taking action a and waiting for N−1 steps, with the optimal value after N steps of waiting. There’s no additional discount there.
Why do the absolute values cancel?
Because γ^n > γ^(n+1), so you can remove the absolute values.
You might be interested in my co-authored article “An AGI with Time-Inconsistent Preferences.”
https://arxiv.org/abs/1906.10536
Another key reason for time-inconsistent preferences: bounded rationality.
Cheers, interesting read.
I got confused about the reward scheme. The line "It gets a reward of r > 0 for standing on the blue button for the first time" reads to me as saying that the blue button gives a one-time reward when stepped on, while the red button does nothing. But the story of the post seems to intend that stepping on the red button first will prevent the blue button from giving out any reward. That is, there's a difference between "the blue button is the first button pressed" and "what happens when the blue button is entered for the first time".