The claim that “this isn’t changed at all by trying updateless reasoning” depends on the assumptions one makes about updateless reasoning. If the agent chooses a policy in the form of a self-sufficient program, then you are right. On the other hand, if the agent chooses a policy in the form of a program with oracle access to the “utility estimator,” then there is an equilibrium where both smoke-lovers and non-smoke-lovers self-modify into CDT. Admittedly, there are also “bad” equilibria, e.g. non-smoke-lovers staying with EDT and smoke-lovers choosing between EDT and CDT with some probability. However, it seems arguable that the presence of bad equilibria is due to the “degenerate” property of the problem that one type of agent has incentives to move away from EDT whereas the other type has exactly zero such incentives.
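To make the distinction concrete, here is a minimal sketch of the two policy representations (in Python, with toy payoffs I am assuming purely for illustration, not taken from the original problem statement):

```python
# Toy payoffs (assumed for illustration): smoke-lovers value smoking at +10,
# non-smoke-lovers at -10; refraining is worth 0 to both types.
def true_utility(agent_type, action):
    if action == "smoke":
        return 10 if agent_type == "smoke_lover" else -10
    return 0

# Policy as a self-sufficient program: the utility function must be baked in
# when the policy is chosen, i.e. before the agent knows its own type.
# (Shown only for contrast with the oracle version below.)
def self_sufficient_policy(hardcoded_utility):
    def act():
        return max(["smoke", "refrain"], key=hardcoded_utility)
    return act

# Policy as a program with oracle access to the "utility estimator": the
# successor queries the true utility function at decision time, so the same
# program serves both types.
def oracle_policy(utility_oracle):
    def act():
        return max(["smoke", "refrain"], key=utility_oracle)
    return act

# Both types can hand their successor the very same oracle-based program:
for agent_type in ["smoke_lover", "non_smoke_lover"]:
    oracle = lambda action, t=agent_type: true_utility(t, action)
    successor = oracle_policy(oracle)
    print(agent_type, "->", successor())
# smoke_lover -> smoke, non_smoke_lover -> refrain, without either
# predecessor having to know its own type when the program was written.
```

The point of the sketch is only the structural difference: in the oracle version the predecessor never has to commit to a particular utility function, which is what makes the shared “both self-modify into CDT” equilibrium describable at all.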
The non-smoke-loving agents think of themselves as having a negative incentive to switch to CDT in that case. They think that if they build a CDT agent with oracle access to their true reward function, it may smoke (since they don’t know what their own true reward function is). So I don’t think there’s an equilibrium there. The non-smoke-lovers would prefer to explicitly give a CDT successor a non-smoke-loving utility function, if they wanted to switch to CDT. But then, this action would itself give evidence about their own true utility function, likely counterbalancing any reason to switch to CDT.
I was wondering about what happens if the agents try to write a strategy for switching between using such a utility oracle and a hand-written utility function (which would in fact be the same function, since they prefer their own utility function). But this probably doesn’t do anything nice either, since a useful choice of policy there would also reveal too much information about the agent’s motives.
Yeah, you’re right. This setting is quite confusing :) In fact, if your agent doesn’t commit to a policy once and for all, things get pretty weird because it doesn’t trust its future self.