I read the post and parts of the paper. Here is my understanding: conditions similar to those in Theorem 2 above don’t exist, because Alex’s paper doesn’t take an arbitrary utility function and prove instrumental convergence; instead, the idea is to set the rewards for the MDP randomly (by sampling i.i.d. from some distribution) and then show that in most cases, the agent seeks “power” (states which allow the agent to obtain high rewards in the future). So it avoids the twitching robot not by saying that it can’t make use of additional resources, but by saying that the twitching robot has an atypical reward function. So even though there aren’t conditions similar to those in Theorem 2, there are still conditions analogous to them (in the structure of the argument “expected utility/reward maximization + X implies catastrophe”), namely X = “the reward function is typical”. Does that sound right?
Writing this comment reminded me of Oliver’s comment where X = “agent wasn’t specifically optimized away from goal-directedness”.
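To make sure I'm picturing the setup right, here's a minimal toy sketch of the i.i.d.-sampling idea (my own construction, not the paper's actual formalism; the state names and the uniform reward distribution are invented for illustration): a start state where one action leads straight to a single terminal state, and the other leads to a "hub" from which three terminal states are reachable.

```python
import random

# Toy sketch (not the paper's formalism): "left" goes straight to a dead end,
# "right" goes to a hub that keeps three terminal states reachable.
# State names and the uniform[0, 1] reward distribution are made up here.

def sample_rewards():
    # Rewards over terminal states, sampled i.i.d. from uniform[0, 1].
    return {t: random.random() for t in ["dead_end", "t1", "t2", "t3"]}

def optimal_action(rewards):
    # An optimizing agent compares the best outcome reachable via each action.
    value_left = rewards["dead_end"]
    value_right = max(rewards["t1"], rewards["t2"], rewards["t3"])
    return "right" if value_right > value_left else "left"

trials = 100_000
right = sum(optimal_action(sample_rewards()) == "right" for _ in range(trials))
print(f"fraction of sampled reward functions heading for the hub: {right / trials:.3f}")
```

By symmetry the hub wins for about 3/4 of sampled reward functions, which (if I've understood it) is the sense in which something like the twitching robot isn't ruled out, just atypical under the sampling distribution.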
because Alex’s paper doesn’t take an arbitrary utility function and prove instrumental convergence;
That’s right; that would prove too much.
namely X = “the reward function is typical”. Does that sound right?
Yeah, although note that I proved asymptotic instrumental convergence for typical reward functions under the assumption that rewards are sampled i.i.d. at each state, so I think there’s wiggle room to say “but the reward functions we provide aren’t drawn from this distribution!”. I personally think this doesn’t matter much, because the work still tells us a lot about the underlying optimization pressures.
The result is also true in the general case of an arbitrary reward function distribution; you just don’t know in advance which terminal states the distribution prefers.
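As a concrete illustration of that caveat (a toy Monte Carlo sketch, not anything from the paper; the correlated distribution below is invented), you can always estimate by sampling which terminal states a given arbitrary distribution prefers, but the MDP's structure alone no longer tells you the answer:

```python
import random
from collections import Counter

# Same toy terminal states as the sketch above, but now the reward vector is
# drawn from an arbitrary correlated, non-identical distribution (purely
# hypothetical).  Sampling reveals which terminal states it tends to prefer.

TERMINALS = ["dead_end", "t1", "t2", "t3"]

def sample_rewards_arbitrary():
    shared = random.gauss(0.0, 1.0)  # shared component makes rewards correlated
    return {
        "dead_end": shared + random.gauss(0.5, 0.2),  # skewed high on purpose
        "t1": shared + random.gauss(0.0, 1.0),
        "t2": shared + random.gauss(0.0, 1.0),
        "t3": shared + random.gauss(0.0, 1.0),
    }

counts = Counter()
trials = 100_000
for _ in range(trials):
    rewards = sample_rewards_arbitrary()
    counts[max(TERMINALS, key=rewards.get)] += 1

for t in TERMINALS:
    print(f"P({t} is the optimal destination) ≈ {counts[t] / trials:.3f}")
```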