This doesn’t seem implausible. But on the other hand, imagine an agent that goes through a million episodes and, at the start of each one, reasons “X is my misaligned terminal goal, and therefore I’m going to deceptively behave as if I’m aligned”, then acts exactly like an aligned agent for the rest of the episode. My claims then would be:
a) Over many update steps, even a small description-length penalty for having terminal goal X (compared with being aligned) will add up (see the toy sketch below).
b) Having terminal goal X also adds a runtime penalty, and I expect that NNs in practice are biased against runtime penalties (at the very least because the extra reasoning prevents them from doing other, more useful things with that runtime).
In a setting where you also have outer alignment failures, the same argument still holds; just replace “aligned agent” with “reward-maximizing agent”.
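To make claim (a) concrete, here is a toy sketch (my own illustration, with made-up numbers for the weight decay and update count): if the extra circuitry encoding “goal X plus the decision to deceive” is behaviorally indistinguishable from alignment, the task loss gives it no gradient, and the only pressure it feels is a simplicity prior, modeled here as weight decay. Even a tiny per-update penalty then compounds multiplicatively over a million updates.

```python
# Toy illustration (not the comment's actual claim, just a numerical sketch):
# the parameters encoding "terminal goal X + decide to deceive" contribute
# nothing to task loss, because the agent behaves exactly like an aligned one.
# The only gradient they feel comes from a simplicity prior (weight decay).

WEIGHT_DECAY = 1e-5      # hypothetical per-update description-length pressure
UPDATES = 1_000_000      # one update per episode, matching the thought experiment

deceptive_weight = 1.0   # stands in for the extra machinery encoding goal X

for _ in range(UPDATES):
    task_gradient = 0.0                               # deception is behaviorally invisible
    decay_gradient = WEIGHT_DECAY * deceptive_weight  # pressure from the simplicity prior
    deceptive_weight -= task_gradient + decay_gradient

print(f"after {UPDATES:,} updates: {deceptive_weight:.3e}")
# ~ (1 - 1e-5)**1e6 ≈ exp(-10) ≈ 4.5e-5: the "small" penalty has all but
# erased the extra machinery.
```

The same shape of argument covers claim (b) if you additionally charge the agent for the extra reasoning it performs on each forward pass.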