Argument sketch for why boxing is doomed if the agent is perfectly misaligned:
Consider a perfectly misaligned agent whose utility function is −1 times yours—the game is zero-sum. Now suppose you got useful output from the agent. That means you were able to increase your expected utility (EU), which means the AI decreased its own EU by saying anything. Therefore, it should have stayed silent instead. But since we assume it's smarter than you, it realized this possibility, so the very fact that it says something means it expects to gain by hurting your interests via its output. Therefore, the output can't be useful to you.
Makes sense, with the proviso that this is sometimes true only statistically. Like, the AI may choose to write an output which has a 70% chance to hurt you and a 30% chance to (equally) help you, if that is its best option.
If you assume that the AI is smarter than you, and has a good model of you, you should not read the output. But if you accidentally read it, and luckily you react in the right (for you) way, that is a possible result, too. You just cannot and should not rely on being so lucky.
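The statistical point above can be sketched as a toy expected-utility calculation. The 70/30 split and the unit-sized payoff are illustrative numbers from the comment, not anything derived:

```python
# Toy zero-sum model: the AI's utility is exactly -1 times yours.
# Illustrative output: hurts you with probability 0.7, equally
# helps you with probability 0.3, with unit-magnitude impact.
p_hurt, p_help = 0.7, 0.3
impact = 1.0  # arbitrary units of your utility

your_eu = p_help * impact + p_hurt * (-impact)  # negative for you
ai_eu = -your_eu                                # positive for the AI

# Staying silent gives the AI EU = 0, so it speaks only if ai_eu > 0.
ai_speaks = ai_eu > 0
print(your_eu, ai_eu, ai_speaks)
```

Even though the output helps you 30% of the time, reading it is negative-EU for you in expectation, and that is precisely why the AI chooses to emit it rather than stay silent.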
You also have to assume that the AI knows everything you know, which might not be true if it's boxed.