Yes this would be a no free lunch theorem for decision theory.
It is different from the "No free lunch in search and optimization" theorem, though. I think people had an intuition that LDT will never regret its decision theory, because if there is a better decision theory, then LDT will just copy it. You can think of this as LDT acting as though it could self-modify. So the belief (which I am debunking) is that the environment can never punish the LDT agent; it just pretends to be the environment's favorite agent.
The issue with this argument is that in the problem I published above, the problem itself contains an LDT agent, and that LDT agent can "punish" the first agent for acting like, or even pre-committing to, or even literally self-modifying to become, a $9 rock. It knows that the first agent didn't have to do that.
So the first LDT agent will literally regret not being hardcoded to “output $9”.
This is very robust to what we "allow" agents to do (can they predict each other, how accurately can they predict each other, which counterfactuals are legitimate, etc.), because no matter what the rules are, you can't get more than $5 in expectation in a mirror match.
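The mirror-match bound can be made concrete with a small sketch. The exact rules of the game are my assumption here (a demand game over a $10 pot, where compatible demands are paid out and incompatible demands pay both sides $0), but the symmetry argument goes through for any payoff rule with a $10 total: in a mirror match both copies run the same (possibly mixed) strategy, so their expected payoffs are equal, and since the total paid out never exceeds $10, each side's expectation is at most $5.

```python
def payoff(d1, d2, pot=10):
    """Hypothetical demand game over a fixed pot: if the two demands
    are compatible, each agent receives its demand; otherwise both
    receive nothing. (The specific rules are an illustrative assumption.)"""
    return (d1, d2) if d1 + d2 <= pot else (0, 0)

def mirror_expectation(strategy):
    """Expected payoff of one agent playing `strategy` (a dict mapping
    demand -> probability) against an identical copy of itself."""
    exp = 0.0
    for d1, p1 in strategy.items():
        for d2, p2 in strategy.items():
            exp += p1 * p2 * payoff(d1, d2)[0]
    return exp

# A "$9 rock" against its own copy: demands are incompatible, so it
# gets $0 in expectation, while splitting the pot evenly gets $5 --
# the best any strategy can do against itself.
print(mirror_expectation({9: 1.0}))  # 0.0
print(mirror_expectation({5: 1.0}))  # 5.0
print(mirror_expectation({9: 0.5, 1: 0.5}))  # a mixed strategy, still <= 5
```

The point of the sketch is only the symmetry bound: no choice of `strategy`, however clever, pushes `mirror_expectation` above half the pot.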