Here's a functional difference: Omega says that Box B is empty if you try to win what's inside it.

Yes! This functional difference is very important!
In Logic, you begin with a set of non-contradictory assumptions and then build a consistent theory on top of them. Making deductions is analogous to being rational. If the assumptions are consistent, then it is impossible to derive a contradiction within the system. (Analogously, it is impossible for rationality not to win.) However, you can get a paradox from a self-referential statement. Gödel proved that every sufficiently powerful consistent theory is incomplete: there are true statements that you can't prove from within the system. Along the same lines, you can build a paradox by forcing the system to talk about itself.
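To make the self-reference point concrete, here is an informal sketch of the standard Gödelian construction (textbook material, nothing specific to this thread; Prov is the system's provability predicate and ⌜G⌝ is the code of the sentence G):

\[
  % Standard sketch of Gödel's first incompleteness theorem; not original to this discussion.
  G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
  \qquad \text{(diagonal lemma: } G \text{ says ``I am not provable'')}
\]

Assume for the sketch that the system proves only true statements. If it proved G, then G would be true, which says precisely that G is not provable, a contradiction. So G is unprovable, which means that ¬Prov(⌜G⌝), and therefore G itself, is true but unprovable. The self-reference does all the work, just as it does in the Omega condition above.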
What Grobstein has presented is a classic paradox of this kind, and it is the closest you can come to rationality not winning.
I understand all that, but I still think it’s impossible to operationalize an admonition to Win. If
"Omega says that Box B is empty if you try to win what's inside it,"
then you simply cannot implement a strategy that will give you the proceeds of Box B (unless you’re using some definition of “try” that is inconsistent with “choose a strategy that has a particular expected result”).
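To spell that out with some throwaway notation of my own (V, T, and payoff_B are invented for this sketch, not part of the original setup): let V be the amount in Box B and let T(s) = 1 if strategy s counts as trying to win Box B, and 0 otherwise. Omega's condition then amounts to

\[
  % Illustrative notation only: V, T(s), payoff_B are invented for this sketch.
  \mathrm{payoff}_B(s) \;=\; V \,\bigl(1 - T(s)\bigr),
  \qquad\text{so}\quad T(s) = 1 \;\Longrightarrow\; \mathrm{payoff}_B(s) = 0 .
\]

Any strategy chosen because its expected result includes collecting Box B has T(s) = 1 by that very fact, so its payout from B is zero; Box B stays full only for strategies that were not selected in order to win it. That is the sense in which the admonition cannot be operationalized.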
I think that falls under the “ritual of cognition” exception that Eliezer discussed for a while: when Winning depends directly on the ritual of cognition, then of course we can define a situation in which rationality doesn’t Win. But that is perfectly meaningless in every other situation (which is to say, in the world), where the result of the ritual is what matters.
Here’s a functional difference: Omega says that Box B is empty if you try to win what’s inside it.