The barber paradox is just a simple "set of all sets which do not contain themselves" thing, which has nothing to do with what I wrote.
Hm, maybe not the barber. I was thinking of how and when you define what is a “win”.
Let’s do a toy example where P is a limited discrete set, say { door1, door2, door3 }. If we know what the doors lead to, and we know what a “win” is, we can make the rules be a simple lookup table. It works perfectly fine.
You can break it in two ways. One way is to redefine a “win” (whatever you pick for door1 we declare to be !win). Another is to change the set P.
Say, we add door4 to the set. The lookup table says “I don’t know” and that is, actually, a viable answer. If you want to disallow that, we have to move into the realm of models and generalizations. And in that realm asking that your function F(P) produces the optimum (“win”) for whatever P could be is, I think, too much to ask for. It can work for mathematical abstractions, but if we are talking about a decision theory that is applicable to the real world, sorry, I don’t think “optimal all the time, no exceptions” is a realistic goal or criterion.
The issue is, basically, what you allow to be in set P. If it's sufficiently restricted, F(P) can guarantee wins; if it is not, it cannot.
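To make the toy example concrete, here is a minimal sketch of F(P) as a lookup table. The door names and outcomes are illustrative, not from any actual decision theory: F wins for any P drawn from the enumerated set, and honestly answers "I don't know" the moment an unknown door4 shows up.

```python
# Known outcomes for the enumerated doors (illustrative values).
OUTCOMES = {"door1": "win", "door2": "lose", "door3": "lose"}

def F(P):
    """Pick the winning door from set P, or admit ignorance."""
    if any(d not in OUTCOMES for d in P):
        # P contains a door the table has never seen (e.g. "door4").
        return "I don't know"
    for d in P:
        if OUTCOMES[d] == "win":
            return d
    return "I don't know"

print(F({"door1", "door2", "door3"}))           # door1
print(F({"door1", "door2", "door3", "door4"}))  # I don't know
```

Within the restricted P this trivially "always wins"; the guarantee evaporates as soon as the set is open-ended or the definition of "win" is moved.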
I agree with you that “optimal all the time, no exceptions” is not a realistic goal or criterion.
Indeed I believe it’s provably impossible even without needing to add the fuzziness and confusion of real life into the mix. Even if we limit ourselves to simple bounded systems.
Which kind of puts a hole in EY’s thesis that it should be possible to have a decision theory which always wins.
Eliezer has conceded that it is impossible in principle to have a decision theory which always wins. He says he wants one that will always win except when an adversary is deliberately making it lose. In other words, he hopes that your scenario is sufficiently complicated that it wouldn’t happen in reality unless someone arranges things to cause the decision theory to lose.
Even if we limit ourselves to simple bounded systems.
If the “simple bounded systems” are, basically, enumerable and the definition of “win” is fixed, F(P) can be a simple lookup table which does always win.
It’s the same thing as saying that given a dataset I can always construct a model with zero error for members of this dataset. That does not mean that the model will perform well on out-of-sample data.
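The same point in a couple of lines of code, with made-up data: a "model" that simply memorizes its dataset has zero error on every member of that dataset, yet can say nothing about points outside it.

```python
# A toy dataset (values are made up for illustration).
train = {1: "a", 2: "b", 3: "a"}

def memorizer(x):
    """Zero-error 'model': just look the answer up in the training data."""
    return train.get(x, "I don't know")

# Zero in-sample error, by construction.
in_sample_errors = sum(memorizer(x) != y for x, y in train.items())
print(in_sample_errors)  # 0

# No guarantee whatsoever out of sample.
print(memorizer(4))      # I don't know
```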
I am also not sure to what degree EY intended this statement as a "hard", literal claim.