This is a strange scenario (it seems to be very different from the sort of scenario one usually encounters in such problems), but sure, let’s consider it. My question is: how is it different from “Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A). Other agents may, or may not, get money, depending on various factors (the details of which are moot)”?
This doesn’t seem to have much to do with decision theories.
Yes, this is correct, and is precisely the point EYNS was trying to make when they said:
Intuitively, this problem is unfair to Fiona, and we should compare her performance to Carl’s not on the “act differently from Fiona” game, but on the analogous “act differently from Carl” game.
“Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A)” is a scenario that does not depend on the decision theory A uses, and hence is an intuitively “unfair” scenario to examine; it tells us nothing about the quality of the decision theory A is using, and therefore is useless to decision theorists. (However, formalizing this intuitive notion of “fairness” is difficult, which is why EYNS brought it up in the paper.)
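To make this concrete, here is a minimal sketch of the distinction (my own illustration; the payoff functions, names, and numbers are invented for exposition and are not taken from the paper or from shminux’s post). A problem whose payoff ignores the agent’s policy cannot distinguish one decision theory from another, whereas a Newcomb-style payoff that depends on the (predicted) choice can:

```python
# Hypothetical illustration: an "unfair" problem vs. a Newcomb-style one.

def unfair_payoff(agent_name, policy):
    """Payoff depends only on who the agent is, never on what it decides."""
    return 0 if agent_name == "A" else 1000  # A gets nothing, whatever it does

def newcomb_style_payoff(agent_name, policy):
    """Payoff depends only on the agent's (predicted) choice."""
    return 1_000_000 if policy() == "one-box" else 1_000

def one_boxer():   # stand-in for one decision theory's recommendation
    return "one-box"

def two_boxer():   # stand-in for another decision theory's recommendation
    return "two-box"

for label, policy in [("one-boxer", one_boxer), ("two-boxer", two_boxer)]:
    print(label,
          "| unfair problem:", unfair_payoff("A", policy),         # same either way
          "| Newcomb-style:", newcomb_style_payoff("A", policy))   # differs by policy
```

Both policies score identically on the unfair problem, so that problem tells us nothing about which decision theory is better; only the Newcomb-style problem discriminates between them.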
I’m not sure why shminux seems to think that his world-counting procedure manages to avoid this kind of “unfair” punishment; the whole point of such a scenario is that it is unfair, and hence unavoidable. There is no way for an agent to win if the problem setup is biased against them to start with, so I can only conclude that shminux misunderstood what EYNS was trying to say when he (shminux) wrote:
I note here that simply enumerating possible worlds evades this problem as far as I can tell.
I didn’t read shminux’s post as suggesting that his scheme allows an agent to avoid, say, being punched in the face apropos of nothing. (And that’s what all the “unfair” scenarios described in the comments here boil down to!) I think we can all agree that “arbitrary face-punching by an adversary capable of punching us in the face” is not something we can avoid, no matter our decision theory, no matter how we make choices, etc.
I am not sure how else to interpret the part of shminux’s post quoted by dxu. How do you interpret it?