dxu did not claim that A could receive the money with 50% probability by choosing randomly. They claimed that a simple agent B that chose randomly would receive the money with 50% probability. The point is that Omega is only trying to predict A, not B, so it doesn’t matter how well Omega can predict B’s actions.
The point can be made even clearer by introducing an agent C that just does the opposite of whatever A would do. Then C gets the money 100% of the time (unless A gets tortured, in which case C also gets tortured).
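A quick simulation sketch of those numbers, assuming the payoff rule I take dxu and Dacyn to have in mind (Omega predicts A, and only A, and the money goes to any agent whose action differs from that prediction); the agents’ behaviors below are illustrative stand-ins rather than anything specified in the thread:

```python
import random

def run_trial() -> dict:
    # Omega predicts A (and only A). Assume A is deterministic, so the
    # prediction is simply whatever A in fact does.
    a_action = "left"
    omega_prediction = a_action

    # Hypothesized payoff rule: an agent gets the money iff its action
    # differs from Omega's prediction *of A*.
    b_action = random.choice(["left", "right"])            # B chooses at random
    c_action = "right" if a_action == "left" else "left"   # C does the opposite of A

    return {
        "A": a_action != omega_prediction,  # never True: A cannot out-guess its own predictor
        "B": b_action != omega_prediction,  # True about half the time
        "C": c_action != omega_prediction,  # always True
    }

trials = [run_trial() for _ in range(10_000)]
for agent in ("A", "B", "C"):
    print(agent, sum(t[agent] for t in trials) / len(trials))
# Typical output: A 0.0, B ~0.5, C 1.0
```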
This doesn’t make a whole lot of sense. Why, and on what basis, are agents B and C receiving any money?
Are you suggesting some sort of scenario where Omega gives A money iff A does the opposite of what Omega predicted A would do, and then also gives any other agent (such as B or C) money iff said other agent does the opposite of what Omega predicted A would do?
This is a strange scenario (it seems to be very different from the sort of scenario one usually encounters in such problems), but sure, let’s consider it. My question is: how is it different from “Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A). Other agents may, or may not, get money, depending on various factors (the details of which are moot)”?
This doesn’t seem to have much to do with decision theories. Maybe shminux ought to rephrase his challenge. After all—
> Please propose a mechanism by which you can make an agent who enumerates the worlds seen as possible by every agent, no matter what their decision theory is, end up in a world with lower utility than some other agent.
… can be satisfied with “Omega punches A in the face, thus causing A to end up with lower utility than B, who remains un-punched”. What this tells us about decision theories, I can’t rightly see.
> This is a strange scenario (it seems to be very different from the sort of scenario one usually encounters in such problems), but sure, let’s consider it. My question is: how is it different from “Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A). Other agents may, or may not, get money, depending on various factors (the details of which are moot)”?
>
> This doesn’t seem to have much to do with decision theories.
Yes, this is correct, and is precisely the point EYNS was trying to make when they said
> Intuitively, this problem is unfair to Fiona, and we should compare her performance to Carl’s not on the “act differently from Fiona” game, but on the analogous “act differently from Carl” game.
“Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A)” is a scenario that does not depend on the decision theory A uses, and hence is an intuitively “unfair” scenario to examine; it tells us nothing about the quality of the decision theory A is using, and therefore is useless to decision theorists. (However, formalizing this intuitive notion of “fairness” is difficult, which is why EYNS brought it up in the paper.)
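To make that “fairness” criterion concrete, here is a toy sketch of my own (not anything from the EYNS paper): model a problem as a function from the agent’s policy to a payoff. In a “fair” problem the payoff depends only on what the policy does; in an “unfair” one it keys on the agent’s identity, so no decision theory can help.

```python
from typing import Callable

Policy = Callable[[], str]  # an agent's decision procedure: () -> action

def fair_problem(policy: Policy) -> int:
    # The payoff is a function of the agent's decisions alone.
    return 1_000_000 if policy() == "one-box" else 1_000

def unfair_problem(policy: Policy, agent_id: str) -> int:
    # The payoff keys on *who* is playing, not on anything they decide,
    # so no choice of decision theory can improve A's outcome.
    return -1 if agent_id == "A" else 100

print(fair_problem(lambda: "one-box"))          # 1000000: the decision itself is rewarded
print(unfair_problem(lambda: "one-box", "A"))   # -1, whatever policy A runs
print(unfair_problem(lambda: "two-box", "B"))   # 100, again independent of the policy
```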
I’m not sure why shminux seems to think that his world-counting procedure manages to avoid this kind of “unfair” punishment; the whole point of it is that it is unfair, and hence unavoidable. There is no way for an agent to win if the problem setup is biased against them to start with, so I can only conclude that shminux misunderstood what EYNS was trying to say when he (shminux) wrote
> I note here that simply enumerating possible worlds evades this problem as far as I can tell.
I didn’t read shminux’s post as suggesting that his scheme allows an agent to avoid, say, being punched in the face apropos of nothing. (And that’s what all the “unfair” scenarios described in the comments here boil down to!) I think we can all agree that “arbitrary face-punching by an adversary capable of punching us in the face” is not something we can avoid, no matter our decision theory, no matter how we make choices, etc.
I am not sure how else to interpret the part of shminux’s post quoted by dxu. How do you interpret it?

> can be satisfied with “Omega punches A in the face, thus causing A to end up with lower utility than B, who remains un-punched”.
It seems to be a good summary of what dxu and Dacyn were suggesting! I think it preserves the salient features without all the fluff of copying and destroying, or having multiple agents. Which makes it clear why the counterexample does not work: I said “the worlds seen as possible by every agent, no matter what their decision theory is,” and the unpunched world is not a possible one for the world enumerator in this setup.
My point was that CDT makes a suboptimal decision in Newcomb’s problem, and FDT struggles to pick the best decision in some of the problems as well, because it gets lost in the forest of causal trees, or at least that is my impression from the EYNS paper. Once you stop worrying about causality and the agent’s ability to change the world by their actions, you end up with the simpler question: “what possible world does this agent live in, and with what probability?”
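As a minimal sketch of what I take the “enumerate possible worlds” idea to mean (the world list, probabilities, and payoffs below are my own illustrative choices for Newcomb’s problem with a 99%-accurate predictor, not shminux’s actual formalism): list the worlds compatible with each policy, weight each by its probability, and pick the policy whose worlds have the highest expected utility.

```python
# Possible worlds for Newcomb's problem with a 99%-accurate predictor.
# Each entry: (probability of that world given the policy, payoff in it).
WORLDS = {
    "one-box": [
        (0.99, 1_000_000),   # predictor foresaw one-boxing; the opaque box is full
        (0.01, 0),           # predictor erred; the opaque box is empty
    ],
    "two-box": [
        (0.99, 1_000),       # predictor foresaw two-boxing; only the visible $1000
        (0.01, 1_001_000),   # predictor erred; both boxes pay out
    ],
}

def expected_utility(policy: str) -> float:
    # No causal reasoning: just weight each possible world by its probability.
    return sum(p * payoff for p, payoff in WORLDS[policy])

best = max(WORLDS, key=expected_utility)
print({policy: expected_utility(policy) for policy in WORLDS})
print("chosen policy:", best)   # one-box: 990000.0 vs two-box: 11000.0
```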