If that’s really all the information you want to preserve, then I don’t understand why you bother with amnesia in Newcomb’s Problem. Just offer the player two boxes: the first contains $1K, the second contains $1M, and taking both boxes triggers a bomb that destroys the second box. I’m not sure what insight into decision theory we’re supposed to get from such translations.
offer the player two boxes: the first contains $1K, the second contains $1M, and taking both boxes triggers a bomb that destroys the second box.
Hmm. For every strategy, this form gives the same expected winnings as the transformed Newcomb and the original Newcomb (given an Omega that doesn’t punish mixed strategies), but the $0 and $1,001,000 outcomes are impossible here, unlike in those versions. Also, expected winnings isn’t the same as expected utility: for some utility functions your problem has a different expected utility than the normal or amnesiac Newcomb even if you play the same strategy in each. So it’s not really equivalent.
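To make that concrete, here’s a quick sketch (my own numbers and choice of utility function, nothing from the original post): take a strategy that one-boxes with probability p, and an Omega that fills box B with the same probability p, independently of the actual pick. The expected dollars match the bomb version exactly, but a concave utility such as a square root does not:

```python
# Sketch: compare the bomb variant with a Newcomb whose Omega matches a
# p-mixer by filling box B with probability p (assumed behaviour, see above).
import math

def bomb_outcomes(p):
    # Take only the second box: $1M.  Take both: the bomb leaves just the $1K box.
    return [(p, 1_000_000), (1 - p, 1_000)]

def matching_omega_outcomes(p):
    # Player one-boxes with prob p; Omega independently fills box B with prob p.
    return [
        (p * p, 1_000_000),          # one-box, B full
        (p * (1 - p), 0),            # one-box, B empty
        ((1 - p) * p, 1_001_000),    # two-box, B full
        ((1 - p) * (1 - p), 1_000),  # two-box, B empty
    ]

def expectation(outcomes, u=lambda x: x):
    return sum(prob * u(money) for prob, money in outcomes)

p = 0.5
for name, outs in [("bomb", bomb_outcomes(p)),
                   ("matching Omega", matching_omega_outcomes(p))]:
    print(name,
          "E[$]:", expectation(outs),                             # both 500500.0
          "E[sqrt($)]:", round(expectation(outs, math.sqrt), 1))  # 515.8 vs 508.0
```

At p = 0.5 both give E[$] = $500,500, but E[sqrt($)] is about 515.8 in the bomb version and 508.0 in the other, because the $0 and $1,001,000 tails only exist in the latter.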
Another example: consider (the transformation of) Parfit’s Hitchhiker. Compare the expected utility of a coinflipping strategy there with the expected utility in the version where you simply plop the player in front of an ATM and drive them out to the desert, dumping them there if they don’t pay $100. The two come out clearly different.
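Roughly, and this is my reconstruction with assumed parameters (utilities $u_{\text{alive}}$ and $u_{\text{dead}}$, a $100 payment, and a transformed driver who rescues a p-coinflipper with probability p):

$$\mathbb{E}[U_{\text{transformed}}] = p\,\bigl(u_{\text{alive}} - 100p\bigr) + (1-p)\,u_{\text{dead}}, \qquad \mathbb{E}[U_{\text{ATM}}] = p\,\bigl(u_{\text{alive}} - 100\bigr) + (1-p)\,u_{\text{dead}},$$

which differ by $100\,p(1-p)$ whenever $0 < p < 1$.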
Your transformation seems to require weird Omegas that respond to randomizing players by randomizing too. It’s not clear to me why an Omega would want to behave like that (probabilistically reward cheaters). Can you handle other kinds of Omegas, e.g. the original kind specified by Eliezer?
I don’t think they’re weird. I think Omegas that go out of their way to discriminate against mixed strategies are weird. A strategy that one-boxes with probability 0.999 never gets a million, while one that one-boxes with probability 1 always gets a million. You could call that a discontinuity.
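To spell out the discontinuity with the usual payoffs (my formalization of the two kinds of Omega): if Omega leaves box B empty against anyone who one-boxes with probability p < 1, versus an Omega that fills box B with probability p, the expected winnings are

$$V_{\text{punish}}(p) = \begin{cases} 1000\,(1-p) & \text{if } p < 1,\\ 1{,}000{,}000 & \text{if } p = 1, \end{cases} \qquad\qquad V_{\text{match}}(p) = 1{,}000{,}000\,p + 1000\,(1-p).$$

The first drops toward zero as p approaches 1 and then jumps to $1M at p = 1; the second varies continuously.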
And I thought 1 was not a probability anyway! Any real rational one-boxing agent will expect to one-box with probability ~1, not with “probability” 1. Does that mean that the agent is using a mixed strategy? On the other hand, any agent that isn’t using quantum randomness will in fact either one-box or two-box, even if it flips coins and stuff. Does that mean the agent is using a pure strategy? I can’t answer this off the top of my head.
I assume the following is the key thing about Eliezer’s original Omega:
Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars
I didn’t see Eliezer saying that Omega doesn’t tolerate mixed strategies. If there were coinflippers among those 100, presumably Omega predicted the results of their coinflips and set up box B accordingly. To the extent that I can’t duplicate the conditions perfectly to make sure any coin will land the same way both times, I can’t do that. To the extent that I can, I can.
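For what it’s worth, here’s a toy model of that kind of Omega (mine, not anything from the thread): if the player’s coinflip is a deterministic function of the world state and Omega just reruns it, a p-mixer only ever sees the $1K and $1M outcomes, with expected winnings p·$1M + (1 - p)·$1K:

```python
# Toy model: an Omega that predicts a coinflipper by rerunning the same
# deterministic coin, rather than merely matching the mixing probability.
import random

def coinflip(world_seed, p=0.5):
    # The "coin" is a deterministic function of the world state.
    return random.Random(world_seed).random() < p   # True = one-box

def play(world_seed):
    prediction = coinflip(world_seed)   # Omega reruns the same coin
    choice = coinflip(world_seed)       # the player's actual flip agrees
    box_b = 1_000_000 if prediction else 0
    return box_b if choice else box_b + 1_000   # two-boxing adds box A's $1K

results = [play(seed) for seed in range(10_000)]
print(sorted(set(results)))          # only 1000 and 1000000 ever occur
print(sum(results) / len(results))   # close to 0.5 * $1M + 0.5 * $1K
```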
Uh, then my transformation of the problem is better than yours because it “predicts” coinflips perfectly, not just “to the extent that I can” :-)