What is the lowest payoff ratio below at which you would one-box on Newcomb’s problem, given your current subjective beliefs? [Or answer “none” if you would never one-box.]
[pollid:469]
Do these options keep any of the absolute payoffs constant, like box A always containing $1,000 and the contents of B varying according to the selected ratio? If not, the varying marginal utility of money makes this difficult to answer—I’m much more likely to risk a sure $1,000 for $1,000,000 than I am to risk a sure $1,000,000 for $1,000,000,000.
Assume all payoffs are in utilons, not dollars.
Keep box A constant at $1,000.
Curious. A majority is more confident in their one-boxing than I am.
Even more curious are the 8% who one-box at 1:1. Why? (Oh, ‘8%’ means ‘one person’. That is somewhat less curious.)
There are now 5 people one-boxing at 1:1. We rationalists may not believe in god but apparently we believe in Omega, may prosperity be upon his name.
If you’re completely confident in one-boxing, then a 1:1 ratio implies that you should be indifferent between one- and two-boxing. If you interpret the original wording as “at what ratio would you be willing to one-box” (instead of “at what ratio would you always insist on one-boxing”), then it makes sense to pick 1:1, since there’d be no reason not to one-box, though also no reason not to two-box.
I had expected that even the people who are confident in one-boxing would mostly not be perfectly confident. All correct answers will be some form of “>1”; “=1” is an error (assuming they are actually answering the Normative Uncertainty Newcomb’s Problem as asked).
I didn’t intend “perfectly confident” to imply people literally assigning a probability of 1. It is enough for them to assign a high enough probability that it rounds closer to 1:1 than 1.01:1.
That isn’t enough. Neither rational agents nor people following the instructions Carl gave for the survey (quoted above) would ever choose the bad deal due to a rounding error. If people went about one-boxing at 0.999:1, I hope you would agree that there is a problem.
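Here is a minimal sketch of the arithmetic behind “>1” (an assumed formalization, not anything specified in the thread): box A holds $1,000, box B holds ratio × $1,000 if Omega predicted one-boxing and is empty otherwise, and p is the probability that Omega’s prediction matches your actual choice. On the usual evidential-style expected-utility calculation, the break-even ratio comes out to 1/(2p - 1), which sits strictly above 1 for any p < 1. The function names below are illustrative.

    # Sketch only: assumes box A = $1000, box B = ratio * $1000 when full,
    # and p = probability that Omega's prediction matches your actual choice.

    def eu_one_box(ratio, p, a=1000.0):
        # You get box B's contents only if Omega (correctly) predicted one-boxing.
        return p * ratio * a

    def eu_two_box(ratio, p, a=1000.0):
        # You always get box A, plus box B's contents if Omega (wrongly) predicted one-boxing.
        return a + (1 - p) * ratio * a

    def break_even_ratio(p):
        # Setting eu_one_box == eu_two_box and solving for the ratio gives 1 / (2p - 1), for p > 0.5.
        return 1.0 / (2.0 * p - 1.0)

    for p in (0.999, 0.99, 0.9):
        print(p, round(break_even_ratio(p), 4))   # 1.002, 1.0204, 1.25 -- all strictly above 1

On this model, one-boxing at exactly 1:1 is never strictly better, and rounding only enters the picture if p is so close to 1 that 1/(2p - 1) falls below the next poll option above 1:1.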
Perhaps the reasoning is that it is good to be the type of agent that one-boxes, as that will lead to good results on most variations of the problem. So having an absolute rule to always one-box can be an advantage: it is easier to predict that you will one-box than it is to predict someone who has to run a complicated calculation to figure out whether it’s worthwhile.
Of course, that only makes a difference if Omega is not perfectly omniscient, but only extremely smart and ultimately fallible. Still, because “in the real world” you are not going to ever meet a perfectly omniscient being, only (perhaps) an extremely smart one, I think one could make a reasonable argument for the position that you should try to be a type of agent that is very easy to predict will one-box.
You might as well precommit to one-box at 1:1 odds anyway. If Omega has ever been observed to make an error, it’s to your advantage to be extremely easy to model in case the problem ever comes up again. On the other hand, if Omega is truly omniscient… well, you aren’t getting more than $1,000 anyway, and Omega knows where to put it.
If there is visibly $1,000 in box A and there’s a probability p < 1 that Omega predicted correctly, then at a 1:1 ratio EU(two-boxing) > EU(one-boxing), unless one is particularly incompetent at opening boxes labelled “A”. Even if Omega is omniscient, I’m not, so I can never have p=1.
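For concreteness, the same assumed model as in the sketch above, pinned at exactly 1:1, gives an expected gap in favour of two-boxing of 2 × $1,000 × (1 - p), which vanishes only at p = 1:

    # At a 1:1 ratio, box B also holds $1000 when full; p is the probability
    # that Omega predicted your choice correctly (assumed model, as above).
    def eu_gap_at_one_to_one(p, a=1000.0):
        eu_two = a + (1 - p) * a   # the sure $1000, plus box B if Omega guessed wrong
        eu_one = p * a             # box B only, and only if Omega guessed right
        return eu_two - eu_one     # simplifies to 2 * a * (1 - p)

    print(round(eu_gap_at_one_to_one(1.0), 4))     # 0.0 -> indifferent only if p is exactly 1
    print(round(eu_gap_at_one_to_one(0.999), 4))   # 2.0 -> two-boxing already ahead in expectation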
If anyone would one-box at 1:1 odds, would they also one-box at 1:1.01 odds (taking $990 over $1000 by two-boxing) in the hope that Omega would offer better odds in the future and predict them better?
I wouldn’t one-box at 1:1.01 odds; the rule I was working off was: “Precommit to one-boxing when box B is stated to contain at least as much money as box A,” and I was about to launch into this big justification on how even if Omega was observed to have 99+% accuracy, rather than being a perfect predictor, it’ll fail at predicting a complicated theory before it fails at predicting a simple one...
...and that’s when I realized that “Precommit to one-boxing when box B is stated to contain more money than box A,” is just as simple a rule that lets me two-box at 1:1 and one-box when it will earn me more.
TL;DR—your point is well taken.
I took that option to mean “one-box all the way down to 1:1, even if it’s 1:1.00001.” If it were actually exactly 1:1, I would be indifferent between one- and two-boxing.
The payoffs listed are monetary, and box A only has $1000. Non-monetary consequences can be highly significant in comparison. There is value in sticking one’s neck out to prove a point.
This isn’t even specified. Carl mentioned that both boxes were to be altered but didn’t bother giving the exact amounts, since it is the ratio that matters for the purpose of the problem.
They also fall under fighting the hypothetical.
It is troubling if “One box! Cooperate!” is such an applause light that people choose it to ‘prove a point’ even when the reason for it to be a good idea is removed. “One Box!” is the right answer in Newcomb’s Problem and the wrong answer in the Normative Uncertainty Newcomb’s Problem (1:1). If there is still value to ‘proving that point’ then something is broken.
Applause lights are one thing, fame (paradoxically, I guess) is another. If one were to imagine the scenario in an otherwise-realistic world, such a rash decision would gain a lot of news coverage. Which can be turned to useful ends, by most people’s lights.
As for fighting the hypothetical, yeah guilty. But it’s useful to remind ourselves that (A) money isn’t utility and, more importantly, (B) while money clearly is ratio scalable, it’s not uncontroversial that utility even fits an interval scale. I’m doubtful about (B), so sticking with money allows me to play along with the ratio assumption—but invites other complications.
Edited to add: in the comments Carl specified to keep box A constant at $1000.
Your model of how to gain fame does not seem to be similar to mine.
I’m looking for the “I don’t understand the question” choice. (Maybe I’m being the Village Idiot today, rather than this actually needing clarification… but I’d bet I’m not alone.)
Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to “verifying if Omega was right”; if Omega is right “a priori” then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, “verifying” or “being playful” might mean, for example, tossing a coin to decide.