I wanted to consider some truly silly solution. Since taking only box A is out (and I can’t find a good reason for choosing box A, other than a vague, irrational argument along the lines that I’d rather not know whether omniscience exists…), I came up with this instead. I won’t apologize for all the math-economics, but it might get dense.
Omega has been correct 100 times before, right? Fully intending to take both boxes, I’ll go to each of the 100 other people. There are four categories of people. Let’s assume they aren’t bound by psychology and are risk-neutral, but they are bound by their beliefs.
Group 1: Two-boxers who defend their decision on the grounds of “no backwards causality” (uh, what’s the smart-people term for that?). They don’t believe in Omega’s omniscience. There are Q1 of these.
Group 2: Two-boxers who regret their decision and so concede Omega’s near-perfect predictive power. There are Q2 of these.
Group 3: One-boxers who are happy with their decision and likewise concede Omega’s near-perfect predictive power. There are Q3 of these.
Group 4: One-boxers who regret forgoing the $1,000. They don’t believe in Omega’s omniscience. There are Q4 of these.
I’ll offer groups 2 and 3 (who believe I’ll walk away with only $1,000) the chance to split my $1,000 among themselves, in proportion to their stakes, if they’re right. Since they believe in Omega’s perfect predictive powers, they think there’s a 0% chance of me winning the bet, so it looks like a good deal to them. Expected profit = 1000/weight − 0 × (their entire stake) > 0.
Groups 1 and 4 are trickier. They think Omega has a probability P of being wrong about me. I’ll ask each of them to stake X = 1,001,000·P/((1−P)·weight) − eps, where weight is a number > 1 that depends on how many people staked how much. Explicitly defining weight(Q1, Q4, various money caps) is a medium-difficulty exercise for a beginning calculus student; if you insist, I’ll model it, but it would take more time than I’ve already spent on this. So, for a person in one of these groups, expected profit = −X·(1−P) + 1,001,000·P/weight = eps·(1−P) > 0!
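A minimal numeric sketch (Python, with made-up values for P, eps, and weight, since none of these are pinned down above), just to check that each kind of bettor sees a positive expected profit by their own lights:

```python
# Sketch of the per-person bets, with assumed illustrative numbers.
P = 0.001          # groups 1 and 4's probability that Omega is wrong about me
eps = 0.01         # the sweetener left on the table for the other party
weight = 10.0      # assumed splitting factor (> 1); the text leaves it abstract
TOTAL = 1_001_000  # contents of both boxes combined

# Groups 2 and 3 believe Omega is never wrong, so they assign probability 0
# to ever paying out; their expected profit is just their share of my $1,000.
believer_expected_profit = (1000 / weight) * 1.0 - 0.0
print(believer_expected_profit > 0)       # True

# Groups 1 and 4 stake X against a 1/weight share of the $1,001,000.
X = TOTAL * P / ((1 - P) * weight) - eps
skeptic_expected_profit = -X * (1 - P) + TOTAL * P / weight
print(round(skeptic_expected_profit, 4))  # eps * (1 - P), roughly 0.01
```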
So what do I have now? (Should I pray to Bayes that my intuition be confirmed?) There are two possible outcomes of taking both boxes:
Outcome 1: both boxes are full. I pay the $1,001,000 out to groups 1 and 4, and collect Q2·1,000 + Q3·1,000,000 from groups 2 and 3, which covers the payout as long as Q2 ≥ 1 and Q3 ≥ 1 (or Q3 ≥ 2 on its own). This outcome has potential for tremendous profit. Call the amount collected PIE >> 1,001,000.
Outcome 2: only box A is full. I split my $1,000 between groups 2 and 3, and collect X1·Q1 + X4·Q4 from groups 1 and 4. What are X1 and X4 again? X, the stake asked of each member of groups 1 and 4, differs between the two groups; call group 1’s value X1 and group 4’s X4.
I need to find the conditions under which X1·Q1 + X4·Q4 > 1000. So suppose I under-maximize my profit and completely ignore poor group 1 (their $1,000 apiece won’t make much difference either way). Then X = X4 becomes much simpler, X4 = 1,001,000·P/((1−P)·Q4) − eps, and the payoff I collect is −Q4·eps + 1,001,000·P/(1−P). P = 0.001 and Q4·eps < $2 guarantee X1·Q1 + X4·Q4 > X4·Q4 > 1000.
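A quick check of both outcomes, with assumed head counts for the Q’s (the real values are whatever the 100 previous players happen to split into):

```python
# Both outcomes of taking two boxes, with assumed group sizes.
P = 0.001
eps = 0.01
Q2, Q3, Q4 = 3, 5, 10   # assumed head counts, for illustration only

# Outcome 1: both boxes full. I pay out $1,001,000 to groups 1 and 4 and
# collect the believers' stakes: $1,000 per regretful two-boxer and
# $1,000,000 per happy one-boxer.
PIE = Q2 * 1_000 + Q3 * 1_000_000
print(PIE, PIE - 1_001_000)   # 5003000 collected, 4002000 net

# Outcome 2: only box A is full. Ignoring group 1, each group-4 member
# staked X4, so I collect Q4 * X4 against the $1,000 I give away.
X4 = 1_001_000 * P / ((1 - P) * Q4) - eps
print(round(Q4 * X4, 2))      # about 1001.90, i.e. > 1000 as claimed
```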
That’s all well and good, but if P is low (well under 0.5), I’m getting less than 1,001,000 in this outcome. What can I do? Hedge again! I would actually go back to the people of groups 1 and 4, except it’s getting too confusing, so let’s introduce a “bank” with the same mentality as groups 1 and 4 (it thinks there’s a chance P that Omega will be wrong about me). Remember PIE? The bank estimates my chance of getting PIE at P. Let’s say that if I don’t get PIE, I get 1,000 (the lowest possible profit for outcome 2; otherwise the bet isn’t worth making). I ask the bank for the following sum up front, in exchange for whatever I actually end up collecting: PIE·P + 1,000·(1−P) − eps. The bank expects a profit of eps > 0. Since PIE is a large number, my guaranteed take is approximately PIE·P + 1,000·(1−P), which exceeds 1,001,000 once PIE·P is large enough.
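Continuing with the same assumed numbers, the bank trade looks like the sketch below; how big the guaranteed floor is depends entirely on how large PIE·P turns out to be:

```python
# The bank hedge, with the same assumed numbers as above.
P = 0.001
eps = 0.01
PIE = 5_003_000   # outcome-1 takings for the assumed Q2 = 3, Q3 = 5

# The bank thinks I end up with PIE with probability P and about $1,000
# otherwise, so it pays me just under the expected value of my position
# in exchange for whatever I actually collect.
guaranteed = PIE * P + 1_000 * (1 - P) - eps
print(round(guaranteed, 2))   # about 6002 for these numbers; grows with PIE * P
```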
Note that I’d been trying to find the LOWER bound on this gambit. Actually plugging in numbers for P and the Q’s easily yielded profits in the $5 million to $50 million range.
You’re essentially engaging in arbitrage, taking advantage of the different probabilities that different people assign to both boxes being full. Which is one reason rational people never assign 0 probability to anything.
You could just as well go to some one-boxers (who “believe P(both full) = 0”) and offer them a $1 bet at 10,000,000:1 odds in your favor that both boxes will be full; then offer the two-boxers whatever bet they will take “that only one box is full” that gives you more than $1 of profit if you win. Either way you make a profit, and you can make it as large as you like just by raising the stakes.
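For concreteness, a sketch of that two-sided book (the $2 and $3 figures are placeholders for “whatever bet the two-boxers will take”):

```python
# A two-sided book: one bet with the one-boxers, one with the two-boxers,
# profitable whichever way the boxes turn out.
my_stake_vs_one_boxers = 1           # I risk $1 that both boxes are full...
payout_from_one_boxers = 10_000_000  # ...at 10,000,000:1 odds they happily accept
my_stake_vs_two_boxers = 2           # assumed: any bet netting me > $1 if I win
payout_from_two_boxers = 3

profit_if_both_full = payout_from_one_boxers - my_stake_vs_two_boxers
profit_if_one_full = payout_from_two_boxers - my_stake_vs_one_boxers
print(profit_if_both_full, profit_if_one_full)   # both positive
```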
This still doesn’t actually solve Newcomb’s problem, though. I’d call it more of a cautionary tale against being absolutely certain.
(Incidentally, since you’re going into this “fully intending” to take both boxes, I’d expect both one-boxers and two-boxers to agree on the extremely low probability that Omega has filled both boxes.)
Yes, nshepperd, my assumption is that P << 0.5, something in the 0.0001 to 0.01 range.
Besides, arbitrage would still be possible if some people estimated P = 0.01 and others P = 0.0001; the scheme would just be messier than anything I’d want to do casually. And if I were unconstrained in the bets I could make (I’d been working with a cap above), turning a profit would be even easier.
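A toy version of that messier case, with made-up prices chosen to sit strictly between the two estimates:

```python
# Arbitrage between two non-zero estimates of P(both boxes full).
# A thinks P = 0.01, B thinks P = 0.0001; trade a contract paying $1
# if both boxes turn out to be full.
n = 1_000_000              # assumed number of contracts
buy_price_from_B = 0.001   # B gladly sells: B values the contract at $0.0001
sell_price_to_A = 0.009    # A gladly buys: A values the contract at $0.01

# The bought and sold contracts cancel, so Omega's verdict doesn't matter.
print(n * (sell_price_to_A - buy_price_from_B))   # $8,000, risk-free
```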
I wasn’t exactly trying to solve the problem, only to find a “naively rational” workaround (using the same naive rationality that leads prisoners to rat each other out in the Prisoner’s Dilemma).
When you say that this doesn’t solve Newcomb’s problem, what do you expect the solution to actually entail?
Yes, arbitrage is possible pretty much whenever people’s probabilities disagree to any significant degree. Setting P = 0 just lets you take it to absurd levels (e.g. put up no stake at all, and it’s still a “fair bet”).
Maximizing the money found upon opening the box(es) you have selected.
If you like, replace the money with cures for cancer with differing probabilities of working, or machines with differing probabilities of being a halting oracle, or something else you can’t get by exploiting other humans.
I don’t know, I feel pretty confident assigning P(A&!A)=0 :P
Do you assign 0 probability to the hypothesis that there exists something which you believe to be mathematically true which is not?
No, P(I’m wrong about something mathematical) is 1 − epsilon. P(I’m wrong about this mathematical thing) is often low, like 2%, and sometimes actually 0, as when discussing the intersection of a set and its complement. It’s defined to be the empty set; there’s no way it can fail to be the empty set. I may not have complete confidence in the rest of set theory, and I may not expect that the complement of a set (or the set itself) is always well-defined, but when I limit myself to probability measures over reasonable spaces I’m content.
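(For the finite case, at least, that claim is mechanically checkable; here is a throwaway sketch over an assumed five-element universe:)

```python
# For every subset A of a finite universe U, A intersected with its
# complement (relative to U) is empty; brute-force check for a small U.
from itertools import chain, combinations

U = set(range(5))   # assumed small universe
subsets = chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))
print(all(set(A) & (U - set(A)) == set() for A in subsets))   # True
```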
So, for some particular aspects of math, you have certainty 1-epsilon, where epsilon is exactly zero?
What you are really doing is making the claim “Given that what I know about mathematics is correct, the intersection of a set and its complement is the empty set.”
I was interpreting “something” as “at least one thing.” Almost surely my understanding of mathematics as a whole is incorrect somewhere, but there are a handful of mathematical statements that I believe with complete metaphysical certitude.
“Correct” is an unclear word here. Suppose I start off with a handful of axioms. What is the probability that one of the axioms is true / correct? In the context of that system, 1, since it’s the starting point. Now, the axioms might not be useful or relevant to reality, and they may conflict, making the system internally inconsistent (i.e. statements having probability 0 and 1 simultaneously). And so the geometer who is only 1 − epsilon sure that Euclid’s axioms describe the real world will be able to update gracefully when presented with evidence that real space is curved, even though they retain the same confidence in their Euclidean proofs (as they apply to abstract concepts).
Basically, I only agree with this post when it comes to statements about which uncertainty is reasonable. If you insist on at most 1 − epsilon certainty for everything, even P(A|A), then you break the math of probability.
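For what it’s worth, the P(A|A) case is forced by the definition of conditional probability whenever P(A) > 0: P(A|A) = P(A & A)/P(A) = P(A)/P(A) = 1, so capping it below 1 contradicts the definition itself.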
The map is not the territory. “A&!A” would mean some fact about the world being both true and false, rather than anyone’s beliefs about that fact.
Assigning zero or nonzero probability to that assertion is having a belief about it.
Yes, the probability is a belief, but your previous question was about something more like P(!A&P(A)=1), that is to say, an absolute belief being inconsistent with the facts. Vaniver’s assertion was about the facts themselves being inconsistent with the facts, which would have a rather alarming lack of implications.
“Pretty confident” is about as close to “actually 0” as the moon is (which I don’t care to quantify :P).
“Pretty confident” was also a rhetorical understatement. :P
How is there anybody in group 4 (one-boxers who regret forgoing the $1,000)? Considering that all of them have $1,000,000, what convinced them to one-box in the first place such that they later changed their minds about it and regretted the decision? (Like, I guess a one-boxer could say afterwards “I bet that guy wasn’t really omniscient, I should have taken the other box too, then I’d have gotten $1,001,000 instead”, but why wouldn’t a person who thinks that way two-box to begin with?)
True.
I only took that case into account for completeness, to cover my bases against the criticism that “not all one-boxers would be happy with their decisions.”
Naively, when you have a choice between $1,000,000.01 and $1,000,000.02, it’s very easy to argue that the latter is the better option. To argue for the former, you would probably cite the insignificance of that cent next to the rest of the $1,000,000.01: that the eps doesn’t matter, that an extra penny in your pocket is inconvenient, or that you already have $1,000,000.01, so why do you need another $0.01?