You’re essentially engaging in arbitrage, exploiting the difference between the probabilities that different people assign to both boxes being full, which is one reason rational people never assign a probability of 0 to anything.
You could just as well go to some one-boxers (who “believe P(both full) = 0”) and offer them a $1 bet at 10,000,000:1 in your favor that both boxes will be full; then offer the two-boxers whatever bet on “only one box is full” they will take that gives you more than $1 profit if you win. Either way you make a profit, and you can make however much you like just by increasing the stakes.
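For concreteness, here is a minimal sketch of the payoff arithmetic, assuming the two-boxers put P(both full) somewhere around 0.001; the stakes on the second bet are hypothetical numbers chosen only so that it pays more than $1 whenever the first bet loses:

```python
# Sketch of the two-sided bet described above.  The $1 stake at 10,000,000:1
# against the one-boxers is taken from the comment; the stakes against the
# two-boxers (assumed to hold P(both full) ~ 0.001) are illustrative.

def net_profit(both_boxes_full: bool) -> int:
    # Bet 1: you stake $1 with the one-boxers and win $10,000,000 if both
    # boxes turn out to be full.
    bet1 = 10_000_000 if both_boxes_full else -1

    # Bet 2: you stake $3,000 on "only one box is full" against the
    # two-boxers' $2.  To someone with P(both full) = 0.001 this looks
    # favourable (0.001 * 3000 > 0.999 * 2), yet it pays you $2 whenever
    # the first bet loses its $1.
    bet2 = -3_000 if both_boxes_full else 2

    return bet1 + bet2

for outcome in (True, False):
    print(f"both boxes full = {outcome}: net profit = ${net_profit(outcome):,}")
# both boxes full = True: net profit = $9,997,000
# both boxes full = False: net profit = $1
```

Whatever happens, the total is positive, and scaling both stakes scales the guaranteed profit, which is all the “arbitrage” amounts to.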
This still doesn’t actually solve Newcomb’s problem, though. I’d call it more of a cautionary tale against being absolutely certain.
(Incidentally, since you’re going into this “fully intending” to take both boxes, I’d expect both one-boxers and two-boxers to agree that the probability Omega has filled both boxes is extremely low.)
Yes, nshepperd, my assumption is that P << 0.5, something in the 0.0001 to 0.01 range.
Besides, arbitrage would still be possible if some people estimated P = 0.01 and others P = 0.0001; the bookkeeping would just be messier than anything I’d want to do casually. And if I were unconstrained in the bets I could make (I had been working with a cap before), making a profit would be even easier.
I wasn’t exactly trying to solve the problem, only to find a “naively rational” workaround (using the same naive rationality that leads prisoners to rat each other out in the Prisoner’s Dilemma).
When you’re saying that this doesn’t solve Newcomb’s problem, what do you expect the solution to actually entail?
Yes, arbitrage is possible pretty much whenever people’s probabilities disagree to any significant degree. Setting P = 0 just lets you take it to absurd levels (e.g. put up no stake at all, and it’s still a “fair bet”).
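To spell out the “no stake at all” point: a bettor who assigns probability \(p\) to “both full” evaluates a bet that pays you \(W\) if both boxes are full and pays them \(L\) otherwise as

\[
\mathbb{E}[\text{their winnings}] = (1-p)\,L - p\,W ,
\]

so fairness requires \(W/L = (1-p)/p\), which grows without bound as \(p \to 0\). At exactly \(p = 0\) the expectation is zero even with \(L = 0\): a “fair bet” in which they can only lose.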
Maximizing the money found upon opening the box(es) you have selected.
If you like, replace the money with cures for cancer with differing probabilities of working, or machines with differing probabilities of being a halting oracle, or something else you can’t get by exploiting other humans.
I don’t know, I feel pretty confident assigning P(A&!A)=0 :P
Do you assign 0 probability to the hypothesis that there exists something which you believe to be mathematically true which is not?
No, P(I’m wrong about something mathematical) is 1-epsilon. P(I’m wrong about this mathematical thing) is often low, like 2%, and sometimes actually 0, as when discussing the intersection of a set and its complement. It’s defined to be the empty set; there’s no way it can fail to be the empty set. I may not have complete confidence in the rest of set theory, and I may not expect that the complement of a set (or the set itself) is always well-defined, but when I limit myself to probability measures over reasonable spaces I’m content.
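The step being leaned on here is just the definitions unwinding: for any \(x\),

\[
x \in A \cap A^{c} \iff (x \in A) \land (x \notin A),
\]

which nothing can satisfy, so \(A \cap A^{c} = \varnothing\) holds by definition rather than as a contingent fact one could turn out to be wrong about.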
So, for some particular aspects of math, you have certainty 1-epsilon, where epsilon is exactly zero?
What you are really doing is making the claim “Given that what I know about mathematics is correct, the intersection of a set and its complement is the empty set.”
I was interpreting “something” as “at least one thing.” Almost surely my understanding of mathematics as a whole is incorrect somewhere, but there are a handful of mathematical statements that I believe with complete metaphysical certitude.
“Correct” is an unclear word here. Suppose I start off with a handful of axioms. What is the probability that one of the axioms is true/correct? In the context of that system, 1, since it’s the starting point. Now, the axioms might not be useful or relevant to reality, or the axioms may conflict, so that the system isn’t internally consistent (i.e. statements having probability 0 and 1 simultaneously). And so the geometer who is only 1-epsilon sure that Euclid’s axioms describe the real world will be able to update gracefully when presented with evidence that real space is curved, even though they retain the same confidence in their Euclidean proofs (as they apply to abstract concepts).
Basically, I only agree with this post when it comes to statements about which uncertainty is reasonable. If you require that nothing be given more than 1-epsilon certainty, even P(A|A), then you break the math of probability.
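Both halves of that can be written out. For the geometer, Bayes’ theorem shows why a prior of exactly 1 can never move, whatever evidence E arrives (assuming \(P(E \mid H) > 0\)):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)} \;=\; \frac{P(E \mid H)}{P(E \mid H)} \;=\; 1 \quad \text{when } P(H) = 1,
\]

whereas a prior of \(1-\epsilon\) leaves room for the curved-space evidence to push it down. And the conditional case is forced by the definition of conditional probability:

\[
P(A \mid A) \;=\; \frac{P(A \cap A)}{P(A)} \;=\; \frac{P(A)}{P(A)} \;=\; 1,
\]

so demanding \(P(A \mid A) \le 1 - \epsilon\) with \(\epsilon > 0\) contradicts the axioms themselves.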
The map is not the territory. “A&!A” would mean some fact about the world being both true and false, rather than anyone’s beliefs about that fact.
Assigning zero or nonzero probability to that assertion is having a belief about it.
Yes, the probability is a belief, but your previous question was about something more like P(!A&P(A)=1), that is to say, an absolute belief being inconsistent with the facts. Vaniver’s assertion was about the facts themselves being inconsistent with the facts, which would have a rather alarming lack of implications.
“Pretty confident” is about as close to “actually 0” as the moon is (which I don’t care to quantify :P).
“Pretty confident” was also a rhetorical understatement. :P