No, I would run out of statements I was that confident in long before I reached a trillion.
Nitpicking.
First, there is more than a 1/10^12 chance of cheating in that game, by putting a strong magnet in the ceiling, for example.
You know that you’re not cheating, and it doesn’t seem likely that Buffett would cheat when doing so would make him less likely to win. Of course, maybe there’s a 10^-10 chance that Buffett would go insane and cheat anyway, but can we just assume a least convenient possible world where we ignore those interfering issues?
Or come up with your own hypothetical if you don’t like mine (you could use Omega instead of Buffett to eliminate cheating).
And second, utility is not linear in money over that interval; Warren Buffett would value a ten cent gain less than 1/10^12 as much as avoiding a $10^11 loss.
I don’t care what Buffett values; the important thing is what I value, and I think I actually value avoiding a ten cent loss a lot more than 10^-12 as much as achieving a $50 billion gain.
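To make the disagreement concrete, here is a minimal Python sketch with assumed numbers (the log-utility curve and the $50,000 starting wealth are illustrative choices, not anything stated above): it computes the smallest win probability at which staking ten cents against a $50 billion prize beats declining, under linear and under logarithmic utility.

```python
import math

# Illustrative numbers only: a ten cent stake against a $50 billion prize.
wealth = 50_000.0   # assumed current wealth of the bettor, in dollars
stake  = 0.10       # lost if the pen falls up
prize  = 50e9       # won if the pen falls down

def break_even_win_prob(utility):
    """Smallest P(win) at which taking the bet is at least as good as declining."""
    u_now  = utility(wealth)
    u_win  = utility(wealth + prize)
    u_lose = utility(wealth - stake)
    # Indifference point: p * u_win + (1 - p) * u_lose = u_now, solved for p.
    return (u_now - u_lose) / (u_win - u_lose)

print(break_even_win_prob(lambda w: w))  # linear utility: ~2.0e-12
print(break_even_win_prob(math.log))     # log utility:    ~1.4e-07
```

Under the assumed log curve the required win probability is roughly five orders of magnitude higher than the raw money ratio suggests, which is exactly the shape of the claim that a ten cent loss matters more than 10^-12 of a $50 billion gain.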
No, I would run out of statements I was that confident in long before I reached a trillion.
Nitpicking.
Not at all. The feeling that it is impossible to make a trillion statements like that and never be wrong partly stems from our inability to conceive of a trillion distinct statements each supported by as much evidence as the validity of the laws of gravitation. (The second source of the intuition is imagining yourself making a mistake out of sheer fatigue from producing one prediction after another.) There are certainly far fewer than a trillion independent statements of comparable trustworthiness that people can utter, which makes the general calibration approach to defining subjective probability hard to apply here.
You know that you’re not cheating, and it doesn’t seem likely that Buffett would cheat when doing so would make him less likely to win.
Someone else may put a magnet in the pen; this is the sort of concern you cannot rule out in real life. Or perhaps Buffett is fed up with being rich and wants to give away his possessions to some random person, and to do it in an unusual way. But I find it most probable that your willingness to accept this very bet is caused by the same bias which makes people buy lottery tickets, except that here you are able to rationalise the probabilities afterwards to justify your decision.
As for your estimate of 99.99% for the pen falling down rather than up (no cheating assumed): seriously? Does that mean you expect the pen to fall up once in every ten thousand trials? If 99.99% is your upper bound for probability in general (a reasonable interpretation, since the pen example is certainly one of the most certain predictions one can state), do you play any lottery where the jackpot is worth more than 10,000 times the ticket price?
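Spelling out the arithmetic behind that last question (a worked version of the implied reductio, not something stated above): if 99.99% is a ceiling on all probabilities, then every event, including winning, gets probability at least $10^{-4}$, so with ticket price $t$ and jackpot $J > 10^4\,t$,

$$\mathbb{E}[\text{profit}] \;\ge\; 10^{-4} \cdot J - t \;>\; 10^{-4} \cdot 10^4\,t - t \;=\; 0,$$

and every such lottery looks like a good deal.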
Someone else may put a magnet in the pen; this is the sort of concern you cannot rule out in real life. Or perhaps Buffett is fed up with being rich and wants to give away his possessions to some random person, and to do it in an unusual way.
As I said, least convenient possible world.
As for your estimate of 99.99% for the pen falling down rather than up (no cheating assumed): seriously?
Firstly, as of my recent discussion with TOD, my actual estimate is now more like 99.999% than 99.99%, but still way below 99.9999999999%. I do not assume no cheating in this estimate; I merely assume the amount of cheating you’d expect when idly tossing a pen in my room, rather than the amount I’d expect when playing billion-dollar games with Warren Buffett.
Does that mean you expect the pen to fall up once in every ten thousand trials?
No. Most of that probability is concentrated in hypotheses where the pen never falls up. This is the difference between a Bayesian probability and a frequency.
I will point out, though, that I have probably dropped fewer than 10,000 pens in my life, and one of them did go sideways (as best I could tell, there was no statistically significant vertical component to its movement), though I suppose I should have predicted that given the weather conditions at the time.
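A toy model of that concentration, with made-up numbers (the 2×10^-5 prior and the falls-up-half-the-time alternative are assumptions chosen so the one-shot probability matches the ~99.999% above):

```python
def p_h1_given_downs(n_down, prior_h1=2e-5, p_up_given_h1=0.5):
    """Posterior P(H1 | n_down consecutive pens fell down), by Bayes' rule.

    H0: pens always fall down.
    H1: something weird (hidden magnet, etc.) makes pens fall up half the time.
    """
    like_h1 = (1 - p_up_given_h1) ** n_down  # H1 predicts "down" with prob 0.5
    like_h0 = 1.0                            # H0 predicts "down" with certainty
    return like_h1 * prior_h1 / (like_h1 * prior_h1 + like_h0 * (1 - prior_h1))

for n in (0, 10, 50):
    print(f"after {n:2d} drops: P(next pen falls up) = {p_h1_given_downs(n) * 0.5:.1e}")
# -> 1.0e-05, 9.8e-09, 8.9e-21: the 10^-5 lives in "maybe the world is weird",
#    not in a prediction of one fall-up per hundred thousand drops.
```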
If 99.99% is your upper bound for probability in general (a reasonable interpretation, since the pen example is certainly one of the most certain predictions one can state), do you play any lottery where the jackpot is worth more than 10,000 times the ticket price?
Actually, there is one kind of claim to which I will happily assign probabilities like 99.9999999999% and higher: the negation of a very specific claim (which is equivalent to the union of a huge number of other claims). For example:
“There is not currently a gang of 17643529 elephants, each painted with the national flag of Moldova, all riding the same giant unicycle around a 57 km wide crater on the innermost planet of the 59th closest star to Earth in the Andromeda galaxy”.
I am happy to go well above 99.9999999999% on that one, since there are easily 10^12 mutually exclusive alternatives, all at least as plausible. (In addition, by allowing claims like this but slightly more plausible, I bet I could find a trillion claims, each about as plausible as the pen.)
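The counting step behind this, written out (N = 10^12 is the figure above; C is the specific elephant scenario, so the quoted statement is its negation ¬C, and C is taken to be one of the alternatives): if $A_1, \dots, A_N$ are mutually exclusive and each satisfies $P(A_i) \ge P(C)$, then

$$N \cdot P(C) \;\le\; \sum_{i=1}^{N} P(A_i) \;\le\; 1 \quad\Longrightarrow\quad P(C) \le 10^{-12} \quad\Longrightarrow\quad P(\neg C) \ge 1 - 10^{-12}.$$

The disjointness is what does the work: no single claim can outweigh a trillion mutually exclusive rivals that are each at least as plausible.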
What is impressive about the pen, as well as about most scientific theories, is that it is a single, very specific hypothesis which earned its high probability through accuracy rather than bulk.