As in the LHC example, the criterion is making a million statements with independent reasoning behind each. Predicting a non-win in a million independent lotteries isn’t what ciphergoth was thinking, so much as making a million predictions in widely different areas, each of which you (or I) estimate has probability less than 10^-8.
Even ruling out fatigue as a factor by imagining Omega copies me a million times and asks each a different question, I believe my mind is so constituted that I’d be very overconfident in tens of thousands of cases, and that several of them would prove me wrong.
Everything is dependent on everything else. I can’t make many independent statements.
That’s certainly true given full rationality and arbitrary computing power, but there are certainly many individual things I could be wrong about without being able to immediately see how it contradicts other things I get right. I wouldn’t put it past Omega to pull this off.
I’m not sure this properly represents what I was thinking. We all agree that any decision procedure that leads you to play the lottery is flawed. But the “million equivalent statement” test seems to indicate that, given the payoffs, you can’t be confident enough of not winning to justify not playing. If you insist on independent reasoning, passing the million-statement test is even harder, and justifying not playing is therefore harder still. It’s a kind of real-life Pascal’s mugging.
I don’t have a solution to Pascal’s mugging, but for the lottery, I’m inclined to think that I really can have 10^-8 confidence of not winning, that the flaw is with the million-statement test, and it’s simply that there aren’t a million disparate situations where you can have this kind of confidence, though there certainly are a million broadly similar situations in the reference class “we are actually in a strong position to calculate high-quality odds on this coming to pass”.
I don’t.
Can you please explain that further? Why not? Do you just mean that the pleasure of buying the ticket could be worth a dollar, even though you know you won’t win?
Just reasoning based on a non-linear relationship between money and utility.
Winning ten million dollars provides less than ten million times the utility of winning one dollar, because the richer you are, the less difference each additional dollar makes. That seems to argue against playing the lottery, though.
$5,000,000 debt. Bankruptcy laws.
Very clever! You’re right; that is a situation where you might as well play the lottery.
This actually comes up in business, in terms of the types of investments that businesses make when they have a good chance of going bankrupt. They may not play the lottery, but they’re likely to make riskier moves since they have very little to lose and a lot to gain.
It also applies if you believe your company will be bailed out by the government. I don’t tend to approve of bank bailouts for this reason. (Although government guarantees for deposits I place in a different category.)
It looks to me like the flaw is in calculating the expected utility after adjusting the probability estimate for the probability of error.
What alternative do you have in mind?
Well, in an abstract case it would be reasonable, but if you are considering (for example) the lottery, the rule of thumb “you won’t win playing the lottery” outweighs any expectation of errors in your own calculations.
Potentially promising approach, but how does that translate into math?
Let A represent the event when the lottery under consideration is profitable (positive expected value from playing); let X represent the event in which your calculation of the lottery’s value is correct. What is desired is P(A). Trivially:

P(A) = P(X) * P(A|X) + P(~X) * P(A|~X)
From your calculations, you know P(A|X): this is the arbitrarily strong confidence komponisto described. What you need to estimate are P(X) and P(A|~X).
P(X) I cannot help you with. From my own experience, depending on whether I checked my work, I’d put it somewhere in the range [0.9, 0.999], but that’s your business.
P(A|~X) I would put somewhere in the range [1e-10, 1e-4].
In order to conclude that you should always play the lottery, you would have to put P(A|~X) close to unity.
Q.E.D.
Edit: The error I see is supposing that a wrong calculation gives positive information about the correct answer. That’s practically false—if your calculation is wrong, the prior should be approximately correct.
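The decomposition above can be sketched numerically. All the specific values below are illustrative stand-ins consistent with the stated ranges, not figures from the thread:

```python
# Illustrative sketch of P(A) = P(X) * P(A|X) + P(~X) * P(A|~X).
# All numbers are made-up stand-ins within the ranges given above.
p_X = 0.99              # confidence that your calculation is correct
p_A_given_X = 1e-9      # calculated chance the lottery is profitable
p_A_given_not_X = 1e-7  # chance it is profitable if the calculation is wrong

p_A = p_X * p_A_given_X + (1 - p_X) * p_A_given_not_X
print(p_A)  # roughly 2e-9: nowhere near the near-unity needed to justify playing
```

Unless P(A|~X) is pushed close to 1, the error term barely moves the answer.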
I think this doesn’t work, or at least is incomplete. Under standard decision theory, what you need in order to decide whether to play is not the probability that the lottery has a positive expected value, but the expected utility of playing, and I don’t see how to compute that from your P(A) (assuming that utility is linear in dollars).
ETA: In case the point isn’t clear, suppose P(A)=1e-4, but the expected value of the lottery, conditional on A being true, is 1e5, then you should still play, right?
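The arithmetic behind the ETA can be checked directly; the $1 ticket cost in the losing case is my assumption:

```python
# Worked version of the ETA example: P(A) = 1e-4 and, conditional on A,
# the lottery's expected value is 1e5; assume the ticket costs $1 otherwise.
p_A = 1e-4
ev = p_A * 1e5 + (1 - p_A) * (-1.0)
print(ev)  # about 9 dollars, so the expected value says to play
```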
You’re right: recalculating...
Let E(A) be the expected value of the lottery that you should use in determining your actions. Let E(a) be the expected value you calculate. Let p be your confidence in your calculation (a probability in the Bayesian sense).
If we want to account for the possibility of calculating wrong, we are tempted to write something like

E(A) = p * E(a) + (1-p) * x

where x is what you would expect the lottery to be worth if your calculation was wrong.
The naive calculation—the one which says, “play the lottery”—takes x as equal to the jackpot. This is not justified. The correct value for x is closer to your reference-class prediction.
With x set to “negative the cost of the ticket, plus epsilon”, it becomes abundantly clear that your ignorance does not make the lottery a good bet.
Edit: This also explains why you check your math before betting when it looks like a lottery is a good bet, which is nice.
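As a sketch of the corrected calculation for a $1 ticket (all numbers below are made up for illustration, not taken from the thread):

```python
# E(A) = p * E(a) + (1 - p) * x, comparing two choices of x:
# the naive one (x = jackpot) versus the reference-class one
# (x = negative the ticket cost, plus epsilon).
p = 0.999    # confidence in your calculation
E_a = -0.50  # calculated expected value of a $1 ticket
jackpot = 1e7

naive = p * E_a + (1 - p) * jackpot          # says "play the lottery"
corrected = p * E_a + (1 - p) * (-1 + 1e-6)  # stays a bad bet
print(naive, corrected)
```

Only the unjustified choice x = jackpot turns the lottery positive; the reference-class choice leaves it negative.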
If we follow your suggestion and obtain E(A) < 0, then compute from that the probability of winning the lottery, we end up with P(will win lottery) < 1e-8. But what if we want to compute P(will win lottery) directly? Or, if you think we shouldn’t try to compute it directly, but should do it in this roundabout way, then we need a method for deciding when this indirect method is necessary. (Meta point: I think you might be stopping at the first good answer.)
The parallel calculation would be

P(will win) = p * P_calculated + (1-p) * P_typical

where P_typical is the reference-class chance of winning. I don’t put P_typical very high.
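A sketch of how that parallel calculation might run. The formula and every number here are my reconstruction, mixing the calculated win probability with a reference-class figure P_typical:

```python
# Hypothetical parallel of E(A) = p * E(a) + (1 - p) * x, applied to the
# win probability directly; all values below are illustrative assumptions.
p = 0.999          # confidence in the calculation
p_win_calc = 1e-8  # calculated chance of winning
p_typical = 1e-7   # reference-class chance of winning (not put very high)

p_win = p * p_win_calc + (1 - p) * p_typical
print(p_win)  # still on the order of 1e-8
```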
Okay, I’ll grant you that one. I’m still promoting my original idea to a top-level post.
Edit: …in part because I would like more eyes to see it and provide feedback—I would love to know if it has some interesting faults.
Edit: Here it is.