I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Well, that is what Bayesian decision theory would suggest you do, provided your utility function is linear with respect to money.
But, to illustrate the problem with acting as though you were 100% certain of your best theory, suppose I offer you the following bet. I will roll an ordinary six-sided die, and if the result is between 1 and 4 (inclusive), I will pay you $10. But if the result is 5 or 6, you will pay me $100. Getting a result between 1 and 4 is more likely than getting a 5 or a 6, so you treat it as certain and accept my bet, assigning it an expected value of $10. But really, the expected value is (2/3)($10) − (1/3)($100) = −$80/3; on average, you lose about $27 with this bet.
The problem here is that, by acting as though you are 100% sure, you give no weight to the potential costs of being wrong (including the opportunity cost of the potential benefits of a different decision).
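For concreteness, here is a small Python sketch of the two valuations of that bet: the value you get by treating the most likely outcome as certain, and the full expected value. It is only an illustration of the arithmetic above, not part of the original exchange.

```python
from fractions import Fraction

# The bet described above: roll a fair six-sided die.
# A result of 1-4 pays you $10; a result of 5 or 6 costs you $100.
payoffs = {1: 10, 2: 10, 3: 10, 4: 10, 5: -100, 6: -100}

# Treating the most likely outcome (a result of 1-4) as certain
# values the bet at a flat $10.
naive_value = 10

# The full expected value weights every outcome by its probability of 1/6.
expected_value = sum(Fraction(1, 6) * v for v in payoffs.values())

print(f"Value if 1-4 is treated as certain: ${naive_value}")
print(f"Expected value: {float(expected_value):.2f} dollars")  # about -26.67
```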
Right; I wasn’t thinking. Your example is better.
I was talking about ordinary circumstances. I’ve never bet money on the roll of a die, nor shall I. If it were to come up, I might well do the sort of analysis you suggest, as probability seems like it’s correctly applied to die rolling. Can you think of a better example that might actually occur in my life?
[P]robability seems like it’s correctly applied to die rolling.
Die rolls are deterministic. Given the initial orientation, the mass and elasticity of the die, the position, velocity, and angular momentum it is released with (which themselves are deterministic), and the surface it is rolled on, it is possible in principle to deduce what the result will be. (Quantum effects will be negligible; the classical approximation is valid in this domain. Imagine the die is thrown by a mechanical device if you are worried this does not apply to the nervous system of the die roller.)
The probability does not describe randomness in the die, because the die is not random. The probability describes your ignorance of the relevant factors and your lack of logical omniscience to compute the result from those factors.
If you reject this argument in the case of dice rolling, how do you accept it (or what alternative do you use) in other cases of probability representing uncertainty?
Do you wear a seatbelt when you ride in a car? (I’m aware of at least one libertarian who didn’t.) The most probable theory is that you won’t need to, but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on. Any action you take that fits this pattern qualifies.
I’m happy to report that I have made the decision to wear seat belts without evaluating anything using probability. If the justification is really:
but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on
Then you’re not explicitly assigning probabilities. Change ‘small chance’ to ‘5%’ and I’d wonder how you got that number, and what would happen if the chance were 4.99%.
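As a sketch of the kind of comparison at issue, the expected-cost arithmetic can be written out with made-up numbers. The 5% and 4.99% figures come from the comment above; the cost of buckling up and the cost of the harm are purely illustrative assumptions.

```python
# Illustrative expected-cost comparison for a precaution.
# The cost figures below are made up for the example; only the
# probabilities (5% vs. 4.99%) come from the discussion above.
effort_cost = 1          # nominal cost of putting the seatbelt on
harm_cost = 100_000      # nominal cost of the harm the belt would prevent

def expected_harm(p_harm: float) -> float:
    """Expected cost of skipping the precaution at a given harm probability."""
    return p_harm * harm_cost

for p in (0.05, 0.0499):
    print(f"p = {p}: expected harm ${expected_harm(p):,.0f} vs. effort ${effort_cost}")

# At either probability the expected harm dwarfs the effort of buckling up,
# so the decision does not turn on whether the chance is 5% or 4.99%.
```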
How did you make the decision to wear seat belts then? If it is because you were taught to at a young age, or because it is the law, then can you think of any safety precaution you take (or don’t take) because it prevents or mitigates a problem that you believe has less than a 50% chance of occurring any particular time you do not take the precaution?
Then you’re not explicitly assigning probabilities.
Often we make decisions based on our vague feelings of uncertainty, which are difficult to describe as a probability that could be communicated to others or explicitly analyzed mathematically. This difficulty is a failure of introspection, but the uncertainty we feel does somewhat approximate Bayesian probability theory. Many biases represent the limits of this approximation.
I was arguing against:
On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
with the implicit assumption that “best positions” are about states of the world, and not synonymous with “best decisions”.
I guess we need to go back to Z. M. Davis’s last paragraph, reproduced here for your convenience:
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don’t have to think in terms of likelihood ratios to say things like “It’s probably going to rain today” or “I think I locked the door, but I’m not entirely sure.” Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn’t helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you’ve locked the door, but you’re not entirely sure? Or does that just not happen to you?