While I agree that voting shouldn’t be based strictly on agreement/disagreement, voting is supposed to be an indicator of comment quality, with downvotes going to poorly-argued comments that one would like to see less of. It is worth bearing in mind that the more mistaken a conclusion is, the less likely one is to encounter strong comments in support of that conclusion.
If someone were to present specific, clearly-articulated arguments purporting to show that popular notions of accuracy and calibration are mistaken, that might well deserve an upvote in my book. But above, thomblake seems to be rejecting out of hand the very notion of decisionmaking under uncertainty, which seems to me to be absolutely fundamental to the study of rationality. (The very name Less Wrong denotes wanting beliefs that are closer to the truth, even if one knows that not everything one believes is perfectly true.) I’ve downvoted thomblake’s comment for this reason, and I’ve downvoted your comment because I don’t think it advances the discourse to discourage downvotes of poor comments.
rejecting out of hand the very notion of decisionmaking under uncertainty
Nope. I’m something of a Popperian. On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
There are some circumstances where we need to make a decision without anything we can feel that strongly about, but I think that in most circumstances, bringing ‘probability’ into the process isn’t helpful. Humans just aren’t built to think like that, and I’d rather use just plain ‘judgment’.
On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
Thank you for the clarification. Although frankly, I don’t see how that could possibly work. I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Okay, coinflip examples are always somewhat contrived (although I really could offer you that bet), but we can come up with much more realistic scenarios, too. Say you’ve just been on a job interview that went really well, and it seems like you’re going to get the position. Do you therefore not bother to apply anywhere else, acting with 100% certainty that you will get the job? Or I guess you could alternatively say that you simply “don’t know” whether you’ll get the position—but can “I don’t know” really be a complete summary of your epistemological state? Wouldn’t you at least have some qualitative feeling of “The interview went well; I’ll probably get hired” versus “The interview did not go well at all; I probably won’t get hired”?
Humans just aren’t built to think like that
I certainly agree that humans aren’t built to naturally think in terms of probabilities, but I see no reason to believe that human-default modes of reasoning are normative: we could just be systematically stupid on an absolute scale, and indeed I am rather convinced this is the case.
bringing ‘probability’ into the process isn’t helpful
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don’t have to think in terms of likelihood ratios to say things like “It’s probably going to rain today” or “I think I locked the door, but I’m not entirely sure.” Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn’t helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you’ve locked the door, but you’re not entirely sure? Or does that just not happen to you?
I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Well, that is what Bayesian decision theory would suggest you do, provided your utility function is linear with respect to money.
But, to illustrate the problem with acting as though you were 100% certain of your best theory, suppose I offer you the following bet. I will roll an ordinary six-sided die, and if the result is between 1 and 4 (inclusive), I will pay you $10; if the result is 5 or 6, you will pay me $100. You see that a result between 1 and 4 is more likely than a 5 or 6, so you treat it as certain and accept my bet, to which you assign an expected value of $10. But really, the expected value is (2/3)($10) − (1/3)($100) = −$80/3. On average, you lose about $27 with this bet.
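If it helps to see that arithmetic spelled out, here is a minimal sketch of the same calculation; nothing in it goes beyond the payoffs and probabilities of the bet above:

```python
from fractions import Fraction

# Payoffs and probabilities from the bet described above.
win = 10     # you receive $10 on a roll of 1-4
loss = -100  # you pay $100 on a roll of 5 or 6

p_win = Fraction(4, 6)   # P(roll is 1-4) = 2/3
p_loss = Fraction(2, 6)  # P(roll is 5 or 6) = 1/3

expected_value = p_win * win + p_loss * loss
print(expected_value)         # -80/3
print(float(expected_value))  # approximately -26.67
```

Treating a roll of 1 through 4 as certain amounts to dropping the second term entirely, which is exactly where the $10 figure comes from.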
The problem here is that, by acting as though you are 100% sure, you give no weight to the potential costs of being wrong (including the opportunity cost of the potential benefits of a different decision).
Right; I wasn’t thinking. Your example is better.
I was talking about ordinary circumstances. I’ve never bet money on the roll of a die, nor shall I. If it were to come up, I might well do the sort of analysis you suggest, as probability seems like it’s correctly applied to die rolling. Can you think of a better example, one that might actually occur in my life?
[P]robability seems like it’s correctly applied to die rolling.
Die rolls are deterministic. Given the initial orientation, the mass and elasticity of the die, the position, velocity, and angular momentum it is released with (which are themselves deterministic), and the surface it is rolled on, it is possible in principle to deduce what the result will be. (Quantum effects will be negligible; the classical approximation is valid in this domain. Imagine the die is thrown by a mechanical device if you are worried that this does not apply to the nervous system of the die roller.)
The probability does not describe randomness in the die, because the die is not random. The probability describes your ignorance of the relevant factors and your lack of logical omniscience to compute the result from those factors.
If you reject this argument in the case of die rolling, how do you accept it (or what alternative do you use) in other cases where probability represents uncertainty?
Do you wear a seatbelt when you ride in a car? (I’m aware of at least one libertarian who didn’t.) The most probable theory is that you won’t need to, but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on. Any action you take that fits this pattern qualifies.
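The trade-off I have in mind can be written out as a tiny expected-value comparison; the numbers below are made up solely for illustration, not estimates of anything:

```python
# All of these numbers are hypothetical, purely to illustrate the shape of the argument.
p_crash = 1e-6              # assumed chance of a serious crash on any given trip
harm_prevented = 1_000_000  # assumed cost (in arbitrary utility units) a belt averts
effort = 0.01               # assumed nuisance cost of buckling up

expected_benefit = p_crash * harm_prevented  # 1.0 in these units
print(expected_benefit > effort)  # True: even a small chance outweighs the effort
```

The point is only that a low probability attached to a large enough harm can still dominate a small, certain cost.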
I’m happy to report that I have made the decision to wear seat belts without evaluating anything using probability. If the justification is really:
but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on
Then you’re not explicitly assigning probabilities. Change ‘small chance’ to ‘5%’ and I’d wonder how you got that number, and what would happen if the chance were 4.99%.
How did you make the decision to wear seat belts, then? If it is because you were taught to at a young age, or because it is the law, then can you think of any safety precaution you take (or don’t take) because it prevents or mitigates a problem that you believe would have less than a 50% chance of occurring any particular time you do not take the precaution?
Then you’re not explicitly assigning probabilities.
Often we make decisions based on our vague feelings of uncertainty, which are difficult to describe as a probability that could be communicated to others or explicitly analyzed mathematically. This difficulty is a failure of introspection, but the uncertainty we feel does somewhat approximate Bayesian probability theory. Many biases represent the limits of this approximation.
I was arguing against:
On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
with the implicit assumption that “best positions” are about states of the world, and not synonymous with “best decisions”.
I guess we need to go back to Z. M. Davis’s last paragraph, reproduced here for your convenience:
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don’t have to think in terms of likelihood ratios to say things like “It’s probably going to rain today” or “I think I locked the door, but I’m not entirely sure.” Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn’t helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you’ve locked the door, but you’re not entirely sure? Or does that just not happen to you?