Person who put “2172”, you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put “1700”, allowing the mean to revert back to within one year of the correct value :P
Not to worry—I am a believer in the wisdom of crowds, so I knew full well that I wasn’t going to be screwing up anything. That response was pure noise.
I just don’t like guessing, and so I put “0%” for my confidence on that question, so that one of my answers was definitely wrong and the other was definitely right.
Yes, but what was the point of that survey question? Among other things, it could assess a) the distribution of the survey takers’ accuracy, b) the distribution of the survey takers’ calibration, c) the relationship of accuracy and calibration to other personal characteristics.
I don’t mean to make an overly big deal about this, and I appreciate thomblake’s other contributions to the LW community, but because he didn’t really give us his best guess about when the lightbulb was invented, he reduced our ability to learn all these things.
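For concreteness, here is a minimal sketch (in Python, with made-up responses) of how such answers could be scored; the “within ten years” standard and the specific numbers are assumptions chosen purely for illustration, and this is only one simple notion of calibration among many:

    # Made-up responses of the form (guessed year, stated confidence, true year).
    # 1879 is the commonly cited date for Edison's carbon-filament lamp.
    responses = [
        (1879, 0.9, 1879),
        (1850, 0.6, 1879),
        (1920, 0.8, 1879),
    ]

    # Accuracy: how far the guesses are from the truth, on average.
    mean_error = sum(abs(guess - truth) for guess, _, truth in responses) / len(responses)

    # Calibration: does stated confidence match how often guesses land "close enough"?
    # "Within ten years" is an assumed standard, just for illustration.
    hits = [abs(guess - truth) <= 10 for guess, _, truth in responses]
    hit_rate = sum(hits) / len(hits)
    mean_confidence = sum(conf for _, conf, _ in responses) / len(responses)

    print(mean_error)                  # about 23.3 years
    print(mean_confidence, hit_rate)   # 0.77 vs 0.33: stated confidence exceeds actual hits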
I don’t think much of the concepts of ‘accuracy’ and ‘calibration’ and whatnot as they are used here. As far as I’m concerned, the correct response choices were either the correct answer with 100% confidence, or “I don’t know”. So my contemptuous answer to the question can be used to relate my attitude towards that to other personal characteristics.
I thought you were a believer in the wisdom of crowds, though? The magic of collaborative estimation doesn’t occur if everybody who isn’t absolutely sure shuts up.
When providing what you think is the correct answer, there is still some probability that you’re mistaken. That probability could be 10^-10, but it can’t be zero. And when answering “I don’t know”, you can still guess, and produce a probability that your guess was correct. Lumping all low probabilities of being correct into a single qualitative judgment of “I don’t know” sometimes makes sense, but sometimes a concrete probability is useful so you should know how to generate one.
At the time of this comment, thomblake’s comment above is at −3 points and there are no comments arguing against his opinion or explaining why he is wrong. We should not downvote a comment simply because we disagree with it. Thomblake expressed an opinion that differs (I presume) from the community majority. A better response to such an expressed opinion is to present arguments that correct his belief.
Voting based on agreement/disagreement will lead people not to express viewpoints they believe differ from the community’s.
While I agree that voting shouldn’t be based strictly on agreement/disagreement, voting is supposed to be an indicator of comment quality, with downvotes going to poorly-argued comments that one would like to see less of. It is worth bearing in mind that the more mistaken a conclusion is, the less likely one is to encounter strong comments in support of that conclusion.
If someone were to present specific, clearly-articulated arguments purporting to show that popular notions of accuracy and calibration are mistaken, that might well deserve an upvote in my book. But above, thomblake seems to be rejecting out of hand the very notion of decisionmaking under uncertainty, which seems to me to be absolutely fundamental to the study of rationality. (The very name Less Wrong denotes wanting beliefs that are closer to the truth, even if one knows that not everything one believes is perfectly true.) I’ve downvoted thomblake’s comment for this reason, and I’ve downvoted your comment because I don’t think it advances the discourse to discourage downvotes of poor comments.
rejecting out of hand the very notion of decisionmaking under uncertainty
Nope. I’m something of a Popperian. On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
There are some circumstances where we need to make a decision without anything we can feel that strongly about, but I think that in most circumstances, bringing ‘probability’ into the process isn’t helpful. Humans just aren’t built to think like that, and I’d rather use just plain ‘judgment’.
On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
Thank you for the clarification. Although frankly, I don’t see how that could possibly work. I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Okay, coinflip examples are always somewhat contrived (although I really could offer you that bet), but we can come up with much more realistic scenarios, too. Say you’ve just been on a job interview that went really well, and it seems like you’re going to get the position. Do you therefore not bother to apply anywhere else, acting with 100% certainty that you will get the job? Or I guess you could alternatively say that you simply “don’t know” whether you’ll get the position—but can “I don’t know” really be a complete summary of your epistemological state? Wouldn’t you at least have some qualitative feeling of “The interview went well; I’ll probably get hired” versus “The interview did not go well at all; I probably won’t get hired”?
Humans just aren’t built to think like that
I certainly agree that humans aren’t built to naturally think in terms of probabilities, but I see no reason to believe that human-default modes of reasoning are normative: we could just be systematically stupid on an absolute scale, and indeed I am rather convinced this is the case.
bringing ‘probability’ into the process isn’t helpful
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don’t have to think in terms of likelihood ratios to say things like “It’s probably going to rain today” or “I think I locked the door, but I’m not entirely sure.” Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn’t helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you’ve locked the door, but you’re not entirely sure? Or does that just not happen to you?
I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Well, that is what Bayesian decision theory would suggest you do, provided your utility function is linear with respect to money.
But, to illustrate the problem with acting as though you were 100% certain of your best theory, suppose I offer you the following bet. I will roll an ordinary six-sided die, and if the result is between 1 and 4 (inclusive), I will pay you $10. But if the result is 5 or 6, you will pay me $100. You see that a result between 1 and 4 is more likely than a 5 or 6, so you treat it as certain and accept my bet, to which you assign an expected value of $10. But really, the expected value is (2/3)($10) − (1/3)($100) = −$80/3. On average, you lose about $27 with this bet.
The problem here is that, by acting as though you are 100% sure, you give no weight to the potential costs of being wrong (including the opportunity cost of forgoing the benefits of a different decision).
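A quick check of the arithmetic in both bets, as a minimal Python sketch; the 51% figure is an assumed stand-in for “slightly biased towards heads”, which the coin example leaves unspecified:

    # Biased-coin bet: win $1000 on heads, lose $1000 on tails.
    # 0.51 is an assumed stand-in for "slightly biased towards heads".
    p_heads = 0.51
    ev_coin = p_heads * 1000 + (1 - p_heads) * (-1000)
    print(ev_coin)   # about 20: positive, so with linear utility the bet is worth taking

    # Die bet from the comment above: win $10 on 1-4, lose $100 on 5 or 6.
    ev_die = (4 / 6) * 10 + (2 / 6) * (-100)
    print(ev_die)    # about -26.67: treating "1-4 is most likely" as certain hides this loss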
Right; I wasn’t thinking. Your example is better.
I was talking about ordinary circumstances. I’ve never bet money on the roll of a die, nor shall I. If it were to come up, I might well do the sort of analysis you suggest, as probability seems like it’s correctly applied to die rolling. Can you think of a better example, that might actually occur in my life?
[P]robability seems like it’s correctly applied to die rolling.
Die rolls are deterministic. Given the initial orientation, the mass and elasticity of the die, the position, velocity, and angular momentum it is released with (which are themselves deterministic), and the surface it is rolled on, it is possible in principle to deduce what the result will be. (Quantum effects will be negligible; the classical approximation is valid in this domain. Imagine the die is thrown by a mechanical device if you are worried this does not apply to the nervous system of the die roller.)
The probability does not describe randomness in the die, because the die is not random. The probability describes your ignorance of the relevant factors and your lack of logical omniscience to compute the result from those factors.
If you reject this argument in the case of dice rolling, how do you accept it (or what alternative do you use) in other cases of probability representing uncertainty?
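A toy sketch of that point, under obviously artificial assumptions: the roll below is a fixed, deterministic function of a single made-up initial condition, and the apparent randomness comes entirely from ignorance of that condition:

    import random

    def roll(initial_speed):
        # Deterministic: the same initial condition always yields the same face.
        # A toy stand-in for the real physics (orientation, elasticity, spin, surface, ...).
        return int(initial_speed * 1000) % 6 + 1

    # We do not know the exact initial condition, so we put a distribution over it.
    random.seed(0)
    outcomes = [roll(random.uniform(1.0, 2.0)) for _ in range(60000)]
    for face in range(1, 7):
        print(face, round(outcomes.count(face) / len(outcomes), 3))  # each close to 1/6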
Do you wear a seatbelt when you ride in a car? (I’m aware of at least one libertarian who didn’t.) The most probable theory is that you won’t need it, but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on. Any action you take that fits this pattern qualifies.
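Spelled out as a minimal sketch, with deliberately invented numbers that are not estimates of real crash risk, the pattern looks like this:

    # Deliberately invented numbers; not estimates of real crash risk.
    p_crash = 1e-5              # assumed chance per trip of a crash where a belt matters
    harm_averted = 1_000_000    # assumed cost, in dollar-equivalents, the belt would prevent
    effort = 0.01               # assumed cost of the few seconds spent buckling up

    expected_benefit = p_crash * harm_averted
    print(expected_benefit, effort, expected_benefit > effort)   # 10.0 0.01 True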
I’m happy to report that I have made the decision to wear seat belts without evaluating anything using probability. If the justification is really:
but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on
Then you’re not explicitly assigning probabilities. Change ‘small chance’ to ‘5%’ and I’d wonder how you got that number, and what would happen if the chance were 4.99%.
How did you make the decision to wear seat belts, then? If it is because you were taught to at a young age, or because it is the law, then can you think of any safety precaution you take (or don’t take) because it prevents or mitigates a problem that you believe has less than a 50% chance of occurring on any particular occasion when you do not take the precaution?
Then you’re not explicitly assigning probabilities.
Often we make decisions based on our vague feelings of uncertainty, which are difficult to describe as a probability that could be communicated to others or explicitly analyzed mathematically. This difficulty is a failure of introspection, but the uncertainty we feel does somewhat approximate Bayesian probability theory. Many biases represent the limits of this approximation.
I was arguing against:
On things I care about, I find the best position I can and act as though I’m 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
with the implicit assumption that “best positions” are about states of the world, and not synonymous with “best decisions”.
I guess we need to go back to Z. M. Davis’s last paragraph, reproduced here for your convenience:
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don’t have to think in terms of likelihood ratios to say things like “It’s probably going to rain today” or “I think I locked the door, but I’m not entirely sure.” Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn’t helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you’ve locked the door, but you’re not entirely sure? Or does that just not happen to you?
We should not downvote a comment simply because we disagree with it.
This sounds great in theory, but other communities have applied that policy with terrible results. Whether I agree with something or not is the only information I have as to whether it’s true/wise, and that should be the main factor determining score. Excluding disagreement as grounds for downvoting leaves only presentation, resulting in posts that are eloquent, highly rated, and wrong. Those are mental poison.
When someone honestly presents their position, and is open to discussing it further, there is no need to downvote their comment for being wrong. In fact, it is counterproductive. By discouraging them from expressing their incorrect position, you do not cause them to relinquish it. By instead explaining why you think it is wrong, you help them adopt a better position. And if it happens that they were right and you were wrong, then you have the opportunity to learn something.
I tend to downvote comments that are off topic, incoherent, arrogant, or that present a conclusion without support.
I tend to upvote comments when they are eloquent, insightful, and correct, or sometimes when they say pretty much what I was planning to say.
The “wisdom of crowds” would only apply if everyone is trying to actually get the answer right, and if the errors of incompetence are somewhat random. A large number of intentional pranksters (or one prankster who says “a googolplex”) can predictably screw up the average by introducing large variance or acting in a non-random fashion.
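A small sketch of that failure mode, with invented guesses: one extreme answer wrecks the mean while barely moving the median (though two roughly opposite outliers can also happen to cancel, as in the “2172”/“1700” case above):

    from statistics import mean, median

    honest = [1870, 1875, 1880, 1885, 1900]    # invented guesses clustered near the truth
    with_prank = honest + [10**100]            # one "googolplex"-style answer

    print(mean(honest), median(honest))        # 1882, 1880
    print(median(with_prank))                  # 1882.5: barely moves
    print(mean(with_prank))                    # astronomically large: the mean is ruined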
I believe in the wisdom of crowds, but I also think that your actions were screwing up the results.
If you weren’t going to take a question seriously, I wish you wouldn’t have answered it at all.
ADDED: I decided not to downvote you because I don’t want to discourage being honest/forthcoming.
0% confidence should mean zero weight when computing the results, no?
I’m intrigued. Please point me to a discussion of these issues or make a top level post.
Sounds to me like simply a rejection of the Bayesian interpretation of probability based on the usual frequentist objections.
That’s an interesting idea, but I think Yvain just averaged the answers without regard to confidence.
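A sketch of the difference under discussion, with made-up responses; whether the survey’s averaging actually worked one way or the other is exactly the open question here:

    # Made-up (answer, confidence) pairs; the last entry is a deliberately noisy 0% answer.
    responses = [(1880, 0.9), (1875, 0.7), (1890, 0.5), (1700, 0.0)]

    plain_mean = sum(answer for answer, _ in responses) / len(responses)
    weighted_mean = (sum(answer * conf for answer, conf in responses)
                     / sum(conf for _, conf in responses))

    print(plain_mean)      # 1836.25: dragged down by the zero-confidence answer
    print(weighted_mean)   # about 1880.7: the 0% answer gets zero weight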
This seems contradictory. Care to explain?