(Like Jiro’s comment, don’t read this if you’re going to take the poll but haven’t yet.)
So this doesn’t prove that people find things convincing when given a good explanation, but rather that they find things unconvincing when given a poor one.
Fair point. The conclusion to draw, then, should be a more general one: given an observation O and an explanation E of O, people can over-weight E as a piece of evidence about O’s probability. (If E sounds plausible it might be taken as de facto proof of O; if E sounds implausible it might be taken as a disconfirmation of O.)
Edit2: The Gallup poll at http://www.gallup.com/poll/126581/generational-differences-abortion-narrow.aspx gives a different impression of abortion opinions among the young. On a longer time scale, younger people do support abortion more; satt’s poll shows otherwise only because the people who were young in those earlier years got older and kept their opinions.
This strikes me as I-was-not-wrong-but-I-was-almost-right reasoning. Had I posted this in 1992, claim 3 would indeed have been true. But it hasn’t been true for something like a decade, and at some point informed people should update their beliefs.
The conclusion to draw, then, should be a more general one: given an observation O and an explanation E of O, people can over-weight E as a piece of evidence about O’s probability.
But is it over-weighting to use the fact that the explanation is bad as evidence against the statement being true? A true statement is more likely to have a good explanation than a false one, so it seems that one could do a Bayesian update on the truth of the statement based on the quality of the explanation.
Sounds reasonable. Getting a question wrong is some evidence against that kind of updating, but one might well accumulate more evidence in its favour in everyday life.