You propose a theory of maximum entropy, apparently retreating from your original statement that “‘people are more likely to believe true things’ holds only for a limited class of theories.” Can you suggest a test for what sort of beliefs are likely to be [un]correlated with truth?
Your comment doesn’t seem to respond to Alex in a useful way. Alex’s point is not just that the majority is sometimes right and sometimes wrong but that there’s a tendency for it to be more right than wrong even for abstract issues. In this context, simply saying what you have said seems to be a restatement of your earlier argument rather than anything new.
Incidentally, there are other examples showing that in large areas of abstract thought the majority will generally be right. The speed of light example is a pretty weak one. Here are some broader examples.
First, if you ask people to do arithmetic with small numbers, they are far more likely to get it right than to make a mistake. Even on questions where they are likely to make a mistake (those involving larger numbers), the plurality answers are generally correct.
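To see why plurality answers beat individual answers, here is a minimal sketch in Python of the underlying arithmetic (the 60% individual accuracy is a number I made up for illustration): if each person independently answers a binary question correctly with probability p > 1/2, the chance that the majority is correct grows quickly with group size. This is just the Condorcet jury theorem.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent respondents,
    each correct with probability p, gives the right answer."""
    # Sum the binomial tail: outcomes where more than half answer correctly.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
# With 60% individual accuracy: 0.6 for one person,
# roughly 0.75 at n=11 and roughly 0.98 at n=101.
```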
Second, there’s a lot of evidence that across a wide variety of fields, at varying degrees of abstraction, crowds do quite well. One famous but unscientific example comes from the show “Who Wants to Be a Millionaire”: contestants answering multiple-choice questions have a set of “lifelines,” each usable once, and one of them is to poll the audience on which of the four answers is correct. The audience got the right answer 91% of the time, often by a wide margin, and did better than the “smart” people.
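The same aggregation works with four options and a plurality vote. A rough Monte Carlo sketch (every parameter below is an assumption for illustration, not an estimate of the show’s actual audiences): even if only 10% of the audience actually knows the answer and everyone else guesses, the plurality is right far more often than the 25% chance baseline.

```python
import random
from collections import Counter

def plurality_correct_rate(n_voters=200, p_know=0.10, trials=5_000):
    """Fraction of simulated polls in which the plurality of voters picks
    the correct option (index 0) out of four. Each voter knows the answer
    with probability p_know; otherwise they guess uniformly at random."""
    hits = 0
    for _ in range(trials):
        votes = Counter(
            0 if random.random() < p_know else random.randrange(4)
            for _ in range(n_voters)
        )
        hits += max(votes, key=votes.get) == 0  # ties broken arbitrarily
    return hits / trials

print(plurality_correct_rate())  # typically ~0.9, versus 0.25 for chance
```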
Third, for more abstract issues one can look at things like the GSS data. Although large fractions of the public get some science questions wrong, for every factual science question in the GSS the majority, and generally a clear majority, answers correctly. This is not limited to the GSS: other studies aimed more specifically at measuring scientific knowledge have found similar numbers.
One should also consider that by most metrics there are a lot more false hypotheses than true ones. If you pick a random hypothesis, people will most likely be able to recognize it as simply wrong. For example, if I asked “True or false: the tides are caused by the sun and moon influencing the Earth with __” and filled the blank with any of {elephants, lasers, the Illuminati, Grover Cleveland, Gandalf}, people would likely say false to all of them. If I put “electromagnetism” in the blank, a slightly larger percentage might say true, but it would almost certainly still be small. And this is a short hypothesis; almost any long hypothesis will simply have the absurdity heuristic applied to it. This means that, at a very weak level, people will have to be right most of the time simply because they will discount absurd or overly convoluted hypotheses.
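To put that base-rate argument in toy numbers (both figures below are invented solely to show the shape of the argument, not empirical estimates): suppose only one candidate hypothesis in a thousand is true, and people reflexively reject 90% of hypotheses regardless of truth.

```python
# Invented figures for illustration only, not empirical estimates.
p_true = 0.001   # fraction of candidate hypotheses that are actually true
p_reject = 0.90  # fraction of hypotheses people reject on absurdity alone

# A verdict is correct when a false hypothesis is rejected
# or a true one is accepted.
accuracy = (1 - p_true) * p_reject + p_true * (1 - p_reject)
print(accuracy)  # 0.8992: right ~90% of the time from the base rate alone
```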
The more interesting question is whether, on the set of hypotheses that have come to attention either from evidence or from historical accident, people perform better than chance. I don’t know, and I’m not sure this is even well-defined. But claiming some form of this, that on the boundary of interesting, non-trivial hypotheses the majority does no better than random chance, might be an easier claim to make.
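If one wanted to test that weaker claim, the natural check is whether majority verdicts on a sample of such hypotheses beat a coin flip. A hedged sketch (the sample of 100 hypotheses and the 60 correct verdicts are placeholder numbers, and the 50% baseline assumes binary true/false judgments):

```python
from math import comb

# Placeholder data: suppose the majority verdict was right on 60 of 100
# sampled "boundary" hypotheses. One-sided binomial tail against chance:
n, successes = 100, 60
p_value = sum(comb(n, k) * 0.5**n for k in range(successes, n + 1))
print(p_value)  # ~0.028: modest evidence the majority beats a coin flip
```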
I was suggesting that people’s beliefs are correlated with reality even in abstract areas.
Oh, that simple... well, yes, sometimes the majority is right and sometimes it is wrong.