You said “The fact of widespread theism is evidence for [theism is true].” This sounds mistaken to me. How do you derive it?
That seems accurate to me. Humans seem to have a bias towards induction, and induction seems to work: lots of humans believing something seems to happen more often for true things than for untrue things.
It isn’t especially strong evidence. It is also evidence that you have probably already taken into account by the time you choose to ask “is theism true?” and should avoid double counting.
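To make “evidence, but not especially strong” concrete, here is a minimal Bayesian sketch with made-up numbers: if widespread belief is even slightly more probable when a hypothesis is true than when it is false, observing it should raise your credence, but only a little, and updating on the same observation twice (double counting) overshoots.

```python
# A minimal Bayes sketch of "widespread belief is weak evidence".
# All probabilities here are invented for illustration.

prior = 0.5              # P(H): credence in the hypothesis before observing anything
p_belief_if_true = 0.6   # P(widespread belief | H true)
p_belief_if_false = 0.5  # P(widespread belief | H false)

def update(p_h):
    """One Bayesian update on the observation 'widespread belief'."""
    p_obs = p_belief_if_true * p_h + p_belief_if_false * (1 - p_h)
    return p_belief_if_true * p_h / p_obs

once = update(prior)   # ~0.545: weak evidence (likelihood ratio 0.6/0.5 = 1.2)
twice = update(once)   # ~0.590: the double-counting error described above
print(once, twice)
```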
Hmm...I guess I just assumed that people were more likely to believe something true than something false, other things being equal. What do you think about that?
Also, I should note this: Even if people aren’t more likely to believe something true than false, the original finding (that theism causes happiness) should still militate against theism, because however wrong people may go in their beliefs, surely they are even less likely to believe something true if believing a false thing can make them happy. Right?
people were more likely to believe something true than something false

In general I reject this. Maybe it holds for small classes of statements that regard direct experience (for example “stuff falls down”, “getting hit by a car is bad for you”), but certainly not for abstract things like theism.
To believe something because others do allows false beliefs to circularly maintain themselves. On the other hand, if you believed something because others believe it for the right reasons, then learning that many people believe it for the wrong reason (e.g. because it makes them happy) should decrease your estimate.
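The “wrong reasons” point is the familiar explaining-away pattern: if belief can be produced either by the hypothesis being true or by the belief making people happy, then learning that the happiness route is active cancels part of the evidential force of the belief. A sketch, with invented numbers:

```python
# Explaining away, with invented numbers: belief (B) can be caused by the
# hypothesis being true (T) or by a happiness motive (M).
from itertools import product

p_T, p_M = 0.5, 0.5                       # independent priors on T and M

def p_B(t, m):                            # P(B=1 | T=t, M=m), made up
    return {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.7, (1, 1): 0.9}[(t, m)]

# unnormalized joint over (T, M) restricted to B=1, by enumeration
joint = {(t, m): p_B(t, m) * (p_T if t else 1 - p_T) * (p_M if m else 1 - p_M)
         for t, m in product((0, 1), repeat=2)}
z = sum(joint.values())                   # = P(B=1)

p_T_given_B = (joint[(1, 0)] + joint[(1, 1)]) / z
p_T_given_B_and_M = joint[(1, 1)] / (joint[(0, 1)] + joint[(1, 1)])

print(round(p_T_given_B, 3))        # ~0.667: belief alone is evidence for T
print(round(p_T_given_B_and_M, 3))  # ~0.563: learning the motive explains some away
```

Note that in this toy model the belief remains weak evidence for T even after conditioning on the motive (0.563 > 0.5); the update survives, only weakened.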
As crazy and improbable as an idea might seem, surely it would seem even less credible if you learned that literally nobody believed it. Turning this around, wouldn’t it seem at least a little more probable if a bunch of people believed it? And shouldn’t this hold for abstract beliefs, too?
Bear in mind that “more likely” is an extremely weak claim. To say that it certainly doesn’t hold for abstract things is, therefore, a rather strong claim. Perhaps stronger than you intend. It would be extremely surprising if humans weren’t on average slightly better than random at arriving at correct abstract beliefs even of that type.
Most people believe that it is impossible to travel faster than light, although they don’t understand why that is and it is all very abstract to them. I speculate that this might be connected with the fact that it is impossible to travel faster than light.
This is too opaque for me to understand. What is the moral of that comment?
I was suggesting that people’s beliefs are correlated with reality even in abstract areas.
Oh, that simple... well yes, sometimes the majority is right and sometimes it is wrong.
You propose a theory of maximum entropy, apparently retreating from your original statement that “‘people are more likely to believe true things’ holds only for a limited class of theories.” Can you suggest a test for what sort of beliefs are likely to be [un]correlated with truth?
Your comment doesn’t seem to respond to Alex in a useful way. Alex’s point is not just that the majority is sometimes right and sometimes wrong but that there’s a tendency for it to be more right than wrong even for abstract issues. In this context, simply saying what you have said seems to be a restatement of your earlier argument rather than anything new.
Incidentally, there are other examples of how in large areas of abstract thought the majority will generally be right. The speed of light example is a pretty weak one. Here are some broader examples.
First, if you ask people to do arithmetic with small numbers they are far more likely to get it correct than to make a mistake. Even in questions where they are likely to make a mistake (involving larger numbers) the plurality answers are generally correct.
Second, there’s a lot of evidence that across a wide variety of fields, of varying degrees of abstraction, crowds do quite well. One famous but unscientific example comes from the show “Who Wants to Be a Millionaire”: contestants answering multiple-choice questions have a set of different “lifelines”, each usable once, and one of these is to poll the audience on which of the four answers it thinks is correct. When polled, the audience got the right answer 91% of the time, often by a wide margin, and did better than the “smart” people.
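Crowd results like this are roughly what you would expect if each audience member is only slightly better than chance and errors are not too correlated: majority vote amplifies a small individual edge (the Condorcet jury theorem). A quick simulation, with invented accuracy numbers:

```python
# Majority vote amplifies a small individual edge (Condorcet jury theorem).
# Per-voter accuracy figures are invented for illustration.
import random

random.seed(0)

def majority_correct(n_voters, p_correct, trials=10_000):
    """Fraction of trials in which a majority of independent voters is right."""
    wins = 0
    for _ in range(trials):
        votes = sum(random.random() < p_correct for _ in range(n_voters))
        wins += votes > n_voters / 2
    return wins / trials

for p in (0.52, 0.55, 0.60):
    print(p, majority_correct(n_voters=101, p_correct=p))
# e.g. an individual edge of 0.55 yields a majority that is right ~84% of the time
```

Real audiences are not independent, so this overstates the effect, but it shows how a weak individual correlation with truth can look strong in aggregate.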
Third, for more abstract issues one can look at things like the GSS (General Social Survey) data. In the GSS, although large fractions of the public get some science questions wrong, on every factual science question the majority, and generally a clear majority, answers correctly. This is not limited to the GSS; other studies aimed more specifically at measuring scientific knowledge levels have shown similar numbers.
One should also consider that by most metrics there are a lot more false hypotheses than true ones. If you pick a random hypothesis, people are most likely going to be able to recognize it as simply wrong. (E.g., if I said “True or false: the tides are caused by the sun and moon influencing the Earth with __” and I had in that blank any of {elephants, lasers, the Illuminati, Grover Cleveland, Gandalf}, people would likely say false to any of them. If I had in that blank “electromagnetism”, a slightly larger percentage might say true, but it would almost certainly still be tiny. And this is a short hypothesis; almost any long hypothesis will simply have the absurdity heuristic applied to it.) This means that, at a very weak level, people will have to be right most of the time simply because they will discount absurd or overly convoluted hypotheses.
The more interesting question is whether, for the set of hypotheses that have come to attention either from evidence or from historical accident, people perform better than random. I don’t know, and I’m not sure this is even well-defined. But some form of this claim, that on the boundary of interesting non-trivial hypotheses the majority does no better than random chance, might be an easier claim to make.