Imagine we invented a pill which increased everyone’s performance on IQ tests by one standard deviation with no side effects (note, I don’t expect to see this soon). Further, imagine that all current scientists began taking it. What benefits would you expect to see?
Let me be more specific: assume no funding changes, even though smarter scientists would almost certainly get more funding. How much would Science and Nature have to expand if they did not raise the bar for publication? My estimate: 20%, with a 95% confidence interval of [3%, 100%].
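As a rough consistency check on those numbers, here is a minimal sketch, assuming (my assumption, not the commenter’s) that the expansion factor is lognormally distributed, fitted to the stated 95% interval:

```python
import math

# Back out a lognormal distribution whose 2.5th/97.5th percentiles match the
# stated 95% interval of [3%, 100%] expansion. The lognormal model is an
# assumption made here for illustration; the comment above gives only the
# interval and the 20% point estimate.
lo, hi = 0.03, 1.00       # stated 95% interval on the expansion factor
z = 1.959964              # 97.5th percentile of the standard normal

mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z)

median = math.exp(mu)                  # ~17% expansion
mean = math.exp(mu + sigma ** 2 / 2)   # ~26%, pulled up by the long right tail
print(f"median ~ {median:.1%}, mean ~ {mean:.1%}")
```

Under that assumption the implied median (~17%) sits close to the stated 20% point estimate, so the three numbers hang together reasonably well.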
None, actually.
I expect the foolish would come up with even cleverer ways of deluding themselves than before, which would make it even harder for them to distinguish truth from their own cherished beliefs.
The wise, who already possessed the ability to override their primitive associational thinking, would have a better ability to grasp complex theories and work through their implications. But they would be vastly outnumbered by the fools—thus, no overall improvement and a possible overall harm.
Not everyone who is socially recognized as a ‘scientist’ can actually put the principles of science into practice.
Let me get this straight: you believe that a majority of scientists would do worse science if they had a higher IQ? I doubt many scientists would agree. Do you agree that you are among a small minority holding this position?
Does this imply that a majority of scientists would do better science if they took a pill which lowered their IQ without side effects?
I know correlation does not imply causation, but do you agree that there is a positive correlation between IQ and the quantity and quality of an individual’s scientific publications?
No.
Considered across all individuals? Only a very weak one. I suggest limiting the question to scientists. In that case, the answer for ‘quantity’ would be “not strongly at all”, and ‘quality’ is so difficult to define as to be useless for this investigation.
You claim higher IQ would hurt most scientists, but a lower IQ would not help. This implies a majority of scientists have the ideal IQ for furthering science. To me this sounds like an impossible coincidence.
I might look for research on what predicts a scientist’s research productivity. GRE scores may be more common than IQ scores. Can we make terms for a bet? I claim that, net of all controls, GRE or IQ scores will have a nontrivial positive relationship with research productivity.
“Quality” is difficult to measure, but you give up too quickly: e.g., citations, or the impact factor of the journal of publication.
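If the bet were taken up, one sketch of how it might be settled (my formalization; the variables and data below are synthetic stand-ins, not a real dataset) is to regress a productivity measure on GRE score plus whatever controls the bettors agree on, and inspect the GRE coefficient:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data, for illustration only: a positive GRE effect is
# built in so the sketch has something to recover. Real settlement would use
# an agreed dataset and agreed controls.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gre": rng.normal(0, 1, n),        # standardized GRE score
    "years": rng.integers(1, 30, n),   # years since PhD (a stand-in control)
})
df["pubs"] = 2 + 0.5 * df["gre"] + 0.3 * df["years"] + rng.normal(0, 2, n)

# The bet turns on the sign and size of the 'gre' coefficient net of controls.
model = smf.ols("pubs ~ gre + years", data=df).fit()
print(model.params["gre"], model.conf_int().loc["gre"])
```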
I need to clarify. Quite a lot of ‘scientists’ are terrible at putting the scientific method into practice. I try to exclude those people from the category whenever possible. I do acknowledge, though, that this will frequently lead to confusion.
A lower IQ of scientists overall would make progress slower, but generally wouldn’t impede the self-correcting properties of the method.
The so-called scientists who don’t or can’t put the method into practice would have their ability to make clever but specious arguments impaired. Possibly the reduced nonsense-sensing of the scientists would still be more than enough to identify and exclude the reduced levels of nonsense.
With reduced IQ across scientists and ‘scientists’ both, it’s entirely possible that there would be more scientific progress for the field as a whole. There are a number of necessary but not sufficient factors involved, and non-lethal but cumulatively-damaging factors as well. It’s not obvious to me that the properties measured by IQ are equally distributed across the positive and negative factors; I suspect they lend themselves to the negative more than the positive.
I agree with most of your comment, but “Do you agree that you (are) among a small minority holding this position?” is social pressure in place of a real argument. The truth is not determined by voting.
The truth is not determined by voting, but the truth is often positively correlated with people’s opinions. It is rational to weigh other people’s opinions. If I disagree with someone, I must ask myself why I am more likely to be correct than they are.
Annoyance had already explained his reasons for his position, and you explained reasons for yours in the rest of your comment. Once we are discussing those reasons directly, there is no need to use majority opinion as a proxy for the relative strength of those reasons.
I disagree. People’s opinions are evidence and deserve weight. Smarter, more rational people’s opinions deserve more weight. The opinions of scientists who specialize in a relevant subject deserve still more weight. Why shouldn’t we consider this type of evidence?
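One minimal way to make “weighing opinions” concrete, offered as a sketch (the pooling scheme and the numbers are my illustration, not anything proposed in the thread): combine each person’s probability for a claim via weighted log-odds, with higher weights for people judged more reliable on the topic.

```python
import math

def pool(opinions):
    """Pool (probability, weight) pairs via a weighted average of log-odds."""
    total = sum(w for _, w in opinions)
    avg_logit = sum(w * math.log(p / (1 - p)) for p, w in opinions) / total
    return 1 / (1 + math.exp(-avg_logit))

# Hypothetical numbers: three laypeople each at 60% for some claim, and one
# specialist at 20% given triple weight. The pooled estimate (~38%) lands
# below 50%, unlike a raw majority vote.
print(pool([(0.6, 1), (0.6, 1), (0.6, 1), (0.2, 3)]))
```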
First, I should point out that what I initially objected to was an appeal to an assertion of raw majority, with no weighting based on rationality, intelligence, or specialization, and in which uninformed opinions are likely to drown out evidence-informed expert opinions.
Second, the reason the opinion of a specialist can be strong evidence is that the specialist is likely to have access to evidence not generally available or known, and to have superior ability to process that evidence. So, when someone discovers that a specialist disagrees with them, they should seek to learn the evidence and arguments that informed the specialist’s opinions, and then evaluate them on their merits. Ideally, at this point, the specialist’s opinion is no longer evidence, as the fundamental evidence it represents is already accounted for. As a practical matter, the specialist’s opinion still counts to the extent that a person is uncertain that they have learned of all the specialist’s evidence and understood all the arguments. You have not argued that this uncertainty is and will likely remain significant in this case. Rather, it seemed that you were trying to dismiss an idea because it is unpopular.
I think our disagreement is relatively small. A few remaining points:
People don’t have the time or the ability to learn all the relevant evidence and arguments on every issue. Hell, I don’t have time to learn all the relevant evidence and arguments on every issue in my discipline, never mind subjects that I know little about.
Sometimes we mainly care about what the answer is, not why.
I don’t always have time to explain all my reasons, so citing the fact that others agree with me is easier, and depending on the context, may be every bit as useful.
In these cases, where you don’t care about, or can’t be bothered to explain, the reasons for a position, it seems you lack either the time or the interest to seriously debate the issue.
This can be a valid point when you have to make policy decisions about complicated issues, but it does not apply to your appeal to the majority that I objected to.
I didn’t just appeal to the majority, I mentioned scientists’ opinions explicitly in the sentence previous to the one you are objecting to.
You’ve argued that appealing to other people’s beliefs has few benefits (which I dispute), but unless I’m missing something you haven’t named a single cost. I’m sure there are some, but I’ll let you name them if you choose.
I’m starting to think that you primarily objected to the tone of my language. You don’t really want to stop people from discussing what other people believe.
You mentioned scientists’ opinions not about the subjects that they study, but about the impact of intelligence on the quality of their work, which they are not likely to know more about than anyone else. If you had mentioned the opinions of psychologists who had studied the effects of intelligence on scientific productivity, that would be the sort of support you are claiming. Further, you weren’t even talking about a survey of scientists’ opinions, or other evidence about what they think; you just asserted what you think they think. Now, you could make the argument that the scientists would think that for the same reasons you do, or because you believe it is really true and they would notice, but in this case your beliefs about their opinions are not additional evidence.
Well, I suppose I have not explicitly stated it, but the primary cost is that it displaces discussion of the more fundamental evidence about the issue that is supposedly informing the majority or expert opinion.
And you yourself argued elsewhere that in the political process of voting that attempts to aggregate opinions, “Voters are often uninformed about how policy affects their lives”.
Even with expert opinions, it can be hard to understand what the expert thinks. I have seen people go horribly wrong by applying an expert’s idea out of context. If you don’t understand an expert’s reasoning because it is too complicated, you probably don’t understand their position well enough to generalize it.
What I object to is using a discussion of what other people believe to shut down discussion of an opposing belief.
This is a stronger modesty argument, as distinct from simply taking the majority opinion as one of the pieces of evidence for arriving at your own conclusion.
Logical fallacy: stating a contingent proposition as a universal principle.
“sharp people still distinguish themselves by not assuming more than needed to keep the conversation going”
Sometimes the conversation shouldn’t be permitted to continue.
Are we looking to facilitate social interaction, or use rational argument to discover truth? The two are often, even usually, incompatible.
An interesting claim; please explain why you believe this to be true.
The two are compatible only when the preferred social feedback standards match the standards of rational thought. All other social standards necessarily come into conflict. Thus, all else being equal, a randomly-chosen standard is quite unlikely to be compatible with rationality.
In actual groups, the standards aren’t chosen randomly. But humans being what they are, they usually involve primate social dynamics and associational reasoning, neither of which lend themselves to the search for truth. Generally they involve social/political ‘games’ and power struggles.
Consensus is valid evidence (but not the only evidence).
Valid evidence of what?
Evidence is only valid or invalid in terms of the evaluation of related claims. What is being examined? That determines, in part, what is valid evidence to be considered.
This argument is vulnerable to the reversal test, for lay people and scientists alike.
Evolution designed our brains with in-built self-deception mechanisms; it did not design those mechanisms to continue to operate optimally if the intelligence of the person concerned is artificially increased. It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.
Actually, now that I review this comment, I would replace this with “it is reasonable to expect that increasing intelligence will, to some extent, affect our in-built self-deception, but it may be either negative or positive”; we should look at evidence to see what actually happens.
It could also disrupt them in the wrong direction; there’s no particular reason to assume that becoming “smarter” won’t just make us better self-deceivers.
As Michael Shermer writes, “Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.”
This is plausible in the individual case, but in a large group of people, each with randomly chosen cherished falsehoods, I claim that increasing the average intelligence parameter will increase the degree to which the group as a whole has true beliefs.
Cherished falsehoods are unlikely to be random. In groups that aren’t artificially selected at random from the entirety of humanity, one person’s errors will tend to be correlated with others’.
There are also deep flaws in humanity as a whole, most especially on some issues.
Should we decide to believe in ghosts because most human beings share that belief, or should we rely on rational analysis and the accumulation of evidence (data derived directly from the phenomena in question, not other people’s opinions)?
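The disagreement in the last few comments is easy to put into a toy Monte Carlo (my construction, purely illustrative): each agent answers true/false questions with accuracy rising in an “intelligence” parameter but always gets one cherished falsehood wrong, and the group verdict is a majority vote. Independent falsehoods wash out as intelligence rises, while a falsehood shared by the whole group never does.

```python
import random

def group_accuracy(n_agents, n_questions, intelligence, shared_falsehood):
    """Fraction of questions the majority vote gets right in the toy model."""
    random.seed(0)
    if shared_falsehood:
        cherished = [0] * n_agents          # everyone is wrong on question 0
    else:
        cherished = [random.randrange(n_questions) for _ in range(n_agents)]
    p_correct = 0.5 + 0.4 * intelligence    # per-question accuracy in [0.5, 0.9]
    right = 0
    for q in range(n_questions):
        votes = sum(
            q != cherished[a] and random.random() < p_correct
            for a in range(n_agents)
        )
        right += votes > n_agents / 2       # majority verdict on this question
    return right / n_questions

for iq in (0.0, 0.5, 1.0):
    print(iq,
          group_accuracy(200, 100, iq, shared_falsehood=False),
          group_accuracy(200, 100, iq, shared_falsehood=True))
```

With independent falsehoods the majority is right on essentially every question once intelligence is high; with a shared falsehood the group caps out below perfect no matter how smart its members get, which is the rebuttal’s point.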
No. Your argument is specious. Evolution ‘designed’ us with all sorts of things ‘in mind’ that no longer apply. That doesn’t mean that changing any arbitrary aspect of our lives will have an influence on any other aspect. If the environmental factors or traits have no relationship with the trait we’re interested in, we have no initial reason to think that changing the conditions will affect the trait.
Consider the absurdity of taking your argumentative structure seriously:
“Nature designed us to have full heads of hair. Nature also gave us a sense of sight, which it did not design to operate optimally in hairless conditions. It is therefore reasonable to expect that shaving the head will, to some extent, disrupt our visual acuity.”
This criticism is valid if we think that the trait we vary is irrelevant to the effect we are considering.
But we have already established that intelligence is likely to affect our ability to self-deceive.
For example, we could fairly easily establish that inhaling large quantities of soot is likely to affect our lungs in some way, then apply this argument to get the conclusion that pollution is probably slightly harmful (with some small degree of certainty).
Essentially this argument says: if you perform a random intervention J that you have reason to believe will affect evolved system S, it will probably reduce the functioning of S, unless J was specifically designed to improve the functioning of S.
Stated like this I don’t find this style of argument unsound; smoking, pollution, obesity, etc. are all cases in point.
No, the criticism is valid if we have no reason to think that the traits will be causally linked. You’re making another logical fallacy—confusing two statements whose logical structure renders them non-equivalent.
(thinking trait is ~relevant) != ~(thinking trait is relevant); that is, believing the trait is irrelevant is not the same as merely lacking a belief that it is relevant.
See the edited comment above.
Not if the original function of (verbal) “intelligence” was to improve our ability to deceive… and I strongly suspect this to be the case. After all, it doesn’t take a ton of (verbal) intelligence to hunt and gather.
If we evolved ever more complex ways of lying, then we must also have evolved ever more complex ways of detecting lies. It is highly plausible that increasing intelligence will increase both of these functions.
Good point. Of course, that mechanism is for detecting other people’s lies, and there is some evidence that it’s specific to ideas and/or people you already disagree with or are suspicious of… meaning that increased intelligence doesn’t necessarily relate.
One of the central themes in the book I’m working on is that brains are much better at convincing themselves they’ve thought things through, when in actuality no real thinking has taken place at all.
Looking for problems with something you already believe is a good example of that: nobody does it until they have a good enough reason to actually think it through, as opposed to assuming they already did it, or not even noticing what it is they believe in the first place.
“Lying” and “being wrong” are not the same. Lying is communicating a non-truth with the intent to deceive.
And intelligence doesn’t necessarily have anything to do with our capacity to detect lies. You’re simply assuming your conclusion in a different form. Again.
Do you actually believe this?
Yep.
Higher intelligence implies a greater capacity to work out the logical consequences of assertions and thus potentially detect inconsistencies between two assertions or an assertion and an action.
It doesn’t imply that people will have the drive to look for such contradictions, or that a detected contradiction will be interpreted properly; nor does it imply that intelligence will be useful for detecting lies that contain no logical contradictions.
It seems highly reasonable that people who are able to get higher scores on IQ tests are both harder to fool and, on any given question, more likely to believe the correct answer (this second claim is supported by the correlation between IQ and school exam results). If you claim to doubt this, I think you’re just being deliberately awkward.
I suggest you read more Feynman, then. Or James Randi.
School exams, particularly in our country, measure the ability to memorize and retrieve information presented formally. They have no obvious relationship to the ability to evaluate the validity of arguments or derive truth.
I suspect that you are going too far in expecting someone who can get 140 on an IQ test to, on average, be just as easy to fool into believing some abstract falsehood as someone who got 60 on that same IQ test. By the way, what’s your IQ?
I don’t know, for a variety of reasons.