Placebo effects from ‘belief in (false) beliefs’ only work as long as self-deception is maintainable.
I think self-deception ceases to work at the point where you become consciously aware that it breaks your causal models of the world. Highly intelligent people, or anyone for that matter, cannot continue to deceive themselves into believing in god, or unregulated markets, or whatever complex concept you pick, if you explicitly show how it breaks a model they cannot disagree with. Controversial topics of the day, like belief in god or public policy, are not single data points under contention but tangled balls of causation that must be dealt with in a somewhat parallel fashion: you have to see the big picture and say, wait a minute, that cannot fit unless this, and this, and this, until you finally reach a dead end and have to relinquish the starting belief. The more abstract or complex a concept is, the easier it is to deceive yourself about its truth.
The limits of working memory play a role here, and if we are to truly be less wrong, we not only have to overcome biases, we also need to amplify our rational intelligence with tools designed for exactly this purpose. What if beliefs such as ‘a personal god exists’ were as hard to believe as ‘the sky is green’? What if it were laid out explicitly, in front of someone, that they simply could not hold a belief, because of all the cascading links it breaks in the part of their world model that is confirmed to be ‘reality’?
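A minimal sketch of what such a tool might look like, assuming beliefs are modeled as a dependency graph; the `BeliefGraph` class, the example propositions, and the `cascade` method are illustrative inventions, not anything specified above:

```python
from collections import defaultdict, deque

class BeliefGraph:
    """A toy web of beliefs: nodes are propositions, and a directed
    edge means 'this belief rests on that claim holding true'."""

    def __init__(self):
        # claim -> set of beliefs that depend on it
        self.dependents = defaultdict(set)

    def add_dependency(self, belief, supports):
        """Record that `belief` rests on each claim in `supports`."""
        for claim in supports:
            self.dependents[claim].add(belief)

    def cascade(self, refuted_claim):
        """Return every belief that breaks, transitively, once
        `refuted_claim` is rejected -- the cascading links made explicit,
        instead of being held (and dropped) in working memory."""
        broken, queue = set(), deque([refuted_claim])
        while queue:
            claim = queue.popleft()
            for dependent in self.dependents[claim]:
                if dependent not in broken:
                    broken.add(dependent)
                    queue.append(dependent)
        return broken


# Hypothetical usage: the tangled ball of causation, spelled out.
web = BeliefGraph()
web.add_dependency("prayer changes outcomes", ["a personal god exists"])
web.add_dependency("suffering is part of a plan", ["a personal god exists"])
web.add_dependency("the plan is benevolent", ["suffering is part of a plan"])

print(web.cascade("a personal god exists"))
# -> the three dependent beliefs above, all of which must be revised
```

The point of the sketch is only that the parallel, big-picture check the paragraph describes becomes a mechanical graph traversal once the dependencies are written down, rather than something bounded by working memory.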
I want to work on such tools.