> Here the two definitions of rationality diverge: believing the truth is now at odds with doing what works. It will obviously work better to believe what your friends and neighbors believe, so you won’t be in arguments with them and they’ll support you more when you need it.
This is only true if you can’t figure out how to handle disagreements.
It will often be better to have wrong beliefs if it keeps you from acting on the even wronger belief that you must argue with everyone who disagrees. It’s better yet to believe the truth on both fronts, and simply prioritize getting along when it is more important to get along.
> If we had infinite cognitive capacity, we could just believe the truth while claiming to believe whatever works. And we could keep track of all of the evidence instead of picking and choosing which to attend to.
It’s more fundamental than that. The way you pick up a glass of water is by predicting that you will pick up a glass of water, and acting so as to minimize that prediction error. Motivated cognition is how we make things true, and we can’t get rid of it except by ceasing to act on the environment—and therefore ceasing to exist.
Motivated cognition causes no epistemic problem so long as we can realize our predictions. The tricky part comes when we struggle to fit the world to our beliefs. In these cases, there’s an apparent tension between “believing the truth” and “working towards what we want”. This is where all that sports stuff of “you have to believe you can win!” comes from, and the tendency to lose motivation once we realize we’re not going to succeed.
If we try to predict that we will win the contest despite being down 6-0 and clearly less competent, we will either have to engage in the willful delusion of pretending we’re not less competent (or some similar distortion), which makes it harder to navigate reality because we’re using a false map and can’t act to mitigate the consequences of our flaws, or else we will simply fail to predict success at all and be unable to even try.
If we instead predict nothing about whether we will win or lose, and predict only that we will play to the absolute best of our abilities, then we get to find out whether we win or lose, and we give ourselves room to be pleasantly surprised.
The solution isn’t to “believe the truth”, because the truth hasn’t been set yet. The solution is to pay attention to our anticipated prediction errors, and to shift to finer-grained modeling when the expected error justifies the cost of thinking harder.
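To make that trade-off concrete, here’s a minimal sketch of the rule as I read it; the function, names, and numbers are illustrative assumptions on my part, not anything from the original exchange.

```python
# Illustrative sketch (my own, hedged): stay with a cheap, coarse model of the
# situation unless the anticipated prediction error is large enough to justify
# the cost of thinking harder. All values here are made-up assumptions.

def choose_model(coarse_error: float, fine_error: float, thinking_cost: float) -> str:
    """Return which model to use: the coarse default, or the finer-grained one.

    Switch to the finer-grained model only when the expected reduction in
    prediction error exceeds the cost of the extra deliberation.
    """
    expected_gain = coarse_error - fine_error
    return "fine" if expected_gain > thinking_cost else "coarse"

# A casual pickup game: small anticipated error, not worth modeling carefully.
print(choose_model(coarse_error=0.2, fine_error=0.15, thinking_cost=0.1))  # coarse

# A high-stakes match against a clearly stronger opponent: worth thinking harder.
print(choose_model(coarse_error=0.6, fine_error=0.2, thinking_cost=0.1))   # fine
```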
> The only remedy I know of is to cultivate enjoying being wrong. This involves giving up a good bit of one’s self-concept as a highly intelligent individual. This gets easier if you remember that everyone else is also doing their thinking with a monkey brain that can barely chin itself on rationality.
If you stop predicting “I am a highly intelligent individual, so I’m not wrong!”, then you get to find out whether you’re a highly intelligent individual, as well as all of the things that provide evidence on that question (i.e. the things you’re wrong about). This much is a subset of the solution I offer.
The next part is a bit trickier, because of the question of what “cultivate enjoying being wrong” means, and how exactly you go about making sure you enjoy a fundamentally bad and unpleasant thing (I’m not saying this is impossible; my two little girls are excited to get their flu shots today).
One way to attempt this is to predict “I am the kind of person who enjoys being wrong, because that means I get to learn [which puts me above the monkeys that can’t even do this]”, and that is an improvement. If you do that, then you get to learn more things you’re wrong about… except when you’re wrong about how much you enjoy being wrong, which is sure to come up exactly when it matters to you most.
On top of that, the fact that it feels like “giving up” something, and that it gets easier when you remember the grading curve, suggests further vulnerability to motivated thinking: there’s still a potential truth being avoided (“I’m dumb on the scale that matters”), and switching to a model that yields strictly better results still feels like losing something.