I think “cares / does not care about having true beliefs” is too coarse: the actual question is, in which domains do people care about true beliefs?
Most people care about having true beliefs when it actually lets them achieve things. Few parents would prefer a false belief that their child is safe, to the true belief that their child is in danger, if the true belief allowed them to get the child out of danger. The issue is just that when we talk about things like evolution or religion, it genuinely does not matter what your beliefs are, or if it does, “false” beliefs often allow you to achieve things better.
Think of beliefs as tools. People will care about having the right tool if they couldn’t get the job done otherwise, but if the wrong tool still lets them get something done, they don’t care. Except, perhaps, for some weird “rationalist” guys who insist that you should have the right tools for their own sake, because there’s a theoretical chance that having the wrong tool for some problem might cause you trouble.
If it helps, think of it as a physicist/mathematician thing. A physicist might calculate something in a way that isn’t quite rigorous and would drive a mathematician up the wall, while the physicist’s attitude is: my result and method are good enough for the job I care about, so what if I never proved all of my assumptions?
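To make the analogy concrete (a worked example of my own, not anything from the original comment): the physicist computing a pendulum’s period just replaces sin θ with θ and never bothers to bound the error,

\[
\ddot{\theta} = -\frac{g}{L}\sin\theta \;\approx\; -\frac{g}{L}\,\theta
\quad\Longrightarrow\quad
T \approx 2\pi\sqrt{\frac{L}{g}},
\]

and the answer is off by only about 0.2% for a 10-degree swing. That is good enough for the job at hand, even though the mathematician would insist that the error term be written down.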
If you want to get people to actually care and think about the truth in more domains, you need to give them habits of thought that do that in one domain, and see whether those habits transfer to other domains. For example, this is the approach that CFAR settled on:
...the sea change that occurred in our thinking might be summarized as the shift from, “Epistemic rationality is about whole units that are about answering factual questions” to there being a truth element that appears in many skills, a point where you would like your System 1 or System 2 to see some particular fact as true, or figure out what is true, or resolve an argument about what will happen next.
We used to think of Comfort Zone Expansion[6] as being about desensitization. We would today think of it as being about, for example, correcting your System 1’s anticipation of what happens when you talk to strangers.
We used to think of Urge Propagation[6] as being about applying behaviorist conditioning techniques to yourself. Today we teach a very different technique under the same name; a technique that is about dialoging with your affective brain until System 1 and System 2 acquire a common causal model of whether task X will in fact help with the things you most care about.
We thought of Turbocharging[6] as being about instrumental techniques for acquiring skills quickly through practice. Today we would also frame it as, “Suppose you didn’t know you were supposed to be ‘Learning Spanish’. What would an outside-ish view say about what skill you might be practicing? Is it filling in blank lines in workbooks?”
We were quite cheered when we tried entirely eliminating the Bayes unit and found that we could identify a dependency in other, clearly practical, units that wanted to call on the ability to look for evidence or identify evidence.
Our Focused Grit and Hard Decisions units are entirely “epistemic”—they are straight out just about acquiring more accurate models of the world. But they don’t feel like the old “curse of epistemic rationality” units, because they begin with an actual felt System 1 need (“what shall I do when I graduate?” or similar), and they stay in contact with System 1’s reasoning process all the way through.
When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization, “Hey, maybe almost all of our curriculum is secretly epistemic rationality and we can organize it into ‘Epistemic Rationality for the Planning Brain’ on day 1 and ‘Epistemic Rationality for the Affective Brain’ on day 2, and this makes our curriculum so much denser that we’ll have room for the Hamming Question on day 3.” This didn’t work as well in practice as it did in our heads (though it still went over okay) but we think this just means that the process of our digesting this insight is ongoing.
We have hopes of making a lot of progress here in 2015. It feels like we’re back on track to teaching epistemic rationality—in ways where it’s forced by need to usefully tackle life problems, not because we tacked it on. And this in turn feels like we’re back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR’s expected future value.
Similarly Venkat:

I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.
This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that they work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).
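To unpack that slippery slope a bit, here is a minimal sketch of my own (not Venkat’s; the names ab_test and evolutionary_search are invented for illustration): one A/B test is a single round of “generate a variant, keep whichever measures better”, and repeating that same step with random mutations is already a bare-bones (1+1) evolutionary search.

```python
import random

def ab_test(variant_a, variant_b, measure, trials=200):
    """Toy A/B test: measure both variants and keep whichever scores better."""
    score_a = sum(measure(variant_a) for _ in range(trials)) / trials
    score_b = sum(measure(variant_b) for _ in range(trials)) / trials
    return variant_a if score_a >= score_b else variant_b

def evolutionary_search(seed, measure, mutate, generations=100):
    """Iterate the same keep-the-winner step: a (1+1) evolutionary search."""
    best = seed
    for _ in range(generations):
        best = ab_test(best, mutate(best), measure)
    return best

# Toy usage: "variants" are numbers, the payoff peaks at 0.73, measurements are noisy.
random.seed(0)
measure = lambda x: 1.0 - abs(x - 0.73) + random.gauss(0, 0.05)
mutate = lambda x: x + random.gauss(0, 0.1)
print(round(evolutionary_search(0.0, measure, mutate), 2))  # ends up near 0.73
```

The only point of the sketch is that the selection step is identical in both cases; a full genetic algorithm adds a population and crossover, but no new kind of reasoning.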
People come to consider beliefs true if those beliefs work in giving them rewards. The same goes for meta-beliefs, like “having true beliefs is important”: people come to believe that true beliefs are important if they frequently put work into acquiring more accurate beliefs and this lets them perform better. If you want to get people to adopt that meta-belief, come up with habits that explicitly cause them to acquire more true beliefs, and which also move them forward, and get them to adopt those habits.
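A minimal sketch of that claim in operant-conditioning terms (my framing, not something from the comments above; the BeliefPool class and the example beliefs are invented): treat each candidate belief as a habit whose “felt truth” is just a weight that drifts toward the payoff obtained whenever you act on it.

```python
import random

class BeliefPool:
    """Toy conditioning model: a belief's 'felt truth' is a weight that
    drifts toward the reward obtained when acting on that belief."""

    def __init__(self, beliefs):
        self.weights = {b: 0.5 for b in beliefs}  # start agnostic

    def act(self, eps=0.1):
        # Mostly act on the currently strongest belief, occasionally explore.
        if random.random() < eps:
            return random.choice(list(self.weights))
        return max(self.weights, key=self.weights.get)

    def reinforce(self, belief, reward, lr=0.1):
        # Conditioning step: move the weight toward the observed payoff.
        self.weights[belief] += lr * (reward - self.weights[belief])

# Usage: the belief that reliably pays off ends up "felt as true".
random.seed(0)
pool = BeliefPool(["test before shipping", "trust my gut"])
for _ in range(300):
    b = pool.act()
    payoff = 0.9 if b == "test before shipping" else 0.3
    pool.reinforce(b, payoff + random.gauss(0, 0.05))
print(pool.weights)  # the first belief's weight climbs well above the second's
```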
Most people care about having true beliefs when it actually lets them achieve things.
Here I have a general feeling that any true belief may be useful in the future, and any false belief may be harmful in the future. I perceive the world as connected. (As the most obvious example, a belief in the supernatural in any one area implies a belief in the supernatural in general, which in turn influences all areas of life.)
Maybe “the world is connected” is one of the unspoken premises for rationality. If you don’t have it, any rationality technique will be merely something you use inside the lab.
(Of course, not everything is equally likely to be useful, so I try to get more info in some areas and ignore others. But I would still feel bad about holding false beliefs even in the less important areas. If I don’t feel certain about my knowledge somewhere, and don’t have time to improve it, I update to “don’t know”.)