Viliam, I indeed do understand what you’re saying. Having a belief that I know is wrong is anathema to me.
But I think you and I, and probably many Less Wrongers, are at the far end of the spectrum when it comes to placing a strong emotional value on having true beliefs, and there are so many people who give much less of a fuck about that than we do. Moreover, they place a strong emotional value on staying attached to their current beliefs.
That’s why the project of spreading rationality is hard—only a small subset of the population has that strong intuitive value. This is why I’m posting here about the challenge I run into with Intentional Insights: how do we expand that subset by equipping those who want to learn the truth with the emotional tools they need to do so?
As a very rough intuitive model, we could divide people into three rationality stages:
R0 -- does not care about having true beliefs
R1 -- cares about having true beliefs, but does not know the rationality techniques
R2 -- cares about having true beliefs and knows the rationality techniques
I can imagine moving people from R1 to R2. More or less, you give them the Sequences to read and connect them with the rationalist community. At least that is what worked for me. No idea about R0 though, and they happen to be the vast majority of the population.
(There is even the technical problem of how to most effectively find R1 people in the general population. Is there a better method than making a website and hoping that they will find it?)
Another problem is that if we succeed in making LW-style rationality more popular, we will inevitably get another group growing:
R3 -- does not care about having true beliefs, but learned about the rationality techniques and keywords, and uses them selectively
I think “cares / does not care about having true beliefs” is too coarse: the actual question is, in which domains do people care about true beliefs?
Most people care about having true beliefs when it actually lets them achieve things. Few parents would prefer a false belief that their child is safe, to the true belief that their child is in danger, if the true belief allowed them to get the child out of danger. The issue is just that when we talk about things like evolution or religion, it genuinely does not matter what your beliefs are, or if it does, “false” beliefs often allow you to achieve things better.
Think of beliefs as tools. People will care about having the right tool if they couldn’t get the job done otherwise, but if the wrong tool still lets them get something done, they don’t care. Except for some weird “rationalist” guys who insist that you should have the right tools for their own sake, because there’s a theoretical chance that having the wrong tool for some problem might cause you trouble, perhaps.
If it helps, think of it as a physicist/mathematician thing. A physicist might calculate something in a way that’s not quite correct and would drive a mathematician up the wall, while the physicist is like: hey, my result and method are good enough to do the job I care about, so what if I never proved all of my assumptions?
If you want to get people to actually care and think about the truth in more domains, you need to give them habits of thought that do that in one domain, and see if it’d transfer to some other domain. E.g. this is the approach that CFAR settled on:
...the sea change that occurred in our thinking might be summarized as the shift from, “Epistemic rationality is about whole units that are about answering factual questions” to there being a truth element that appears in many skills, a point where you would like your System 1 or System 2 to see some particular fact as true, or figure out what is true, or resolve an argument about what will happen next.
We used to think of Comfort Zone Expansion[6] as being about desensitization. We would today think of it as being about, for example, correcting your System 1’s anticipation of what happens when you talk to strangers.
We used to think of Urge Propagation[6] as being about applying behaviorist conditioning techniques to yourself. Today we teach a very different technique under the same name; a technique that is about dialoging with your affective brain until System 1 and System 2 acquire a common causal model of whether task X will in fact help with the things you most care about.
We thought of Turbocharging[6] as being about instrumental techniques for acquiring skills quickly through practice. Today we would also frame it as, “Suppose you didn’t know you were supposed to be ‘Learning Spanish’. What would an outside-ish view say about what skill you might be practicing? Is it filling in blank lines in workbooks?”
We were quite cheered when we tried entirely eliminating the Bayes unit and found that we could identify a dependency in other, clearly practical, units that wanted to call on the ability to look for evidence or identify evidence.
Our Focused Grit and Hard Decisions units are entirely “epistemic”—they are straight out just about acquiring more accurate models of the world. But they don’t feel like the old “curse of epistemic rationality” units, because they begin with an actual felt System 1 need (“what shall I do when I graduate?” or similar), and they stay in contact with System 1’s reasoning process all the way through.
When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization, “Hey, maybe almost all of our curriculum is secretly epistemic rationality and we can organize it into ‘Epistemic Rationality for the Planning Brain’ on day 1 and ‘Epistemic Rationality for the Affective Brain’ on day 2, and this makes our curriculum so much denser that we’ll have room for the Hamming Question on day 3.” This didn’t work as well in practice as it did in our heads (though it still went over okay) but we think this just means that the process of our digesting this insight is ongoing.
We have hopes of making a lot of progress here in 2015. It feels like we’re back on track to teaching epistemic rationality—in ways where it’s forced by need to usefully tackle life problems, not because we tacked it on. And this in turn feels like we’re back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR’s expected future value.
Similarly Venkat:
I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.
This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that such tests work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).
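As a concrete illustration of that slippery slope, here is a minimal toy sketch of a genetic algorithm (an illustration added here, not anything from the comment above; all names and parameters are made up): random bitstrings are repeatedly selected for fitness and mutated, and the population climbs toward the optimum even though nobody designed the winning genome.

```python
# Toy genetic algorithm: evolve random bitstrings toward the all-ones string.
# Purely illustrative; parameters are arbitrary.
import random

GENOME_LEN = 40       # bits per candidate solution
POP_SIZE = 60         # candidates per generation
MUTATION_RATE = 0.02  # per-bit chance of flipping
GENERATIONS = 200     # hard cap on how long we run

def fitness(genome):
    """Number of 1-bits: the trait that selection favors."""
    return sum(genome)

def mutate(genome):
    """Return a copy of the genome with each bit flipped with small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            return generation, population[0]
        # Selection: keep the fitter half, refill with mutated copies of survivors.
        survivors = population[:POP_SIZE // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return GENERATIONS, max(population, key=fitness)

if __name__ == "__main__":
    generations_needed, best = evolve()
    print(f"Best genome has {fitness(best)} ones after {generations_needed} generations")
```

Nothing in the loop knows what the answer is supposed to look like; selection plus mutation finds it anyway, which is the small, checkable version of the point about the power of evolutionary processes.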
People come to consider beliefs true if those beliefs work in giving them rewards. This is similarly the case for meta-beliefs, like “having true beliefs is important”—people come to believe that true beliefs are important if that meta-belief frequently works for acquiring more accurate beliefs, which in turn lets them perform better. If you want people to adopt that meta-belief, come up with habits that explicitly cause them to acquire more true beliefs, and which also help them get ahead, and get them to adopt those habits.
Most people care about having true beliefs when it actually lets them achieve things.
Here I have a general feeling that any true belief may be useful in the future, and any false belief may be harmful in the future. I feel that the world is connected. (As the most obvious example, a belief in the supernatural in any area implies a belief in the supernatural in general, which in turn influences all areas of life.)
Maybe “the world is connected” is one of the unspoken premises for rationality. If you don’t have it, any rationality technique will be merely something you use inside the lab.
(Of course, not everything is equally likely to be useful, so I try to get more info in some areas and ignore other areas. But I would still feel bad about forming false beliefs even in the less important areas. If I don’t feel certain about my knowledge somewhere, and don’t have time to improve the knowledge, I update to “don’t know”.)
Nice typology! Let’s dive into this a little deeper.
I agree that we can’t do anything with R0.
I think many people belong to R1, but there is a huge spectrum along which they place a value on having true beliefs. At the far end of the spectrum are people like you and me, and I think most Less Wrongers, before we learned about rationality—we already cared a lot about having true beliefs. For us, being given the Sequences and connecting with the rationalist community was sufficient. We can call people like that R1.999, to indicate that maybe 1 out of 1,000 people is like that. That’s a rough Fermi estimate, and I may be optimistic (I have a personal optimism bias issue), but let’s go with it for the sake of the discussion.
Now what about the people who range from R1.001 to R1.998? This is the whole point of the Intentional Insights project—how do we move these people further up the sanity waterline spectrum? The challenge is that these people’s emotional intuitions do not line up with truth-seeking. So to get them into rational thinking, we need to increase their positive emotions around rational thinking, decrease their negative emotions about letting go of their current beliefs, and even before that bring rationality to their attention.
To do so, we at InIn do several things:
1) Increase the intuitive emotional value they place on rational thinking. The active steps we are taking here include making engaging videos and blog posts that say “yay rational thinking, you should have warm fuzzies around it and value it emotionally to reach your own goals.”
2) Decrease the negative emotions they have around letting go of their past beliefs. That’s been a challenge, and one of the reasons I wrote this discussion post. I listed above some things that worked for us. We also write blogs highlighting people’s personal stories about updating their beliefs, to make this appear more doable and cognitively easy.
3) Bring this information to people’s attention. The way we do this is through our website, through collaborating with a wide variety of reason-oriented groups, and through publishing articles and doing interviews in prominent media venues.
So that encompasses what I think it takes to move people from R1 to R2. I also agree about the dangers of R3, which is why it’s important to get people into a community with more advanced rationalists; otherwise they might just remain half a rationalist.
I doubt there can literally be someone who “does not care about having true beliefs.” No matter how false and irrational someone’s beliefs are, he still wants those beliefs to be true, so he still wants true beliefs. What happens is this:
Some people want to believe the truth. Position X seems likely to be true. So they want to believe X.
Other people want to believe X. If X is true, that would be a reason to believe it. So they want X to be true.
The first people will be in your categories R1 and R2. The second people will be in your category R0, in the sense that what is basically motivating them is the desire to believe a concrete position, not the desire to believe the truth. But they also have the desire to believe the truth. It is just weaker than their desire to believe X.
But as you say, if someone wants something more than the truth, he wants that more than the truth. No argument is necessarily going to change his desires.