The distinction I’m trying to make is between giving people optimized habits and memes as a package that they don’t examine, and giving people the skills to optimize their own habits and memes (by examining their current habits and memes). It’s the latter I mean when I refer to spreading rationality, and it’s the latter I expect to be quite difficult to teach to people who aren’t above a certain level of intelligence. It’s the former I don’t want to call spreading rationality; I want to call it something like “optimizing culture.”
What you call “rationality” is what I’d call “metarationality”. Conflating the two is understandable at this point because (a) we’d expect the people who explicitly talk about ‘rationality’ to be the people interested in metarationality, and (b) our understanding of measuring and increasing rationality is so weak right now (probably even weaker than our understanding of measuring and increasing metarationality) that we default to thinking more about metarationality than about rationality. Still, I’d like to keep the two separate.
I’m not sure which of the two is more important for us to spread. Does CFAR see better results from the metarationality it teaches (i.e., forming more accurate beliefs about one’s rationality, picking the right habits and affirmations for improving one’s rationality), or from the object-level rationality it teaches?
I don’t think I’m talking about metarationality, but I might be (or maybe I think that rationality just is metarationality). Let me be more specific: let’s pretend, for the sake of argument, that the rationalist community finds out that jogging is an optimal habit for various reasons. I would not call telling people they should jog (e.g. by teaching it in gym class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as jogging, are worth adopting.
The biggest difference between what I’m calling “rationality” and what I’m calling “optimized habits and memes” is that the former is self-correcting in a way that the latter isn’t. Suppose the rationalist community later finds out that jogging is in fact not an optimal habit for various reasons. To propagate that change through a community of people who had been given a round of optimal habits and memes looks very different from propagating that change through a community of people who had been given general rationality tools.
How about habits and norms like:
Consider it high status to change one’s mind when presented with strong evidence against one’s old position
Offer people bets on beliefs that are verifiable and that they hold very strongly
When asked a question, state the facts that led you to your conclusion, not the conclusion itself
Encourage people to present the strongest cases they can against their own ideas
Be upfront about when you don’t remember the source of your claim
(more)
It feels like it would be possible to get ordinary people to adopt at least some of these, and that their adoption would actually increase the general level of rationality.
I’m skeptical that these kinds of habits and norms can actually be successfully installed in ordinary people. I think they would get distorted for various reasons:
The hard part of using the first habit is figuring out what constitutes strong evidence. You can always rationalize to yourself that some piece of evidence is actually weak if you don’t feel, on a gut level, like knowing the truth is more important than winning arguments.
There are several hard parts to using the second habit, like not getting addicted to gambling. Also, when people with inaccurate beliefs consistently get swindled by people with accurate beliefs, you’re training the former to stop accepting bets, not to update their beliefs (the toy sketch below illustrates the incentive). This might still be useful for weeding out bad pundits, but then the pundit community doesn’t actually have an incentive to adopt this habit.
The hard part of using the third habit is remembering what facts led you to your conclusion. Also, you can always cherrypick.
And so forth. These are all barriers I expect people with high IQ to deal with better than people with average IQ.
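To make that second point concrete, here is a minimal sketch in Python (my own illustration, not anything from the exchange above) of why someone with a badly miscalibrated belief loses money to an accurate bettor at even odds. The probabilities, the stake, and the names are assumptions chosen just for the example.

```python
# Toy illustration (made-up numbers): a bettor whose stated probability for a
# proposition is 0.3, when the proposition actually comes true 70% of the
# time, will happily take the "false" side of even-odds bets against someone
# with accurate beliefs -- and lose money in expectation.
import random

random.seed(0)

P_TRUE = 0.7     # assumed long-run frequency of the proposition being true
STAKE = 1.0      # assumed amount wagered on each even-odds bet
N_BETS = 10_000

def miscalibrated_profit(p_true, stake):
    """Profit for the bettor backing 'false' at even odds on one bet."""
    came_true = random.random() < p_true
    return -stake if came_true else stake

# Expected profit per bet: (1 - 0.7)*STAKE - 0.7*STAKE = -0.4*STAKE.
profits = [miscalibrated_profit(P_TRUE, STAKE) for _ in range(N_BETS)]
print(f"average profit per bet: {sum(profits) / N_BETS:+.3f}")

# The feedback the loser directly observes is "accepting bets costs me money,"
# so the cheapest response is to stop betting; updating the underlying belief
# takes an extra inferential step that nothing here forces.
```

The point of the sketch is only that the immediately visible signal is about the betting behavior, not about the belief that caused the losses.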
You’re probably right, but even distorted versions of the habits could be more useful than not having any at all, especially if the high-IQ people were more likely to actually follow their “correct” versions. Of course, there’s the possibility of some of the distorted versions being bad enough to make the habits into net negatives.
I think it’s an (unanswered) empirical question whether meta-level (or general) or object-level (or specific) instruction is the best way to make people rational. Meditation might be an indispensable part of making people more rational, and it might be more efficient (both for epistemic and instrumental rationality) than teaching people more intellectualized skills or explicit doctrines. Rationality needn’t involve reasoning, unless reasoning happens to be the best way to acquire truth or victory.
On the other hand, if meditation isn’t very beneficial, or if the benefits it confers can be better acquired by other means, or if it’s more efficient to get people to meditate by teaching them metarationality (i.e., teaching them how to accurately assess and usefully enhance their belief-forming and decision-making practices) and letting them figure out meditation’s a good idea on their own, then I wouldn’t include meditation practice in my canonical Rationality Lesson Plan.
But if that’s so, it’s just because teaching meditation is (relatively) inefficient for making people better map-drawers and agents. It’s not because meditation is intrinsically unlike the Core Rationality Skills by virtue of being too specific, too non-intellectualized, or too non-discursive.
ETA: Meditation might even be an important metarational skill. For instance, meditation might make us better at accurately assessing our own rationality, or at selecting good rationality habits. Being metarational is just about being good at improving your general truth-finding and goal-attaining practices; it needn’t be purely intellectual either. (Though I expect much more of metarationality than object-level rationality to be intellectualized.)
Meditation was probably an unusually bad example for the point I wanted to make; sorry about that. I’m going to replace it with jogging.
“Give a man a fish...”?
I think the big stumbling block is the desire and capability (in terms of allocating attention and willpower) to optimize one’s habits and memes, not the skills to do so.
Learning how to allocate attention and willpower is a skill.
Yes, but (a) if that skill is below a certain threshold you probably won’t be able to improve it; (b) empirically it’s a very hard skill to acquire/practice (see all the akrasia issues with the highly intelligent LW crowd).
Yep. Neither of those things is evidence against anything I’ve said.