I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal.
I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean. (I also suspect that most LWers, including me, have bad priors about what mean intelligence looks like.)
I think a more achievable goal is to make the concept of rationality cool. Being a movie star, for example, is cool but not normal. Rationality not being cool prevents otherwise sufficiently smart people from exploring it. My model of what raising the sanity waterline looks like in the short- to medium-term is to start from the smartest people (these are simultaneously the easiest and the highest-value people to make more rational) and work down the intelligence ladder from there.
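For scale: assuming the conventional IQ model (normally distributed, mean 100, SD 15), here is a minimal sketch of how many people these cutoffs actually cover. The figures are standard normal-distribution facts, not new data:

```python
# Rough scale of the IQ cutoffs discussed above. Assumes the conventional
# IQ scale (mean 100, SD 15) and a normal distribution.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# "~2 standard deviations smarter than the mean" = IQ 130 and up.
print(f"IQ >= 130: {1 - iq.cdf(130):.1%} of the population")           # ~2.3%

# For contrast, "within a standard deviation of the mean" (IQ 85-115).
print(f"IQ 85-115: {iq.cdf(115) - iq.cdf(85):.1%} of the population")  # ~68.3%
```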
I think ‘can we make everyone rational?’ is probably the wrong question. Better questions:
How much more rational could we make 2013 average-IQ people, by modifying their cultural environment and education? (That is, without relying on things like surgical or genetic modification.) What’s the realistic limit of improvement, and when would diminishing returns make investing in further education a waste?
How do specific rationality skills vary in teachability? Are there some skills that are especially easy to culturally transmit (i.e., ‘make cool’ in a behavior-modifying way) or to instill in ordinary people?
How hard would the above approaches be? How costly is the required research and execution?
In addition to the obvious direct benefits of being more rational (which by definition means ‘people make decisions that get them more of what they want’ and ‘people’s beliefs are better maps’), how big are indirect benefits like Qiaochu’s ‘smart people see rationality as more valuable’, or ‘governments and individuals fund altruism (including rationality training) more effectively’, or ‘purchasing and voting habits are more globally beneficial’?
Suppose we were having this discussion 200 or 500 or 1000 years ago instead, and the topic was not ‘Can we make everyone rational?’ but ‘Can we make everyone literate?’ or ‘Can we make everyone a non-racist?’ or ‘Can we make everyone irreligious?’. I think it’s clear in retrospect that those aren’t quite the right questions to be asking, and it’s also clear in retrospect that appeals to intelligence levels, as grounds for cynical skepticism, would have been very naïve.
At this point I don’t think we have nearly enough data to know all the rationality skills IQ sets a hard limit on, or whether people at a given IQ level are anywhere near those limits. Given that uncertainty, we should think seriously about the virtues and dangers of a world where LW-level rationality is as common as literacy or religious disengagement is today.
I think we can go very far in the direction of spreading habits and memes that cause more life success than current habits and memes, but I want to distinguish this from spreading rationality. The difference I see between them is analogous to the difference between converting people to a religion and training religious authority figures (although this analogy might prime someone reading this comment in an unproductive direction, and if so, ignore it).
Could you say more about what distinguishes ‘religious authority figures’ in this analogy? Are they much more effective and truth-bearing than most people? Is their effectiveness much more obvious and dramatic (and squick-free), making them better role models? Are they more self-aware and reflective about how and why their rationality skills work? Are they better at teaching the stuff to others?
The distinction I’m trying to make is between giving people optimized habits and memes as a package that they don’t examine and giving people the skills to optimize their own habits and memes (by examining their current habits and memes). It’s the latter I mean when I refer to spreading rationality, and it’s the latter I expect to be quite difficult to do to people who aren’t above a certain level of intelligence. It’s the former I don’t want to call spreading rationality; I want to call it something like “optimizing culture.”
What you call “rationality” is what I’d call “metarationality”. Conflating the two is understandable at this point because (a) we’d expect the people who explicitly talk about ‘rationality’ to be the people interested in metarationality, and (b) our understanding of measuring and increasing rationality is so weak right now (probably even weaker than our understanding of measuring and increasing metarationality) that we default to thinking more about metarationality than about rationality. Still, I’d like to keep the two separate.
I’m not sure which of the two is more important for us to spread. Does CFAR see better results from the metarationality it teaches (i.e., forming more accurate beliefs about one’s rationality, picking the right habits and affirmations for improving one’s rationality), or from the object-level rationality it teaches?
I don’t think I’m talking about metarationality, but I might be (or maybe I think that rationality just is metarationality). Let me be more specific: let’s pretend, for the sake of argument, that the rationalist community finds out that jogging is an optimal habit for various reasons. I would not call telling people they should jog (e.g. by teaching it in gym class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as jogging, are worth adopting.
The biggest difference between what I’m calling “rationality” and what I’m calling “optimized habits and memes” is that the former is self-correcting in a way that the latter isn’t. Suppose the rationalist community later finds out that jogging is in fact not an optimal habit for various reasons. To propagate that change through a community of people who had been given a round of optimal habits and memes looks very different from propagating that change through a community of people who had been given general rationality tools.
How about habits and norms like:
Consider it high status to change one’s mind when presented with strong evidence against one’s old position
Offer people bets on beliefs that are verifiable and which they hold very strongly
When asked a question, state the facts that led you to your conclusion, not the conclusion itself
Encourage people to present the strongest cases they can against their own ideas
Be upfront about when you don’t remember the source of your claim
(more)
It feels like it would be possible to get ordinary people to adopt at least some of these, and that their adoption would actually increase the general level of rationality.
I’m skeptical that these kinds of habits and norms can actually be successfully installed in ordinary people. I think they would get distorted for various reasons:
The hard part of using the first habit is figuring out what constitutes strong evidence. You can always rationalize to yourself that some piece of evidence is actually weak if you don’t feel, on a gut level, like knowing the truth is more important than winning arguments.
There are several hard parts of using the second habit, like not getting addicted to gambling. Also, when people with inaccurate beliefs are consistently getting swindled by people with accurate beliefs, you’re training the former to stop accepting bets, not to update their beliefs. This might still be useful for weeding out bad pundits, but then the pundit community doesn’t actually have an incentive to adopt this habit.
The hard part of using the third habit is remembering what facts led you to your conclusion. Also, you can always cherry-pick.
And so forth. These are all barriers I expect people with high IQ to deal with better than people with average IQ.
You’re probably right, but even distorted versions of the habits could be more useful than not having any at all, especially if the high-IQ people were more likely to actually follow their “correct” versions. Of course, there’s the possibility of some of the distorted versions being bad enough to make the habits into net negatives.
I would not call telling people they should meditate (e.g. by teaching it in health class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as meditating, are worth adopting.
I think it’s an (unanswered) empirical question whether meta-level (or general) or object-level (or specific) instruction is the best way to make people rational. Meditation might be an indispensable part of making people more rational, and it might be more efficient (both for epistemic and instrumental rationality) than teaching people more intellectualized skills or explicit doctrines. Rationality needn’t involve reasoning, unless reasoning happens to be the best way to acquire truth or victory.
On the other hand, if meditation isn’t very beneficial, or if the benefits it confers can be better acquired by other means, or if it’s more efficient to get people to meditate by teaching them metarationality (i.e., teaching them how to accurately assess and usefully enhance their belief-forming and decision-making practices) and letting them figure out meditation’s a good idea on their own, then I wouldn’t include meditation practice in my canonical Rationality Lesson Plan.
But if that’s so it’s just because teaching meditation is (relatively) inefficient for making people better map-drawers and agents. It’s not because meditation is intrinsically unlike the Core Rationality Skills by virtue of being too specific, too non-intellectualized, or too non-discursive.
ETA: Meditation might even be an important metarational skill. For instance, meditation might make us better at accurately assessing our own rationality, or at selecting good rationality habits. Being metarational is just about being good at improving your general truth-finding and goal-attaining practices; it needn’t be purely intellectual either. (Though I expect much more of metarationality than object-level rationality to be intellectualized.)
Meditation was probably an unusually bad example for the point I wanted to make; sorry about that. I’m going to replace it with jogging.
“Give a man a fish...”?
I think the big stumbling block is the desire and capability (in terms of allocating attention and willpower) to optimize one’s habits and memes, not the skills to do so.
Learning how to allocate attention and willpower is a skill.
Yes, but (a) if that skill is below a certain threshold you probably won’t be able to improve it; (b) empirically it’s a very hard skill to acquire/practice (see all the akrasia issues with the highly intelligent LW crowd).
Yep. Neither of those things is evidence against anything I’ve said.
Given that uncertainty, we should think seriously about the virtues and dangers of a world where LW-level rationality is as common as literacy or religious disengagement is today.
Yes, this is exactly what I’m trying to think about. You can’t know long-term historical trends in advance... but you have to make informed-ish decisions about what to try doing, and how to try doing it, anyway.
Making rationality cool = an excellent starting point. I still disagree on the rationality-intelligence thing, though; I think you could teach skills that could still meaningfully be called epistemic/instrumental rationality to people with IQ 100 and below. Not everyone, any more than it’s possible to persuade everyone from childhood that it’s a good idea to spend money sensibly. (Gaah, this is a pet peeve of mine.) But enough to make the world more awesome.
I’m going to register that disagreement as a bet, and if in 10 years LW is still around and enough has happened that we know who’s right, I will find this comment and collect/lose a Bayes point.
Let’s make a more specific bet: I anticipate that any attempts by CFAR in the next 10 years to broaden the demographic that attends its workshops to include people with IQ within a standard deviation of the mean (say, in the United States) will fail by their standards. Agree or disagree?
Agree. But “workshops” includes any future instructor-led activities they might do, including shorter formats (e.g., 3-hour or 1-day sessions), larger groups, etc.
Make rationality cool? Don’t worry, I got this.
Puts on sunglasses
I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean.
I don’t think I agree but I may be interpreting “rationality” differently to you.
Treating “rationality” as a qualitative trait, so that people are simply either rational or irrational, I’d say no one is rational, regardless of IQ; no one meets the impossibly stringent standard of making every inference and decision optimally.
Treating “rationality” as a quantitative trait, so that some people are simply more rational than others, I expect IQ helps cultivate rationality everywhere along the IQ scale (except maybe the extremes). I wouldn’t expect a threshold effect around an IQ of 130, but a gradual increase in feasibility-of-being-rational as IQ goes up.
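To make the contrast concrete, here is a toy sketch of the two models: a hard threshold at IQ 130 versus a gradual rise. Both curves and all of their parameters are invented purely for illustration; neither is fit to any data:

```python
# Two toy models of "feasibility of being rational" as a function of IQ:
# a hard threshold at 130 versus a gradual logistic rise. Every number
# here is made up for illustration.
import math

def threshold_model(iq: float) -> float:
    """Step function: training only 'takes' at IQ 130 and above."""
    return 1.0 if iq >= 130 else 0.0

def gradual_model(iq: float, midpoint: float = 115.0, scale: float = 10.0) -> float:
    """Logistic curve: feasibility rises smoothly with IQ (made-up parameters)."""
    return 1.0 / (1.0 + math.exp(-(iq - midpoint) / scale))

for iq in (85, 100, 115, 125, 130, 145):
    print(f"IQ {iq}: threshold={threshold_model(iq):.2f}  gradual={gradual_model(iq):.2f}")
```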
Treating “rationality” as a qualitative trait, so that people are simply either rational or irrational,
That is not what “qualitative” means. The word you want is “binary.”
To be more specific, what I am highly skeptical of is people with IQ within a standard deviation or two of the mean being capable of updating their beliefs in a way noticeably saner than baseline or acting noticeably more strategic than baseline. “Noticeable” means, for example, that if you hired a group of such people for similar jobs and looked at their performance reviews after a year you’d be able to guess, with a reasonable level of accuracy, which ones did or did not have rationality training.
That is not what “qualitative” means. The word you want is “binary.”
I’m fairly sure I used “qualitative” with a standard meaning. Namely, as an adjective indicating “descriptions or distinctions based on some quality rather than on some quantity”, a quality being a discrete feature that distinguishes one thing from another by its presence or absence (as opposed to its degree or extent). Granted, it would’ve been better to use the word “binary”; substitute that word and I think my point stands.
To be more specific, what I am highly skeptical of is people with IQ within a standard deviation or two of the mean being capable of [...]
Thanks for elaborating. That (and this subthread) clarify where you’re coming from. I think we agree that someone one or two SDs below the mean would be hard to mould into a noticeably saner or more strategic person. The lingering bit of disagreement is over people a standard deviation or two above the mean, with IQs of 120-125, say.
“Noticeable” means, for example, that if you hired a group of such people for similar jobs and looked at their performance reviews after a year you’d be able to guess, with a reasonable level of accuracy, which ones did or did not have rationality training.
While I wouldn’t expect to see such a stark effect of rationality training for people with IQs of 120-125, I doubt I’d see it for people with even higher IQs, either. If one randomly assigns half of a sample of workers to undergo intervention X, and X raises job performance by (e.g.) a standard deviation, job performance is still a pretty imperfect predictor of which workers experienced X. (And that’s assuming job performance can be observed without noise!) So I predict rationality training wouldn’t have an effect that’s “noticeable” in the sense you operationalize it here, even if it successfully boosted job performance among people with IQs of 120-125.
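Here is a minimal Monte Carlo sketch of that argument, with made-up numbers: even a full one-standard-deviation training effect leaves an observer who sees only performance scores guessing who was trained at roughly 69% accuracy:

```python
# Monte Carlo sketch: suppose rationality training boosts job performance
# by a full standard deviation. How well can an observer guess, from
# performance alone, who was trained? All numbers are made up.
import random

random.seed(0)
N = 100_000        # workers per group
EFFECT = 1.0       # assumed training effect, in SDs of job performance

trained   = [random.gauss(EFFECT, 1.0) for _ in range(N)]
untrained = [random.gauss(0.0,    1.0) for _ in range(N)]

# With equal group sizes and equal variances, the best single cutoff
# sits midway between the two group means.
cutoff = EFFECT / 2
hits = sum(x >= cutoff for x in trained) + sum(x < cutoff for x in untrained)
print(f"Accuracy of guessing who was trained: {hits / (2 * N):.1%}")  # ~69%, far from certain
```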