Making Rationality General-Interest
Introduction
Less Wrong currently represents a tiny, tiny, tiny segment of the population. In its current form, it might only appeal to a tiny, tiny segment of the population. Basically, the people who have a strong need for cognition, who are INTx on the Myers-Briggs (65% of us as per 2012 survey data), etc.
Raising the sanity waterline seems like a generally good idea. Smart people who believe stupid things, and go on to invest resources in stupid ways because of it, are frustrating. Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.
I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal. Where it’s as obvious as the idea that you shouldn’t spend more money than you earn, or that you should live a healthy lifestyle. The point isn’t that everyone currently lives debt-free, eats decently well and exercises (that isn’t the case), but these are normal things to do if you’re a minimally proactive person who cares a bit about your future. No one has ever told me that doing taekwondo to stay fit is weird and culty, or that keeping a budget will make me unhappy because I’m overthinking things.
I think the questions of “whether we should try to do this” and “if so, how do we do it in practice?” are both valuable to discuss, and interesting.
Is making rationality general-interest a good goal?
My intuitions are far from 100% reliable. I can think of a few reasons why this might be a bad idea:
1. A little bit of rationality can be damaging; it might push people in the direction of too much contrarianism, or something else I haven’t thought of. Since introspection is imperfect, knowing a bit about cognitive biases and the mistakes that other people make might make people actually less likely to change their mind–they see other people making those well-known mistakes, but not themselves. Likewise, rationality taught only as a tool or skill, without any kind of underlying philosophy of why you should want to believe true things, might cause problems of a similar nature to martial art skills taught without the traditional, often non-violent philosophies–it could result in people abusing the skill to win fights/debates, making the larger community worse off overall. (Credit to Yan Zhang for martial arts metaphor).
2. Making the concepts general-interest, or just growing too fast, might involve watering them down or changing them in some way such that the value of the LW microcommunity is lost. This could be worse for the people who currently enjoy LW even if it isn’t worse overall. I don’t know how easy that would be to avoid.
3. It turns out that rationalists don’t actually win, and x-rationality, as Yvain terms it, just isn’t that amazing over-and-above already being proactive and doing stuff like keeping a budget. Yeah, you can say stuff like “the definition of rationality is that it helps you win”, but if in real life, all the people who deliberately try to increase their rationality do worse overall, by their own standards (or even equally well, but with less time left over for other fun pursuits) than the people who aim for their life goals directly, I want to know that.
4. Making rationality general-interest is a good idea, but not the best thing to be spending time and energy on right now because of Mysterious Reasons X, Y, Z. Maybe I only think it is because of my personal bias towards liking community stuff (and wishing all of my friends were also friends with each other and liked the same activities, which would simplify my social life, but probably shouldn’t happen for good reasons).
Obviously, if any of these are the case, I want to know about it. I also want to know about it if there are other reasons, off my radar, why this is a terrible idea.
What has to change for this to happen?
I don’t really know, or I would be doing those things already (maybe, akrasia allowing). I have some ideas, though.
1. The jargon thing. I’m currently trying to compile a list of LW/CFAR jargon as a project for CFAR, and there are lots of terms I don’t know. There are terms that I’ve realized in retrospect that I was using incorrectly all along. This presents both a large initial effort for someone interested in learning about rationality via the LW route, and also might contribute to the looking-like-a-cult thing.
2. The gender ratio thing. This has been discussed before, and it’s a controversial thing to discuss, and I don’t know how much arguing about it in comments will present any solutions. It seems pretty clear that if you want to appeal to the whole population, and a group that represents 50% of the general population only represents 10% of your participants (also as per 2012 survey data, see link above), there’s going to be a problem somewhere down the road.
My data point: as a female on LW, I haven’t experienced any discrimination, and I’m a bit baffled as to why the gender ratio is so skewed in the first place. Then again, I’ve already been through the filter of not caring if I’m the only girl at a meetup group. And I do hang out in female-dominated groups (i.e. the entire field of nursing), and fit in okay, but I’m probably not all that good as a typical example to generalize from.
3. LW currently appeals to intelligent people, or at least people who self-identify as intelligent; according to the 2012 survey data, the self-reported IQ median is 138. This wouldn’t be surprising, and isn’t a problem until you want to appeal to more than 1% of the population. But intelligence and rationality are, in theory, orthogonal, or at least not the same thing. If I suffered a brain injury that reduced my IQ significantly but didn’t otherwise affect my likes and dislikes, I expect I would still be interested in improving my rationality and think it was important, perhaps even more so, but I also think I would find it frustrating. And I might feel horribly out of place.
4. Rationality in general has a bad rap; specifically, the Spock thing. And this isn’t just affecting whether or not people think Less Wrong the site is weird; it’s affecting whether they want to think about their own decision-making.
This is only what I can think of in 5 minutes...
What’s already happening?
Meetup groups are happening. CFAR is happening. And there are groups out there practicing skills similar or related to rationality, whether or not they call it the same thing.
Conclusion
Rationality, Less Wrong and CFAR have, gradually over the last 2-3 years, become a big part of my life. It’s been fun, and I think it’s made me stronger, and I would prefer a world where as many other people as possible have that. I’d like to know if people think that’s a) a good idea, b) feasible, and c) how to do it practically.
HPMoR is very very popular and broadly appealing (as rationality lit goes), so that seems to be our biggest leverage point for spreading LW to people who aren’t already academics or programmers, like the secularist and wider geek/nerd communities.
Currently, we seem to be making only a little use of that resource for sustained, active, explicit community-building outreach. LW is not optimized for community discussions between any people who haven’t already spent a few months or years studying mathematics, programming, a very specific flavor of analytic philosophy, or past LW posts like the Sequences. We’re catching tons of fish (er, friends), and throwing nearly all of them back in the ocean. The only non-LW community that seems targeted at HPMoR people is the reddit, but we’re doing almost nothing to make that reddit useful for rationality training, or appealing to any people who want to do more than geek out about the details of the plot of HPMoR itself. Plus reddit is not a great environment in general if we want to experiment, or to appeal to whoever LW doesn’t appeal to.
I suggest: Start a new website as a community hub for HPMoR fans, and more generally for the demographic ‘I’m not very mathy but I think science is neato and want advice and social support for self-improvement and for making the world a better place.’ Perhaps the website could be to CFAR what LW is to MIRI. Whereas (future-)LW focuses on high-level rationality techniques, speculative philosophical and mathematical innovation, and programming/AI, the MoR site focuses more on the low-hanging fruit of rationality, the stuff that’s relatively well-established or at least ready for beta or late-stage-alpha testing, with a stronger emphasis on community, niceness, skill cultivation, and MoR geekery.
We could call it, say, Reason Academy, and capitalize on the ‘I wish I were at Hogwarts!’ HP impulse without doing anything that explicitly raises copyright problems. (‘Rationality Academy’ makes sense for an HPMoR tie-in, but I think has limited crossover appeal because of the Spocky connotations and because it sounds clunky.) More message boards, more centralized easy-to-access low-barrier-to-understanding reads, a happier and friendlier aesthetic, more games and (eventually) a more structured, reward-centered learning environment. Is this a good thing to shoot for?
I think that this sounds like a great idea, though it also seems like one that would take a lot of effort to put together.
Two thoughts:
We can start small, maybe with just a message board (to replace and expand the functionality of the reddit, and perhaps of LW’s open threads). A few message boards aren’t hard to maintain. Then once the boards are active enough, start experimenting with expanding the functionality.
We can wait before doing much. Launching something like this right after (or right before) HPMoR concludes strikes me as a rather good idea. The site would then function as HPMoR’s Pottermore.
It’s worth thinking in more detail about what exactly we’d want out of something like this, and about risks (e.g., making LW look even more forbidding). Also, we should brainstorm features we’d implement on forums or games if we could, that aren’t easily implemented on LW or the reddit. E.g., rules that encourage people more to ask questions (and get answers, where on LW we might just default to ‘go read the Sequences’), be friendly and goofy, express positive thoughts/feelings, and build strong emotional social connections, including ones that might be too cumbersome to make general practice on LW.
I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean. (I also suspect that most LWers have bad priors about what mean intelligence looks like, including me.)
I think a more achievable goal is to make the concept of rationality cool. Being a movie star, for example, is cool but not normal. Rationality not being cool prevents otherwise sufficiently smart people from exploring it. My model of what raising the sanity waterline looks like in the short- to medium-term is to start from the smartest people (these are simultaneously the easiest and the highest-value people to make more rational) and work down the intelligence ladder from there.
I think ‘can we make everyone rational?’ is probably the wrong question. Better questions:
How much more rational could we make 2013 average-IQ people, by modifying their cultural environment and education? (That is, without relying on things like surgical or genetic modification.) What’s the realistic limit of improvement, and when would diminishing returns make investing in further education a waste?
How do specific rationality skills vary in teachability? Are there some skills that are especially easy to culturally transmit (i.e., ‘make cool’ in a behavior-modifying way) or to instill in ordinary people?
How hard would the above approaches be? How costly is the required research and execution?
In addition to the obvious direct benefits of being more rational (which by definition means ‘people make decisions that get them more of what they want’ and ‘people’s beliefs are better maps’), how big are indirect benefits like Qiaochu’s ‘smart people see rationality as more valuable’, or ‘governments and individuals fund altruism (including rationality training) more effectively’, or ‘purchasing and voting habits are more globally beneficial’?
Suppose we were having this discussion 200 or 500 or 1000 years ago instead, and the topic was not ‘Can we make everyone rational?’ but ‘Can we make everyone literate?’ or ‘Can we make everyone a non-racist?’ or ‘Can we make everyone irreligious?’. I think it’s clear in retrospect that those aren’t quite the right questions to be asking, and it’s also clear in retrospect that appeals to intelligence levels, as grounds for cynical skepticism, would have been very naïve.
At this point I don’t think we have nearly enough data to know all the rationality skills IQ sets a hard limit on, or whether people at a given IQ level are anywhere near those limits. Given that uncertainty, we should think seriously about the virtues and dangers of a world where LW-level rationality is as common as, today, literacy or religious disengagement is.
I think we can go very far in the direction of spreading habits and memes that cause more life success than current habits and memes, but I want to distinguish this from spreading rationality. The difference I see between them is analogous to the difference between converting people to a religion and training religious authority figures (although this analogy might prime someone reading this comment in an unproductive direction, and if so, ignore it).
Could you say more about what distinguishes ‘religious authority figures’ in this analogy? Are they much more effective and truth-bearing than most people? Is their effectiveness much more obvious and dramatic (and squick-free), making them better role models? Are they more self-aware and reflective about how and why their rationality skills work? Are they better at teaching the stuff to others?
The distinction I’m trying to make is between giving people optimized habits and memes as a package that they don’t examine and giving people the skills to optimize their own habits and memes (by examining their current habits and memes). It’s the latter I mean when I refer to spreading rationality, and it’s the latter I expect to be quite difficult to do to people who aren’t above a certain level of intelligence. It’s the former I don’t want to call spreading rationality; I want to call it something like “optimizing culture.”
What you call “rationality” is what I’d call “metarationality”. Conflating the two is understandable at this point because (a) we’d expect the people who explicitly talk about ‘rationality’ to be the people interested in metarationality, and (b) our understanding of measuring and increasing rationality is so weak right now (probably even weaker than our understanding of measuring and increasing metarationality) that we default to thinking more about metarationality than about rationality. Still, I’d like to keep the two separate.
I’m not sure which of the two is more important for us to spread. Does CFAR see better results from the metarationality it teaches (i.e., forming more accurate beliefs about one’s rationality, picking the right habits and affirmations for improving one’s rationality), or from the object-level rationality it teaches?
I don’t think I’m talking about metarationality, but I might be (or maybe I think that rationality just is metarationality). Let me be more specific: let’s pretend, for the sake of argument, that the rationalist community finds out that jogging is an optimal habit for various reasons. I would not call telling people they should jog (e.g. by teaching it in gym class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as jogging, are worth adopting.
The biggest difference between what I’m calling “rationality” and what I’m calling “optimized habits and memes” is that the former is self-correcting in a way that the latter isn’t. Suppose the rationalist community later finds out that jogging is in fact not an optimal habit for various reasons. To propagate that change through a community of people who had been given a round of optimal habits and memes looks very different from propagating that change through a community of people who had been given general rationality tools.
How about habits and norms like:
Consider it high status to change one’s mind when presented with strong evidence against one’s old position
Offer people bets on beliefs that are verifiable and which they hold very strongly
When asked a question, state the facts that led you to your conclusion, not the conclusion itself
Encourage people to present the strongest cases they can against their own ideas
Be upfront about when you don’t remember the source of your claim
(more)
It feels like it would be possible to get ordinary people to adopt at least some of these, and that their adoption would actually increase the general level of rationality.
I’m skeptical that these kinds of habits and norms can actually be successfully installed in ordinary people. I think they would get distorted for various reasons:
The hard part of using the first habit is figuring out what constitutes strong evidence. You can always rationalize to yourself that some piece of evidence is actually weak if you don’t feel, on a gut level, like knowing the truth is more important than winning arguments.
There are several hard parts of using the second habit, like not getting addicted to gambling. Also, when people with inaccurate beliefs are consistently getting swindled by people with accurate beliefs, you’re training the former to stop accepting bets, not to update their beliefs. This might still be useful for weeding out bad pundits, but then the pundit community doesn’t actually have an incentive to adopt this habit.
The hard part of using the third habit is remembering what facts led you to your conclusion. Also, you can always cherrypick.
And so forth. These are all barriers I expect people with high IQ to deal with better than people with average IQ.
You’re probably right, but even distorted versions of the habits could be more useful than not having any at all, especially if the high-IQ people were more likely to actually follow their “correct” versions. Of course, there’s the possibility of some of the distorted versions being bad enough to make the habits into net negatives.
I think it’s an (unanswered) empirical question whether meta-level (or general) or object-level (or specific) instruction is the best way to make people rational. Meditation might be an indispensable part of making people more rational, and it might be more efficient (both for epistemic and instrumental rationality) than teaching people more intellectualized skills or explicit doctrines. Rationality needn’t involve reasoning, unless reasoning happens to be the best way to acquire truth or victory.
On the other hand, if meditation isn’t very beneficial, or if the benefits it confers can be better acquired by other means, or if it’s more efficient to get people to meditate by teaching them metarationality (i.e., teaching them how to accurately assess and usefully enhance their belief-forming and decision-making practices) and letting them figure out meditation’s a good idea on their own, then I wouldn’t include meditation practice in my canonical Rationality Lesson Plan.
But if that’s so it’s just because teaching meditation is (relatively) inefficient for making people better map-drawers and agents. It’s not because meditation is intrinsically unlike the Core Rationality Skills by virtue of being too specific, too non-intellectualized, or too non-discursive.
ETA: Meditation might even be an important metarational skill. For instance, meditation might make us better at accurately assessing our own rationality, or at selecting good rationality habits. Being metarational is just about being good at improving your general truth-finding and goal-attaining practices; it needn’t be purely intellectual either. (Though I expect much more of metarationality than object-level rationality to be intellectualized.)
Meditation was probably an unusually bad example for the point I wanted to make; sorry about that. I’m going to replace it with jogging.
“Give man a fish...” ?
I think the big stumbling block is the desire and capability (in terms of allocating attention and willpower) to optimize one’s habits and memes, not the skills to do so.
Learning how to allocate attention and willpower is a skill.
Yes, but (a) if that skill is below a certain threshold you probably won’t be able to improve it; (b) empirically it’s a very hard skill to acquire/practice (see all the akrasia issues with the highly intelligent LW crowd).
Yep. Neither of those things are evidence against anything I’ve said.
Yes, this is exactly what I’m trying to think about. You can’t know long-term historical trends in advance...but you have to make informed-ish decisions about what to try doing, and how to try doing it, anyway.
Making rationality cool = an excellent starting point. I still disagree on the rationality-intelligence thing, though; I think you could teach skills that could still meaningfully be called epistemic/instrumental rationality to people with IQ 100 and below. Not everyone, any more than it’s possible to persuade everyone from childhood that it’s a good idea to spend money sensibly. (Gaah, this is a pet peeve for me). But enough to make the world more awesome.
I’m going to register that disagreement as a bet, and if in 10 years LW is still around and enough has happened that we know who’s right, I will find this comment and collect/lose a Bayes point.
Let’s make a more specific bet: I anticipate that any attempts by CFAR in the next 10 years to broaden the demographic that attends its workshops to include people with IQ within a standard deviation of mean (say in the United States) will fail by their standards. Agree or disagree?
Agree. But “workshops” includes any future instructor-led activities they might do, including shorter formats (e.g. 3-hour or 1-day), larger groups, etc.
Make rationality cool? Don’t worry, I got this.
Puts on sunglasses
I don’t think I agree but I may be interpreting “rationality” differently to you.
Treating “rationality” as a qualitative trait, so that people are simply either rational or irrational, I’d say no one is rational, regardless of IQ; no one meets the impossibly stringent standard of making their every inference and/or decision optimally.
Treating “rationality” as a quantitative trait, so that some people are simply more rational than others, I expect IQ helps cultivate rationality everywhere along the IQ scale (except maybe the extremes). I wouldn’t expect a threshold effect around an IQ of 130, but a gradual increase in feasibility-of-being-rational as IQ goes up.
That is not what “qualitative” means. The word you want is “binary.”
To be more specific, what I am highly skeptical of is people with IQ within a standard deviation or two of the mean being capable of updating their beliefs in a way noticeably saner than baseline or acting noticeably more strategic than baseline. “Noticeable” means, for example, that if you hired a group of such people for similar jobs and looked at their performance reviews after a year you’d be able to guess, with a reasonable level of accuracy, which ones did or did not have rationality training.
I’m fairly sure I used “qualitative” with a standard meaning. Namely, as an adjective indicating “descriptions or distinctions based on some quality rather than on some quantity”, a quality being a discrete feature that distinguishes one thing from another by its presence or absence (as opposed to its degree or extent). Granted, it would’ve been better to use the word “binary”; substitute that word and I think my point stands.
Thanks for elaborating. That (and this subthread) clarify where you’re coming from. I think we agree that someone one or two SDs below the mean would be hard to mould into a noticeably saner or more strategic person. The lingering bit of disagreement is for people that far above the mean, with IQs of 120-125, say.
While I wouldn’t expect to see such a stark effect of rationality training for people with IQs of 120-125, I doubt I’d see it for people with even higher IQs, either. If one randomly assigns half of a sample of workers to undergo intervention X, and X raises job performance by (e.g.) a standard deviation, job performance is still a pretty imperfect predictor of which workers experienced X. (And that’s assuming job performance can be observed without noise!) So I predict rationality training wouldn’t have an effect that’s “noticeable” in the sense you operationalize it here, even if it successfully boosted job performance among people with IQs of 120-125.
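The point about noisy classification can be sketched numerically. The simulation below (all parameters are illustrative assumptions, not data) draws a control group and a treated group whose mean performance differs by a full standard deviation, then tries to guess who was treated using the best single performance cutoff. Even under these generous conditions, the guesses are right only about 69% of the time.

```python
import random

random.seed(0)

# Hypothetical setup: intervention X raises job performance by one
# standard deviation. Performance is measured without any extra noise.
n = 10_000
control = [random.gauss(0.0, 1.0) for _ in range(n)]
treated = [random.gauss(1.0, 1.0) for _ in range(n)]

# Best single-threshold classifier: guess "treated" whenever observed
# performance exceeds 0.5, the midpoint between the two group means.
correct = sum(x <= 0.5 for x in control) + sum(x > 0.5 for x in treated)
accuracy = correct / (2 * n)
print(f"means differ by ~1 SD, yet classification accuracy is {accuracy:.2f}")
```

A 1-SD effect on a group average is large by intervention standards, yet per-individual detection remains far from reliable, which is the sense in which a “noticeable in performance reviews” criterion sets a very high bar.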
I’m not sure this is avoidable, because precise concepts need precise terms. One of my favorite passages from Three Worlds Collide is:
That is the sort of concept which should be one short phrase in a language used by people who evaluate hypotheses by Bayesian thinking. Inaccessibility of jargon is oftentimes a sign of real inferential distance: someone needs to know what those two concepts are mathematically for that sentence-long explanation of a single phrase to make any sense, and explaining what those concepts are mathematically is a lecture or two by itself.
(That said, I agree that in areas where a professional community has a technical term for a concept and LW has a different technical term for that concept, replacing LW’s term with the professional community’s term is probably a good move.)
It seems to me that while intelligence is not sufficient for rationality, it might be necessary for rationality. (As rationality testing becomes more common, we’ll be able to investigate that empirical claim.) I often describe rationality as “living deliberately,” and that seems like the sort of thing that appeals much more to people with more intellectual horsepower because it’s much easier for them to be deliberate.
I agree with you on the jargon thing; it’s so much easier to have a conversation about rationality-cluster with LW people because of it. (It’s also fun and ingroupy). But I do think it’s a problem overall, and partly avoidable.
We really should have a short phrase for that. Suggestions? “The evidence would be likely given the hypothesis, but the hypothesis isn’t as likely given the evidence” would at least be a bit shorter.
I would probably express it as something like “you’re confusing a high likelihood with a high posterior,” which is less precise but I suspect would be understood by a Bayesian.
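The likelihood/posterior distinction can be made concrete with a minimal numeric sketch (all probabilities here are invented for illustration): the evidence is very likely given the hypothesis, yet the hypothesis remains unlikely given the evidence, because the prior is low and the evidence also occurs without the hypothesis.

```python
# Hypothetical numbers illustrating "high likelihood, low posterior".
p_h = 0.001             # prior: the hypothesis is rare
p_e_given_h = 0.99      # likelihood: evidence is near-certain if H holds
p_e_given_not_h = 0.05  # but the evidence also occurs without H

# Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"likelihood P(E|H) = {p_e_given_h:.2f}")   # 0.99
print(f"posterior  P(H|E) = {p_h_given_e:.4f}")   # ~0.0194
```

The gap between 0.99 and roughly 0.02 is exactly the confusion being named: the base rate drags the posterior far below the likelihood.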
There are already precise terms for most of the concepts LW discusses. It’s that LW uses its own jargon.
State three examples?
I guess that for some LW jargon there already are precise terms, but for other LW jargon there are not. Or sometimes there is a term that means something similar, but can also be misleading. Still, it could be good to reduce the unnecessary jargon.
How to do it? Perhaps by making this a separate topic—find the equivalents to LW jargon, discuss whether they really mean the same thing, and if yes, propose using the traditional expression.
What I am saying here is that (1) merely saying “there are precise terms” is not helpful without specific examples, and (2) each term should be discussed, because it may seem like the same thing to one person, but another person may find a difference.
I don’t believe you; for the most part, when there is already a precise term we just use that term. For most LW jargon it is far more likely that you are confused about the concepts and propose using a wrong term than that there are already precise terms that have the same meaning.
INTP male programmer here. I’ve never posted an article and rarely comment.
One thing which keeps me from doing so is actually HPMoR, and EY’s posts and sequences. They’re all really long, and seem to be considered required reading. I know it’s EY’s style; he seems to prefer narratives. Unfortunately I don’t have a lot of time to read all that, and much prefer a terser Hansonian style.
A shorter “getting started” guide would help me. Would it help others?
I’ve been giving the following post list to friends, to get them into LW:
1) The Twelve Virtues
2) Cognitive Biases Potentially Affecting Judgements of Global Risk
3) The Simple Truth
4) The Useful Idea of Truth
5) What do we mean by rationality?
6) What is evidence?
7) Rationality: Appreciating Cognitive Algorithms
8) Skill: The Map is Not the Territory
9) Firewalling the Optimal from the Rational
10) The Lens that Sees its Flaws
11) The Martial Art of Rationality
12) No One Can Exempt You From Rationality’s Laws
I’ll occasionally adapt it, e.g. swapping the first two posts around for someone who has an academic background and will be more interested in an academic paper to start with.
Anyone looking for my current, extended version, can find it here.
I’ve been thinking about this problem lately, and I agree it’s a problem. I have some tentative ideas for starting to address it, which I’ll post to Discussion next week. I’d like more data on where the stumbling blocks are, though.
Are there LW posts (by Eliezer or whoever) that you have found helpful, readable, concise, etc.? If so, what are some of the better examples? Would you say, for example, that Lukeprog and Hanson’s styles work for you about equally well?
What are some examples of specific posts (or series of posts) you haven’t gotten through? How much was a result of length, how much a result of content (e.g., too difficult or boring or mathy), and how much a result of style (e.g., too narrative or unstructured or jargony)?
What are specific ideas, perspectives, approaches, or terms you feel (or have been told) you’re currently missing out on? The more examples of this the better.
ETA: I’d be interested in others’ responses to this too.
From the articles linked from Welcome to Less Wrong:
1) http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/
The title is descriptive and the text is short and to the point. Empirical support is present and clearly stated. Of course it could be shortened quite a bit more without losing any information, but I don’t find it excessively verbose.
2) http://lesswrong.com/lw/qk/that_alien_message
It’s a long post, not trivial to follow, and when reading it’s not clear how the effort will pay off. Perhaps this is evidence of a short attention span, but I’ve generally found that most concepts can be expressed succinctly. It might also be a habit of my profession that I try to make my writing as terse and general as possible.
I suspect status and article length are highly correlated (e.g., people read autobiographies of famous people), and so longer writings might be ways to signal status.
I can produce more examples, but the above two are archetypal for me.
3) Well, I don’t know what I don’t know ;) But to list a few things:
Pros and cons of frequentist vs. bayesian approaches. Everything I read here seems pro-bayesian, but other (statistics) sites I look at promote a mix of the two approaches.
Why so little discussion of mechanisms which improve the rationality of group action and decision-making? Is that topic too close to the mind-killer, or have I missed those articles?
I find appeals to rationality during strictly normative argument irrational, because people don’t seem to adopt ethics on the basis of rationality or consistency. Thus I’m confused by the frequency of ethical discussions here. Am I missing something about ethics and rationality? Or just wrong? Something on a general rationalist approach to ethics would be helpful to me.
Agreed. I’d be quite interested in which posts non-LW friends found useful (and whether they passed them on to anyone else). My mom ended up using Twelve Virtues as a discussion reading on the first day of her eled class (elementary education).
I don’t think HPMoR is required reading to learn rationality from LW and related places, but it is one of the few things making rationality general-interest at this point.
I do agree that a short “getting started” guide would be helpful, though.
Only a minority of respondents to the 2012 survey had read “about 75%” or “nearly all” of the sequences. So long as you’ve read the links in the welcome thread and you’re prepared to be corrected you should be fine.
How does a doubt about the usefulness of rationality coexist with a desire to spread rationality? I see that many people can reconcile these two feelings just fine, but my mind just doesn’t seem to work that way...
Well, there aren’t many things that I don’t doubt a little bit. I don’t think this a bad thing. However, in order to get anything done in life, instead of sitting in my room thinking about how much I don’t know, I have to act on a lot of things that I’m a bit doubtful about.
I thought you doubted it more than a little bit, because you linked to Yvain’s post that says there’s not much evidence. If “a little bit” means, say, 10%, then can you describe the arguments that made you 90% confident?
Yvain said that clarity of mind was one benefit he’d had. I think clarity of mind is awesome and rare, and makes it less likely that people will do stupid things for bad reasons.
I’ve met Yvain and I think he’s fairly awesome. Likewise, of the other LW people I’ve met in real life, they seem disproportionately awesome–they have clarity of mind, yes, but it seems to lead into other things, like doing things differently because you’re actually able to recognize the reason why you were doing non-useful things in the first place. Correlation not causation, of course, and I didn’t know these people 5 years ago, and even if I had, people progress in awesomeness anyway. But still. Correlation = data in a direction, and it’s not the direction of rationality being useless.
In his post Yvain distinguishes between regular rationality, which he thinks a lot of people have, and “x-rationality” that you get from long study of the Sequences’ concepts. I think a lot fewer people have even regular rationality, that it’s a continuum not a divide, and that strategically placed and worded LW concepts could push almost anyone further towards the ‘rationality’ side.
I’ve changed a lot because of my exposure to the rationality community, and in ways that I don’t think I could have attained otherwise. A lot of this is due to clarity of mind–in particular my allowing myself to think thoughts that are embarrassing or otherwise painful. Some of it’s due to specific ideas, like “notice that you’re confused” or “taboo word X”. Some of it’s due to just hanging out with a social group who think differently than my parents. See this post a year and a half ago, and this post from lately.
If such evidence is enough, then rationality would probably recommend you to spread religion instead of rationality :-) Religious people also often talk about how religion gave them wonderful feelings and improved their lives, and there are actual studies showing religious people are happier and healthier.
I feel that you haven’t mentioned an important factor, which is that LW-rationality sounds very attractive in some sense. If that’s correct, then you’re not alone in this, it took me years to learn to honestly subtract that factor.
Noted. However, before I subtract that factor, I would like to learn whether LW-style rationality seems so attractive for a good reason: long-term, averaged over many people, does it make a difference? It has for a few people. I don’t think you can conclusively say, yet, whether it’s worthwhile teaching to everyone. In 5-10 years, when CFAR longer-term data starts coming in, then I’ll know. In the meantime, trying to spread it to other people provides data, too.
If it turns out it doesn’t help most people, I won’t keep trying to show it to other people, although I’ll probably try to stick with the community. I would still want to keep looking for something else to try to teach the other people who keep doing stupid things. Call me an idealist...
Religion, AFAICT, does not teach clarity of mind. In many cases it teaches people to follow their intuitions and gut feelings because “God is looking out for them.” This sometimes turns out well for the individual, and sometimes badly (which you’d expect; intuitions are valuable data but can be wrong if the heuristics are applied out of context). Overall I think it’s bad for society, and better if people notice that their hunches are in fact hunches and try to fact-check them. (This isn’t always possible; sometimes you have no outside-view data and you have to go with your gut feeling. But rationality would teach that, too.)
And yeah, I’m picking rationality as a thing to try to spread without having looked at all the possible alternatives. I think that’s okay. There are other people in society who are trying to spread other things for similar reasons. If there are 10 people like me, all with different agendas but for the same reasons, and we’re all paying attention to the data of the next 5 years, and it turns out that one of our methods is actually effective, I would consider that a success. I just don’t know which one of the 10 people I am yet. (If I meet one of the others, and they convince me their agenda has a higher chance of success, I would think about it and then probably agree to help them.)
Is it just me, or does your comment sound like a retreat from “we need to spread rationality because it’s a good idea” to “we need to spread rationality to figure out if it’s a good idea”?
If yes, then note that LW has existed for years and has thousands of users. Yvain was among the first contributors to LW and his early posts were already excellent. Many other good contributors, like Wei Dai (invented UDT, independently invented cryptocurrency) or Paul Christiano (IMO participant), were also good before they joined… As Yvain’s post said, it seems hard to find people who benefited a lot from LW-rationality.
I’m not sure we need more information about the usefulness of LW-rationality before we can make a conclusion. We already have a lot of evidence pointing one way, look at all the LWers who didn’t benefit. Besides, what makes you think that a study with more participants and longer duration would give different results? If anything, it’s probably going to be closer to the mean, because LW folks are self-selected, not randomly selected from the population.
I think LW has at least made me better at handling disagreements with others. For example I’m rather embarrassed when I look back on my early discussions with Nick Szabo on cryptocurrency and other topics, and I think a disagreement I had a few years ago with my business partner was also helped greatly by both of us having followed LW (or maybe it was still OB at that point).
I would say that rationality is worth trying to spread because it may be a good idea, and because it’s something I know about and can think and plan about. Do you know of another community that has a similar level of development to LW (i.e. fairly cohesive but still quite obscure) that I should also investigate? (AFAIK, CFAR is looking for such organizations for new ideas anyway.)
Also, I’m going to update from your comment in the direction of rationality outreach turning out not to be the best use of my time.
For a while I satisfied my idea-spreading urges by teaching math to talented kids on a volunteer basis. If you’re very good at something (e.g. swimming), you could try teaching that, it’s a lot of fun.
Or you could spend some effort on figuring out how to measure rationality and check if someone is making progress. That’s much harder though, once you get past the obvious wrong answers like “give them a multiple choice test about rationality”. Eliezer and Anna have written a lot about this problem.
I do teach swimming; I did for many years as a job, and now I do it for fun (and for free) for the kids of my friends (and several of the CFAR staff when I was in San Francisco). It’s something I’m very good at (I may be better at teaching swimming to others than at swimming myself), and it fulfills an urge, but not the idea-spreading one.
If CFAR is looking for help trying to make a rationality test, I would be happy to help, too...
Well, if the criterion for success is inventing cryptocurrency, I don’t predict that teaching rationality will have that effect on people. It’s more a matter of small bits of usefulness that compound over time. So understanding Bayes makes it easier to assemble what you know coherently, learning to install habits helps you remember to use the skill when you’re most likely to need it… etc. That habit of reasoning might save you money, or social capital, or time. And, over the course of your life, it gives you more time and scope to act.
That’s pretty much what it does for me, so far, and it’s been a worthwhile level up. It did make a difference for me to learn and practice in a community (built-in spaced repetition, yay!) rather than just reading. The reading helped, but once I have a tool, it takes practice to remember to use it, instead of my old default.
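As an illustration of the “understanding Bayes” point in the comment above, here is one small worked update via Bayes’ rule. All the numbers are hypothetical, invented purely for the arithmetic:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical scenario: some habit-change technique genuinely works for
# 30% of people (prior). If it works, you notice improvement 80% of the
# time; if it doesn't, you still notice "improvement" 20% of the time
# (placebo / noise). You noticed improvement -- how much should you
# believe the technique actually worked?
prior = 0.30
p_improve_if_works = 0.80
p_improve_if_not = 0.20

# Total probability of noticing improvement, over both hypotheses.
p_improve = (p_improve_if_works * prior
             + p_improve_if_not * (1 - prior))

posterior = p_improve_if_works * prior / p_improve
print(round(posterior, 3))  # 0.632
```

The point of the exercise: noticing improvement roughly doubles your confidence here, but it doesn’t make the claim certain, because the “noise” hypothesis also predicts improvement some of the time.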
You want to find out how to spread rationality?
Read this
This article points out a pretty important obstruction to the general spread of rationality:
Rationality training does not combat a visible and immediate problem because people do not have a sense that more is possible along this dimension.
The article also points out how rationality spreads by social contact and through becoming emotionally trusted.
It’s really a very good article on how rationality spreads (or does not) in the real world.
I’m going to post multiple comments here because I have several separate thoughts about these issues and I want them to be voted on separately so I can get a better idea of people’s thoughts on this matter. My comments on this post will be posted as comments to this comment—that way, people can also vote on the concept of posting multiple thoughts as separate comments.
Another problem is that there isn’t really any standard “rationality test” or other ability to actually determine how rational someone is, though some limited steps have been taken in that direction. Stanovich is working on one, but it can’t be expected for 3+ years at this stage.
This obviously limits the extent to which we can determine whether rationalists “actually win” (my impression, incidentally, is that they do but that there are a lot of skills that help more than current “rationality training” for the average person), what forms of rationality practice yield the most benefits, and so on.
When it comes to raising the sanity waterline, I can’t help but think that the intelligence issue is likely to be a paper tiger. In fact I think LessWrong as a whole cares far too much about unusually intelligent people and that this is one of the biggest flaws of the community as a general-interest project. However, I also recognize that multiple purposes are at work here and such goal conflict may be inevitable.
Can you elaborate on this? I think intelligence is a really important component of rationality in practice (although by “unusually intelligent” you might mean a higher number of standard deviations above the mean than I do).
Sure. Most rationality “in the wild” appears to be tacit rationality and building good habits, and I don’t think that intelligence is particularly important for that. I would definitely predict, for instance, that rationality training could be accessible to people with IQs 0-1 standard deviations above the mean.
I agree that this kind of rationality exists, but I think it tends to be domain-specific and suffer from transfer issues, and I’m also skeptical that it’s easily teachable.
I agree on all points, but I don’t see strong evidence for an easily teachable form of general rationality either, regardless of how intelligent the audience may be.
One other issue is that most people who have currently worked on developing rationality are themselves very intelligent. This sounds like it wouldn’t particularly be a problem—but as Eliezer wrote in My Way:
Intelligence definitely strikes me as one of those unusual features.
Perhaps it could be said that current rationality practices, designed by the highly intelligent and largely practiced by the same, require high intelligence, but it nevertheless seems far from clear that all rationality practices require high intelligence.
Fair point.
First off, one potential problem is the term “rationality” itself. MIRI found that the term “singularity” was too corrupted by other associations to be useful, so they changed their name to avoid being associated with this. I believe that “rational” may be similarly corrupted (“logical” certainly is) and finding another term altogether might be a good tactic.
I think “rational” is probably fine. “Rationalist” may not be, but that’s more thanks to having the connotations of an *ism than because of its stem.
Agreed. What about “effective”?
That does not include the “map corresponding to territory” idea, which is very important for us. Also, it has its own negative connotations. Like “rational” has Spock, “effective” has all kinds of effective villains. At least Spock seems harmless.
I think having two different words for epistemic and instrumental rationality would be a feature, not a bug. There’s already plenty of overlap between the two (knowing truths is useful, and can easily be subsumed in a discussion of instrumental rationality), but since they do come into conflict sometimes, it would be very valuable to have a concise way to specify which kind of rationality we’re talking about. This would also make our replacing ‘rationality’ with some other term have a function beyond euphemism treadmilling, which makes it easier to justify to the anti-PR crowd.
But I agree “effective” kind of falls flat. Is there an adjective/noun set derivable from “wins” that doesn’t make us sound like Charlie Sheen? (It can be a protologism.)
Something derived from “success”? If you don’t mind sounding like a self-help guru. “Achievement” if you don’t mind sounding like a primary school teacher. “Optimisation” is pretty accurate but I guess only really works for AI programmers or mathematicians who already have a technical understanding of it.
Huh. I don’t get that connotation at all. OTOH, this is possibly due to me not being a native speaker or consuming unusually little mainstream mass media.
I think the idea of posting multiple comments is good, as long as none of the comments is even a little bit a prerequisite for the others. I personally don’t think it’s worth voting on the idea. (Just try it out for a while and see whether you like it and whether you get any complaints.) I suggest posting the separate comments at the base level so they’ll be in their proper karmic order as independent posts; otherwise you lose most of the value of this approach, and you’ll be testing a different idea than the one you intend to ultimately implement.
I may be reading a non-existent connotation into this line, but to me it pattern matches with the belief that the human mind is a blank slate, as though you would have been rational if you hadn’t been corrupted by society.
Humans are, at bottom, animals, structured around uncritical stimulus-response behavior. It’s mysterious that humans are capable of transcending these things to achieve any sort of global rationality at all. I think that learning to do so is inevitably going to be a lot of hard work, regardless of the stage of life at which one attempts it.
No no no. Not at all. I was obviously less rational as a baby than I am now. But childhood neuroplasticity is a thing; it’s easier to learn languages before age 10, and preferably before age 5. And kids have time. As a kid, when I did competitive swimming, I used to be in the pool >10 hours a week. Now, as an adult, I do taekwondo, and although there are 10 hours of class a week available, I only make 2-3.
I did learn some maladaptive thought patterns: i.e. my social anxiety spiral around “you just don’t have enough natural talent to do X”, and the kicker, “you aren’t good enough.” I know this is a pretty meaningless phrase, but it has emotional power because it’s been around so long.
Ok, thanks for clarifying. I understand.
I’m sympathetic to the points about neuroplasticity and time.
I teach math to exceptionally talented children. Something very exciting about it is that basically no such children have had the chance to be taught by a mathematician who’s a dedicated teacher, so the experiment hadn’t been performed. Some of these children are eager to and capable of learning advanced undergraduate level math at the age of 10 or so, and if they have the chance to, as opposed to withering away in school, the results could be amazing.
I also had a recent shift in perspective such that I now believe that environmental factors when defined very broadly dominate genetic factors by far in determining behavior. I’m 2-4 standard deviations from the mean on a large number of ostensibly independent dimensions. Upon reflection, I realize that these may all be traceable to only ~3-4 ways in which I was unusual genetically, which then interacted and compounded over the course of my life, resulting in me being very different in so many ways. My home, school, etc. may not have been unusual, but I was interacting with the world through a different lens than other people were, with profound consequences.
So yes, I can see how learning rationality at an early age could make a big difference. For my own part, I don’t have the sense of having had to unlearn maladaptive thought patterns (even though I’ve had maladaptive thought patterns) – it’s hard to place a finger on why. I do wish that I had learned these things at a younger age. If I had learned many weak arguments style reasoning in my teens, my emotional well-being would have been significantly higher for ~10 additional years.
On the other hand, you probably have more raw intelligence now.
Yes. But I probably had close to my current raw intelligence at age 15-16, and I was definitely reading hard books at age 8-9.
Kids definitely have more time, but otherwise they don’t necessarily learn languages easier. Or at least, secondary languages.
Wow. This article managed to surprise me. Not the fact that kids aren’t any better than adults at learning things deliberately, class-room style–I suppose I thought they would be worse at this, but better at unstructured learning-from-stuff-happening-around-them. (I suppose I thought this because the way that young children learn to speak a first language isn’t related to, or helped by, classroom instruction). But the fact that kids who started French Immersion in 7th grade are just as good as those who started in kindergarten surprised me a lot. This is a program that deliberately tries to teach less in a structured classroom way, and more the way you would learn a first language. (It doesn’t do it incredibly well, though–I went through French immersion, could read and write competently and speak stiltedly by the end of eighth grade, backslid a bit during high school due to limited class hours in French, and only became fluently bilingual in university when “immersed” among actual Quebecois Francophones.) I had massively more trouble trying to learn a third language, but this is probably mostly because a) it was Chinese (more linguistically unrelated), and b) the time thing–I thought an hour a day was a ridiculous and unrealistic amount of time to spend, and what I actually spent was more like fifteen minutes.
Thank you for the new information!
Some notes/reactions in random order.
First, how do you understand rationality? Can you explain it in a couple of sentences without using links and/or references to lengthier texts?
Second, there are generally reasons for why things happen the way they happen. I don’t want to make an absolute out of that, but if a person’s behavior is seemingly irrational to you, there’s still some reason, maybe understood, maybe not understood, maybe misunderstood why the person behaves that way. Rationality will not necessarily fix that reason.
Third, consider domains like finance or trading. There is immediate, obvious feedback on how successful your decisions/actions were. Moreover, people who are consistently unsuccessful are washed out (because they don’t have any more money to trade/invest). If you define rationality as the way to win, finance and trading should be populated by high-performance rationalists. Does it look like that in real life?
Fourth, on LW at least there is much confounding between rationality and utilitarianism. The idea is that if you’re truly rational you must be a utilitarian. I don’t think so. And I suspect that making rationality popular is going to be much easier without the utilitarian baggage (in particular, that “Spock thing”).
They are not the same thing, but I don’t think they’re orthogonal. I would probably say that your intelligence puts a cap on how rational you can be. People won’t necessarily reach their cap, but it’s very hard to go beyond it. I have strong doubts about stupid people’s capabilities to be rational.
My weak definition of rationality: thinking about your own knowledge and beliefs from an outside perspective and updating/changing them if they are not helpful and don’t make sense (epistemic); noticing your goals, thinking a bit about how to achieve them, and then doing that on purpose to see if it works, while paying attention if it’s not working so you can try something else; thinking about and trying to notice the actual consequences of your actions (instrumental).
Short: epistemic=believing things on purpose, instrumental=doing things on purpose for thought-out reasons.
I say weak because this isn’t a superpower; you can do it without being amazingly good at it (e.g. if you have an IQ of 90). But you can exercise without being amazingly good at any sport, and you still benefit from it. I think that also holds for basic rationality.
In a general sense, yeah. People operate inside causality. But people do things for a reason that they haven’t noticed, haven’t thought about, and might not agree with if they did think about. For example, Bob might find himself well on the path to alcoholism without realizing that his original, harmless-and-normal-seeming craving for a drink in the evening happened because it helped with his insomnia; a problem that could more healthily be addressed by booking a doctor’s appointment. (I pick this example because I recently caught myself in the early stages of this process). But from the inside, it doesn’t feel like the brain is fallible, and so even people who’ve come across research to the contrary feel like their introspection is always correct–let alone people who’ve never seen those ideas. I don’t think the IQ ceiling on understanding and benefiting from “I might be wrong about why I do this” is very high.
Interesting. I’d probably call this self-reflection. I am also wary of the “if they are not helpful and don’t make sense” criterion—it seems to depend way too much on the way a person is primed (aka strong priors). For example, if I am a strongly believing Christian, live in a Christian community, have personal experiences of sensing the godhead, etc. any attempts to explain atheism to me will be met by “not helpful and doesn’t make sense”. And “believing things on purpose” also goes there—the same person purposefully believes in Lord Jesus.
Epistemic rationality should depend on comparison to reality, not to what makes sense to me at the moment.
For instrumental here are some things possibly missing: Cost-benefit analysis. Forecasting consequences of actions. Planning (in particular, long-term planning).
But I don’t know that you can’t find all that on the self-help shelf at B&N...
It’s worth noting that this is different from how CFAR and the Sequences tend to think about rationality. They would say that someone whose beliefs are relatively unreflective and unexamined but more reliably true is more epistemically rational than someone with less reliably true beliefs who has examined and evaluated those beliefs much more carefully. I believe they’d also say that someone who acts with less deliberation and has fewer explicit reasons, but reliably gets better results, is more rational than a more reflective but ineffective individual.
Agreed. And that makes sense as a way to compare a number of individuals at a single point in time. However, if you are starting at rationality level x, and you want to progress to rationality level y over time z, I’m not sure of a better way to do it than to think deliberately about your beliefs and actions. (This may include ‘copying people who appear to do better in life’; that constitutes ‘thinking about your beliefs/goals’). Although there may well be better ways.
Right. I’m making a point about the definition of ‘rationality’, not about the best way to become rational, which might very well be heavily reflective and intellectualizing. The distinction is important because the things we intuitively associate with ‘rationality’ (e.g., explicit reasoning) might empirically turn out not to always be useful, whereas (instrumental) rationality itself is, stipulatively, maximally useful. We want to insulate ourselves against regrets of rationality.
If having accurate beliefs about yourself reliably makes you lose, then those beliefs are (instrumentally) irrational to hold. If deliberating over what to do reliably makes you lose, then such deliberation is (instrumentally) irrational. If reflecting on your preferences and coming to understand your goals better reliably makes you lose, then such practices are (instrumentally) irrational.
Agreed that it’s a good distinction to make.
Rationality decomposes into instrumental rationality (‘winning’, or effectiveness; reliably achieving one’s goals) and epistemic rationality (accuracy; reliably forming true beliefs).
How do you understand ‘utilitarianism’? What are the things about it that you think are unimportant or counterproductive for systematic rationality? (I’ll hold off on asking about what things make utilitarianism unpopular or difficult to market, for the moment.)
And this need popularizing? You mean you’ll tell people “I can teach you how the world really works and how to win” and they run away screaming “Nooooo!” ? :-D
If I said that to some random stranger, I wouldn’t expect “noooooooo”, but I might expect “get in line”.
Yes, probably for most people. First, it sounds arrogant. Second, people underestimate the possibility of dramatically improved instrumental rationality. Third, a lot of people underestimate the desirability of dramatically improved epistemic rationality, and it’s especially hard to recognize that there are falsehoods you think you know. (As opposed to thinking there are truths you don’t know, which is easier to recognize but much more trivial.)
But that’s missing the point, methinks. Even if offering to teach people those things in the abstract were the easiest sell in the world, the specific tricks that actually add up to rationality are often difficult sells, and even when they’re easy sells in principle there’s insufficient research on how best to make that sell, and insufficient funding into, y’know, making it.
How do you know that people “underestimate the desirability of dramatically improved epistemic rationality”?
Yvain (and others) have argued that people around here make precisely the opposite mistake.
Or maybe there’s a lot of utility in not coming across as geeky and selfish, so they are already being instrumentally rational.
First, actually, comes credibility. You want to teach me how the world really works? Prove to me that your views are correct and not mistaken. You want to teach me how to win? Show me a million bucks in your bank account.
Keep in mind that in non-LW terms you’re basically trying to teach people logic and science. The idea that by teaching common people logic and science the world can be made a better place is a very old idea, probably dating back to the Enlightenment. It wasn’t an overwhelming success. It is probably an interesting and relevant question why not.
Considering that, between then and now, we’ve had an Industrial Revolution in addition to many political ones, maybe it actually was?
I agree; this is an idea I would like to hear someone else’s opinion on. My intuition is that teaching people logic and science has nothing to do with making them better people; at most, it makes them more effective at whatever they want to do. Trying to teach “being a better person” has been attempted (for thousands of years in religious organizations), but maybe not enough in the same places/times as teaching science.
Also, the study of cognitive biases and how intuitions can be wrong is much more recent than the Enlightenment. Thinking that you know science and all your thoughts that feel right are right is dangerous.
Correct, but that’s what spreading the rationality into the masses aims to accomplish, no?
I don’t think teaching people rationality implies giving them a new and improved value system.
I guess CFAR should let Peter Thiel teach in their workshops. Or, more seriously, use his name (assuming he agrees with this) when promoting their workshops.
When I think about this more, there is a deeper objection. Something like this: “So, you believe you are super rational and super winning. And you are talking to me, although I don’t believe you, so you are wasting your time. Is that really the best use of your time? Why don’t you do something… I don’t know exactly what… but something thousand times more useful now, instead of talking to me? Because this kind of optimization is precisely the one you claim to be good at; obviously you’re not.”
And this is an objection that makes sense. I mean, it’s like if someone is trying to convince me that if I invest my money in his plan, my money will double… it’s not that I don’t believe in the possibility of doubling the money; it’s more like: why doesn’t this guy double his own money instead? -- Analogously, if you have superpowers that allow you to win, then why the heck are you not winning right now instead of talking to me?
EDIT: I guess we should reflect on our actions when we are trying to convince someone else about usefulness of rationality. I mean, if someone resists the idea of LW-style rationality, is it rational (is it winning on average) to spend my time trying to convince them, or should I just say “next” and move to another person? I mean, there are seven billion people on this planet, half million in the city where I live, so if one person does not like this idea, it’s not like my efforts are doomed… but I may doom them by wasting all my energy and time on trying to convince this specific person. Some people just aren’t interested, and that’s it.
Yep, that’s a valid and serious objection, especially in the utilitarian context.
A couple of ways to try to deal with it: (a) point out the difference between instrumentality and goals. (b) point out that rationality is not binary but a spectrum, it’s not a choice between winning all and winning nothing.
You can probably also reformulate the whole issue as “help you to deal with life’s problems—let me show you how you can go about it without too much aggravation and hassle”...
I agree that’s an interesting and important question. If we’re looking for vaguely applicable academic terms for what’s being taught, ‘philosophy, mathematics and science’ is a better fit than ‘logic and science’, since it’s not completely obvious to me that traditional logic is very important to what we want to teach to the general public. A lot of the stuff it’s being proposed we teach is still poorly understood, and a lot of the well-understood stuff was not well-understood a hundred years ago, or even 50 years ago, or even 25 years ago. So history is a weak guide here; Enlightenment reformers shared a lot of our ideals but very little of our content.
I don’t agree. You want to teach philosophy as rationality? There are many different philosophies; which one will you teach? Or will you teach the history of philosophy? Or meta-philosophy (which very quickly becomes yet-another-philosophy-in-the-long-list-of-those-which-tried-to-be-meta)?
And I really don’t see what math has to do with this at all. If anything, statistics is going to be more useful than math in general, because statistics is basically a toolbox for dealing with uncertainty, and that’s the really important part.
Philosophy includes epistemology, which is kind of important to epistemic rationality.
Philosophy is a toolbox as well as a set of doctrines.
Various philosophies include different approaches to epistemology. Which one do you want to teach?
I agree that philosophy can be a toolbox, but so can pretty much any field of human study—from physics to literary criticism. And here we’re talking about teaching rationality, not about the virtues of a comprehensive education.
The Enlightenment? Try Ancient Greece.
I don’t think the Greeks aimed to teach hoi polloi logic and science. They were the province of a select group of philosophers.
(Pedantic upvote for not saying “the hoi polloi”.)
In the usual way: a system of normative morality which focuses on outcomes (as opposed to means) and posits that the moral outcome is the one that maximizes utility (usually understood as happiness providing positive utility and suffering/unhappiness providing negative utility).
Something you might work from to get an elevator pitch on the Spock problem: “Rationality = Ratios = Relative. Generally involves becoming less ‘logical’ (arguing).”
(yes, I know this isn’t actually correct, but you have to start somewhere, and I’m not good enough with words to take it further)
Suggestion: teach rationality as an open spirit of enquiry, not as a secular religion that will turn you into a clone of Richard Dawkins.
Instead of downvoting, maybe we should be asking what within the LW community caused Peterdjones to say that?
How about “in addition to”? I don’t know anyone on LW who’s been turned into a Dawkins clone.
I am not sure that being a Dawkins clone is a completely bad thing either.