Some notes/reactions in random order.
First, how do you understand rationality? Can you explain it in a couple of sentences without using links and/or references to lengthier texts?
Second, there are generally reasons why things happen the way they happen. I don’t want to make an absolute out of that, but if a person’s behavior seems irrational to you, there is still some reason (maybe understood, maybe not, maybe misunderstood) why the person behaves that way. Rationality will not necessarily fix that reason.
Third, consider domains like finance or trading. There is immediate, obvious feedback on how successful your decisions/actions were. Moreover, people who are consistently unsuccessful are washed out (because they don’t have any more money to trade/invest). If you define rationality as the way to win, finance and trading should be populated by high-performance rationalists. Does it look like that in real life?
Fourth, on LW at least there is a lot of conflation of rationality and utilitarianism. The idea is that if you’re truly rational you must be a utilitarian. I don’t think so. And I suspect that making rationality popular will be much easier without the utilitarian baggage (in particular, that “Spock thing”).
intelligence and rationality are, in theory, orthogonal, or at least not the same thing.
They are not the same thing, but I don’t think they’re orthogonal. I would probably say that your intelligence puts a cap on how rational you can be. People won’t necessarily reach that cap, but it’s very hard to go beyond it. I have strong doubts about stupid people’s capacity to be rational.
My weak definition of rationality: thinking about your own knowledge and beliefs from an outside perspective and updating/changing them if they are not helpful and don’t make sense (epistemic); noticing your goals, thinking a bit about how to achieve them, then deliberately trying that to see if it works, while paying attention to whether it is working so you can try something else if it isn’t; and thinking about and trying to notice the actual consequences of your actions (instrumental).
Short: epistemic=believing things on purpose, instrumental=doing things on purpose for thought-out reasons.
I say weak because this isn’t a superpower; you can do it without being amazingly good at it (e.g. if you have an IQ of 90). But you can exercise without being amazingly good at any sport, and you still benefit from it. I think the same holds for basic rationality.
if a person’s behavior seems irrational to you, there is still some reason (maybe understood, maybe not, maybe misunderstood) why the person behaves that way.
In a general sense, yeah. People operate inside causality. But people also do things for reasons they haven’t noticed, haven’t thought about, and might not agree with if they did think about them. For example, Bob might find himself well on the path to alcoholism without realizing that his original, harmless-and-normal-seeming craving for a drink in the evening happened because it helped with his insomnia, a problem that could more healthily be addressed by booking a doctor’s appointment. (I pick this example because I recently caught myself in the early stages of this process.) But from the inside, it doesn’t feel like the brain is fallible, so even people who’ve come across research to the contrary feel like their introspection is always correct, let alone people who’ve never seen those ideas. I don’t think the IQ threshold for understanding and benefiting from “I might be wrong about why I do this” is very high.
thinking about your own knowledge and beliefs from an outside perspective
Interesting. I’d probably call this self-reflection. I am also wary of the “if they are not helpful and don’t make sense” criterion: it seems to depend far too much on how a person is primed (aka strong priors). For example, if I am a devout Christian, live in a Christian community, have personal experiences of sensing the godhead, etc., any attempt to explain atheism to me will be met with “not helpful and doesn’t make sense”. And “believing things on purpose” goes the same way: the same person purposefully believes in Lord Jesus.
Epistemic rationality should depend on comparison to reality, not to what makes sense to me at the moment.
For instrumental rationality, here are some things possibly missing: cost-benefit analysis, forecasting the consequences of actions, and planning (in particular, long-term planning). (A toy sketch of the cost-benefit point follows below.)
But I don’t know that you can’t find all that on the self-help shelf at B&N...
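Purely as an illustration of the cost-benefit/forecasting point above (nothing here comes from the thread; the options and every number are invented), a minimal expected-value comparison might look like this:

```python
# Toy cost-benefit analysis: compare options by probability-weighted payoff.
# Every option and number below is invented purely for illustration.

options = {
    "ask for a raise": [(0.3, 5000), (0.7, 0)],      # (probability, payoff)
    "switch jobs":     [(0.5, 8000), (0.5, -2000)],  # upside and a real downside
    "do nothing":      [(1.0, 0)],
}

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):.0f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("highest expected value:", best)
```

Real planning is messier (long horizons, non-monetary costs, uncertainty about the probabilities themselves), but writing down outcomes, probabilities, and payoffs is the skeleton of the “forecasting consequences” step.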
Short: epistemic=believing things on purpose, instrumental=doing things on purpose for thought-out reasons.
It’s worth noting that this is different from how CFAR and the Sequences tend to think about rationality. They would say that someone whose beliefs are relatively unreflective and unexamined but more reliably true is more epistemically rational than someone with less reliably true beliefs who has examined and evaluated those beliefs much more carefully. I believe they’d also say that someone who acts with less deliberation and has fewer explicit reasons, but reliably gets better results, is more rational than a more reflective but ineffective individual.
Agreed. And that makes sense as a way to compare a number of individuals at a single point in time. However, if you are starting at rationality level x, and you want to progress to rationality level y over time z, I’m not sure of a better way to do it than to think deliberately about your beliefs and actions. (This may include ‘copying people who appear to do better in life’; that constitutes ‘thinking about your beliefs/goals’). Although there may well be better ways.
Right. I’m making a point about the definition of ‘rationality’, not about the best way to become rational, which might very well be heavily reflective and intellectualizing. The distinction is important because the things we intuitively associate with ‘rationality’ (e.g., explicit reasoning) might empirically turn out not to always be useful, whereas (instrumental) rationality itself is, stipulatively, maximally useful. We want to insulate ourselves against regrets of rationality.
If having accurate beliefs about yourself reliably makes you lose, then those beliefs are (instrumentally) irrational to hold. If deliberating over what to do reliably makes you lose, then such deliberation is (instrumentally) irrational. If reflecting on your preferences and coming to understand your goals better reliably makes you lose, then such practices are (instrumentally) irrational.
Agreed that it’s a good distinction to make.
Rationality decomposes into instrumental rationality (‘winning’, or effectiveness; reliably achieving one’s goals) and epistemic rationality (accuracy; reliably forming true beliefs).
How do you understand ‘utilitarianism’? What are the things about it that you think are unimportant or counterproductive for systematic rationality? (I’ll hold off on asking about what things make utilitarianism unpopular or difficult to market, for the moment.)
And this needs popularizing? You mean you’ll tell people “I can teach you how the world really works and how to win” and they’ll run away screaming “Nooooo!”? :-D
If I said that to some random stranger, I wouldn’t expect “noooooooo”, but I might expect “get in line”.
Yes, probably for most people. First, it sounds arrogant. Second, people underestimate the possibility of dramatically improved instrumental rationality. Third, a lot of people underestimate the desirability of dramatically improved epistemic rationality, and it’s especially hard to recognize that there are falsehoods you think you know. (As opposed to thinking there are truths you don’t know, which is easier to recognize but much more trivial.)
But that’s missing the point, methinks. Even if offering to teach people those things in the abstract were the easiest sell in the world, the specific tricks that actually add up to rationality are often difficult sells, and even when they’re easy sells in principle there’s insufficient research on how best to make that sell, and insufficient funding into, y’know, making it.
How do you know that people “underestimate the desirability of dramatically improved epistemic rationality”?
Yvain (and others) have argued that people around here make precisely the opposite mistake.
Or maybe there’s a lot of utility in not coming across as geeky and selfish, so they are already being instrumentally rational.
First, actually, comes credibility. You want to teach me how the world really works? Prove to me that your views are correct and not mistaken. You want to teach me how to win? Show me a million bucks in your bank account.
Keep in mind that in non-LW terms you’re basically trying to teach people logic and science. The idea that by teaching common people logic and science the world can be made a better place is a very old one, probably dating back to the Enlightenment. It wasn’t an overwhelming success. It is probably an interesting and relevant question why not.
Considering that, between then and now, we’ve had an Industrial Revolution in addition to many political ones, maybe it actually was?
It is probably an interesting and relevant question why not.
I agree; this is an idea I would like to hear someone else’s opinion on. My intuition is that teaching people logic and science has nothing to do with making them better people; at most, it makes them more effective at whatever they already want to do. Teaching “being a better person” has been attempted (for thousands of years, in religious organizations), but maybe not enough in the same places and times as teaching science.
Also, the study of cognitive biases and of how intuitions can be wrong is much more recent than the Enlightenment. Thinking that you know science, and that all the thoughts which feel right to you are right, is dangerous.
Correct, but that’s what spreading rationality to the masses aims to accomplish, no?
I don’t think teaching people rationality implies giving them a new and improved value system.
You want to teach me how to win? Show me a million bucks in your bank account.
I guess CFAR should let Peter Thiel teach at their workshops. Or, more seriously, use his name (assuming he agrees) when promoting their workshops.
When I think about this more, there is a deeper objection. Something like this: “So, you believe you are super rational and super winning. And you are talking to me, even though I don’t believe you, so you are wasting your time. Is that really the best use of your time? Why don’t you do something... I don’t know exactly what... but something a thousand times more useful right now, instead of talking to me? Because this kind of optimization is precisely what you claim to be good at; obviously you’re not.”
And this is an objection that makes sense. I mean, it’s like someone trying to convince me that if I invest my money in his plan, my money will double... it’s not that I don’t believe in the possibility of doubling money; it’s more like: why doesn’t this guy double his own money instead? Analogously, if you have superpowers that allow you to win, then why the heck are you not winning right now instead of talking to me?
EDIT: I guess we should reflect on our own actions when we are trying to convince someone else of the usefulness of rationality. I mean, if someone resists the idea of LW-style rationality, is it rational (is it winning, on average) to spend my time trying to convince them, or should I just say “next” and move on to another person? There are seven billion people on this planet and half a million in the city where I live, so if one person does not like the idea, it’s not as if my efforts are doomed... but I may doom them by wasting all my energy and time trying to convince this specific person. Some people just aren’t interested, and that’s it.
Yep, that’s a valid and serious objection, especially in the utilitarian context.
A couple of ways to try to deal with it: (a) point out the difference between instrumentality and goals; (b) point out that rationality is not binary but a spectrum, so it’s not a choice between winning everything and winning nothing.
You can probably also reformulate the whole issue as “helping you deal with life’s problems: let me show you how you can go about it without too much aggravation and hassle”...
I agree that’s an interesting and important question. If we’re looking for vaguely applicable academic terms for what’s being taught, ‘philosophy, mathematics and science’ is a better fit than ‘logic and science’, since it’s not completely obvious to me that traditional logic is very important to what we want to teach to the general public. A lot of the stuff it’s being proposed we teach is still poorly understood, and a lot of the well-understood stuff was not well-understood a hundred years ago, or even 50 years ago, or even 25 years ago. So history is a weak guide here; Enlightenment reformers shared a lot of our ideals but very little of our content.
I don’t agree. You want to teach philosophy as rationality? There are a great many different philosophies; which one will you teach? Or will you teach the history of philosophy? Or meta-philosophy (which very quickly becomes yet-another-philosophy-in-the-long-list-of-those-which-tried-to-be-meta)?
And I really don’t see what math has to do with this at all. If anything, statistics is going to be more useful than math because statistics is basically a toolbox for dealing with uncertainty and that’s the really important part.
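As a concrete, toy illustration of what “a toolbox for dealing with uncertainty” can mean at its most basic (the hypothesis and every number below are made up for the example), here is a single Bayesian update:

```python
# Minimal Bayesian update: revise one belief after one piece of evidence.
# All probabilities here are invented purely for illustration.

prior = 0.30                # P(hypothesis) before seeing the evidence
p_evidence_if_true = 0.80   # P(evidence | hypothesis)
p_evidence_if_false = 0.20  # P(evidence | not hypothesis)

# Total probability of seeing the evidence at all (law of total probability).
p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)

# Bayes' theorem: P(hypothesis | evidence).
posterior = p_evidence_if_true * prior / p_evidence

print(f"belief moved from {prior:.2f} to {posterior:.2f} after the evidence")
```

The arithmetic is trivial; the useful habit is noticing that how far you should move depends on how much more likely the evidence is if the hypothesis is true than if it is false.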
Philosophy includes epistemology, which is kind of important to epistemic rationality.
Philosophy is a toolbox as well as a set of doctrines.
Various philosophies include different approaches to epistemology. Which one do you want to teach?
I agree that philosophy can be a toolbox, but so can pretty much any field of human study—from physics to literary criticism. And here we’re talking about teaching rationality, not about the virtues of a comprehensive education.
The Enlightenment? Try Ancient Greece.
I don’t think the Greeks aimed to teach hoi polloi logic and science. They were the province of a select group of philosophers.
(Pedantic upvote for not saying “the hoi polloi”.)
In the usual way: a system of normative morality which focuses on outcomes (as opposed to means) and posits that the moral outcome is the one that maximizes utility (usually understood as happiness providing positive utility and suffering/unhappiness providing negative utility).
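Purely as an illustration of that definition (the notation is mine, not anything from the thread): if A is the set of available actions and U_i is the utility person i gets from an outcome, the classic aggregative reading picks

$$ a^{*} = \arg\max_{a \in A} \sum_{i} U_i\big(\mathrm{outcome}(a)\big), $$

i.e. the action whose outcome maximizes total utility, with happiness counting positively and suffering negatively.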