I suspect you are right; the issue isn’t that these people haven’t “learned” relevant abstractions or tools. They just don’t have enough incentive to apply those tools in these contexts. I’m not sure you can “teach” incentives, so I’m not sure there is anything you can teach which will achieve the stated goal. So I’d ask the question: how can we give people incentives to apply their tools to cases like religion?
It’s not incentive either. I have plenty of incentive, and so do my students. It’s simply that we don’t notice our beliefs as beliefs, if they’re already in our heads. (As opposed to the situation when vetting input that’s proposed as a new belief.)
Since we don’t have any kind of built-in function for listing ALL the beliefs involved in a given decision, we are often unaware of the key beliefs that are keeping us stuck in a particular area. We sit there listing all the “beliefs” we can think of, while the single most critical belief in that area isn’t registering as a “belief” at all; it just fades in as part of our background assumptions. To us, it’s something like “water is wet”—sure it’s a belief, but how could it possibly be relevant to our problem?
Usually, an irrational fear associated with something like, “but how will I pay the bills?” masquerades as simple, factual logic. But the underlying emotional belief is usually something more like, “If I don’t pay the bills, then I’m an irresponsible person and no-one will love me.” The underlying belief is invisible because we don’t look underneath the “logic” to find the emotion hiding underneath.
Unfortunately, all reasoning is motivated reasoning, which means that to find your irrational beliefs in a given area, you have to first dig up a nontrivial number of rationalizations… knowing that the rationalization you’re looking for is probably something you specifically created to prevent you from thinking about the motivation involved in the first place! (After all, revealing to others that you think you’re irresponsible isn’t good genetic fitness… and if you know, that makes it more likely you’ll unintentionally reveal it.)
A simple tool, by the way, for digging up the motivation behind seemingly “factual” statements and beliefs is to ask, “And what’s bad about that?” or “And what’s good about that?”… usually followed by, “And what does that say/mean about YOU?” You pretty quickly discover that nearly everything in the universe revolves around you. ;-)
I’d say there’re two problems: one is incentives, as you say; the other is making “apply these tools to your own beliefs” a natural affordance for people—something that just springs to mind as a possibility, the way drinking a glass of liquid springs to mind on seeing it (even when you’re not thirsty, or when the glass contains laundry detergent).
Regarding incentives: good question. If rationality does make people’s lives better, but it makes their lives better in ways that aren’t obvious in prospect, we may be able to “teach” incentives by making the potential benefits of rationality more obvious to the person’s “near”-thinking system, so that the potential benefits can actually pull their behavior. (Humans are bad enough at getting to the gym, switching to more satisfying jobs in cases where this requires a bit of initial effort, etc., that people’s lack of acted-on motivation to apply rationality to religion does not strongly imply a lack of incentives to do so.)
Regarding building a “try this on your own beliefs” affordance (so that The Bottom Line or other techniques just naturally spring to mind): Cognitive-Behavioral Therapy people explicitly teach the “now apply this method to your own beliefs, as they come up” steps, and then have people practice those steps as homework. We should do this with rationality as well (even in Eliezer’s scenario where we skip mention of religion). The evidence for CBT’s effectiveness is fairly good AFAICT; it’s worth studying their teaching techniques.
Great! Links?
I think there’s a question of understanding here, not just incentives. The knowledge of minds as cognitive engines, or of the principle of the bottom line, is the knowledge that in full generality you can’t draw an accurate map of a city without seeing it or having some other kind of causal interaction with it. This is one of the things that readers have cited as the most important thing they learned from my writing on OB. And it’s the difference between being told an equation in school to use on a particular test, versus knowing under what (extremely general) real-world conditions you can derive it.
Like the difference between being told that gravitational acceleration is 9.8 m/s^2 and being able to use that to answer written questions about gravity on a test, or maybe even predict the fall of clocks off a tower, but never thinking to apply this to anything except gravity. Versus being able to do and visualize the two steps of integral calculus that get you from constant acceleration A to (1/2) A t^2, which is much more general than gravity.
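Concretely, the two steps (written here as a sketch, assuming the object starts at rest at the origin) are just:

```latex
% Sketch of the two integration steps, assuming v(0) = 0 and x(0) = 0:
v(t) = \int_0^t A \, d\tau = A t
\qquad
x(t) = \int_0^t A \tau \, d\tau = \tfrac{1}{2} A t^2
```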
If you knew on a gut level—as knowledge—that you couldn’t draw a map of a city without looking at it, I think the issue of incentives would be a lot mooter. There might still be incentives whether or not to communicate that understanding, whether or not to talk to others about it, etc., but on a gut level, you yourself would just know.
Even if you “just know”, this doesn’t grant you the ability to perform an instantaneous search-and-replace on the entire contents of your own brain.
Think of the difference between copying code and function invocation. If the function is defined in one place and then reused, you can certainly make one change and get a multitude of benefits from doing so.
However, this relies on the original programmer having recognized the pattern, and then consistently using a single abstraction throughout the code. But in practice, we usually learn variations on a theme before we learn the theme itself, and don’t always connect all our variations.
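A minimal sketch of that difference (all names here are hypothetical): the shared function needs exactly one fix, while each copy-pasted variation has to be hunted down and edited on its own.

```python
# Copy-paste style: the same rounding bug is baked into every copy,
# so fixing it means finding and editing each site separately.
def monthly_rent_total(amounts):
    return int(sum(amounts))      # truncates instead of rounding

def monthly_grocery_total(amounts):
    return int(sum(amounts))      # same bug, re-recorded here

# Function-invocation style: the pattern was recognized once, so a
# single fix propagates to every caller.
def total(amounts):
    return round(sum(amounts))    # the fix lives in exactly one place

def rent_total(amounts):
    return total(amounts)

def grocery_total(amounts):
    return total(amounts)
```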
And this limitation applies equally to our declarative and procedural memories. If there’s not a shared abstraction in use, you have to search-and-replace… and the brain doesn’t have very many “indexes” you can use to do the searching with—you’re usually limited to searching by sensory information (which can include emotional responses, fortunately), or by existing abstractions. (“Off-index” or “table scan” searches are slower and unlikely to be complete, anyway—think of trying to do a search and replace on uses of the “visitor” pattern, where each application has different method names, none of which include “visit” or use “Visitor” in a class name!)
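To make the “off-index” point concrete, here is a contrived sketch (hypothetical names throughout): two traversals with the same visitor-like shape that share no abstraction and no naming convention, so a textual search for “visit” or “Visitor” finds neither, and there is no single definition to fix once.

```python
from dataclasses import dataclass, field

@dataclass
class Node:                        # hypothetical tree type, for illustration only
    label: str
    value: int
    children: list["Node"] = field(default_factory=list)

# Both functions walk the tree in the same visitor-like way, but they
# share no abstraction and no naming convention, so searching for
# "visit" or "Visitor" turns up neither of them.
def render(node: Node) -> None:
    for child in node.children:
        render(child)
    print(node.label)

def subtotal(node: Node) -> int:
    return node.value + sum(subtotal(child) for child in node.children)
```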
It seems to me that yours and Robin’s view of minds still contains some notion of a “decider”—that there’s some part of you that can just look and see something’s wrong and then refuse to execute that wrongness.
But if mind is just a self-modifying program, then not only are we subject to getting things wrong, we’re also subject to recording that wrongness, and perpetuating it in a variety of ways… recapitulating the hardware wrongs on a software level, in other words.
And so, while you seem to be saying, “if people were better programmers, they’d write better code”… it seems to me you’re leaving out the part where becoming a better programmer has NO effect...
On all the code you’ve already written.
Think of what you might have said to Kurt Gödel: he was a theist. (And not a dualist: he thought materialism was wrong.) In fact he believed the world is rational, and that it is a Leibnizian monadology with God as the central monad. He was certainly NOT guilty of failing to apply Eliezer’s list of “technical, explicit understandings,” as far as I can see. I should point out that he separated the question of religions from religion itself: “Religions are, for the most part, bad—but religion is not.” (Gödel in Wang, 1996.)