It seems to me that the principal issue is that, even if you know all those things… that doesn’t guarantee that you’re actually applying them to your own beliefs or thought processes. There is no “view source” button for the brain, nor even a way to get a stack trace of how you arrived at a particular conclusion… and even if there were, most of us, most of the time, would not push the button or look at the trace, if we were happy with our existing/expected results.
In addition, most people are astonishingly bad at reasoning from the general to the specific… which means that if you don’t mention religion explicitly in your hypothetical course, very few people will actually apply the skills in a religious context… especially if that part of their life is working out just fine, from their point of view.
It may be fictional evidence, but I think S.P. Somtow’s idea that “The breaking of joy is the beginning of wisdom” has some applicability here… as even highly motivated individuals have trouble learning to see their beliefs as beliefs, and therefore as subject to the skills of rationality.
That is, if you think something is part of the territory, you’re not going to apply what you think of as map-reading skills to it.
Hm, in fact, here’s an interesting example. One of my students in the Mind Hackers’ Guild just posted to our forum, complaining that by eliminating all his negative motivation regarding work, he now had no positive motivation either. But it was not apparent to him that the very fact that he considered this a problem was itself an example of negative motivation.
That’s because even though I teach people that ALL negative motivation is counterproductive for achieving long-term, directional goals (as opposed to very short-term or avoidance goals), people still assume that “negative motivation” means “motivations I don’t like, or already know are irrational”… and so they make exceptions for all the things they think are “just the way it is”. (Like in this man’s case, an irrational fear linked to his need to “pay the bills”.)
And this happens routinely, no matter how explicitly and repeatedly I state, “No, you have to include those too.” It seems that people have to go through the process at least once or twice, with someone pointing one of these out, before they “get it” that those other motivations also “count”.
Heck, truth be told, I still sometimes take a while to find what hidden assumption in my thinking is leading to interference… even at times when I’d happily push the “view source” button or look at the stack trace… if only that were possible.
But since I routinely and trivially notice these map-territory confusions when my students make them, even without a view-source button (heck, I can spot them from just a few words in the middle of their forum posts!), I have to conclude that something innate is at issue, beyond my simply not being a good enough teacher. After all, if I can spot these things in them but not in myself, there must be some sort of bias at work.
I suspect you are right; the issue isn’t that these people haven’t “learned” the relevant abstractions or tools. They just don’t have enough incentive to apply those tools in these contexts. I’m not sure you can “teach” incentives, so I’m not sure there is anything you can teach that will achieve the stated goal. So I’d ask the question: how can we give people incentives to apply their tools to cases like religion?
It’s not incentives either. I have plenty of incentive, and so do my students. It’s simply that we don’t notice our beliefs as beliefs if they’re already in our heads. (As opposed to the situation when vetting input that’s proposed as a new belief.)
Since we don’t have any kind of built-in function for listing ALL the beliefs involved in a given decision, we are often unaware of the key beliefs that are keeping us stuck in a particular area. We sit there listing all the “beliefs” we can think of, while the single most critical belief in that area isn’t registering as a “belief” at all; it just blends into our background assumptions. To us, it’s something like “water is wet”: sure, it’s a belief, but how could it possibly be relevant to our problem?
Usually, an irrational fear associated with something like “but how will I pay the bills?” masquerades as simple, factual logic. But the underlying emotional belief is usually something more like, “If I don’t pay the bills, then I’m an irresponsible person and no one will love me.” That underlying belief is invisible because we don’t look beneath the “logic” to find the emotion hiding underneath.
Unfortunately, all reasoning is motivated reasoning, which means that to find your irrational beliefs in a given area, you first have to dig up a nontrivial number of rationalizations… knowing that the rationalization you’re looking for is probably something you specifically created to prevent yourself from thinking about the motivation involved in the first place! (After all, revealing to others that you think you’re irresponsible isn’t good for genetic fitness… and if you consciously know it, you’re more likely to reveal it unintentionally.)
A simple tool, by the way, for digging up the motivation behind seemingly “factual” statements and beliefs is to ask, “And what’s bad about that?” or “And what’s good about that?”… usually followed by, “And what does that say/mean about YOU?” You pretty quickly discover that nearly everything in the universe revolves around you. ;-)
I’d say there are two problems: one is incentives, as you say; the other is making “apply these tools to your own beliefs” a natural affordance for people, something that just springs to mind as a possibility, the way drinking from a glass of liquid springs to mind on seeing it (even when you’re not thirsty, or when the glass contains laundry detergent).
Regarding incentives: good question. If rationality does make people’s lives better, but in ways that aren’t obvious in prospect, we may be able to “teach” incentives by making the potential benefits of rationality more obvious to the person’s “near”-thinking system, so that those benefits can actually pull their behavior. (Humans are bad enough at getting to the gym, switching to more satisfying jobs when that requires a bit of initial effort, etc., that people’s lack of acted-on motivation to apply rationality to religion does not strongly imply a lack of incentives to do so.)
Regarding building a “try this on your own beliefs” affordance (so that The Bottom Line or other techniques just naturally spring to mind): cognitive-behavioral therapy (CBT) practitioners explicitly teach the “now apply this method to your own beliefs, as they come up” steps, and then have people practice those steps as homework. We should do this with rationality as well (even in Eliezer’s scenario where we skip mention of religion). The evidence for CBT’s effectiveness is fairly good AFAICT; it’s worth studying their teaching techniques.
Great! Links?
I think there’s a question of understanding here, not just incentives. The knowledge of minds as cognitive engines, or of the principle of the bottom line, is the knowledge that in full generality you can’t draw an accurate map of a city without seeing it or having some other kind of causal interaction with it. This is one of the things that readers have cited as the most important thing they learned from my writing on OB. And it’s the difference between being told an equation in school to use on a particular test, versus knowing under what (extremely general) real-world conditions you can derive it.
Like the difference between being told that gravitational acceleration is 9.8 m/s^2 and being able to use that to answer written questions about gravity on a test, or maybe even predict the fall of clocks off a tower, but never thinking to apply it to anything except gravity. Versus being able to do and visualize the two steps of integral calculus that get you from constant acceleration A to (1/2) A t^2, which is much more general than gravity.
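To make those two steps concrete, here they are written out as a worked sketch, assuming the object starts at rest at the origin:

```latex
% Two integration steps from constant acceleration A to (1/2) A t^2,
% assuming the object starts at rest at the origin.
v(t) = \int_0^t A \, d\tau = A\,t
\qquad
x(t) = \int_0^t v(\tau)\, d\tau = \int_0^t A\,\tau \, d\tau = \tfrac{1}{2} A t^2
```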
If you knew on a gut level—as knowledge—that you couldn’t draw a map of a city without looking at it, I think the issue of incentives would be a lot mooter. There might still be incentives whether or not to communicate that understanding, whether or not to talk to others about it, etc., but on a gut level, you yourself would just know.
Even if you “just know”, this doesn’t grant you the ability to perform an instantaneous search-and-replace on the entire contents of your own brain.
Think of the difference between copying code and invoking a function. If the function is defined in one place and then reused, you can make one change and get a multitude of benefits from doing so.
However, this relies on the original programmer having recognized the pattern and then consistently used a single abstraction throughout the code. But in practice, we usually learn variations on a theme before we learn the theme itself, and we don’t always connect all our variations.
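Here is a minimal, hypothetical sketch of that difference; the tax-rate example and function names are invented purely for illustration:

```python
# Copied code: the same rule pasted into several places.  Changing the rule
# means finding and editing every copy, and a missed copy silently keeps
# the old behavior.
def receipt_total(prices):
    return sum(p * 1.05 for p in prices)      # tax rate duplicated here...

def refund_amount(price):
    return price * 1.05                       # ...and here

# Function invocation: the rule is defined once and reused, so one change
# benefits every caller.
TAX_RATE = 1.05

def with_tax(price):
    return price * TAX_RATE

def receipt_total_v2(prices):
    return sum(with_tax(p) for p in prices)

def refund_amount_v2(price):
    return with_tax(price)
```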
And this limitation applies equally to our declarative and procedural memories. If there isn’t a shared abstraction in use, you have to search and replace… and the brain doesn’t have very many “indexes” you can use to do the searching: you’re usually limited to searching by sensory information (which, fortunately, can include emotional responses) or by existing abstractions. (“Off-index” or “table scan” searches are slower and unlikely to be complete anyway; think of trying to do a search-and-replace on uses of the “visitor” pattern when each application has different method names, none of which include “visit” or use “Visitor” in a class name!)
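To make the visitor example concrete, here is a hypothetical sketch of two independently written uses of the pattern; the class and method names are invented, and since neither vocabulary mentions “visit”, a textual search for the pattern finds nothing to index:

```python
# One ad-hoc "visitor": payroll rules dispatched over employee types.
class SalariedEmployee:
    def apply_rule(self, rule):
        return rule.handle_salaried(self)

class HourlyEmployee:
    def apply_rule(self, rule):
        return rule.handle_hourly(self)

class AnnualRaise:
    def handle_salaried(self, emp):
        return "increase salary by 3%"
    def handle_hourly(self, emp):
        return "increase hourly rate by 3%"

# Another, written elsewhere with a different vocabulary: reports dispatched
# over filesystem nodes.  Same double-dispatch structure, no shared names.
class FileNode:
    def run_report(self, report):
        return report.on_file(self)

class DirNode:
    def run_report(self, report):
        return report.on_dir(self)

class SizeReport:
    def on_file(self, node):
        return "count this file"
    def on_dir(self, node):
        return "recurse into this directory"

# Usage:
#   SalariedEmployee().apply_rule(AnnualRaise())  -> "increase salary by 3%"
#   FileNode().run_report(SizeReport())           -> "count this file"
# A search for "visit" or "Visitor" matches neither hierarchy, even though
# both are instances of the same pattern.
```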
It seems to me that your and Robin’s views of minds still contain some notion of a “decider”: some part of you that can just look, see that something’s wrong, and then refuse to execute that wrongness.
But if the mind is just a self-modifying program, then not only are we subject to getting things wrong, we’re also subject to recording that wrongness and perpetuating it in a variety of ways… recapitulating the hardware’s wrongness at the software level, in other words.
And so, while you seem to be saying, “if people were better programmers, they’d write better code”… it seems to me you’re leaving out the part where becoming a better programmer has NO effect...
On all the code you’ve already written.
Think of something you might have said to Kurt Gödel: he was a theist. (And not a dualist: he thought materialism is wrong.) In fact, he believed that the world is rational and that it is a Leibnizian monadology with God as the central monad. He was certainly NOT guilty of failing to apply Eliezer’s list of “technical, explicit understandings,” as far as I can see. I should point out that he distinguished religion from religions: “Religions are, for the most part, bad—but religion is not.” (Gödel in Wang, 1996.)