Such people have no problem with the idea of magic, because everything is magic to them, even science.
Years ago I and three other people were training for a tech support job. Our trainer was explaining something (the tracert command) but I didn’t understand it because his explanation didn’t seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn’t understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation, one even explicitly saying that she thought it made perfect sense.
Huh. I guess that if I tell myself, “Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it”, then I do feel a little less confused.
Most people simply do not expect reality to make sense
More precisely, different people are probably using different definitions of “make sense”… and you might find the situation easier to make sense of if you had a more detailed understanding of the ways in which people “make sense”. (Certainly, it’s what helped me become aware of the issue in the first place.)
So, here are some short snippets from the book “Using Your Brain For A Change”, wherein the author comments on various cognitive strategies he’s observed people using in order to decide whether they “understand” something:
There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different.…
A second kind of understanding simply allows you to have a good feeling: “Ahhhh.” It’s sort of like salivating to a bell: it’s a conditioned response, and all you get is that good feeling. That’s the kind of thing that can lead to saying, “Oh, yes, ‘ego’ is that one up there on the chart. I’ve seen that before; yes, I understand.” That kind of understanding also doesn’t teach you to be able to do anything.
A third kind of understanding allows you to talk about things with important sounding concepts, and sometimes even equations.… Concepts can be useful, but only if they have an experiential basis [i.e. “near” beliefs that “pay rent”], and only if they allow you to do something different.
Obviously, we are talking mostly about “clicking” being something more like this latter category of sense-making, but the author did mention how certain kinds of “fuzzy” understanding would actually be more helpful in social interaction:
However, a fuzzy, bright understanding will be good for some things. For example, this is probably someone who would be lots of fun at a party. She’ll be a very responsive person, because all she needs to do to feel like she understands what someone says is to fuzz up her [mental] pictures. It doesn’t take a lot of information to be able to make a bright, fuzzy movie. She can do that really quickly, and then have a lot of feelings watching that bright movie. Her kind of understanding is the kind I talked about earlier, that doesn’t have much to do with the outside world. It helps her feel better, but it won’t be much help in coping with actual problems.
Most of the chapter concerned itself with various cognitive strategies of detailed understanding used by a scientist, a pilot, an engineer, and so on, but it also pointed out:
What I want you all to realize is that all of you are in the same position as that … woman who fuzzes images. No matter how good you think your process of understanding is, there will always be times and places where another process would work much better for you. Earlier someone gave us the process a scientist used—economical little pictures with diagrams. That will work marvelously well for [understanding] the physical world, but I’ll predict that person has difficulties understanding people—a common problem for scientists. (Man: Yes, that’s true.)
Anyway, that chapter was a big clue for me towards “clicking” on the idea that the first two obstacles to be overcome in communicating a new concept are 1) getting people to realize that there’s something to “get”, and 2) getting them to get that they don’t already “get” it. (And both of these can be quite difficult, especially if the other person thinks they have a higher social status than you.)
Would you recommend that book? (“Using Your Brain For A Change”) Is the rest of it insightful too, or did you quote the only good part?
There are a lot of other good parts, especially if you care more about practice than theory. However, I find that personally, I can’t make use of many of the techniques provided without the assistance of a partner to co-ordinate the exercises. It’s too difficult to pay attention to both the steps in the book and what’s going on in my head at the same time.
I’m still confused, but now my eyes are wide with horror, too. I don’t dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?
I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don’t understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key is being thrust into situations that require understanding instead of just guessing the teacher’s password—the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I’m always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.
Is this something that people can learn in general? How? I consider this a hugely important question.
I wouldn’t be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?
And those that aren’t computer programmers would display a disproportionate amount of aptitude if they tried.
Certainly; I think this is a case where there are 3 types of causality going on:
Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
Improving as a programmer makes you more attracted to Less Wrong.
Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
I am planning an article about how to use LW-ideas for debugging. However, there is a meta-idea behind a lot of LW-ideas that I have not yet seen really written down, and I wonder what the right term for it would be. It is roughly that in order to figure out what could cause an effect, you need to look not only at the things themselves but primarily at the differences between them. So if a bug appears in situation 1 and not in situation 2, don’t look at all aspects of situation 1, just the aspects that differ from situation 2. Does this have a name? It sounds very basic, but I was not really doing this before, because I had the mentality that to really solve a problem I need to understand all parts of a “machine”, not just the malfunctioning ones.
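As a toy sketch of that difference-first idea (in Python, with made-up situation fields purely for illustration): describe the failing case and the working case, then inspect only the aspects that differ.

```python
# Hypothetical example: two situations described as dicts of their "aspects".
failing = {"os": "linux", "locale": "tr_TR", "threads": 8, "cache": "on"}  # bug appears
working = {"os": "linux", "locale": "en_US", "threads": 8, "cache": "on"}  # bug absent

def differing_aspects(a, b):
    """Return only the aspects whose values differ between two situations."""
    return {k: (a.get(k), b.get(k))
            for k in a.keys() | b.keys()
            if a.get(k) != b.get(k)}

print(differing_aspects(failing, working))
# -> {'locale': ('tr_TR', 'en_US')}  : start looking here, not at every part of the "machine"
```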
Does it not follow from the Pareto principle?
I don’t think it really does… or even that it is necessarily true. The kind of issues I find tend to have a smoother distribution. It really depends on the categorization: is user error one category, or one per module, or one per function, or…?
I think not, but it may matter what your native language is. As mine is not English, and programming languages are generally based on English, when I was 12 and exploring Turbo Pascal I simply thought of it as Precise-English, while what my language tutor taught me was Sloppy-English. (I still don’t really understand, on a gut level, why people tend to compare programming with math! Math is Numberish and Greekletterish to me, while programming is Precise-English! Apparently I really, really care what symbols look like, for some reason.) Anyway, if a programming language is based on your own native language, it may be far more confusing that a precise version of it exists and can be executed by a computer, and maybe overcoming that challenge helps you. I think I would dislike it if programming languages were based on my language, because it would always confuse me that when they call a class a class, I think of a school class. For example, when I was learning assembler, I got really confused by the term accumulator; I thought those belonged in cars—we call car batteries accumulators here. I was at least 16 years old when I finally understood that words can have multiple meanings and all of them can be correct usage, and even now at 37 I don’t exactly like it. It is untidy… but if I had had to deal with that sort of challenge, it could potentially have made me a better problem solver.
Is there any way to get someone to start expecting reality to make sense?
If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward and an alternate framing Fa does, and Fa is a small inferential step away from Fn when using T, and no framings easily arrived at using any other technique are rewarded.
The programming example you give is a good one. There’s a particular technique required to get from a naive framing of a problem to a program that solves that problem, and until you get the knack of thinking that way your programs don’t work, and writing a working program is far more rewarding than anything else you might do in a programming class.
Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.
But… is any of that the same as getting people to “expect reality to make sense”; is it the same as that “click” the OP is talking about? Is any of it the same as what the LW community refers to as “being rational”?
I’m not sure, actually. The problem is that in all of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map that post facto to some other goal (rationality, click, expecting reality to make sense).
The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that’s too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise… perhaps as part of a “Methods of Rationality” video game or something like that.
I guess that if I tell myself, “Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it”, then I do feel a little less confused.
This is a testable hypothesis. To test it, see how deeply wise you appear when explaining to people who seem crazy that everything actually does have an underlying explanation, and then giving a quick and salient example.
During my military radio ops course, I realized that the woman teaching us about different frequencies literally thought that ‘higher’ frequencies were higher off the ground. Like you, I found her explanations deeply confusing, though I suspect most of the other candidates would have said it made sense. (Despite being false, this theory was good enough to enable radio operations—though presumably not engineering.)
Thankfully I already had a decent grounding in EM, otherwise I would have yet more cached garbage to clear—sometimes it’s worse than finding the duplicate mp3s in my music library.
Our trainer was explaining something (the tracert command) but I didn’t understand it because his explanation didn’t seem to make sense.
Could you clarify? To properly understand how traceroute works one would need to know about the TTL field in the IP header (and how it’s normally decremented by routers) and the ICMP TTL Exceeded message. But I’m not sure that a tech support drone would be expected to understand any of these.
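For reference, here is a minimal sketch of the mechanism in Python (a toy, not the real tracert implementation; it assumes a Unix-like system and root privileges for the raw ICMP socket): probes go out with TTL 1, 2, 3, and so on, the router where the TTL expires replies with ICMP TTL Exceeded, and the time printed on each line is the round trip between the source and that router.

```python
import socket
import time

def traceroute(dest, max_hops=30, port=33434, timeout=2.0):
    """Toy traceroute: send UDP probes with TTL = 1, 2, 3, ... and listen
    for the ICMP "TTL exceeded" replies from routers along the path."""
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        # UDP probe whose TTL expires at hop number `ttl`
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        # Raw socket to catch the ICMP reply (requires root)
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv.settimeout(timeout)
        start = time.time()
        send.sendto(b"", (dest_addr, port))
        router = None
        try:
            _, (router, _) = recv.recvfrom(512)
            # The printed time is source -> this router -> source,
            # NOT the distance between consecutive routers.
            print(f"{ttl:2d}  {router:15s}  {(time.time() - start) * 1000:.1f} ms")
        except socket.timeout:
            print(f"{ttl:2d}  *")
        finally:
            send.close()
            recv.close()
        if router == dest_addr:
            break

# traceroute("example.com")
```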
To properly understand how traceroute works one would need to know about the TTL field
I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: “Both.” A charitable interpretation of this would be “They measure round trip times between the source and that router, but it’s just a matter of arithmetic to use those to estimate round trip times between any two routers in the list”—but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.
Edit: I think I remember him saying something like “You’re expecting it to be logical, but things aren’t always logical”.
Jesus Christ. “Things aren’t always logical.” The hallmark of a magic-thinker. Of course everything is always logical. The only case in which it doesn’t seem that way is when one lacks understanding.
Looks like he was just repeating various teacher’s passwords.