My best guess is that clickiness has something to do with failure to compartmentalize—missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything.
Such people have no problem with the idea of magic, because everything is magic to them, even science.
An anecdote: once, when I still worked as a software developer/department manager in a corporation, my boss was congratulating me on a million-dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises.
Well, not congratulating me, exactly. He was saying, “wow, that turned out really well”, and I felt oddly uncomfortable. A day or so after getting off the phone, I realized that he had been talking about it like it was luck, like, “wow, what nice weather we had.”
So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place—as opposed to heroic efforts during the project—was quite an eye opener for him.
Fortunately, he (and his boss) were “clicky” enough in other areas (they didn’t believe computers were magic, for example) that I was able to make the math of what I was doing click for them at that “teachable moment”.
Unfortunately, most people, in most areas of their lives, treat everything as magic. They’re not used to being able to understand or control anything but the simplest of things, so it doesn’t occur to them to even try. Instead, they just go along with whatever everybody else is thinking or doing.
For such (most) people, reality is social, rather than something you understand/control.
(Side note: I find myself often trying to find a way to express grasp/control as a pair, because really the two are the same. If you really grasp something, you should be able to control it, at least in principle.)
Such people have no problem with the idea of magic, because everything is magic to them, even science.
Years ago I and three other people were training for a tech support job. Our trainer was explaining something (the tracert command) but I didn’t understand it because his explanation didn’t seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn’t understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation, one even explicitly saying that she thought it made perfect sense.
Huh. I guess that if I tell myself, “Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it”, then I do feel a little less confused.
Most people simply do not expect reality to make sense
More precisely, different people are probably using different definitions of “make sense”… and you might find it easier to make sense of if you had a more detailed understanding of the ways in which people “make sense”. (Certainly, it’s what helped me become aware of the issue in the first place.)
So, here are some short snippets from the book “Using Your Brain For A Change”, wherein the author comments on various cognitive strategies he’s observed people using in order to decide whether they “understand” something:
There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different.…
A second kind of understanding simply allows you to have a good feeling: “Ahhhh.” It’s sort of like salivating to a bell: it’s a conditioned response, and all you get is that good feeling. That’s the kind of thing that can lead to saying, “Oh, yes, ‘ego’ is that one up there on the chart. I’ve seen that before; yes, I understand.” That kind of understanding also doesn’t teach you to be able to do anything.
A third kind of understanding allows you to talk about things with important sounding concepts, and sometimes even equations.… Concepts can be useful, but only if they have an experiential basis [i.e. “near” beliefs that “pay rent”], and only if they allow you to do something different.
Obviously, we are talking mostly about “clicking” being something more like this latter category of sense-making, but the author actually did mention how certain kinds of “fuzzy” understanding would actually be more helpful in social interaction:
However, a fuzzy, bright understanding will be good for some things. For example, this is probably someone who would be lots of fun at a party. She’ll be a very responsive person, because all she needs to do to feel like she understands what someone says is to fuzz up her [mental] pictures. It doesn’t take a lot of information to be able to make a bright, fuzzy movie. She can do that really quickly, and then have a lot of feelings watching that bright movie. Her kind of understanding is the kind I talked about earlier, that doesn’t have much to do with the outside world. It helps her feel better, but it won’t be much help in coping with actual problems.
Most of the chapter concerned itself with various cognitive strategies of detailed understanding used by a scientist, a pilot, an engineer, and so on, but it also pointed out:
What I want you all to realize is that all of you are in the same position as that … woman who fuzzes images. No matter how good you think your process of understanding is, there will always be times and places where another process would work much better for you. Earlier someone gave us the process a scientist used—economical little pictures with diagrams. That will work marvelously well for [understanding] the physical world, but I’ll predict that person has difficulties understanding people—a common problem for scientists. (Man: Yes, that’s true.)
Anyway, that chapter was a big clue for me towards “clicking” on the idea that the first two obstacles to be overcome in communicating a new concept are 1) getting people to realize that there’s something to “get”, and 2) getting them to get that they don’t already “get” it. (And both of these can be quite difficult, especially if the other person thinks they have a higher social status than you.)
Would you recommend that book? (“Using Your Brain For A Change”)
Is the rest of it insightful too, or did you quote the only good part?
There are a lot of other good parts, especially if you care more about practice than theory. However, I find that personally, I can’t make use of many of the techniques provided without the assistance of a partner to co-ordinate the exercises. It’s too difficult to pay attention to both the steps in the book and what’s going on in my head at the same time.
I’m still confused, but now my eyes are wide with horror, too. I don’t dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?
I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don’t understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key is being thrust into situations that require understanding instead of just guessing the teacher’s password—the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I’m always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.
Is this something that people can learn in general? How? I consider this a hugely important question.
I wouldn’t be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?
And those that aren’t computer programmers would display a disproportionate amount of aptitude if they tried.
Certainly; I think this is a case where there are 3 types of causality going on:
Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
Improving as a programmer makes you more attracted to Less Wrong.
Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
I am planning an article about how to use LW-ideas for debugging. However, there is a meta-idea behind a lot of LW-ideas that I have not yet seen really written down, and I wonder what the right term for it would be. It is roughly that in order to figure out what could cause an effect, you need to look not only at things themselves but primarily at the differences between things. So if a bug appears in situation 1 and not in situation 2, don’t look at all aspects of situation 1, just the aspects that differ from situation 2. Does this have a name? It sounds very basic, but I was not really doing this before, because I had the mentality that to really solve a problem I need to understand all parts of a “machine”, not just the malfunctioning ones.
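A minimal sketch of that differencing heuristic, in Python (the language choice and the situation snapshots are mine, invented for illustration; no LW article specifies this):

```python
# Given snapshots of a failing and a working situation, report only
# the aspects that differ, instead of studying every part of the
# "machine". The aspect names below are hypothetical.

def differing_aspects(failing: dict, working: dict) -> dict:
    """Map each aspect that differs to its (failing, working) pair."""
    keys = failing.keys() | working.keys()
    return {k: (failing.get(k), working.get(k))
            for k in keys
            if failing.get(k) != working.get(k)}

failing = {"os": "linux", "libfoo": "2.3", "locale": "tr_TR", "cores": 8}
working = {"os": "linux", "libfoo": "2.3", "locale": "en_US", "cores": 4}

for aspect, (bad, good) in differing_aspects(failing, working).items():
    print(f"{aspect}: fails with {bad!r}, works with {good!r}")
```

Only locale and cores get printed; os and libfoo, being identical in both situations, are excluded from suspicion from the start.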
Does it not follow from the Pareto principle?
I don’t think it really does… or even that it is necessarily true. The kind of issues I find tend to have a smoother distribution. It really depends on the categorization: is user error one category, or one per module, or one per function, or?…
I think not, but it may matter what your native language is. As mine is not English, and programming languages are generally based on English, when I was 12 and exploring Turbo Pascal I simply thought of it as Precise-English, while what my language tutor taught me was Sloppy-English. (I still don’t really understand on the gut level why they tend to compare programming with math! Math is Numberish and Greekletterish to me, while programming is Precise-English! Apparently I really, really care what symbols look like, for some reason.) Anyway, if a precise, computer-executable version of your own native language exists, that may be far more confusing, and maybe overcoming that challenge helps you. I think I would dislike it if programming languages were based on my language, because it would always confuse me that when they call a class a class, I think of a school class. For example, when I was learning assembler, I got really confused by the term accumulator. I thought those belonged in cars—we call batteries accumulators here. I was at least 16 years old when I finally understood that words can have multiple meanings and all of them can be correct usage, and even now at 37 I don’t exactly like it. It is untidy… but if I had had to deal with that sort of challenge, it could potentially have made me a better problem solver.
Is there any way to get someone to start expecting reality to make sense?
If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward and an alternate framing Fa does, and Fa is a small inferential step away from Fn when using T, and no framings easily arrived at using any other technique are rewarded.
The programming example you give is a good one. There’s a particular technique required to get from a naive framing of a problem to a program that solves that problem, and until you get the knack of thinking that way your programs don’t work, and writing a working program is far more rewarding than anything else you might do in a programming class.
Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.
But… is any of that the same as getting people to “expect reality to make sense”; is it the same as that “click” the OP is talking about? Is any of it the same as what the LW community refers to as “being rational”?
I’m not sure, actually. The problem is that in all of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map that post facto to some other goal (rationality, click, expecting reality to make sense).
The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that’s too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise… perhaps as part of a “Methods of Rationality” video game or something like that.
I guess that if I tell myself, “Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it”, then I do feel a little less confused.
This is a testable hypothesis. To test it, see how deeply wise you appear when explaining to people who seem crazy that everything actually does have an underlying explanation, and then giving a quick and salient example.
During my military radio ops course, I realized that the woman teaching us about different frequencies literally thought that ‘higher’ frequencies were higher off the ground. Like you, I found her explanations deeply confusing, though I suspect most of the other candidates would have said it made sense. (Despite being false, this theory was good enough to enable radio operations—though presumably not engineering).
Thankfully I already had a decent grounding in EM, otherwise I would have yet more cached garbage to clear—sometimes it’s worse than finding the duplicate mp3s in my music library.
Our trainer was explaining something (the tracert command) but I didn’t understand it because his explanation didn’t seem to make sense.
Could you clarify? To properly understand how traceroute works one would need to know about the TTL field in the IP header (and how it’s normally decremented by routers) and the ICMP TTL Exceeded message. But I’m not sure that a tech support drone would be expected to understand any of these.
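For concreteness, here is a rough sketch of the mechanism just described, in Python using scapy (my choice of tool, not anything named in the thread; it needs raw-socket privileges to run). Each probe raises the IP TTL by one; whichever router decrements the TTL to zero sends back an ICMP Time Exceeded message, and the elapsed time is the round trip between the source and that router:

```python
import time
from scapy.all import IP, ICMP, sr1

def traceroute_sketch(dest: str, max_hops: int = 30) -> None:
    for ttl in range(1, max_hops + 1):
        start = time.time()
        # Probe with an increasing TTL; the router where the TTL hits
        # zero answers with ICMP Time Exceeded (type 11).
        reply = sr1(IP(dst=dest, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        rtt_ms = (time.time() - start) * 1000
        if reply is None:
            print(f"{ttl:2d}  *")  # no answer within the timeout
        elif reply.type == 11:
            print(f"{ttl:2d}  {reply.src}  {rtt_ms:.1f} ms")
        else:
            # An echo reply means we reached the destination itself.
            print(f"{ttl:2d}  {reply.src}  {rtt_ms:.1f} ms (destination)")
            break

traceroute_sketch("example.com")
```

Note that every printed time is measured from the source, which is why the question below had a definite answer: the times are source-to-router round trips, not hop-to-hop distances.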
To properly understand how traceroute works one would need to know about the TTL field
I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: “Both.” A charitable interpretation of this would be “They measure round trip times between the source and that router, but it’s just a matter of arithmetic to use those to estimate round trip times between any two routers in the list”—but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.
Edit: I think I remember him saying something like “You’re expecting it to be logical, but things aren’t always logical”.
Jesus Christ. “Things aren’t always logical.” The hallmark of a magic-thinker. Of course everything is always logical. The only case it doesn’t seem that way is when one lacks understanding.
Looks like he was just repeating various teacher’s passwords.
This is overwhelmingly how I perceive most people. This in particular: ‘reality is social’.
I have personally traced the difference, in myself, to receiving this book at around the age of three or four. It has illustrations of gadgets and appliances, with cutaway views of their internals. I learned, almost as soon as I was capable of learning, that nothing is a mysterious black box: things that seem magical have internal detail, and there are explanations for how they work. Whether or not I had some pre-existing disposition that made me love and devour the book in the first place, I still consider it to have had a bigger impact on my whole world view than anything else I can remember.
I got Macaulay’s The Way Things Work (the original) at a slightly higher age. I suspect a big reason I became a computer scientist was the joy of puzzling through the adder diagrams and understanding why they worked.
I traced those adder diagrams as a child as well, and it surely was a formative experience.
This is mine, which I received at around age six. I don’t recall how many tens of times I read and reread those pages.
This is worth an entire post by itself. Cheers.
Yes, please!
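For anyone who never puzzled through those diagrams, a minimal sketch of the logic they depict: a full adder turns two bits plus a carry-in into a sum bit and a carry-out, and chaining full adders adds whole binary numbers (Python here purely for convenience; the book itself draws this with levers and gears):

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One column of binary addition: returns (sum bit, carry out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_add(x_bits: list[int], y_bits: list[int]) -> list[int]:
    """Add two equal-length little-endian bit lists, ripple-carry style."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 6 + 3: little-endian bits [0, 1, 1] + [1, 1, 0] -> [1, 0, 0, 1], i.e. 9
print(ripple_add([0, 1, 1], [1, 1, 0]))
```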
So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place—as opposed to heroic efforts during the project—was quite an eye opener for him.
The Inside View says, ‘we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.’ The Outside View says, ‘most large software projects fail, but some succeed anyway.’
The Inside View says, ‘we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.’ The Outside View says, ‘most large software projects fail, but some succeed anyway.’
What makes you think it was the only one, or one of a few out of many?
The specific project was only relevant because my bosses prior to that point in time already implicitly understood that there was something my team was doing that got our projects done on time when others under their authority were struggling—but they attributed it to intelligence or skill on my part, rather than our methodology/philosophy.
The newer boss, OTOH, didn’t have any direct familiarity with my track record, and so didn’t attribute the success to me at all, except that obviously I hadn’t screwed it up.
Reminiscent of [CODING HORROR] Separating Programming Sheep from Non-Programming Goats:
Ask programming students what a trivial code snippet of an unknown language does. Some form a consistent model; right or wrong, these can learn programming. Others imagine a different working every time they encounter the same instruction; these will fail no matter what. I suspect they treat it as a discussion, where repeating a question means a new answer is wanted.
Not usually a fan of your thoughts, but these seem right on the money.
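The test items were reportedly along these lines (a reconstruction, rendered in Python rather than the Java-like notation the study used):

```python
# After these three assignments, what are the values of a and b?
a = 10
b = 20
a = b
print(a, b)  # Python's answer: 20 20
```

Any consistent model, even a wrong one such as “the values swap” or “b is added into a”, predicted later success; giving a different answer each time the same construct appeared predicted failure.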
Sorry, old comment, but:
There are cases where understanding something may lower your status, or at least seem to, and this may play a role. About 10-20 years ago, it was computers: understanding them made you a geek, with everything that comes with it. So people would very proudly say “I am not a techie, I just use it, but I don’t understand it!”, meaning roughly that their social status was higher than that of techies. Of course they did not mean higher social status than Bill Gates, just higher than the local IT department.
There is something similar going on with young men being really proud of not knowing how to cook, as this brag suggests either being able to afford to eat out or being really attractive and always finding girlfriends who like to cook.
The point is, ignorance can be a luxury and thereby a pretty big status signal; being able to afford not to understand certain things can work like that. On a parallel Earth, I could imagine the richest kids even claiming they cannot read, because it would be a huge “I don’t need to work to survive!” message.
And this can easily be internalized: “I don’t want to look like the kind of person who needs this knowledge” → “I don’t understand”.
Thus the concept of gods.
Relevant quote: http://lesswrong.com/lw/26y/rationality_quotes_may_2010/1y6j?c=1
(Side note: I find myself often trying to find a way to express grasp/control as a pair, because really the two are the same. If you really grasp something, you should be able to control it, at least in principle.)
Well, anything mathematical would be an exception to that, at the least.
If you really grasp something mathematical, you ought to be able to apply it—at least in principle.
OK, but that’s not really what “control” normally means, is it? “Manipulate” might be a better word here.
“Manipulate” would also extend the thinking-as-holding metaphor of “grasp”.
(I have to admit that I was confused by “control” as well.)