I’m still confused, but now my eyes are wide with horror, too. I don’t dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?
I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don’t understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key of it is to be thrust into situations that require understanding instead of just guessing the teacher’s password—the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I’m always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.
Is this something that people can learn in general? How? I consider this a hugely important question.
I wouldn’t be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?
And those that aren’t computer programmers would display a disproportionate amount of aptitude if they tried.
Certainly; I think this is a case where there are 3 types of causality going on:
Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
Improving as a programmer makes you more attracted to Less Wrong.
Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
I am planning an article about how to use LW-ideas for debugging. However, there is a meta-idea behind a lot of LW-ideas that I have not yet seen really written down, and I wonder what the right term for it would be. It is roughly that in order to figure out what could cause an effect, you need to look not only at the things themselves but primarily at the differences between them. So if a bug appears in situation 1 and not in situation 2, don’t look at all aspects of situation 1, just the aspects that differ from situation 2. Does this have a name? It sounds very basic, but I was not really doing this before, because I had the mentality that to really solve a problem I need to understand all the parts of a “machine”, not just the malfunctioning ones.
Does it not follow from the Pareto principle?
I don’t think it really does… or even that it is necessarily true. The kinds of issues I find tend to have a smoother distribution. It really depends on the categorization: is user error one category, or one per module, or one per function, or…?
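For concreteness, here is a minimal sketch of the difference-based debugging idea from the comment above, in Python. The two “situations” and their settings are made up purely for illustration, not taken from any real system:

```python
# A minimal sketch of "look at the differences, not at everything":
# given one situation where the bug appears and one where it does not,
# list only the aspects that differ and investigate those first.
# Both dicts below are hypothetical example settings.

failing_situation = {"os": "Windows", "locale": "hu_HU", "cache": True, "version": "2.3"}
working_situation = {"os": "Windows", "locale": "en_US", "cache": True, "version": "2.3"}

def differing_aspects(failing, working):
    """Return only the aspects whose values differ between the two situations."""
    keys = set(failing) | set(working)
    return {k: (failing.get(k), working.get(k))
            for k in keys
            if failing.get(k) != working.get(k)}

print(differing_aspects(failing_situation, working_situation))
# {'locale': ('hu_HU', 'en_US')}  -> start debugging with the locale,
# not with every part of the "machine".
```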
I think not, but it may matter what your native language is. As mine is not English and programming languages are generally based on English, when I was 12 and exploring Turbo Pascal, I simply thought of it as Precise-English, while what my language tutor taught me was Sloppy-English. (I still don’t really understand on a gut level why people tend to compare programming with math! Math is Numberish and Greekletterish to me, while programming is Precise-English! Apparently I really, really care what symbols look like, for some reason.) Anyway, if a programming language is based on your own native language, it may be far more confusing why a precise version of it exists and can be executed by a computer, and maybe overcoming that challenge helps you. I think I would dislike it if programming languages were based on my language, because it would always confuse me that when they call a class a class, I think of a school class. For example, when I was learning assembler, I got really confused by the term accumulator. I thought those belonged in cars; we call batteries accumulators here. I was at least 16 years old before I finally understood that words can have multiple meanings and all of them can be correct usage, and even now at 37 I don’t exactly like it. It is untidy… but if I had to deal with that sort of challenge, it could potentially make me a better problem solver.
If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward and an alternate framing Fa does, and Fa is a small inferential step away from Fn when using T, and no framings easily arrived at using any other technique are rewarded.
The programming example you give is a good one. There’s a particular technique required to get from a naive framing of a problem to a program that solves that problem, and until you get the knack of thinking that way your programs don’t work, and writing a working program is far more rewarding than anything else you might do in a programming class.
Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.
But… is any of that the same as getting people to “expect reality to make sense”; is it the same as that “click” the OP is talking about? Is any of it the same as what the LW community refers to as “being rational”?
I’m not sure, actually. The problem is that in all of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map that post facto to some other goal (rationality, click, expecting reality to make sense).
The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that’s too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise… perhaps as part of a “Methods of Rationality” video game or something like that.
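As a purely illustrative toy model of this kind of constructed scenario (everything below is hypothetical; technique_T and other_habit are stand-ins, not anyone’s actual curriculum), the reward condition can be written so that only the framing reachable by the target technique pays off:

```python
# Toy model of the conditioning setup described above: the naive framing Fn
# earns nothing, a framing produced by some other habit earns nothing, and
# only the framing Fa that is one step of technique T away from Fn is rewarded.

def technique_T(framing: str) -> str:
    """One inferential step of the technique we want to reinforce.
    Here it is just a stand-in transformation of the problem statement."""
    return framing + ", restated as a difference between a working and a failing case"

def other_habit(framing: str) -> str:
    """A framing move the learner already uses (e.g. pattern-matching to
    remembered example code); the scenario is designed so this earns nothing."""
    return framing + ", matched against a remembered example"

naive_framing = "why does my program crash?"          # Fn
rewarded_framing = technique_T(naive_framing)          # Fa, one T-step from Fn

def reward(framing: str) -> int:
    """Reward condition of the constructed scenario: only Fa pays off."""
    return 1 if framing == rewarded_framing else 0

assert reward(naive_framing) == 0               # Fn gets nothing
assert reward(other_habit(naive_framing)) == 0  # other techniques get nothing
assert reward(technique_T(naive_framing)) == 1  # Fa is rewarded
```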