Is there any way to get someone to start expecting reality to make sense?
If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward, an alternate framing Fa does, Fa is a small inferential step away from Fn when using T, and no framing easily arrived at by any other technique is rewarded.
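To make that concrete, here’s a toy sketch in Python; everything in it (Framing, technique_T, the reward function) is hypothetical, just the conditions above restated as code:

```python
# Toy model of the conditioning setup described above. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Framing:
    name: str

def technique_T(f: Framing) -> Framing:
    """The technique we want to condition: one small step from Fn to Fa."""
    return Framing(f.name + "+T")

def other_technique(f: Framing) -> Framing:
    """A competing technique whose outputs the scenario never rewards."""
    return Framing(f.name + "+X")

def reward(rewarded: Framing, proposed: Framing) -> float:
    # Only the framing reachable from Fn via T earns any reward.
    return 1.0 if proposed == rewarded else 0.0

Fn = Framing("naive")
Fa = technique_T(Fn)                           # small inferential step via T
assert reward(Fa, technique_T(Fn)) == 1.0      # applying T pays off
assert reward(Fa, other_technique(Fn)) == 0.0  # other techniques don't
assert reward(Fa, Fn) == 0.0                   # the naive framing doesn't
```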
The programming example you give is a good one. There’s a particular technique required to get from a naive framing of a problem to a program that solves it, and until you get the knack of thinking that way, your programs don’t work; and writing a working program is far more rewarding than anything else you might do in a programming class.
Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.
But… is any of that the same as getting people to “expect reality to make sense”? Is it the same as that “click” the OP is talking about, or as what the LW community refers to as “being rational”?
I’m not sure, actually. The problem is that in each of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map it, after the fact, onto some other goal (rationality, the click, expecting reality to make sense).
The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that’s too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise… perhaps as part of a “Methods of Rationality” video game or something like that.
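For instance, here’s a minimal sketch of that scenario filter, assuming we can model framings as graph nodes and techniques as edge labels; the graph, the BFS distance, and the thresholds are all stand-ins for illustration, not a real design:

```python
# Hypothetical scenario filter: keep only scenarios where the rewarded framing
# is close under the target operations and far under everything else.
from collections import deque

def inferential_distance(start, goal, edges, allowed):
    """BFS over framings, following only edges labelled with allowed techniques."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, d = frontier.popleft()
        if node == goal:
            return d
        for src, technique, dst in edges:
            if src == node and technique in allowed and dst not in seen:
                seen.add(dst)
                frontier.append((dst, d + 1))
    return float("inf")

def is_good_scenario(naive, rewarded, edges, target_ops, other_ops,
                     short=2, far=5):
    """Accept a scenario only if the rewarded framing is a short hop away
    under the target operations and a long slog under everything else."""
    return (inferential_distance(naive, rewarded, edges, target_ops) <= short
            and inferential_distance(naive, rewarded, edges, other_ops) >= far)

# Toy check: T reaches the rewarded framing in one step; trial-and-error (X) never does.
edges = [("Fn", "T", "Fa"), ("Fn", "X", "F1"), ("F1", "X", "F2")]
print(is_good_scenario("Fn", "Fa", edges, {"T"}, {"X"}))  # True
```

A scenario generator for such a game would then just search for (naive, rewarded) pairs that pass a filter like this one.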