Calling different but somewhat related things the same when they are not does not warrant “rationality quote” status.

I acknowledge & respect this criticism, but for two reasons I maintain Simon had a worthwhile insight(!) here that bears on rationality:
Insight, intuition & recognition aren’t quite the same, but they overlap greatly and are closely related.
Simon’s comment, although not literally true, is a fertile hypothesis that not only opens eyeholes into the black boxes of “insight” & “intuition”, but produces useful predictions about how minds solve problems.
I should justify those. Chapter 4 of Simon’s The Sciences of the Artificial, “Remembering and Learning: Memory as Environment for Thought”, is relevant here. It uses chess as a test case:
[...] one part of the grandmaster’s chess skill resides in the 50,000 chunks stored in memory, and in the index (in the form of a structure of feature tests) that allows him to recognize any one of these chunks on the chess board and to access the information in long-term memory that is associated with it. The information associated with familiar patterns may include knowledge about what to do when the pattern is encountered. Thus the experienced chess player who recognizes the feature called an open file thinks immediately of the possibility of moving a rook to that file. The move may or may not be the best one, but it is one that should be considered whenever an open file is present. The expert recognizes not only the situation in which he finds himself, but also what action might be appropriate for dealing with it. [...]
When playing a “rapid transit” game, at ten seconds a move, or fifty opponents simultaneously, going rapidly from one board to the next, a chess master is operating mostly “intuitively,” that is, by recognizing board features and the moves that they suggest. The master will not play as well as in a tournament, where about three minutes, on the average, can be devoted to each move, but nonetheless will play relatively strong chess. A person’s skill may decline from grandmaster level to the level of a master, or from master to expert, but it will by no means vanish. Hence recognition capabilities, and the information associated with the patterns that can be recognized, constitute a very large component of chess skill.⁵ [The footnote refers to a paper in Psychological Science.]
The seemingly mysterious insights & intuitions of the chessmaster derive from being able to recognize many memorized patterns. This conclusion applies to more than chess; Simon’s footnote points to a champion backgammon-playing program based on pattern recognition, and a couple of pages before that he refers to doctors’ reliance on recognizing many features of diseases to make rapid medical diagnoses.
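To make Simon’s “index (in the form of a structure of feature tests)” a bit more concrete, here is a minimal sketch of the idea. It is entirely my own illustration, with a toy position encoding and invented chunks (Simon gives no code, and real chess programs are far more elaborate): a few yes/no feature tests route a position to a stored chunk, and the chunk carries the advice associated with it.

```python
# Toy illustration (mine, not Simon's) of an "index in the form of a structure
# of feature tests": a tiny discrimination tree. Internal nodes ask a yes/no
# question about the position; leaves are stored "chunks" carrying the advice
# associated with the recognized pattern. The tests and chunks are invented.

from dataclasses import dataclass
from typing import Callable, Dict, Union

@dataclass
class Chunk:
    name: str
    advice: str  # "what to do when the pattern is encountered"

@dataclass
class Test:
    question: Callable[[Dict[str, bool]], bool]  # a feature test on the position
    if_yes: Union["Test", Chunk]
    if_no: Union["Test", Chunk]

def recognize(node: Union[Test, Chunk], position: Dict[str, bool]) -> Chunk:
    """Walk the feature tests until a stored chunk is reached."""
    while isinstance(node, Test):
        node = node.if_yes if node.question(position) else node.if_no
    return node

# A two-test index over three made-up chunks.
index = Test(
    question=lambda p: p.get("open_file", False),
    if_yes=Chunk("open file", "consider putting a rook on it"),
    if_no=Test(
        question=lambda p: p.get("king_exposed", False),
        if_yes=Chunk("exposed king", "look for checks and attacking moves"),
        if_no=Chunk("quiet position", "improve your worst-placed piece"),
    ),
)

print(recognize(index, {"open_file": True}).advice)  # consider putting a rook on it
```

Scaled up to tens of thousands of chunks, this kind of lookup stays cheap, which is (I take it) part of why recognition can feel instantaneous from the inside.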
From what I’ve seen this even holds true in maths & science, where people are raised to the level of geniuses for their insights & intuitions. Here’s cousin_it noticing that Terry Tao’s insights consist of series of incremental, well-understood steps, consistent with Tao generating insights by recognizing familiar features of problems that allow him to exploit memorized logical steps. My conversations with higher-ability mathematicians & physicists confirm this; when they talk through a problem, it’s clear that they do better than me by being better at recognizing particular features (such as symmetries, or similarities to problems with a known solution) and applying stock tricks they’ve already memorized to exploit those features. Stepping out of cognitive psychology and into the sociology & history of science, the near ubiquity of multiple discovery in science is more evidence that insight is the result of external cues prompting receptive minds to recognize the applicability of an idea or heuristic to a particular problem.
The reduction of insight & intuition to recognition isn’t wholly watertight, as you note, but the gains from demystifying them by doing the reduction more than outweigh (IMO) the losses incurred by this oversimplification. There are additional gains, too, because the insight-is-intuition-is-recognition hypothesis yields further predictions & explanations:
Prediction: long-term practice is necessary for mastery of a sufficiently complicated domain, because the powerful intuition indicative of mastery requires memorization of many patterns so that one can recognize those patterns.
Prediction: consistently learning new domain-specific patterns (so that one can recognize them later) should, with a very high probability, engender mastery of that domain. (Putting it another way: long-term practice, done correctly, is sufficient for mastery.)
Explanation of why “[i]n a couple of domains [chess and classical music composition] where the matter has been studied, we do know that even the most talented people require approximately a decade to reach top professional proficiency” (TSotA, p. 91).
Prediction: “When a domain reaches a point where the knowledge for skillful professional practice cannot be acquired in a decade, more or less, then several adaptive developments are likely to occur. Specialization will usually increase (as it has, for example, in medicine), and practitioners will make increasing use of books and other external reference aids in their work” (TSotA, p. 92).
Prediction: “It is probably safe to say that the chemist must know as much as a diligent person can learn in about a decade of study” (TSotA, p. 93).
Explanation of Eliezer’s experience with being deep: the people EY spoke to perceived him as deep (i.e. insightful) but EY knew his remarks came from a pre-existing system of intuitions (transhumanism and knowledge of cognitive biases) which allowed him to immediately respond to (or “complete”) patterns as he recognized them.
Explanation of how intensive childhood training produced some famous geniuses and domain experts (the Polgár sisters, William James Sidis, John Stuart Mill, Norbert Wiener).
Prediction: “This accumulation of experience may allow people to behave in ways that are very nearly optimal in situations to which their experience is pertinent, but will be of little help when genuinely novel situations are presented” (“On How to Decide What to Do”, p. 503).
Prediction: one can write a computer program that plays a game or solves a problem by mechanically recognizing relevant features of the input and making cached feature-specific responses.
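To show that last prediction isn’t empty, here’s a minimal sketch in the spirit of Simon’s open-file example: the program mechanically detects one feature of a position (an open file) and returns the cached response associated with it. The board encoding, the single feature detector, and the response table are my own inventions for illustration, not any real engine’s.

```python
# Toy "recognize features, answer from cache" player, using the open-file
# example from the Simon quote above. Everything here is illustrative.

from typing import Dict, List, Set, Tuple

Square = Tuple[str, int]   # e.g. ("a", 1)
Board = Dict[Square, str]  # square -> piece code, e.g. ("a", 1): "wR"

def open_files(board: Board) -> Set[str]:
    """Files with no pawns on them -- the 'open file' feature."""
    files_with_pawns = {file for (file, _rank), piece in board.items()
                        if piece.endswith("P")}
    return set("abcdefgh") - files_with_pawns

# Cached, feature-specific responses: what to *consider* when a feature is seen.
RESPONSES = {"open_file": "consider moving a rook to the {file}-file"}

def candidate_ideas(board: Board) -> List[str]:
    """Mechanically recognize features and return the cached responses for them."""
    return [RESPONSES["open_file"].format(file=f) for f in sorted(open_files(board))]

# White rook on a1, pawns on the b- through h-files: the a-file is open.
board = {("a", 1): "wR", ("b", 2): "wP", ("c", 2): "wP", ("d", 2): "wP",
         ("e", 2): "wP", ("f", 2): "wP", ("g", 2): "wP", ("h", 2): "wP"}
print(candidate_ideas(board))  # ['consider moving a rook to the a-file']
```

As Simon says of the rook move, the suggestion “may or may not be the best one”, but recognition alone already narrows the search to sensible candidates.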
I know I’ve gone on at length here, but your criticism deserved a comprehensive reply, and I wanted to show I wasn’t just being flippant when I quoted Simon. I agree he was hyperbolic, but I reckon his hyperbole was sufficiently minor & insightful as to be RQ-worthy.
Independent of whether the particular quote is labelled a rationality quote, Simon had an undeniable insight in the linked article and your explanation thereof is superb! So much so that this level of research, organisation and explanation seems almost wasted on a comment. I’ll look forward to reading your future contributions (be they comments or, if you have a topic worth explaining, posts).
The interview that’s linked with the name is excellent, though. In an AI context (“as far as I [the AI guy] am concerned”), the quote makes more sense.
I’d upvote a link to the article if it were posted in an open thread. I downvote it (and all equally irrational ‘rationalist quotes’) when they are presented as such here.
Yeah, I sometimes struggle with that: taken at face value, the quote is of course trivially wrong. However, it can be steelmanned in a few interesting ways. Then again, so can a great many random quotes. If, say, EY posted that quote, people might upvote after thinking of a steelmanned version, whereas with someone else fewer readers will bother, and will downvote since, to a first approximation, the statement is wrong. What to do, I wonder?
(Example: “If you meet the Buddha on the road, kill him!”—Well, downvoted, because killing is wrong! Or upvoted, because e.g. even “you may hold no sacred beliefs” isn’t sacred? Let’s find out.)