Beyond Smart and Stupid
I’ve often wondered about people who appear to be very smart, and do very stupid things. One theory is that people are smart and stupid independently in different domains. Another theory is that “smart” and “stupid” are oversimplifications. In line with the second theory, here is an ad-hoc set of axes of intelligence, based only on my own observations.
Memory
This may cover several sub-categories. The advantages of a good memory should be obvious.
Ability to follow instructions
This is more important than it sounds. I’m including the ability to verify mathematical proofs, and the ability to design a regression analysis for a psychology study. Things that aren’t research or engineering, but taking known solutions and applying them.
I came up with this because I have some friends who have difficult, complex jobs that they are good at—and yet frequently say incoherent things.
Doctors are smart. Yet doctors are forbidden by law from being creative. They’re great at following instructions. Becoming an MD means transforming yourself into a giant look-up table. You memorize anatomy, physiology, diseases, symptoms, diagnostics, and treatments. For every disease, there is an approved set of clinical diagnostic criteria, an approved set of laboratory diagnostics, an approved way of interpreting those tests, an approved way of presenting the results to the patient, and an approved set of possible treatments. After using your knowledge to observe the patient, rule out some branches of the tree of possibilities, make further tests, and discern what the underlying problem is, if instead of retrieving the approved treatment from your look-up table, you ask the engineering question, “How can we fix this?”, you are on the path to losing your license.
This ability is strongly correlated with memory.
Ability to think outside the box
This may be the same thing as creativity; but calling it “thinking outside the box” is less vague. “Ability to not conform” is also part of it. This may be anti-correlated with memory and the ability to follow instructions; it’s probably more difficult to think of new approaches if old ones leap quickly to mind, just like it’s difficult to compose music if every theme in your head turns into a Beatles tune after two bars.
This is the distinction between an M.S. and a Ph.D. (and between an M.D. and a Ph.D. - can you tell which I have?) The only purpose of the years of agony of doing a dissertation no one will read is to show that you can do something original.
You can be great at thinking outside the box, and still be crazy. Google Ment.if.ex, without the dots. (Do not write his name in the comments without the dots. Writing his name online summons him. I’m not joking.)
Ability to notice success and failure
A friend kept telling me about a woman he knew who he thought would be great for me. He told me she was smart, pretty, friendly, and fun. Eventually I gave in and told him I’d like to meet her.
Instead of doing what had worked on me—telling her that I was right for her, smart, pretty, etc. - he told her that I was interested in her. Predictably, she said, “That’s creepy—I don’t even know him!” I asked him how it was that, in his thirty-some years of life, he hadn’t noticed that that never works. He said he wanted to be straightforward, not sneaky.
Sadly, morals are a big cause of not being able to notice success and failure. Someone who believes they’re doing the right thing doesn’t allow themself to ask whether they succeed or fail.
The friend I mentioned is very good at following instructions. If you’re following instructions, you might not be checking up on whether you’re succeeding or failing. Rationalists are often bad at noticing success and failure. Maybe it’s because we’re good at following instructions—instructions on how to be rational. We’re likely to follow our program of, say, trying to reason someone into a political view, or into liking us, without noticing that that doesn’t work.
Categorization
This is a big one. It’s pretty close to “analytical ability”. By categorization I mean the ability to notice when two words mean the same thing, or similar things, or different things. Or when two situations or systems are similar. Or when one assumption is really two or three. Analogical reasoning requires good categorization skills. So does analytic thinking. A major fault in most people’s analytic ability is their inability to keep their terms straight, and use them consistently.
I include under categorization the ability to generalize appropriately for the task. Overgeneralizing during analysis leads to sloppy thinking; undergeneralizing while brainstorming stifles creativity.
Social intelligence
Is this a useful primitive category? Lots of people think it is. Perhaps I don’t have enough of it to understand it.
For what that’s worth, when I reflect on my past blunders, the worst ones I can think of were due to misunderstandings of the unwritten and unspoken de facto rules according to which various institutions and human interactions work in practice. In these situations, I would either act according to the official rules and the respectable pious principles in situations where you’re expected to break them, or I would break them in ways that seemed inconsequential to me but were in fact serious. (Sometimes I’d even feel bad for breaking them when there seemed to be no alternative, when in fact such breaking was tacitly considered business as usual.)
To me it seems evident that the ability to figure out the de facto rules quickly, instinctively, and accurately is mostly independent of general intelligence. It is certainly one of the key abilities that differentiate high achievers (and, conversely, big-time losers) from the rest. Its relation with other aspects of human social behavior and social skills is a complex and fascinating open question. For example, the talent for rule navigation seems to be largely independent of charisma, even though both can be a solid basis for high achievement. (Some historical events provide fascinating examples of clashes between super-charismatic and ingenious rule-navigating types—think Trotsky vs. Stalin.)
This, incidentally, is a topic where I have found the insight from OB/LW about status and signaling significantly helpful in clearing up some confusions. Still, there are issues where I can’t get my head around the de facto rules. For example, when it comes to certain beliefs that are nowadays considered disreputable, I observe people who were severely penalized just for suggesting that they might harbor them, but at the same time other people who have expressed them pretty openly without getting into any problems. Clearly there must be some significant differences involved, but I have nothing except vague hypotheses.
Great first two paragraphs. As to the third paragraph, I have two questions. Do you know any specific examples where people were penalized for merely suggesting they might harbor disreputable ideas? And how do you know that differences in whether people get away with these things aren’t just due to random chance?
I wouldn’t like to get into specific examples, not just because the issues are extremely contentious, but also because I don’t want to write things like “X has expressed belief Y” in an easily googlable form and on a high-ranking website.
But to answer your questions, yes, I have seen several occasions where people publicly wrote or said something that suggested disreputable views only remotely and indirectly, and as a result were exposed to public shaming campaigns of the sort that may tar one’s reputation with serious consequences, especially now that this stuff will forever come up when someone googles their names. In at least one of these cases, I am certain that the words were entirely innocent of the imputed meaning. (Feel free to PM me if you’re curious about the details.)
Even when it comes to open and explicit expressions of dangerous views, I still observe vast differences. I’m sure that sometimes this is due to random chance, for example if a journalist randomly decides to make a big deal out of something that would have otherwise passed unnoticed. However, this can’t possibly be the whole story, since I have seen people repeatedly say and write in prominent public venues practically the same things that got others in trouble, without any apparent bad consequences. There are possible explanations that occur to me in each particular case, but I’m not sure if any of them are correct.
I’d say chance is already a factor (is someone digging for dirt against that person? Is the topic currently “hot”?), and in general “does it make a good soundbite?”. Disreputable opinions don’t get repeated as much when they are phrased in academic jargon, or indirectly implied in a way that can only be understood with a lot of context. There’s also the question of incentives, i.e. people are more likely to dig up dirt on the president of a law school than on an average Joe.
I agree that all these considerations can be significant, but I don’t think they are sufficient to explain everything I’ve seen.
Would you mind clarifying this a little? While I’d certainly believe such situations exist, I can’t think of any unmuddled examples offhand, and it seems like an interesting test case for social analysis.
I’d be interested in reading more about the unwritten and unspoken de facto rules, and about what can be ignored and what can’t. That’s the kind of thing I tend to be bad at, so I’d like to get the experience of others.
Well, any really interesting examples are likely to be controversial, since they necessarily involve repudiating some official rules, accepted norms, or respectable principles. Also, this sort of knowledge can be extremely valuable and not given away easily, or even admitted to, by those who have it. This is assuming they even have the ability to articulate it explicitly rather than just playing by instinct—the latter is of course superior in practice, since it enables perfect duplicity between pious words and effective actions. Of course, at the same time, lots of people will talk nonsense about these topics as a status-gaining ploy.
Some examples would still be nice, even if controversial.
Some that I can think of:
A lot of what Pick-Up Artists talk about, i.e. the way a boy is “supposed” to behave to get a girl isn’t always the way that actually works (I remember reading something about how the “traditional” wooing behavior made more sense in a context where you were mainly going after the approval of the girl’s parents, but I haven’t researched the subject in depth).
Much milder, “it’s better to ask for forgiveness than permission”, i.e. bypassing “official” hierarchy to get crap done
That some churches don’t care that much about the actual professed belief
For many students, networking and contacts are more useful for the future than the degree you get or what you learn in classes (that's not a very big secret, is it?)
When it’s OK to ask for certain fees to be waived, to ask for a discount, to haggle
When it’s OK to bribe someone (probably much more relevant in less-industrialized countries)
A lot of stuff is probably specific to a culture, or even to an organization.
Yes, these are all good examples. Some other ones that come to mind are:
Traffic rules: the ones that other drivers expect you to follow and cops actually enforce are significantly different from the formal ones. (For example, speed limits.)
Dealing with bureaucracies, both governmental and private ones. Their real operational rules are usually different from the formal ones, and you can use this not only to save time and effort, but also to exploit all kinds of opportunities that theoretically shouldn’t exist at all.
Excusing your offenses and failures by presenting them as something that, while clearly not good, is still within the bounds of what happens to reasonable, respectable, high-status people. If you pull this off successfully, people will be much more forgiving, and the punishments and reputational consequences far milder—and you can be much bolder in your endeavors, knowing that you have this safety exit if you’re unlucky. This basically means exploiting people’s unwritten practical rules for judgment, which may treat very differently things that are theoretically supposed to be equally bad.
The exact bounds to which you can push self-promotion without risking being exposed as a liar and cheater. This is essential since if you’re not an extraordinary achiever whose deeds speak for themselves, you’re stuck in a nasty arms race in which everyone is putting spin and embellishing the truth. However, it’s far from clear which rules determine in practice where exactly this stops being business as usual and enters dangerous territory.
By the way, my thoughts on this matter were at one point stimulated by this shrewd quote by Lord Keynes:
One distinction I always liked might be called “fertile” vs. “focused” intelligence.
Fertile intelligence is the ability to come up with ideas. People with fertile minds come up with lots of startling, original ideas … and often lots of wrong ones. They’re quick to recognize analogies (“Wow, X works just like Y!”). On the other hand they can get hung up on non sequiturs and ideas that seem neat but don’t make sense. Maybe the real underlying skill here is very perceptive and quick pattern-matching.
Focused intelligence is the ability to see what's important and ignore the rest. People with focused minds aren't great at coming up with original ideas; they solve problems by saying "What's this problem really about?" and attacking it head-on. They're good at constructing logical arguments because they don't get sidetracked. They tend to be very predictable. They ignore things that "just might work" and stay where most of the probability mass is.
When a “fertile” mind solves a problem, you look at the solution and think “Wow, how did she ever think to do it that way?” When a “focused” mind solves a problem, you look at the solution and think “Wow, that’s the only natural way to do it… why didn’t everybody do it this way before?”
John Stuart Mill claimed that women tended to have fertile minds while men had focused minds (the words are my own, but I think I’m capturing his descriptions.) I’m not sure if it’s true generally of men and women, but I do see people fall into those types.
Just ran across this quote from John Holt and thought it might apply to this discussion:
I’d say somewhere between your “categorisation” and “thinking outside the box” axes would be taking ideas from one domain and figuring out their applications in others. That would be one example of creativity being aided by powerful memory.
Not primitive enough. I’d subdivide it at least into something like “people-reading”, “mind-theorising”, “interest-tracking” and “performance”. There are plenty of cases of people being good at one of those but terrible at another.
I also think expressive and language skills could usefully be added to the list. Musical and kinesthetic aptitudes could be added for completeness, although they're probably not within the scope of the "intelligence" you seem to be talking about.
I’d phrase it as people who are good at thinking (who can follow instructions, think outside the box, have a good short-term memory, etc.) failing to make good decisions, which can be attributed to several causes:
1) The decision doesn’t require thinking as much as “hard-coded” instincts or experience in that domain—this would cover a lot of social domains like seduction, negotiation, reading people’s mood, getting out of a fight, reassuring someone who’s afraid. In some cases, relying on thinking can make things worse.
2) The decision requires thinking, but for some reason thinking isn’t actually used, or is overridden by emotion, or is used to shoot oneself in the foot. This would include things like playing the lottery, choosing a career based on a superficial impression and no research, or coming up with clever reasons to keep believing in the pyramid scheme you signed up for. Thinking about important and personal things might be especially painful.
I think your comments on medical doctors go somewhat too far. Doctors who approach medicine with an engineering perspective of “how can I fix this” are stupid—the effects of most interventions are subtle or counter-intuitive and thus can only be reliably determined by quality clinical trials.
Much of being a doctor comes down to pattern recognition—what you have consciously decided to memorise is only part of the story and lays only the foundations for future learning. For instance, even with the textbook in front of you I doubt most could competently perform a clinical examination—it is often difficult to tell the difference between normal variation and a pathological sign.
Performing a procedure is also not as simple as consulting a gigantic look-up table. You also neglect that many medical doctors will be involved with research at some point in their careers, as research plays a huge part in this profession.
Medical doctors must also apply their EQ to treat well, which is not rote-learned. I do agree, however, that having a large knowledge base is a key part of the profession, more so than for engineers and such.
Regarding thinking outside of the box, I do not think it would be anti-correlated with memory at all; in fact, the opposite. True thinking outside the box doesn’t happen by magic: it involves thinking about a problem and getting to know it intimately, and from there you can start to see new solutions. Additionally, I think I have read that memory correlates strongly with problem solving and other forms of intelligence, and it may be that memory and cognition are really applications of the same fundamental thing—I’ve not completed my studies of cognitive science yet, but it seems that information storage and computation aren’t truly separate in neural networks.
A doctor faces a patient whose problem has resisted decision-tree diagnosis—decision trees augmented by intangibles of experience and judgement, sure. The patient wants some creative debugging, which might at least fail differently. Will they get their wish? Not likely: what’s in it for the doctor? The patient has some power of exit, not much help against a cartel. To this patient, to first order, Phil Goetz is right, and your points partly elaborate why he’s right and partly list higher-order corrections.
(I did my best to put it dispassionately, but I’m rather angry about this.)
Um… what, so you’d rather have diagnoses that are not based upon data? Or a diagnosis which is made up versus no diagnosis? I don’t quite understand what you mean. Illnesses in the human body cannot be solved in the same way as an engineering problem, particularly at the margins. Most of the medical knowledge that could be derived without careful and large clinical trials is already known—I’m not sure what you expect a single doctor to do.
Furthermore, note that most patients will not die undiagnosed—bar situations, such as in geriatric patients, where many things are so suboptimal that you just can’t sort out what is killing them and what is just background noise. It is very rare that “creative debugging” would be of any use at all.
Secondly, many patients in a terminal situation often want more medicine. They feel that not treating with aggressive chemotherapy or some such treatment is giving up. This is not always the case; in terminal illnesses, palliative care is often the best option, and avoiding aggressive treatment will in fact lead to a longer life. No amount of debugging will change that.
Let me stress once again that it is rare for a patient to die undiagnosed in a case where the correct diagnosis would have materially changed the outcome.
Related: What Intelligence Tests Miss happens to specifically use the expression “smart but acting stupid” (and seeks to explain it via various biases and mindware problems).
Upvoted because I think this approach is useful.
There’s another axis of intelligence that I’ve frequently noticed: the ability to imagine consequences.
Citation needed.
What if the instructions say, loosely speaking, “observe, plan, act, measure, adapt”?
If the instructions say that, and people don’t follow the “measure, adapt” part, then they’re not good at following instructions.
If the instructions don’t say that, then they’re lousy instructions.
Do you intend this in the sense of taking ideas seriously? I would agree that such an ability merits its own axis.
Ya got me. I can’t support this as a claim. Consider it a hypothesis.
I’m assuming you know of Gardner?
Nope. Thanks. Those intelligences are all very broad—kinesthetic intelligence exists, and can make you rich; but it’s not of interest to me in this context. It doesn’t surprise me that someone can be a great baseball player, a great musician, a great poet, or a great painter or architect, and still say stupid things. The only ones listed by Gardner that interest me in this context are Linguistic, Mathematical, and Intrapersonal.
I don’t think “naturalistic” and “musical” intelligence belong even in Gardner’s list. Those are skills people practice.
Do you think Gardner’s theory solves the problem Phil puts forth? If so, how? Or are you saying “maybe it solves it, maybe it doesn’t, but it’s related, so let’s look at it for extra insight into the problem”?
Ah, right. Let me clarify. I’m just saying that it’s clearly related. Lots of relevant research on this question has been done in the context of Gardner’s theory.
I am skeptical of the claim that Gardner has a theory. Lots of papers cite Gardner, but I am skeptical of the claim that it is useful to label them “research.”
As something of a hobby psychoceramicist I found this to be a fascinating suggestion. Thank you :-)
I have always thought that noticing success and failure was a key part of being a great rationalist. How can you improve if you cannot tell a success from a failure? How can you decide to update or say oops?
There are many instances of great rationalists who never admit to being wrong, even after being demonstrated convincingly to be wrong.
Can you give an example? My initial reaction is that I could only consider such rationalists decent, and only if their failure to update was in a narrow field. Great rationalists should be good at epistemic rationality and should be able to update based on convincing evidence.
One of the reasons I dislike “rationalist” as a term is that it tends to produce namespace collisions between its senses of “one who practices rationality” and “one who produces results useful to rational decision-making”. I suspect one such collision is responsible for this particular confusion.
I am not sure I follow your second definition; let me reword part of your two definitions to make sure I parsed them correctly.
“one who practices rationality” vs “one who produces results considered rational in retrospect”
Do these match your pair?
I was trying to make more of a use/implementation distinction. People around here frequently use the word “rationalist” to refer to the people involved in creating or popularizing the theory of rationality, but it often happens that those people failed to fully internalize their theory, applied it only selectively, or (generously) lived in a cultural environment that limited its full expression.
Your pair also looks like a useful distinction, but I’d break that one down more in terms of conscious awareness of the art. A lot of disciplines demand aspects of instrumental rationality, but producing good results in them isn’t necessarily the result of a formalizable process, so I don’t think it’s proper to speak of every high-level businessman or professional poker player, say, as a master rationalist.
I agree completely. I do not think of them as my pair, they were just a tool to help understand your pair.
I now think I understand the pair you were trying to communicate. When I read “great rationalist,” I think of someone who has successfully applied rationality over a great breadth of their life. Those who practice rationality but do not produce results useful to rational decision-making, and those who produce such results but do not practice rationality, have both implemented rationality in only a limited breadth of their lives, and I would not describe either as a great rationalist, at least when keeping all other variables equal.
Fair enough, but I’m not trying to establish a definition, only to point out that people here use the word to indicate both components alone as well as their conjunction, and that doing so has the potential to generate confusion.
I offer the following as a data point for calibration: I think you had already communicated effectively that you were not trying to establish a definition.
I also agree.
I can give examples of great rationalists who never admitted to being wrong in particular debates. Some defenders of the phlogiston theory whose names I can’t remember. Einstein. Numerous famous biologists in their attacks on sociobiology. Other famous biologists in their attacks on group selection. I’m pretty sure you could come up with a nonending stream of examples if you studied the history of science.
There are a few people here on LessWrong whom I think are great rationalists, in an absolute sense; but who AFAIK have never acknowledged being caught in a mistake. That indicates something wrong. Even if you really never have made a mistake, that would indicate that you haven’t tried anything hard.
(P.S. - You get no credit for changing your mind by realizing your error yourself, partial credit for changing your mind after reading something written by a non-threatening dead person, and full credit for admitting to someone during an argument that they were right and you were wrong.)
I’d agree with that assessment.
But I give plenty of credit for that. :)
Ok, I think I understand your meaning of “great rationalist” now. You are talking about people who helped humanity make great advances but at some point claimed certainty where they should have said “I don’t know.” They failed to discern the edge of their knowledge, ran off a cliff and then denied it. Would that be fair?
Am sceptical—everyone makes mistakes—that’s an important way of learning.
Surely everyone has at least one “mistake” anecdote, that’s innocent enough to confess to—even if they mostly want to signal their flawlessness.
There are many instances of great rationalists who never admit to being wrong in one particular debate, even after being demonstrated convincingly to be wrong.
A much weaker thesis.
Perhaps don’t be too hard on people for what they don’t do.
Instead, treat each message you do receive as a blessing!
This. Upvoted.