Recently I have been thinking about imaginary expertise. It seems remarkably easy for human brains to conflate “I know more about this subject than most people” with “I know a lot about this subject”. LessWrongers read widely over many areas, and as a result I think we are more vulnerable to doing this.
It’s easy for a legitimate expert to spot imaginary expertise in action, but do principles exist to identify it, both in ourselves and others, if we ourselves aren’t experts? Here are a few candidates for spotting imaginary expertise. I invite you to suggest your own.
Rules and Tips vs Principles
At some point, a complex idea from [topic] was distilled down into a simple piece of advice for neonates. One of those neonates took it as gospel, and told all their friends how this advice formed the fundamental basis of [topic]. Examples include “if someone touches their nose, they’re lying” and “never end a sentence with a preposition”.
If someone offers a rule like this, but can’t articulate a principled basis for why it exists, I tend to assume they’re an imaginary expert on the subject. If I can’t offer a principled basis for any such rule I provide myself, I should probably go away and do some research.
Grandstanding over esoteric terminology
I’ve noticed that, when addressing a lay audience, experts in fields I’m familiar with rarely invoke esoteric terminology unless they have to. Imaginary experts, on the other hand, seem to throw around the most obscure terminology they know, often outside of a context where it makes sense.
I suspect being on the receiving end of this feels like Getting Eulered, and dishing it out feels like “I’m going to say something that makes you feel stupid”.
Heterodoxy
I have observed that imaginary experts often buy into the crackpot narrative to some extent, whereby established experts in the field are all wrong, or misguided, or slaves to an intellectually-bankrupt paradigm. This conveniently insulates the imaginary expert from criticism over not having read important orthodox material on the subject: why should they waste their time reading such worthless material?
In others, this probably rings crackpot-bells. In oneself, this is presumably much more difficult to notice, and falls into the wider problem of figuring out which fields of inquiry have value. If we have strong views on an established field of study we’ve never directly engaged in, we should probably subject those views to scrutiny.
I agree with what you wrote. Having said that, let’s go meta and see what happens when people use the “rules and tips” you have provided here.
A crackpot may explain their theory without using any scientific terminology, even where a scientist would be forced to use some. I have seen many people “disprove” the theory of relativity without using a single equation.
If there is a frequent myth in your field that most half-educated people believe, trying to debunk that myth will sound very similar to a crackpot narrative. Or if there was an important change in your field 20 years ago, and most people haven’t heard about it yet, but many of them have read the older books written by experts, explaining the change will also sound like contradicting all the experts.
In response to your second point, I’ve found “field myths” to be quite digestible by everyday folk when put in the right context. The term “medical myth” seems to be in common parlance, and I’ve occasionally likened such facts to people believing women have more ribs than men (i.e. something that lots of people have been told, and believe, but which is demonstrably false).
It does seem a bit hazardous to have “myths” as a readily-available category to throw ideas in, though. Such upstanding journalistic tropes as Ten Myths About [Controversial Political Subject] seem to teach people that any position for which they hold a remotely plausible counterargument is a “myth”.
Status of your Domain
There was a post some time ago, which I can’t find right now, that talked about the danger of one field looking down on other fields. The prototypical example is hard sciences looking down on soft sciences. The soft sciences are seen as less rigorous or less mathematically well-founded, and thus, if you belong to the higher-status domain, you think yourself better qualified to judge the presumably less rigorous fields.
It was this post by Robin Hanson. However, I am not sure that “status” (which suggests something like a one-dimensional order) is the right explanation here. For example, philosophers tend to say things about a lot of other fields, and people from other fields tend to say a lot of things about philosophy. Therefore, unless one considers all those fields to be of equal status, a one-dimensional order doesn’t seem applicable as an explanation.
It seems to me that:
if field A has a lot of widely applicable methods that are abstract enough and/or have enough modelling capability to model a wide variety of objects, including most/all objects that are the object of research of field B, then the experts of field A will often express their opinions about the object of research of field B, the theories of field B, and the state of field B itself. In other words, if field A has a widely applicable, powerful method, then experts of field A will try to apply this method to other fields as well. For example, physics has a lot of powerful methods that are applicable well beyond physics. Some ideas developed in philosophy are abstract enough to be (or seem to be) applicable to a wide variety of things.
Note, however, that in some cases these methods may fail to capture important features of the object of field B. On the other hand, experts of field B have incentives to claim that the methods of field A are not applicable to their field, because otherwise field B is redundant. In response, they may start to focus on the aspects of their field that are harder to model using field A’s tools, whether those aspects are important or not. Yet another interesting case is when there are two fields, A and A_1, whose methods are capable of modelling certain aspects of the object of research of field B. Then experts of A may denounce the application of A_1’s methods to field B (and vice versa) as failing to capture important things and contradicting the findings of field A.
if the object of research of field A is closely related to human experiences that are (very) common, and/or to concepts that are related to common-sense concepts (e.g. cover the same subject), then people from other fields will often express their opinions about the object of field A, the theories of field A, and field A itself. For example, many concepts of philosophy and psychology (and maybe most/all social sciences) are directly related to human experiences and common-sense concepts; therefore a layperson is more likely to speculate about the topics of those fields.
Well, of course there is the idea that the findings of two research fields A and B should not contradict each other. And there is this informal “purity” rank, i.e. if A is to the right of B, then experts of A will regard findings of B as wrong, whereas experts of B will regard theories of A as failing to capture important features of the object of research of their field. However, it doesn’t seem to me that all fields can be neatly ordered this way. One reason is that there are at least two possible partial orders. One is the “reductionist” way: if the object of field B is made out of the stuff that is the object of field A, then field A is to the right of field B. Another way is to arrange fields according to their methods: if field B borrows a lot of methods from field A, but not vice versa (in other words, the methods of A can be used to model the object of field B), then field B can be said to be to the left of A.
In some cases, these two orders do not coincide. For example, the object of economics is made out of the object of psychology; however, it is my impression that most subfields of economics borrow their methods directly from mathematics (or physics), i.e. in the second partial ordering psychology is not between economics and mathematics. By the way, conflating these two orders might give rise to intuitions that are not necessarily grounded in reality. For example, mathematics is probably to the right of physics when we order fields according to their methods (despite the fact that not all physicists’ results have been formalized yet). That gives us the intuition that mathematics is to the right of physics in the “reductionist” order as well, i.e. that the object of physics is made out of mathematical structures. Well, this might turn out to be true, but you should not proclaim it as casually as some people do.
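To make the two-orders point concrete, here is a minimal sketch in Python. The field names and edges are my own illustrative choices, toy simplifications rather than settled claims about these fields:

```python
# Toy illustration of two distinct partial orders on research fields.
# The edges below are illustrative simplifications, not settled claims.

# "Reductionist" relation: (B, A) means "the object of B is made of the object of A".
reductionist = {
    ("economics", "psychology"),
    ("psychology", "biology"),
    ("biology", "physics"),
}

# "Methods" relation: (B, A) means "B borrows its methods from A".
methods = {
    ("economics", "mathematics"),
    ("physics", "mathematics"),
    ("biology", "physics"),
}

def above(relation, lower, upper):
    """True if `upper` is reachable from `lower`, i.e. sits further
    'to the right' in the transitive closure of the relation."""
    frontier, seen = {lower}, set()
    while frontier:
        node = frontier.pop()
        if node == upper:
            return True
        seen.add(node)
        frontier |= {hi for lo, hi in relation if lo == node and hi not in seen}
    return False

# Psychology sits above economics in the reductionist order...
print(above(reductionist, "economics", "psychology"))  # True
# ...but not in the methods order: economics goes straight to mathematics.
print(above(methods, "economics", "psychology"))       # False
```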
Some indications can be taken from Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields:
Low Hanging Fruit Heuristic
if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change.
Thus: you are likely to have imaginary expertise if all the low-hanging fruit in your field has been taken.
Ideological/venal Interest Heuristic
You should be doubtful about your expertise when the things under discussion are ideologically charged, or are a matter in which powerful interest groups have a stake.
Note that these concern your expertise even if you actually are an expert.
Note also that these cut both ways: given a field of research, they reduce (ceteris paribus) both the trustworthiness of a layperson’s independent attempts to form an opinion about the field and the trustworthiness of the established experts in that field.
Making predictions about the subject matter and seeing whether they come true is one of the best ways to measure expertise.
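As a minimal sketch of what scoring such a track record could look like, here is one common option, the Brier score, in Python; the prediction data below is invented purely for illustration:

```python
# Toy illustration: scoring a track record of probabilistic predictions
# with the Brier score. The predictions below are invented, not real data.

# Each entry: (stated probability that the event happens, whether it happened)
predictions = [
    (0.9, True),
    (0.7, True),
    (0.8, False),
    (0.6, True),
    (0.3, False),
]

def brier_score(preds):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is a perfect record; always answering 0.5 scores 0.25."""
    return sum((p - float(outcome)) ** 2 for p, outcome in preds) / len(preds)

print(f"Brier score: {brier_score(predictions):.3f}")  # lower is better
```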
I blame World of Darkness circa 1998.
So kind of like an intellectual chuunibyou.
If we have strong views on an established field of study we’ve never directly engaged in, we should probably subject those views to scrutiny.
Sounds like a pretty good idea to me. You’ve listed some symptoms; I guess if you catch yourself engaging in such behaviour, there are potential solutions?
Avoid heterodoxy by reading opposing viewpoints while steelmanning as much as possible.
To avoid grandstanding via vocab, keep things simple. Make sure you can play taboo with words, or up goer five.
For rules and tips, make sure you’re clear with yourself on when your rules don’t apply as well as when they’re useful.
But really, it is a hard one to notice in yourself, and a hard one to fix without actually becoming an expert.
I have an ongoing project to read an introductory textbook for every subject I claim to be interested in. This won’t make me an expert in those subjects, but it should hopefully stop me saying things that cause actual experts to facepalm.
I present exhibit A.
(Meant lightly, but this really is a good example of phrasing that could be unnecessarily confusing to an average reader.)
I can see what you mean, but rephrasing what someone else has said is the opposite of using unnecessarily confusing phrasing, no? It’s just good practice, and a big part of what the guide to words stuff was all about.
But you’re right, I should probably put in a link for steelmanning, just in case.
Great insight. I added it to my Anki deck. I think this could have been a Discussion post of its own (even at this length).
If I collect any more indicators of imaginary expertise, I may assemble them into a discussion post.
You’ve got a worthwhile project there, but I gather that in sufficiently complex fields (like engineering) there are a lot of rules and tips because there isn’t a complete set of principles.
Can you provide any examples? I have introductory exposure to engineering, but can’t readily pinpoint the sort of rules or tips you’re referring to.
Page down to Assumptions, Predictions, and Simplifications.
I didn’t actually have any examples—the idea that engineering has methods which aren’t firmly based in math or physics was just something I’d heard enough times to have some belief that it was true. The link has some examples.
I’m not sure these are quite the same “rules and tips”. There’s still a rationale behind model assumptions that an expert could explain, even if that rationale is “this is essentially arbitrary, but in practice we find it works quite well”.
The reasoning for “if someone touches their nose, they’re lying” goes something like this: if someone lies, they feel anxiety. It’s well known that anxiety can produce effects like flushing, because of changes in blood circulation driven by tension, and some of those tension changes can make a person want to touch the corresponding area.
Something along those lines is the motivation for that tip. The problem is that there are many reasons why someone might touch their nose, and it’s not a good tell unless you’ve calibrated it for a particular person.
Of course, you could say that the fact that I know the explanation suggests that I know something about the topic. I’m probably better than average at lie detection. On the other hand, I haven’t practiced that skill enough to trust my ability to draw strong conclusions.
On the other hand, in biology there are many cases where I have true beliefs whose justification isn’t deeper than “the textbook says so”. There’s no underlying logic that can tell you what a neurotransmitter does in every case; evolution is quite random. In many cases I could tell you which textbook or university lecture is responsible for me believing a certain fact about biology.
Facts backed by empirical research are better than facts backed by theoretical reasoning like that in the nose-touching example.
I’ve heard a few justifications for the nose-touching example, and the one you provided is new to me.
The nose-touching example could alternatively have an empirical basis rather than a theoretical one. It came to mind because areas like body language, “soft skills” and the like are, in my experience, rife with people who’ve learned a list of such “facts”.
That’s a good start, now take those ideas about imaginary expertise and use them to do some introspection… :)
Anyway, it’s hard to clearly define what an ‘imagined expert’ is. All it means is that the person overestimates their knowledge of a subject, which affects ALL human beings, even bona-fide experts. Expertise in anything other than a very small slice of human knowledge is obviously impossible.
It’s easy to spot lack of knowledge in another person but not so easy to spot it in yourself.
About ‘crackpottery’: I don’t like labelling people as crackpots. Instead, it’s much better to talk about crackpot ideas. There is no distinction between ‘crackpot’ and ‘wrong’. All ‘crackpot’ means is that the idea has a very low likelihood of being true given our knowledge about the world, and this is just the definition of ‘wrong’. It is human nature to be immediately suspicious of consensus; that’s not the problem (in fact it’s part of the reason why science exists in the first place). The problem is when people try to push alternative agendas by presenting wrong information as correct information. That’s not a problem of imagined expertise; it’s a problem of willfully misleading others.
I don’t see sixes_and_sevens labeling people as crackpots:
I have observed that imaginary experts often buy into the crackpot narrative *to some extent*, whereby established experts in the field are all wrong, or misguided, or slaves to an intellectually-bankrupt paradigm.
(Emphasis added.)
In other words, it’s not that someone has or lacks the crackpot label. Rather, there is a “crackpot narrative”, a sort of failure mode of reasoning, which people can subscribe to (or repeat to themselves or others) to a greater or lesser extent.
The difference is significant. It’s like the difference between saying “Joe is a biased person” and saying “Joe sure does seem to exhibit fundamental attribution error an awful lot of the time, doesn’t he?”