I’m pretty sure an outside view would say it is LWers rather than domain experts who are more likely to be wrong, even when accounting for the selection confound Carl Shulman notes: I don’t think many people have prior convictions about decision theory before they study it.
I’ve noted it previously, but when the LW consensus is that certain views are not just correct but settled questions (obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism, etc.), despite the balance of domain experts disagreeing with said consensus, this screams Dunning-Kruger effect.
I don’t think this is true in every domain. If the domain is bridge building, for example, I have some confidence that the domain experts have built a bridge or two and know what it takes to keep them up and running; if they didn’t, they wouldn’t have a job. That is, bridge building is a domain in which you are forced to repeatedly make contact with reality, and that keeps your thoughts about bridge building honest. Many domains have this property, but not all of them do. Philosophy is a domain that I suspect may not have this making-contact-with-reality property (philosophers are not paid to resolve philosophical problems, they are paid to write philosophy papers, which means they’re actually incentivized not to settle questions); some parts of martial arts might be another, and some parts of psychotherapy might be a third, just so it doesn’t sound like I’m picking on philosophy uniquely.
I agree with the signs of the effects you suggest re. philosophers being incentivized to disagree, but that doesn’t explain (taking the strongest example of my case, two-boxing) why the majority of philosophers take the objectively less plausible view.
But plausibly LWers have the same sort of effects explaining their contra-philosophy-experts consensus. Also I don’t see how the LWers are more likely to be put in touch with reality re. these questions than philosophers.
I don’t think many people have prior convictions about decision theory before they study it.
You picked literally the most extreme case, where 52.5% of undergraduates answered “insufficiently familiar,” followed by 46.1% for A- vs B-theory of time. The average for all other questions was just under 12%: 8.8% for moral realism, 0.9% for free will, 0% for atheism.
For Newcomb most undergrads are not familiar enough with the problem to have an opinion, but people do have differing strong intuitions on first encountering the problem. However, the swing in favor of two-boxing for Newcomb from those undergrads with an opinion to target faculty is a relatively large change in the ratio of support, from 16:18 to 31:21. Learning about dominance arguments and so forth really does sway people.
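To make that swing concrete, here is a minimal back-of-the-envelope sketch (assuming, as I read the figures, that 16:18 and 31:21 are the two-box:one-box percentages, with the remainder undecided or unfamiliar):

```python
# Back-of-the-envelope: share of two-boxers among respondents who picked a box,
# assuming 16:18 and 31:21 are two-box:one-box percentages (rest undecided/other).
def two_box_share(two_box: float, one_box: float) -> float:
    """Fraction of two-boxers among those expressing either preference."""
    return two_box / (two_box + one_box)

undergrads = two_box_share(16, 18)   # ~0.47
faculty = two_box_share(31, 21)      # ~0.60
print(f"undergrads: {undergrads:.0%}, target faculty: {faculty:.0%}, "
      f"swing: {faculty - undergrads:+.0%}")
# -> undergrads: 47%, target faculty: 60%, swing: +13%
```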
I just looked through all the PhilPapers survey questions, comparing undergrads vs. target faculty with the coarse breakdown. For each question I selected the plurality non-“Other” option (where “Other” included insufficient knowledge, not sure, etc.), and recorded the swing in opinion from philosophy undergraduates to philosophy professors, to within a point.
Now, there is a lot of selection filtering between undergraduates and target faculty; the faculty will tend to be people who think philosophy is more worthwhile, who are keen on graduate education, and who are smarter, with the associated views (e.g. atheism is higher at more elite schools and among those with higher GRE scores, which correlate with becoming faculty). This is not a direct measure of the effect of philosophy training and study on particular people, but it’s still interesting as suggestive evidence about the degree to which philosophical study and careers inform (or otherwise influence) philosophical opinion.
In my Google Doc I recorded an average swing from undergraduates to target faculty of ~10% in the direction of the target faculty plurality, which is respectable but not huge. Compatibilism rises 18 points, atheism 10 points, moral realism 12 points, physicalism 4 points, two-boxing by 15, deontology by 10, egalitarianism by 10. Zombies and personal identity/teletransporter barely move. The biggest swing is ~30 points in favor of non-skeptical realism about the external world.
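As a rough sanity check on that ~10% figure, here is a sketch that averages only the swings quoted above (treating the zombie and teletransporter questions as ~0-point swings, per “barely move”; the actual average was taken over all survey questions, so this is only illustrative):

```python
# Rough sanity check: average of the per-question swings quoted above,
# with zombies and the teletransporter case treated as ~0 ("barely move").
swings = {
    "compatibilism": 18, "atheism": 10, "moral realism": 12, "physicalism": 4,
    "two-boxing": 15, "deontology": 10, "egalitarianism": 10,
    "zombies": 0, "teletransporter": 0, "non-skeptical realism": 30,
}
average = sum(swings.values()) / len(swings)
print(f"average swing over these {len(swings)} questions: {average:.1f} points")
# -> average swing over these 10 questions: 10.9 points
```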
That said, I agree that the LWers who answered the survey questions in a LW thread were overconfident, that the average level of philosophical thinking here is of lower quality than you would find in elite philosophy students and faculty (although not uniformly, if for no other reason than that some such people read and comment at LW), and that some prominent posters are pretty overconfident (although note that philosophers themselves tend to be very confident in their views despite the similarly confident disagreement of their epistemic peers with rival views, far more than your account would suggest is reasonable, or than I would).
Please cite the specific part of the original Dunning-Kruger paper which would apply here. I don’t think you’ve read it or understand what the effect actually is.
From the abstract:

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
The paper’s results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others) is what most people label Dunning-Kruger, and that is applicable.
To spell it out (in case I’ve misunderstood what Dunning-Kruger is supposed to connote), the explanation I was suggesting was:
LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a ‘fully and completely dissolved problem’ on the wiki via compatibilism, despite 30% or whatever of specialists disagreeing with it). When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.
Yet objectively/outside-view-wise, the philosophers who specialize in (for example) free will are by far epistemically superior to LWers on questions of free will: they’ve spent much more time thinking about it, read much more of the relevant literature, have much stronger credentials in philosophy, etc. Furthermore, the reasons offered by LWers as to why (for example) compatibilism is obviously true are pretty primitive (and already responded to) compared to the discussion in academia.
So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers. Hence the response to expert disagreement with them is to assert the experts must be systematically irrational/biased etc.
So, as I thought: you had not read it before, or you would not be quoting the abstract at me, or rather, would be quoting more relevant parts from the paper.
The paper’s results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others) is what most people label Dunning-Kruger, and that is applicable.
No, it is not. If you had actually read the paper, you would have learned that this is not directly applicable and that there’s no reason to expect even an indirect applicability. From the full abstract, which you chose not to quote, we immediately find at least two areas where DK should break:
Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability.
The average LWer—never mind the people doing most of the commenting and posting—is easily in the 95th+ percentile on logic and grammar.
Besides that, LW is obsessed with ‘meta’ issues, which knocks out the ‘lack of metacognitive ability’ which is the other scissor of DK.
Thirdly, DK is generally thought to apply when there is no feedback which can compensate for the imperfect self-assessment; however, LW is notorious for being highly critical and fractious and agreeing on very little (the surveys reveal that we can’t even agree on atheism!).
Fourth, the part of DK you don’t focus on is how the top quartile reliably underestimates its own performance (see the graphs on pp. 1124–1126). Unless you have an objective indicator that LWers are very bad at philosophy—and I would note here that LWers routinely exceed the performance I observed of my philosophy classmates and even of published philosophy papers I’ve read, like the dreck that gets published in JET, where I spent more than a few posts here going through and dissecting individual papers—it is at least as plausible that LWers are actually underestimating their performance. The top quartile, by the way, in the third experiment actually increased its self-assessed performance by observing the performance of others, and in the fourth experiment this was due to overestimating the performance of others before observing their actual performance. Application of this to LW is left as an exercise to the reader...
LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a ‘fully and completely dissolved problem’ on the wiki via compatibilism, despite 30% or whatever of specialists disagreeing with it).
A wiki page is a wiki page. If you were informed about LW views, you would be citing the surveys, which are designed for that purpose.
(And are you sure that 30% is right there? Because if 30% disagree, then 70% agree...)
When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.
Experts think much the same thing: philosophers have always been the harshest critics of philosophers. This does not distinguish LWers from philosophers.
So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers.
As I’ve shown above, none of that holds, and you have badly distorted the DK research to fit your claims. You have not read the paper, you do not understand why it applies, you have no evidence for your meta thesis aside from disagreeing with an unknown and uncited fraction of experts, and you are apparently unaware of your ignorance on these points.
Compatibilism doesn’t belong on that list; a majority of philosophers surveyed agree, and it seems like most opposition is concentrated within Philosophy of Religion, which I don’t think is the most relevant subfield. (The correlation between philosophers of religion and libertarianism was the second highest found.)
True, but LW seems to be overconfident in compatibilism compared to the spread of expert opinion. It doesn’t seem like it should be considered ‘settled’ or ‘obvious’ when >10% of domain experts disagree.
I’m pretty sure an outside view would say it is LWers rather than domain experts who are more likely to be wrong, even when accounting for the selection confound Carl Shulman notes: I don’t think many people have prior convictions about decision theory before they study it.
I observe that in some cases this can be both a rational thing to believe and simultaneously wrong. (In fact this is the case whenever either a high-status belief is incorrect or someone is mistaken about the relevance of a domain of authority to a particular question.)
I’ve noted it previously, but when the LW consensus is that certain views are not just correct but settled questions (obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism, etc.), despite the balance of domain experts disagreeing with said consensus, this screams Dunning-Kruger effect.
It does scream that. Indeed, for anyone who has literally no other information than that a subculture holds a belief along those lines which contradicts an authority the observer has reason to trust more, Dunning-Kruger is prompted as a likely hypothesis.
Nevertheless: Obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism!
The ‘outside view’ is useful sometimes, but it is inherently, by design, about what one would believe if one were ignorant. It is reasoning as though one does not have access to most kinds of evidence but is completely confident in beliefs about reference class applicability. In particular, in this case it would require being ignorant not merely of LessWrong beliefs but also of the philosophy, philosophy of science, and sociology literature.
Not sure how helpful this is, but my knowledge of these fields tends to confirm that LW arguments on these topics recapitulate work already done in the relevant academic circles, but with far inferior quality.
If LWers look at a smattering of academic literature and think the opposite, then fair enough. Yet I think LWers generally form their views on these topics based on LW work, and do not look at even some of the academic work on the same topics. If so, I think they should take the outside view argument seriously, as their confidence in LW work doesn’t confirm the ‘we’re really right about this because we’ve got the better reasons’ explanation over the Dunning-Kruger explanation.
Fair point.