Please cite the specific part of the original Dunning-Kruger paper which would apply here. I don’t think you’ve read it or understand what the effect actually is.
From the abstract:

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
The paper’s results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others) is what most label Dunning-Kruger, and it is applicable.
To spell it out (in case I’ve misunderstood what Dunning-Kruger is supposed to connote), the explanation I was suggesting was:
LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a ‘fully and completely dissolved problem’ on the wiki via compatibilism, despite 30% or whatever of specialists disagreeing with it). When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.
Yet objectively, from an outside view, the philosophers who specialize in (for example) free will are by far epistemically superior to LWers on questions of free will: they’ve spent much more time thinking about it, read much more of the relevant literature, and have much stronger credentials in philosophy. Furthermore, the reasons offered by LWers as to why (for example) compatibilism is obviously true are pretty primitive (and already responded to) compared to the discussion in academia.
So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers. Hence the response to expert disagreement with them is to assert the experts must be systematically irrational/biased etc.
So, as I thought: you had not read it before, or you would not be quoting the abstract at me, or rather, would be quoting more relevant parts from the paper.
The paper’s results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others) is what most label Dunning-Kruger, and it is applicable.
No, it is not. If you had actually read the paper, you would have learned that it is not directly applicable and that there’s no reason to expect that there would even be an indirect applicability. From the full abstract, which you chose not to quote, we immediately find at least two areas where DK should break:
Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability.
The average LWer—never mind the people doing most of the commenting and posting—is easily in the 95th+ percentile on logic and grammar.
Besides that, LW is obsessed with ‘meta’ issues, which knocks out the ‘lack of metacognitive ability’ which is the other scissor of DK.
Thirdly, DK is generally thought to apply when there is no feedback which can compensate for the imperfect self-assessment; however, LW is notorious for being highly critical and fractious and agreeing on very little (the surveys reveal that we can’t even agree on atheism!).
Fourth, the part of DK you don’t focus on is how the top quartile reliably underestimates its own performance (see the graphs on pp. 1124-1126). Unless you have an objective indicator that LWers are very bad at philosophy—and I would note here that LWers routinely exceed the performance I observed of my philosophy classmates and even of published philosophy papers I’ve read, like the dreck that gets published in JET, where I spent more than a few posts here going through and dissecting individual papers—it is at least as plausible that LWers are actually underestimating their performance. The top quartile, by the way, actually increased its self-assessed performance in the third experiment after observing the performance of others, and in the fourth experiment this was shown to be due to overestimating the performance of others before observing their actual performance. Application of this to LW is left as an exercise for the reader...
LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a ‘fully and completely dissolved problem’ on the wiki via compatibilism, despite 30% or whatever of specialists disagreeing with it).
A wiki page is a wiki page. If you were informed about LW views, you would be citing the surveys, which are designed for that purpose.
(And are you sure that 30% is right there? Because if 30% disagree, then 70% agree...)
When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.
Experts think much the same thing: philosophers have always been the harshest critics of philosophers. This does not distinguish LWers from philosophers.
So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers.
As I’ve shown above, none of that holds, and you have badly distorted the DK research to fit your claims. You have not read the paper, you do not understand when it applies, you have no evidence for your meta thesis aside from disagreeing with an unknown and uncited fraction of experts, and you are apparently unaware of your ignorance on these points.