I also disagree with philosophers, disproportionately regarding their own areas of expertise, but the pattern of reasoning here is pretty suspect. The observation is: experts are uniformly less likely to share LW views than non-experts. The conclusion is: experts are no good.
I think you should tread carefully. This is the sort of thing that gets people (and communities) in epistemic trouble.
ETA: more analysis here, using the general undergrad vs target faculty comparison, instead of comparing grad students and faculty within an AOS.
This should be taken very seriously. In the case of philosophy of religion I think what’s happening is a selection effect: people who believe in theist religion are disproportionately likely to think it worthwhile to study philosophy of religion, i.e. the theism predates their expertise in the philosophy of religion, and isn’t a result of it. Similarly, moral anti-realists are going to be less interested in meta-ethics, and in general people who think a field is pointless or nonsense won’t go into it.
Now, I am going to try to test that for religion, meta-ethics, and decision theory by comparing graduate students with a specialty in the field to target (elite) faculty with specialties in the field in the PhilPapers data, available at http://philpapers.org/surveys/results.pl . It looks like target faculty philosophers of religion and meta-ethicists are actually less theistic and less moral realist than graduate students specializing in those areas, suggesting that selection effects rather than learning explain the views of these specialists. There weren’t enough data points for decision theory to draw conclusions. I haven’t tried any other analyses or looked at other subjects yet, or otherwise applied a publication bias filter.
Graduate students with philosophy of religion as an Area of Specialization (AOS):
God: theism or atheism?
Accept: theism 29 / 43 (67.4%)
Lean toward: theism 4 / 43 (9.3%)
Lean toward: atheism 3 / 43 (7.0%)
Accept: atheism 2 / 43 (4.7%)
Agnostic/undecided 1 / 43 (2.3%)
There is no fact of the matter 1 / 43 (2.3%)
Accept another alternative 1 / 43 (2.3%)
Accept an intermediate view 1 / 43 (2.3%)
Reject both 1 / 43 (2.3%)
Target faculty with philosophy of religion as AOS:
God: theism or atheism?
Accept: theism 30 / 47 (63.8%)
Accept: atheism 9 / 47 (19.1%)
Lean toward: theism 4 / 47 (8.5%)
Reject both 2 / 47 (4.3%)
Agnostic/undecided 2 / 47 (4.3%)
Graduate students with a metaethics AOS:
Meta-ethics: moral realism or moral anti-realism?
Accept: moral realism 50 / 116 (43.1%)
Lean toward: moral realism 25 / 116 (21.6%)
Accept: moral anti-realism 19 / 116 (16.4%)
Lean toward: moral anti-realism 9 / 116 (7.8%)
Agnostic/undecided 4 / 116 (3.4%)
Accept an intermediate view 4 / 116 (3.4%)
Accept another alternative 3 / 116 (2.6%)
Reject both 2 / 116 (1.7%)
Target faculty with a meta-ethics AOS:
Meta-ethics: moral realism or moral anti-realism?
Accept: moral realism 42 / 102 (41.2%)
Accept: moral anti-realism 17 / 102 (16.7%)
Lean toward: moral realism 15 / 102 (14.7%)
Lean toward: moral anti-realism 10 / 102 (9.8%)
Accept an intermediate view 7 / 102 (6.9%)
The question is too unclear to answer 6 / 102 (5.9%)
Accept another alternative 3 / 102 (2.9%)
Agnostic/undecided 2 / 102 (2.0%)
Graduate students in decision theory:
Newcomb’s problem: one box or two boxes?
Accept: two boxes 3 / 9 (33.3%)
Accept another alternative 1 / 9 (11.1%)
Accept an intermediate view 1 / 9 (11.1%)
Lean toward: one box 1 / 9 (11.1%)
Accept: one box 1 / 9 (11.1%)
Insufficiently familiar with the issue 1 / 9 (11.1%)
The question is too unclear to answer 1 / 9 (11.1%)
Target faculty in decision theory:
Newcomb’s problem: one box or two boxes?
Accept: two boxes 13 / 31 (41.9%)
Accept: one box 7 / 31 (22.6%)
Lean toward: two boxes 6 / 31 (19.4%)
Other 2 / 31 (6.5%)
Agnostic/undecided 2 / 31 (6.5%)
Lean toward: one box 1 / 31 (3.2%)
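To make the grad-vs-faculty comparison concrete, the combined “accept or lean toward” shares can be tallied straight from the counts above. A minimal sketch (purely illustrative; the function and variable names are arbitrary):

```python
# Tally "accept or lean toward" shares from the PhilPapers counts quoted above:
# philosophy-of-religion grad students vs target faculty (theism), and
# meta-ethics grad students vs target faculty (moral realism).

def share(accept, lean, total):
    """Fraction of respondents who accept or lean toward the view."""
    return (accept + lean) / total

print(f"theism, PoR grad students:   {share(29, 4, 43):.1%}")   # 76.7%
print(f"theism, PoR target faculty:  {share(30, 4, 47):.1%}")   # 72.3%
print(f"realism, ME grad students:   {share(50, 25, 116):.1%}") # 64.7%
print(f"realism, ME target faculty:  {share(42, 15, 102):.1%}") # 55.9%
```

Both shares drop slightly going from graduate students to target faculty, which is consistent with the selection-effect reading above (the specialists’ views predate their expertise rather than resulting from it).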
I’ll give you a slightly different spin on the bias. More evolutionary bias than selection bias.
People who assert that a field is worthwhile are more likely to be successful in that field.
We actually see this across a lot of fields besides philosophy, and it’s not LW-specific. For example, simply adding up a few simple scores does better than experts at predicting job performance.
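For reference, the “adding up a few simple scores” approach is essentially a unit-weighted (“improper”) linear model. A minimal sketch, with hypothetical predictor names and toy numbers used purely for illustration:

```python
# Unit-weighted ("improper") linear model: standardize each predictor and add
# them up with equal weights, rather than asking an expert to weigh them.
# Predictor names and candidate values are made up for illustration only.
from statistics import mean, pstdev

PREDICTORS = ["work_sample", "structured_interview", "cognitive_test"]

candidates = [
    {"name": "A", "work_sample": 70, "structured_interview": 3, "cognitive_test": 110},
    {"name": "B", "work_sample": 85, "structured_interview": 4, "cognitive_test": 120},
    {"name": "C", "work_sample": 60, "structured_interview": 5, "cognitive_test": 100},
]

# Per-predictor mean and standard deviation, so scores share a common scale.
stats = {p: (mean(c[p] for c in candidates), pstdev(c[p] for c in candidates))
         for p in PREDICTORS}

def unit_weighted(c):
    return sum((c[p] - stats[p][0]) / stats[p][1] for p in PREDICTORS)

for c in sorted(candidates, key=unit_weighted, reverse=True):
    print(c["name"], round(unit_weighted(c), 2))
```

(The claim above is just that this kind of mechanical combination tends to match or beat holistic expert judgment; the particular predictors here are placeholders, not a recommendation.)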
It’s been shown that expertise is only valuable in fields where there is a short enough and frequent enough feedback loop for a person to actually develop expertise—and there is something coherent to develop the expertise in. Outside of such fields, experts are just blowhards with status.
Given the nature of the field, the prior expectation for philosophers having any genuine expertise at anything except impressing people should be set quite low. (Much like we should expect expert short-term stock pickers to not be expert at anything besides being lucky.)
Of course, one could argue that LW regulars get even less rapid feedback on these issues than the professional philosophers do. The philosophers at least are frequently forced to debate their ideas with people who disagree, while LW posters mostly discuss these things with each other—that is, with a group that is self-selected for thinking in a similar way. We don’t have the kind of diversity of opinion that is exemplified by these survey results.
This seems right to me.
However, see my comment above for evidence suggesting that the views of the specialists are those they brought with them to the field (or have shifted away from the plurality view), i.e. that the skew of views among specialists is NOT due to such feedback.
What do you think philosophy is lacking? An (analytical) philosopher who makes a logic error is hauled up very quickly by their peers. That’s your feedback loop. So is “something coherent” lacking? Phil. certainly doesn’t have a set of established results like engineering, or the more settled areas of science. It does have a lot of necessary skill in formulating, expressing and criticising ideas and arguments. Musicians aren’t non-experts just because there is barely such a thing as a musical fact. Philosophy isn’t broken science.
OK, so philosophers manage to avoid logical errors. Good for them. However, they make more complicated errors (see A Human’s Guide To Words for some examples), as well as sometimes errors of probability. The thing that philosophers develop expertise in is writing interesting arguments and counterarguments. But these arguments are castles built on air; there is no underlying truth to most of the questions they ask (or, if there is an underlying truth, there is no penalty for being wrong about it). And even some of the “settled” positions are only settled because of path-dependence—that is, once they became popular, anyone with conflicting intuitions would simply never become a philosopher (see Buckwalter and Stich for more on this).
Scientists (at least in theory) have all of the same skills that philosophers should have—formulating theories and arguments, catching logical errors, etc. It’s just that in science, the arguments are (when done correctly) constrained to be about the real world.
How do you know?
How do you know? Are you aware that much philosophy is about science?
To be fair, I have not done an exhaustive survey; “most” was hyperbole.
Sure. But there is no such constraint on philosophy of science.
Why is that a problem? Science deals with empirical reality, philosophy of science deals with meta-level issues. Each to their own.
Because if there is no fact of the matter on the “meta-level issues”, then you’re not actually dealing with “meta-level issues”. You are dealing with words, and your success in dealing with words is what’s being measured. Your argument is that expertise develops by feedback, but the feedback that philosophers get isn’t the right kind of feedback.
I don’t know what you mean by “fact of the matter”. It’s not a problem that meta-level isn’t object level, any more than it’s a problem that cats aren’t dogs. I also don’t think that there is any problem in identifying the meta level. Philosophers don’t “deal with words” in the sense that linguists do. They use words to do things, as do many other specialities. You seem to be making the complaint that success isn’t well defined in philosophy, but that would require treating object-level science as much more algorithmic than it actually is. What makes a scientific theory a good theory? Most scientists agree on it?
An actual truth about the world.
Have you read A Technical Explanation of Technical Explanation?
I don’t know what you mean by that. Is Gresham’s law such a truth?
My question was rhetorical. Science does not deal entirely in directly observable empirical facts—which might be what you meant by “actual truths about the world”. Those who fly under the Bayesian flag by and large don’t either: most of the material on this site is just as indirect/meta-level/higher-level as philosophy. I just don’t see anything that justifies the “Boo!” rhetoric.
Actually, perhaps you should try The Simple Truth, because you seem totally confused.
Yes, a lot of the material on this site is philosophy; I would argue that it is correspondingly more likely to be wrong, precisely because it is not subject to the same feedback loops as science. This is why EY keeps asking, “How do I use this to build an AI?”
So... is Gresham’s Law an actual truth about the world?
Now I’m confused. Is that likely to be wrong or not?
As far as I can tell, yes (in a limited form), but I’m prepared for an economist to tell me otherwise.
If we consider it as a definition, then it is either useful or not useful.
The focus of the question was “about the world”. Gresham’s law, if true, is not a direct empirical fact like the melting point of aluminium, nor is it built into the fabric of the universe, since it is indefinable without humans and their economic activity.
So this is about the “true” part, not about the “actual world” part? In that case, you aren’t complaining that philosophy isn’t connected to reality, you’re claiming that it is all false. In that case I will have to ask you when and how you became omniscient.
Humans are part of the world.
I’m afraid I don’t understand what you’re saying here. Yes, if you are confused about what truth means, a definition would be useful; I think The Simple Truth is a pretty useful one (if rather long-winded, as is typical for Yudkowsky). It doesn’t tell you much about the actual world (except that it hints at a reasonable justification for induction, which is developed more fully elsewhere).
But I’m not sure why you think I am claiming philosophy is all false.
Then there is no reason why some philosophical claims about human nature could not count as Actual Truths About The World, refuting your original point.
That depends on what you mean by “human nature,” but yes, some such claims could. However, they aren’t judged based on this (outside of experimental philosophy, of course). So, there is no feedback loop.
Based on what? Is Gresham’s law based on “this”?
That comment could have been more clear. My apologies.
Philosophers are not judged based on whether their claims accurately describe the world. This was my original point, which I continue to stand by.
OK, it has been established that you attach True to the sentence:
“Philosophers are not judged based on whether their claims accurately describe the world”.
The question is what that means. We have established that philosophical claims can be about the world, and it seems uncontroversial that some of them make true claims some of the time, since they all disagree with each other and therefore can’t all be wrong.
The problem is presumably the epistemology, the justification. Perhaps you mean that philosophy doesn’t use enough empiricism. Although it does use empiricism sometimes, and it is not as though every scientific question can be settled empirically.
I’m going to leave this thread here, because I think I’ve made my position clear, and I don’t think we’ll get further if I re-explain it.
Doesn’t follow.
You mean there are ideas no philosopher has contemplated?
Just some friendly advice. Having looked through your comment history I have noticed that you have trouble interpreting the statements of others charitably. This is fine for debate-style arguments, but is not a great idea on this forum, where winning is defined by collectively constructing a more accurate map, not as an advantage in a zero-sum game. (Admittedly, this is the ideal case; the practice is unfortunately different.) Anyway, consider reading the comments you are replying to in the best possible way first.
Speaking of which, I honestly had no idea what the “this” meant. Do you?
If you honestly do not understand the point the comment you are replying to is making, a better choice is asking the commenter to clarify, rather than continuing to argue based on this lack of understanding. TheOtherDave does it almost to a fault, feel free to read some of his threads. Asking me does not help, I did not write the comment you didn’t understand.
I believe I did:-
“Based on what? Is Gresham’s law based on ‘this’?”
The point is that if no one can understand the comment, then I am not uncharitably pretending not to understand the comment:
I don’t disagree with this, but do you happen to have a cite?
I would also point out that feedback which consists solely of the opinions of other experts probably shouldn’t count as feedback. Too much danger of groupthink.
The finding that expertise is only valuable in fields where there is a sufficiently short and frequent feedback loop plausibly explains why professional philosophers are no better than the general population at answering philosophical questions. However, it doesn’t explain the observation that philosophical expertise seems to be negatively correlated with true philosophical beliefs, as opposed to merely uncorrelated. Why are philosophers of religion less likely to believe the truth about religion, moral philosophers less likely to believe the truth about morality, and metaphysicians less likely to believe the truth about reality, than their colleagues with different areas of expertise?
Edit: this post is mostly a duplicate of this one
I would guess that those particular fields look more interesting when you make the wrong assumptions to begin with. I mean, it’s much less interesting to talk about God when you accept there is none. Or to talk about metaphysics, when you accept that the answer will most likely come from physics. (I don’t know about morality.)
I’m pretty sure an outside view would say it is LWers rather than domain experts who are more likely to be wrong, even when accounting for the selection-confounding Carl Shulman notes: I don’t think many people have prior convictions about decision theory before they study it.
I’ve noted it previously, but when the LW consensus is that certain views are not just correct but settled questions (obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism, etc.), despite the balance of domain experts disagreeing with said consensus, this screams Dunning-Kruger effect.
I don’t think this is true in every domain. If the domain is bridge building, for example, I have some confidence that the domain experts have built a bridge or two and know what it takes to keep them up and running; if they didn’t, they wouldn’t have a job. That is, bridge building is a domain in which you are forced to repeatedly make contact with reality, and that keeps your thoughts about bridge building honest. Many domains have this property, but not all of them do. Philosophy is a domain that I suspect may not have this making-contact-with-reality property (philosophers are not paid to resolve philosophical problems, they are paid to write philosophy papers, which means they’re actually incentivized not to settle questions); some parts of martial arts might be another, and some parts of psychotherapy might be a third, just so it doesn’t sound like I’m picking on philosophy uniquely.
I agree with the signs of the effects you suggest re. philosophers being incentivized to disagree, but that shouldn’t explain (taking the strongest example of my case, two-boxing) why the majority of philosophers take the objectively less plausible view.
But plausibly LWers have the same sort of effects explaining their contra-philosophy-experts consensus. Also I don’t see how the LWers are more likely to be put in touch with reality re. these questions than philosophers.
Fair point.
You picked literally the most extreme case, where 52.5% of undergraduates answered “insufficiently familiar,” followed by 46.1% for A- vs B-theory of time. The average for all other questions was just under 12%, 8.8% for moral realism, 0.9% for free will, 0% for atheism.
For Newcomb most undergrads are not familiar enough with the problem to have an opinion, but people do have differing strong intuitions on first encountering the problem. However, the swing in favor of two-boxing for Newcomb from those undergrads with an opinion to target faculty is a relatively large change in the ratio of support, from 16:18 to 31:21. Learning about dominance arguments and so forth really does sway people.
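A quick sketch of that arithmetic, using the 16:18 and 31:21 support figures as stated (two-boxing vs one-boxing, undergrads vs target faculty):

```python
# Share of two-boxers among respondents who picked a box answer, computed from
# the support figures quoted above (undergrads 16:18, target faculty 31:21).

def two_box_share(two_box, one_box):
    return two_box / (two_box + one_box)

print(f"undergrads:     {two_box_share(16, 18):.1%} two-box")  # 47.1%
print(f"target faculty: {two_box_share(31, 21):.1%} two-box")  # 59.6%
```

So among those who take a side, two-boxing goes from a minority position to roughly a 60% majority.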
I just looked through all the PhilPapers survey questions, comparing undergrads vs target faculty with the coarse breakdown. For each question I selected the plurality non-“Other” option (“Other” included insufficient knowledge, not sure, etc.), and recorded the swing in opinion from philosophy undergraduates to philosophy professors, to within a point.
Now, there is a lot of selection filter between undergraduates and target faculty; the faculty will tend to be people who think philosophy is more worthwhile, keen on graduate education, and will be smarter with associated views (e.g. atheism is higher at more elite schools and among those with higher GRE scores, which correlate with becoming faculty). This is not a direct measure of the effect of philosophy training and study on particular people, but it’s still interesting as suggestive evidence about the degree to which philosophical study and careers inform (or otherwise influence) philosophical opinion.
In my Google Doc I recorded an average swing from undergraduates to target faculty of ~10% in the direction of the target faculty plurality, which is respectable but not huge. Compatibilism rises 18 points, atheism 10 points, moral realism 12 points, physicalism 4 points, two-boxing by 15, deontology by 10, egalitarianism by 10. Zombies and personal identity/teletransporter barely move. The biggest swing is ~30 points in favor of non-skeptical realism about the external world.
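Here is a minimal sketch of the swing calculation just described (swing toward the target-faculty plurality option), demonstrated on the Newcomb figures quoted earlier; the other questions would need their own undergrad and faculty percentages:

```python
# Swing toward the target faculty's plurality option, as described above:
# take the faculty's top option and subtract the undergrad percentage for it.

def swing_toward_faculty_plurality(undergrad_pct, faculty_pct):
    plurality = max(faculty_pct, key=faculty_pct.get)
    return plurality, faculty_pct[plurality] - undergrad_pct[plurality]

newcomb_undergrad = {"two boxes": 16, "one box": 18}
newcomb_faculty   = {"two boxes": 31, "one box": 21}

print(swing_toward_faculty_plurality(newcomb_undergrad, newcomb_faculty))
# ('two boxes', 15), matching the 15-point two-boxing swing noted above
```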
That said, I agree the LWers who answered the survey questions in a LW thread were overconfident, that the average level of philosophical thinking here is lower quality than you would find in elite philosophy students and faculty (although not uniformly, if for no other reason because some such people read and comment at LW), and that some prominent posters are pretty overconfident (although note that philosophers themselves tend to be very confident in their views despite the similarly confident disagreement of their epistemic peers with rival views, far more than your account would suggest is reasonable, or than I would).
Please cite the specific part of the original Dunning-Kruger paper which would apply here. I don’t think you’ve read it or understand what the effect actually is.
From the abstract:

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
The paper’s results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others), which is what most people label Dunning-Kruger, is applicable.
To spell it out (in case I’ve misunderstood what Dunning-Kruger is supposed to connote), the explanation I was suggesting was:
LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a ‘fully and completely dissolved problem’ on the wiki via compatibilism despite 30% or whatever of specialists disagreeing with it). When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.
Yet objectively/outside-view-wise, the philosophers who specialize in (for example) free will are by far epistemically superior to LWers on questions of free will: they’ve spent much more time thinking about it, read much more relevant literature, have much stronger credentials in philosophy, etc. Furthermore, the reasons offered by LWers as to why (for example) compatibilism is obviously true are pretty primitive (and already responded to) compared to the discussion in academia.
So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers. Hence the response to expert disagreement with them is to assert the experts must be systematically irrational/biased etc.
So, as I thought: you had not read it before, or you would not be quoting the abstract at me, or rather, would be quoting more relevant parts from the paper.
No, it is not. If you had actually read the paper, you would have learned that this is not directly applicable, and there’s no reason to expect that there would even be an indirect applicability. From the full abstract which you chose not to quote, we immediately find at least two areas where DK should break:

Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability.
The average LWer—never mind the people doing most of the commenting and posting—is easily in the 95th+ percentile on logic and grammar.
Besides that, LW is obsessed with ‘meta’ issues, which knocks out the ‘lack of metacognitive ability’ which is the other scissor of DK.
Thirdly, DK is generally thought to apply when there is no feedback which can compensate for the imperfect self-assessment; however, LW is notorious for being highly critical and fractious and agreeing on very little (the surveys reveal that we can’t even agree on atheism!).
Fourth, the part of DK you don’t focus on is how the top quartile reliably underestimates its own performance (see the graphs on pg1124-1126). Unless you have an objective indicator that LWers are very bad at philosophy—and I would note here that LWers routinely exceed the performance I observed of my philosophy classmates and even published philosophy papers I’ve read, like the dreck that gets published in JET, where I spent more than a few posts here going through and dissecting individual papers—it is at least as plausible that LWers are actually underestimating their performance. The top quartile, by the way, in the third experiment actually increased its self-assessed performance by observing the performance of others, and in the fourth experiment this was due to overestimating the performance of others before observing their actual performance. Application of this to LW is left as an exercise to the reader...
A wiki page is a wiki page. If you were informed about LW views, you would be citing the surveys, which are designed for that purpose.
(And are you sure that 30% is right there? Because if 30% disagree, then 70% agree...)
Experts think much the same thing: philosophers have always been the harshest critics of philosophers. This does not distinguish LWers from philosophers.
As I’ve shown above, none of that holds, and you have distorted badly the DK research to fit your claims. You have not read the paper, you do not understand why it applies, you have no evidence for your meta thesis aside from disagreeing with an unknown and uncited fraction of experts, and you are apparently unaware of your ignorance in these points.
Compatibilism doesn’t belong on that list; a majority of philosophers surveyed agree, and it seems like most opposition is concentrated within Philosophy of Religion, which I don’t think is the most relevant subfield. (The correlation between philosophers of religion and libertarianism was the second highest found.)
True, but LW seems to be overconfident in compatibilism compared to the spread of expert opinion. It doesn’t seem it should be considered ‘settled’ or ‘obvious’ when >10% of domain experts disagree.
I observe that in some cases this can be both a rational thing to believe and simultaneously wrong. (In fact this is the case whenever either a high status belief is incorrect or someone is mistaken about the relevance of a domain of authority to a particular question.)
It does scream that. Indeed, for anyone who has literally no other information than that a subculture holds beliefs along those lines which contradict an authority that the observer has reason to trust more, Dunning-Kruger is prompted as a likely hypothesis.
Nevertheless: Obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism!
The ‘outside view’ is useful sometimes, but it is inherently, by design, about what one would believe if one were ignorant. It is reasoning as though one does not have access to most kinds of evidence but is completely confident in beliefs about reference class applicability. In particular, in this case it would require being ignorant not merely of lesswrong beliefs but also of the philosophy, philosophy of science, and sociology literature.
Not sure how helpful this is, but my knowledge of these fields tends to confirm that LW arguments on these topics tend to recapitulate work already done in the relevant academic circles, but with far inferior quality.
If LWers look at a smattering of academic literature and think the opposite, then fair enough. Yet I think LWers generally form their views on these topics based on LW work, without looking at even some of the academic work on these topics. If so, I think they should take the outside view argument seriously, as their confidence in LW work doesn’t confirm the ‘we’re really right about this because we’ve got the better reasons’ explanation over the Dunning-Kruger explanation.