These are some extraordinary claims. I wonder if there is a metric that mainstream analytical philosophers would agree to use to evaluate statements like
LW outperform analytic philosophy
and
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
Without agreed-upon evaluation criteria, this is just tooting one’s own horn, wouldn’t you agree?
On the topic of “horn-tooting”: see my philosopher-of-religion analogy. It would be hard to come up with a simple metric that would convince most philosophers of religion “LW is better than you at thinking about philosophy of religion”. If you actually wanted to reach consensus about this, you’d probably want to start with a long series of discussions about object-level questions and thinking heuristics.
And in the interim, it shouldn’t be seen as a status grab for LWers to toot their own horn about being better at philosophy of religion. Toot away! Every toot is an opportunity to be embarrassed later when the philosophers of religion show that they were right all along.
It would be bad to toot if your audience were so credulous that they’d just take your word for it, or if the social consequences of making mistakes were too mild to disincentivize empty boasts. But I don’t think LW or analytic philosophy is credulous or forgiving enough to make this a real risk.
If anything, there probably isn’t enough horn-tooting in those groups. People are too tempted to false modesty, or too tempted to just steer clear of the topic of relative skill levels. This makes it harder to get feedback about people’s rationality and meta-rationality, and it makes a lot of coordination problems harder.
This sounds like a very Eliezer-like approach: “I don’t have to convince you, a professional who spent decades learning and researching the subject matter; here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!” While there are certainly cases where this is valid, in any evidence-based science the odds of it succeeding are slim to none (the infamous QM sequence is one example of a failed foray like that; well, maybe not failed, just uninteresting). I want to agree with you on the philosophy of religion, of course, because if you start with a failed premise, you can spend your whole life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton’s fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.
An SSC-like approach seems much more reliable in terms of advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the areas he writes about.
I gather that both approaches have merit, since there is generally no arguing with someone who is in a “diseased discipline”, but one has to be very careful about affixing that label to an entire field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether string theory is a diseased field in physics.
Still, except for the super-geniuses among us, it is much safer to understand the ins and outs before declaring that the giga-IQ-hours spent by humanity on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.
Even if the jury’s out, it’s a poor courtroom that discourages the plaintiff, defendant, witnesses, and attorneys from sharing their epistemic state, for fear of offending others in the courtroom!
It may well be true that sharing your honest models of (say) philosophy of religion is a terrible idea and should never happen in public, if you want to have any hope of convincing any philosophers of religion in the future. But… well, if intellectual discourse is in as grim and lightless a state as all that, I hope we can at least be clear-eyed about how bad that is, and about how much better it would be if we somehow found a way to just share our models of the field and discuss them plainly. I can’t say it’s impossible to end up in situations like that, but I can push for the conditional policy ‘if you end up in that kind of situation, be super clear about how terrible this is and keep an eye out for ways to improve on it’.
You don’t have to be extremely confident in your view’s stability (i.e., whether you expect to change your view a lot based on future evidence) or its transmissibility in order to have a view at all. And if people don’t share their views — or especially, if they are happier to share positive views of groups than negative ones, or otherwise have some systemic bias in what they share — the group’s aggregate beliefs will be less accurate.
So, see my conversation with Ben Levinstein and my reply to adrusi for some of my reply. An example of what I have in mind by ‘LWers outperforming’ is the 2009 PhilPapers survey: I’d expect a survey of LW users with 200+ karma to...
… have fewer than 9.1% of respondents endorse “skepticism” or “idealism” about the external world.
… have fewer than 13.7% endorse “libertarianism” about free will (roughly defined as the view “(1) that we do have free will, (2) that free will is not compatible with determinism, and (3) that determinism is therefore false”).
… have fewer than 14.6% endorse “theism”.
… have fewer than 27.1% endorse “non-physicalism” about minds.
… have fewer than 59.6% endorse “two boxes” in Newcomb’s problem, out of the people who gave a non-”Other” answer.
… have fewer than 44% endorse “deontology” or “virtue ethics”.
… have fewer than 12.2% endorse the “further-fact view” of personal identity (roughly defined as “the facts about persons and personal identity consist in some further [irreducible, non-physical] fact, typically a fact about Cartesian egos or souls”).
… have fewer than 16.9% endorse the “biological view” of personal identity (which says that, e.g., if my brain were put in a new body, I should worry about the welfare of my old brainless body, not about the welfare of my mind or brain).
… have fewer than 31.1% endorse “death” as the thing that happens in “teletransporter (new matter)” thought experiments.
… have fewer than 37% endorse the “A-theory” of time (which rejects the idea of “spacetime as a spread-out manifold with events occurring at different locations in the manifold”), out of the people who gave a non-”Other” answer.
… have fewer than 6.9% endorse an “epistemic” theory of truth (i.e., a view that what’s true is what’s knowable, or known, or verifiable, or something to that effect).
This is in no way a perfect or complete operationalization, but it at least gestures at the kind of thing I have in mind.
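To make the shape of this operationalization concrete, here is a minimal sketch (not part of the original comment) of how such a comparison could be scored. The only real figures are the 2009 PhilPapers percentages quoted above; the LW-side numbers are deliberately left as placeholders, since no such survey exists here.

```python
# Minimal sketch: check hypothetical LW survey rates against the 2009
# PhilPapers baselines listed above. All LW values are placeholders.

philpapers_2009 = {
    "external-world skepticism/idealism": 9.1,
    "libertarian free will": 13.7,
    "theism": 14.6,
    "non-physicalism about minds": 27.1,
    "two-boxing (non-Other answers)": 59.6,
    "deontology or virtue ethics": 44.0,
    "further-fact view of personal identity": 12.2,
    "biological view of personal identity": 16.9,
    "teletransporter (new matter) = death": 31.1,
    "A-theory of time (non-Other answers)": 37.0,
    "epistemic theory of truth": 6.9,
}

# Hypothetical LW survey results (placeholders, not real data).
lw_survey = {question: None for question in philpapers_2009}

def outperforms(lw_results, baseline):
    """True only if the LW rate is strictly lower on every question."""
    return all(
        lw_results[q] is not None and lw_results[q] < baseline[q]
        for q in baseline
    )

print(outperforms(lw_survey, philpapers_2009))  # False until real data exists
```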
Well, it looks like you declare “outperforming” by your own metric, not by anything generally accepted.
(Also, I take issue with the last two. The philosophical ideas about time are generally not about time, but about “time”, i.e. about how humans perceive and understand the passage of time. So distinguishing between the A-theory and the B-theory is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
Also, most people here, while paying lip service to non-libertarian views of free will, sneak it back in anyway, as evidenced by the reliance on “free choice” in nearly all decision theory discussions.
Well, it looks like you declare “outperforming” by your own metric, not by anything generally accepted.
I am indeed basing my view that philosophers are wrong about stuff on investigating the specific claims philosophers make.
If there were a (short) proof that philosophers were wrong about X that philosophers already accepted, I assume they would just stop believing X and the problem would be solved.
The philosophical ideas about time are generally not about time, but about “time”, i.e. about how humans perceive and understand passage of time.
Nope, the 20th-century philosophical literature discussing time is about time itself, not about (e.g.) human psychological or cultural perceptions of time.
There is also discussion of humans’ perception and construction of time—e.g., in Kant—but that’s not the context in which A-theory and B-theory are debated.
The A-theory and B-theory were introduced in 1908, before many philosophers (or even physicists) had heard of special relativity; and ‘this view seems unbelievably crazy given special relativity’ is in fact one of the main arguments cited in the literature against the A-theory of time.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
“It’s raining” is true even if you can’t check. Also, what’s testable for one person is different from what’s testable for another person. Rather than saying that different things are ‘true’ or ‘false’ or ‘neither true nor false’ depending on which person you are, it’s simpler to just say that “snow is white” is true iff snow is white.
It’s not like there’s any difficulty in defining a predicate that satisfies the correspondence theory of truth, and this predicate is much closer to what people ordinarily mean by “true” than any epistemic theory of truth’s “true” is. So demanding that we abandon the ordinary thing people mean by “truth” just seems confusing and unnecessary.
Doubly so when there’s uncertainty or flux about which things are testable. Who can possibly keep track of which things are true vs. false vs. meaningless, when the limits of testability are always changing? Seems exhausting.
Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on “free choice” in nearly all decision theory discussions.
This is a very bad argument. Using the phrase “free choice” doesn’t imply that you endorse libertarian free will.
Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though.