I think an important piece that’s missing here is that LW simply assumes that certain answers to important questions are correct. It’s not just that there are social norms that say it’s OK to dismiss ideas as stupid if you think they’re stupid, it’s that there’s a rough consensus on which ideas are stupid.
LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don’t think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.
Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.
And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there’s an advantage to making arguments that don’t rely on particular answers to questions where there isn’t widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.
The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.

I would draw an analogy like this one:
Five hundred extremely smart and well-intentioned philosophers of religion (some atheists, some Christians, some Muslims, etc.) have produced an enormous literature discussing the ins and outs of theism and the efficacy of prayer, and there continue to be a number of complexities and unsolved problems related to why certain arguments succeed or fail, even though various groups have strong (conflicting) intuitions to the effect that “claim x is going to be true in the end”.
In a context like this, I would consider it an important mark in favor of a group if they were 50% better than the philosophers of religion at picking the right claims to say “claim x is going to be true in the end”, even if they are no better than the philosophers of religion at conclusively proving to a random human that they’re right. (In fact, even if they’re somewhat worse.)
To sharpen this question, we can imagine that a group of intellectuals learns that a nearby dam is going to break soon, flooding their town. They can choose to divide up their time between ‘evacuating people’ and ‘praying’. Since prayer doesn’t work (I say with confidence, even though I’ve never read any scholarly work about this), I would score a group in this context based on how well they avoid wasting scarce minutes on prayer. I would give little or no points based on how good their arguments for one allocation or another are, since lives are on the line and the end result is a clearer test. Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.
To clarify a few things:
Obviously, I’m not saying the difference between LW and analytic philosophy is remotely as drastic as the difference between LW and philosophy of religion. I’m just using the extreme example to highlight a qualitative point.
Obviously, if someone comes to this thread saying ‘but two-boxing is better than one-boxing’, I will reply by giving specific counter-arguments (both formal and heuristic; a rough expected-value sketch follows these clarifications), not by just saying ‘my intuition is better than yours!’ and stopping there. And obviously I don’t expect a random philosopher to instantly assume I’m correct that LWers have good intuitions about this, without spending a lot of time talking with us. I can notice and give credit to someone who has a good empirical track record (by my lights), without expecting everyone on the Internet to take my word for it.
Obviously, being a LWer, I care about heuristics of good reasoning. :) And if someone gives sufficiently bad reasons for the right answer, I will worry about whether they’re going to get other answers wrong in the future.
But also, I think there’s such a thing as having good built-up intuitions about what kinds of conclusions end up turning out to be true, and about what kinds of evidence tend to deserve more weight than other kinds of evidence. This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.
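For anyone unfamiliar with why one-boxing looks so attractive to LWers, here is a minimal sketch of the standard expected-value arithmetic, assuming the usual payoffs ($1,000,000 in the opaque box iff the predictor foresaw one-boxing, $1,000 always in the transparent box) and treating the predictor’s accuracy as a free parameter. This is only an illustration of the arithmetic, not a full decision-theoretic argument.

```python
# Illustrative only: the standard Newcomb payoff arithmetic, with the
# predictor's accuracy left as a free parameter (an assumption for the
# sketch, not a claim about any particular formulation of the problem).

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff: the opaque box holds $1,000,000 iff the predictor
    predicted one-boxing; the transparent box always holds $1,000."""
    if one_box:
        # The predictor foresaw one-boxing with probability `accuracy`.
        return accuracy * 1_000_000
    # A two-boxer always gets the $1,000, plus the million only if the
    # predictor got it wrong.
    return 1_000 + (1 - accuracy) * 1_000_000

for accuracy in (0.99, 0.9, 0.55):
    print(f"accuracy={accuracy}: one-box ${expected_value(True, accuracy):,.0f}, "
          f"two-box ${expected_value(False, accuracy):,.0f}")
# With a 99%-accurate predictor, one-boxing expects ~$990,000 vs. ~$11,000.
```

The point of the sketch is just that, for any reasonably accurate predictor, the numbers favor one-boxing; the real disagreement is over whether this kind of evidential calculation is the right one to run, which is where the formal counter-arguments come in.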
I worry that this doesn’t really end up explaining much. We think that our answers to philosophical questions are better than what the analytics have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.
Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.
This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.
I’m not sure what you mean here, but maybe we’re getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I’m sure that anyone can come up with explanations for why their intuitions are more likely to be right, but demanding such an explanation is at least more constraining than appealing to intuition alone. Some possibilities:
LWers are more status-blind, so their intuitions are less distorted by things that are not about being right
Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones
LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.
I’m not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.
Those seem like fine partial explanations to me, as do the explanations I listed in the OP. I expect multiple things went right simultaneously; if it were just a single simple tweak, we would expect many other groups to have hit on the same trick.
Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones
LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.
It’s common for people from other backgrounds to get frustrated with philosophy. But that isn’t a good argument that philosophy is being done wrong. Since philosophy is a separate discipline from science, engineering, and so on, there is no particular reason to think that the same techniques will work. If there are reasons why some Weird Trick would work across all disciplines, then it would work in philosophy too. But is there one weird trick?
Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.
There is a set of claims that LW holds to be true, and a set that can be tested directly and unambiguously—where “physical reality judges you”—and they are not the same set. Ask yourself how many LessWrongian claims other than Newcomb’s problem are directly testable.
The pragmatic or “winning” approach just doesn’t go far enough.
You can objectively show that a theory succeeds or fails at predicting observations, and at the closely related problem of achieving practical results. It is less clear whether an explanation succeeds in explaining, and less clear still whether a model succeeds in corresponding to the territory. The lack of a test for correspondence per se, i.e. the lack of an independent “standpoint” from which the map and the territory can be compared, is the major problem in scientific epistemology. And the lack of direct testability is one of the things that characterises philosophical problems as opposed to scientific ones—you can’t test ethics for correctness, you can’t test personal identity, you can’t test correspondence-to-reality separately from prediction-of-observation—so the “winning” or pragmatic approach is a particularly bad fit for philosophy.
Pragmatism, the “winning” approach, could form a basis of epistemology if the scope of epistemology were limited only to the things it can in fact prove, such as claims about future observations. Instrumentalism and logical positivism are well-known forms of this approach. But rationalism rejects those approaches!
If you can’t make a firm commitment to instrumentalism, then you’re in the arena where, in the absence of results, you need to use reason to persuade people—you can’t have it both ways.