I think there’s something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.
To be fair, though, I think LessWrong does a better job than academic philosophy of being pragmatic enough to have an impact on the world. I just note that, as with anything, the balance sometimes tips too far: out of a desire to get on with things and say something actionable, it fails to carefully consider things that deserve greater consideration.
I think there’s something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.
I agree with this. I especially agree that LWers (on average) are too prone to do things like:
Hear Eliezer’s anti-zombie argument and conclude “oh good, there’s no longer anything confusing about the Hard Problem of Consciousness!”.
Hear about Tegmark’s Mathematical Universe Hypothesis and conclude “oh good, there’s no longer anything confusing about why there’s something rather than nothing!”.
On average, I think LWers are more likely to make important errors in the direction of ‘prematurely dismissing things that sound un-sciencey’ than to make important errors in the direction of ‘prematurely embracing un-sciencey things’.
But ‘tendency to dismiss things that sound un-sciencey’ isn’t exactly the dimension I want LW to change on, so I’m wary of optimizing LW in that direction; I’d much rather optimize it in more specific directions that are closer to the specific things I think are true and good.
The fact that so many rationalists have made that mistake is evidence against the claim that rationalists are superior philosophers.
Yep!
Then we have evidence against the claim, but none for it.
False!
Where’s the evidence for it?