On the topic of “horn-tooting”: see my philosopher-of-religion analogy. It would be hard to come up with a simple metric that would convince most philosophers of religion “LW is better than you at thinking about philosophy of religion”. If you actually wanted to reach consensus about this, you’d probably want to start with a long series of discussions about object-level questions and thinking heuristics.
And in the interim, it shouldn’t be seen as a status grab for LWers to toot their own horn about being better at philosophy of religion. Toot away! Every toot is an opportunity to be embarrassed later when the philosophers of religion show that they were right all along.
It would be bad to toot if your audience were so credulous that they’d just take your word for it, or if the social consequences of making mistakes were too mild to disincentivize empty boasts. But I don’t think LW or analytic philosophy is credulous or forgiving enough to make this a real risk.
If anything, there probably isn’t enough horn-tooting in those groups. People are too tempted by false modesty, or too tempted to just steer clear of the topic of relative skill levels. This makes it harder to get feedback about people’s rationality and meta-rationality, and it makes a lot of coordination problems harder.
This sounds like a very Eliezer-like approach: “I don’t have to convince you, a professional who spent decades learning and researching the subject matter; here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!” While there are certainly plenty of cases where this is valid, in any kind of evidence-based science the odds of it succeeding are slim to none (the infamous QM sequence is one example of a failed foray like that; well, maybe not failed, just uninteresting). I want to agree with you on the philosophy of religion, of course, because, well, if you start with a failed premise, you can spend all your life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton’s fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.
An SSC-like approach seems much more reliable in terms of advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them, and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the areas he writes about.
I gather that both approaches have merit, as there is generally no arguing with someone who is in a “diseased discipline”, but one has to be very careful affixing that label to a whole field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether String Theory is a diseased field in physics.
Still, except for the super-geniuses among us, it is much safer to understand the ins and outs of a field before declaring that the giga-IQ-hours humanity has spent on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.
Even if the jury’s out, it’s a poor courtroom that discourages the plaintiff, defendant, witnesses, and attorneys from sharing their epistemic state, for fear of offending others in the courtroom!
It may well be true that sharing your honest models of (say) philosophy of religion is a terrible idea and should never happen in public, if you want to have any hope of convincing any philosophers of religion in the future. But… well, if intellectual discourse is in as grim and lightless a state as all that, I hope we can at least be clear-eyed about how bad that is, and how much better it would be if we somehow found a way to just share our models of the field and discuss those plainly. I can’t say it’s impossible to end up in situations like that, but I can push for the conditional policy ‘if you end up in that kind of situation, be super clear about how terrible this is and keep an eye out for ways to improve on it’.
You don’t have to be extremely confident in your view’s stability (i.e., whether you expect to change your view a lot based on future evidence) or its transmissibility in order to have a view at all. And if people don’t share their views — or especially, if they are happier to share positive views of groups than negative ones, or otherwise have some systemic bias in what they share — the group’s aggregate beliefs will be less accurate.