This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).
Inconveniently, that is precisely what turns out to be the case…
“Offensive things” isn’t a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they’re signaling to. It sounds like your reasoning is “if we don’t post about The Bell Curve, they’ll just start taking offense at technological forecasting, and we’ll be back where we started but with a more restricted topic space”. But taking offense at technological forecasting would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve offensive and does not consider technological forecasting offensive.
But taking offense at technological forecasting would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve offensive and does not consider technological forecasting offensive.
I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.
To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.
“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn’t (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)
Steven and his coalition are betting that it’s feasible to “hold the line” on censoring only the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don’t think it works as well in practice as they think it does, due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish—but I don’t think it’s right to dismiss the strategy as fantasy.
due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”
I’m not advocating lying. I’m advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone’s epistemic integrity.
I understand that. I cited a Sequences post that has the word “lies” in the title, but I’m claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people’s understanding of what reasoning looks like—can apply more generally to distortions that aren’t direct lies.
Omitting information can be a distortion when the information would otherwise be relevant. In “A Rational Argument”, Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting the one question whose answer would make their candidate look bad, which Yudkowsky describes as “cross[ing] the line between rationality and rationalization” (!). This is a very high standard—but what made the Sequences so valuable is that they taught people the counterintuitive idea that this standard exists. I think there’s a lot of value in aspiring to hold one’s public reasoning to that standard.
Not infinite value, of course! If I knew for a fact that Godzilla would destroy the world if I cited a book that I would otherwise have cited as genuinely relevant, then fine, for the sake of the world, I can decline to cite the book.
Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you’re happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that’s a serious problem for the future of humanity. I think this is actually worth a fight.
With my own resources and my own name (and a pen name), I’m fighting. If someone else doesn’t want to fight with their name and their resources, I’m happy to listen to suggestions for how people with different risk tolerances can cooperate without stepping on each other’s toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction isn’t strong enough, then sure, “This is on our Banned Topics list; take it to /r/TheMotte, you guys” could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy is that you guys would at least acknowledge that there is a conflict and that you’ve chosen a side.
might fill their opinion vacuum with false claims from elsewhere, or with true claims

For more on why I think not-making-false-claims is vastly too low of a standard to aim for, see “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her”.
Your posts seem to be about what happens if you filter out considerations that don’t go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn’t create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)
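To make the “blank canvas” claim concrete, here is a minimal simulation (a toy model of my own; the coin-flip setup, the reporting rule, and all the numbers are assumptions, not anything from the cited posts): a reader who knows a topic is 100% filtered learns nothing and so holds no false belief, while a reader who misestimates a partial filter ends up with a distorted estimate.

```python
# Toy model: a "speaker" observes coin flips and suppresses heads-reports with
# probability filter_strength; the "reader" corrects for the filter strength
# they *believe* is in place. All parameters here are illustrative assumptions.
import random

def reader_estimate(true_p, filter_strength, assumed_strength, n=100_000, seed=0):
    """Reader's estimate of P(heads) after selective suppression of heads-reports."""
    rng = random.Random(seed)
    heads_reported = tails_reported = 0
    for _ in range(n):
        if rng.random() < true_p:  # heads: reported only if it survives the filter
            if rng.random() >= filter_strength:
                heads_reported += 1
        else:  # tails: always reported
            tails_reported += 1
    if assumed_strength >= 1:
        return None  # reader knows the topic is fully banned: a blank canvas, not a belief
    corrected_heads = heads_reported / (1 - assumed_strength)
    return corrected_heads / (corrected_heads + tails_reported)

print(reader_estimate(0.7, filter_strength=0.4, assumed_strength=0.4))  # ~0.70: filter correctly estimated
print(reader_estimate(0.7, filter_strength=0.4, assumed_strength=0.3))  # ~0.67: misestimation distorts the map
print(reader_estimate(0.7, filter_strength=1.0, assumed_strength=1.0))  # None: known 100% filter, no distortion
```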
The people going after Steve Hsu almost entirely don’t care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that’s only 0.02 Hsu utils lost, which seems like a good trade.
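Spelled out, the back-of-the-envelope arithmetic (using the comment’s own hypothetical numbers, not a real utility model):

```python
# Check of the "0.02 Hsu utils" figure; all quantities are the hypotheticals above.
gain = 1.0                   # utils gained by adjusting allowable topic space
damage = 2.0                 # utils of damage caused by doing so
people = 99 + 1              # 99 Sargons plus one Steve Hsu
hsu_loss = damage / people   # damage assumed to be spread evenly
print(hsu_loss)              # 0.02, Hsu's share of the damage
print(gain > hsu_loss)       # True: a good trade, if Hsu's share is what matters
```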
I don’t have a lot of verbal energy and find the “competing grandstanding walls of text” style of discussion draining, and I don’t think the arguments I’m making are actually landing for some reason, and I’m on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.