This is a helpful addendum. I didn’t want to bust out the slippery slope argument because I didn’t have clarity on the gears-level mechanism. But in this case, we seem to have a ratchet in which X is deemed newly offensive, and a lot of attention is focused on just this particular word or phrase X. Because “it’s just this one word,” resisting the offensive-ization is made to seem petty—wouldn’t it be such a small thing to give up, in exchange for inflicting a whole lot less suffering on others?
Next week it’ll be some other X though, and the only way this ends is if you can re-establish some sort of Schelling Fence of free discourse and resist any further calls to expand censorship, even if they’re small and have good reasons to back them up.
I think that someone who disagrees with me might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
My response to this objection would be that, in much the same way as phrases like “sexual preference” can be seen as offensive for their implications, or a book can be objected to for its symbolism, mild forms of censorship or “updates” in speech codes can provoke anxiety, induce fear, and restrain thought. This may not be their intention, but it is their effect, at least at times and in the present cultural climate.
So a standard of free discourse and a Schelling Fence against expansion of censorship is justified not (just) to avoid a slippery slope of ever-expanding censorship, or to attract people with certain needs or to establish a pipeline into certain roles or jobs. Its purpose is also to create a space in which we have declared that we will strive to be less timid, not just less wrong.
We might not always prioritize or succeed in that goal, but establishing that this is a space where we are giving ourselves permission to try is a feature of explicit anti-censorship norms.
Prioritizing freedom of thought and lessening timidity isn’t always the right goal. Sometimes inclusivity, warmth, and a sense of agreeableness and safety are the right way to organize certain spaces. Different cultural moments, or institutions, might need marginally more safe spaces; sometimes, though, they need more risky spaces. My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same. A way to protect LW’s status as a risky space is to protect our anti-censorship norms, and sometimes to exercise our privilege to post risky material such as this post.
My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same.
Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.
I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.
My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining the epistemic rigor that we both think is necessary.
It is you, and others here, who are advocating a change of the status quo to create a bigger wall between x-risk topics and political controversy. I think that this would harm the goal of preventing x-risk, on current margins, as I’ve argued elsewhere here. We both have our reasons, and I’ve written down the sort of evidence that would cause me to change my point of view.
Fortunately, I enjoy the privilege of being the winner by default in this contest, since the site’s current norms already accord with my beliefs and preferences. So I don’t feel the need to gather evidence to persuade you of my position, assuming you don’t find my arguments here compelling. However, if you do choose to make the effort to gather some of the evidence I’ve elsewhere outlined, I not only would eagerly read it, but would feel personally grateful to you for making the effort. I think those efforts would be valuable for the health of this website and also for mitigating x-risk. However, they would be time-consuming, effortful, and may not pay off in the end.
Our culture is desperately in need of spaces that are correct about the most important technical issues
I also care a lot about this; I think there are three important things to track.
First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn’t want to post something to Less Wrong can post someplace classier.)
Second is that people might not be willing to pay costs to follow taboos. The more a space is politically safe, the less people like Robin Hanson will want to be there, because many of their ideas are easier to think of if you’re not spending any of your attention on political safety.
Third is that the core topics you care about might, at some point, become political. (Certainly AI alignment was ‘political’ for many years before it became mainstream, and will become political again as soon as it stops being mainstream, or if it becomes partisan.)
The first is one of the reasons why LW isn’t a free speech absolutist site, even though with a fixed population of posters that would probably help us be more correct. But the second and third are why LW isn’t a zero-risk space either.
I don’t care about moderation decisions for this particular post, I’m just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity’s foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
Whether political pressure leaks into technical subjects mostly depends on people’s meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is “this is on the same website as something offensive, so I’m not going to listen to it”. “Let’s keep putting important things on the same website as unimportant and offensive things until they learn” is not a strategy that I expect to work here.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
I’m not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
“Stand up to X by not doing anything X would be offended by” is obviously an unworkable strategy: it takes a negotiating stance that is maximally yielding in the ultimatum game, and so should expect to receive as little surplus utility as possible in negotiation.
(Not doing anything X would be offended by is generally a strategy for working with X, not standing up to X; it could work if interests are aligned enough that it isn’t necessary to demand much in negotiation. But given your concern about “entryism” that doesn’t seem like the situation you think you’re in.)
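To make the negotiating-stance point concrete, here is a minimal sketch of the ultimatum game (the payoffs and thresholds are illustrative assumptions, not anything from the discussion): a responder whose policy is to accept any offer leaves the proposer free to offer almost nothing, while a responder with a credible minimum demand keeps some of the surplus.

```python
# Minimal ultimatum-game sketch: a proposer splits a fixed surplus, and the
# responder either accepts (both get their shares) or rejects (both get 0).
# A "maximally yielding" responder accepts any offer, so a best-responding
# proposer offers the smallest amount possible.

SURPLUS = 100  # total utility to divide (illustrative units)

def responder_accepts(offer, threshold):
    """Responder accepts iff the offer meets their minimum demand."""
    return offer >= threshold

def best_proposer_offer(threshold, step=1):
    """Smallest offer (in increments of `step`) the responder will accept."""
    offer = 0
    while not responder_accepts(offer, threshold):
        offer += step
    return offer

# A responder who credibly demands 40 gets 40; one who accepts anything gets 0.
for threshold in (40, 0):
    offer = best_proposer_offer(threshold)
    print(f"minimum demand {threshold}: responder receives {offer}")
```

The point of the toy model is only that the responder’s payoff is set entirely by the lowest offer they are willing to accept, which is why a policy of never rejecting anything is the worst possible bargaining position.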
steven0461 isn’t proposing standing up to X by not doing things that would offend X.
He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.
(steven0461’s comment seems to treat “standing up to left-wing political entryism” as a thing that’s desirable for its own sake, and “standing up to right-wing political entryism” as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)
If someone proposes to do A by doing B, and B by doing C, they are proposing doing A by doing C. (Here A = “stand up to left wing entryism”, B = “stand up to right wing entryism”, C = “don’t do things that left wing people are offended by”)
EDIT: Also, the situation isn’t symmetrical, since Steven is defining right-wing to mean things the left wing is offended by, and not vice versa. Hence it’s clearly a strategy for submitting to the left, as it lets the left construct the left/right dichotomy.
I’m not sure there’s a definite fact of the matter about when something counts as “doing X by doing Y” in indirect cases like this. But I think either we shouldn’t use that language so broadly as to cover such cases, or it’s not obvious that it’s unworkable to “stand up to X by not doing things that offend X”, since the obvious unworkability of that is (unless I’m misunderstanding your earlier comment) predicated on the idea that it’s a sort of appeasement of X, rather than the sort of indirect thing we’re actually talking about here.
Maybe I am also being too indirect. Regardless of whether there’s some sense in which steven0461 is proposing to “stand up to X by not doing things that would offend X”, he was unambiguously not proposing “a negotiating stance that is maximally yielding in the ultimatum game”; “not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that’s so.
[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It’s not correct to say that “not doing things that would offend X” is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions (“games” if you like), and the two things you portray as inconsistent are not at all inconsistent because it’s entirely possible (whether or not it’s wise) to win one game while losing the other.
The game with “left-wing entryists” is one where they try to make LW a platform for left-wing propaganda. The game with “the left” is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to “the left” in the second game in order to resist “right-wing entryists” in the mirror-image of the first game. We might similarly make concessions to “the right” if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don’t exhibit the sort of logical inconsistency you are accusing Steven of.
Step 1: The left decides what is offensively right-wing
Step 2: LW people decide what to say given this
Steven is proposing a policy for step 2 that doesn’t do anything that the left has decided is offensively right-wing. This gives the left the ability to prevent arbitrary speech.
If the left is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by talking about things without being attacked by the left)
I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they’re at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he’s presented with an idea that’s noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn’t help him achieve his goals, but he’ll flatten buildings anyway because that’s just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it’s not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla’s opinions.
If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:
The implied game is:
Step 1: The bull decides what is offensively red
Step 2: LW people decide what cloths to wave given this
Steven is proposing a policy for step 2 that doesn’t wave anything that the bull has decided is offensively red. This gives the bull the ability to prevent arbitrary cloth-waving.
If the bull is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by waving cloths without being gored by the bull)
I think “wave your cloths at home or in another field even if it’s not as good” ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted.
edit: I should clarify that when I say Godzilla flattens buildings, I’m mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.
The relevant actors aren’t consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That’s what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend’s unruly 6-year-old child who doesn’t want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called “rationalists”, the functionality of emotions as a negotiating tactic is very clear to me, even if I don’t have the same kind of deliberative control over my feelings as I do over my speech (and the child and my mother don’t even think of themselves as doing game theory at all).
(This in itself doesn’t automatically negate your concerns, of course, but I think it’s an important modeling consideration: animals like Godzilla may be less incentivizable than Homo economicus, but they’re more like Homo economicus than a tornado or an avalanche.)
I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what’s actually going on. If you look at people of the sneer, it’s not at all clear that saying offensive things thwarts their goals. They’re pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren’t reliable), arguably de-energizes such people more than it emboldens them.
This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).
Inconveniently, that is precisely what turns out to be the case…
“Offensive things” isn’t a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they’re signaling to. It sounds like your reasoning is “if we don’t post about the Bell Curve, they’ll just start taking offense to technological forecasting, and we’ll be back where we started but with a more restricted topic space”. But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.
To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.
“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn’t (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)
Steven and his coalition are betting that it’s feasible to “hold the line” on only censoring the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don’t think it works as well in practice as they think it does, due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish—but I don’t think it’s right to dismiss the strategy as fantasy.
due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”
I’m not advocating lying. I’m advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone’s epistemic integrity.
I understand that. I cited a Sequences post that has the word “lies” in the title, but I’m claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people’s understanding of what reasoning looks like—can apply more generally to distortions that aren’t direct lies.
Omitting information can be a distortion when the information would otherwise be relevant. In “A Rational Argument”, Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting one question which would make their candidate look bad, which Yudkowsky describes as “cross[ing] the line between rationality and rationalization” (!). This is a very high standard—but what made the Sequences so valuable, is that they taught people the counterintuitive idea that this standard exists. I think there’s a lot of value in aspiring to hold one’s public reasoning to that standard.
Not infinite value, of course! If I knew for a fact that Godzilla will destroy the world if I cite a book that I would otherwise have cited as genuinely relevant, then fine, for the sake of the world, I can not cite the book.
Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you’re happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that’s a serious problem for the future of humanity. I think this is actually worth a fight.
With my own resources and my own name (and a pen name), I’m fighting. If someone else doesn’t want to fight with their name and their resources, I’m happy to listen to suggestions for how people with different risk tolerances can cooperate to not step on each other’s toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction isn’t strong enough, then sure, “This is on our Banned Topics list; take it to /r/TheMotte, you guys” could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy, is that you guys would at least acknowledge that there is a conflict and that you’ve chosen a side.
might fill their opinion vacuum with false claims from elsewhere, or with true claims
Your posts seem to be about what happens if you filter out considerations that don’t go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn’t create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)
The people going after Steve Hsu almost entirely don’t care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that’s only 0.02 Hsu utils lost, which seems like a good trade.
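Written out, the arithmetic in that hypothetical looks like this (the numbers are the illustrative ones from the comment above, not real estimates):

```python
# Illustrative numbers only: 1 util gained by adjusting allowable topic
# space, versus 2 utils of damage spread evenly across 100 affected
# parties (99 Sargons and one Steve Hsu).
gain = 1.0
total_damage = 2.0
num_parties = 100

damage_per_party = total_damage / num_parties  # each party bears 0.02 utils
hsu_utils_lost = damage_per_party              # Hsu bears only his even share

print(f"Hsu utils lost: {hsu_utils_lost}")
print(f"Trade is favorable if you weight Hsu heavily: {gain > hsu_utils_lost}")
```

The implicit claim is that if you care mainly about the one Hsu-like party, the 1-util gain dwarfs the 0.02 utils of damage that party bears under an even split.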
I don’t have a lot of verbal energy and find the “competing grandstanding walls of text” style of discussion draining, and I don’t think the arguments I’m making are actually landing for some reason, and I’m on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.
Another way this matters: Offense takers largely get their intuitions about “will taking offense achieve my goals” from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate “will taking offense specifically against LessWrong achieve my goals”, but most actors simply aren’t paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn’t care much about, like, I don’t know, fricking Sargon of Akkad.
I agree that offense-takers are calibrated against Society-in-general, not particular targets.
As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business’s files and will sell you the encryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn’t help the copy of you in this timeline)?
It’s a tough call! If your business’s files are sufficiently important, then I can definitely see why you’d want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that’s not an option …
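For what it’s worth, the trade-off can be sketched with a toy expected-value model (every number and probability below is a made-up assumption, not a claim about real ransomware economics): the one-shot causal calculation favors paying, while a policy-level calculation can favor refusing if a reputation for paying sufficiently raises the rate of future attacks.

```python
# Toy model with made-up numbers: compare "pay the ransom" as a one-shot
# decision versus as a standing policy that affects how often you are
# attacked in the future.

FILE_VALUE = 50.0  # value of the encrypted files (arbitrary units)
RANSOM = 10.0      # demanded payment

# One-shot (causal) view: pay 10 to recover files worth 50, or lose them.
one_shot_pay = FILE_VALUE - RANSOM  # net 40, versus 0 for refusing
one_shot_refuse = 0.0

def policy_value(pay, n_periods=5, p_attack_if_pay=0.9, p_attack_if_refuse=0.1):
    """Expected value over future periods under each standing policy.

    A payer keeps (FILE_VALUE - RANSOM) when attacked; a refuser loses
    everything when attacked but, by assumption, is attacked far less often.
    """
    p = p_attack_if_pay if pay else p_attack_if_refuse
    value_if_attacked = (FILE_VALUE - RANSOM) if pay else 0.0
    per_period = p * value_if_attacked + (1 - p) * FILE_VALUE
    return n_periods * per_period

print(policy_value(pay=True))   # frequently attacked, partial loss each time
print(policy_value(pay=False))  # rarely attacked, total loss when it happens
```

With these particular assumptions the refusing policy comes out ahead, but nudging the probabilities flips the conclusion, which is the sense in which it’s a tough call: everything turns on how much your policy actually influences future attack rates.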
our behavior [...] punishment against us [...] some other entity that we shouldn’t care much about
If coordinating to resist extortion isn’t an option, that makes me very interested in trying to minimize the extent to which there is a collective “us”. “We” should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other’s throats fighting over who owns the “rationalist” brand name.
if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted
It’s obvious to everyone in the discussion that the model is partially false and there’s also a strategic component to people’s emotions, so repeating this is not responsive.
So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact.
But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obvious and only correct action:
It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I’m not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.
I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said
“not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all.
but what I actually meant to say was
“standing up to X” in his comment is unambiguously not a move in any game being played with X at all.
[EDITED to add:] No, that also isn’t quite right; my apologies; let me try again. What I actually mean is that “standing up to X” and “not doing things that would offend X” are events in two entirely separate games, and the latter is not a means to the former.
There are actually three separate interactions envisaged in Steven’s comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)
Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like “no, LW is not a place for left-wing propaganda”. He claims that in order to do this in a principled way we need also to say “LW is not a place for right-wing propaganda”, thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.
Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.
So “not doing things that would offend leftists” is not a move in any game played with left-wing entryists; “standing up to left-wing entryists” is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.
I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.
I don’t agree that it makes any sense to say, as you did, that Steven’s proposal involves “standing up to X by not saying anything that offends X”, which is the specific thing you accused him of.
Your comment above elaborates on the thing I agree about, but doesn’t address the reasons I’ve given for disagreeing with the thing I don’t agree about. That may be partly because of the screwup on my part that I mention above.
I think the distinction is important, because the defensible accusation is of the form “Steven proposes giving too much veto power over LW to certain political groups”, which is a disagreement about strategy, whereas the one you originally made is of the form “Steven proposes something blatantly self-contradictory”, which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.
The above is my main objection to what you have been saying here, but I have others which I think worth airing:
It is not true that “don’t do anything that the left considers offensively right-wing” gives the left “the ability to prevent arbitrary speech”, at least not if it’s interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it’s possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461′s proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.
It is not true that Steven proposes to “not do anything that the left has decided is offensively right-wing”. “Sufficiently offensive” was his actual wording. This doesn’t rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form “never post anything that anyone finds offensive” but one of the form “when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it”. So, again, the proposal is not to give “the left” complete veto power over what is posted on LW.
I think it is unfortunate that most of what you’ve written rounds off Steven’s references to “left/right-wing political entryism” to “the left/right”. I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see “political entryism” of whatever stripe as an enemy to be stood up to, than for LW to see “the left” or “the right” as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.
standing up to all kinds of political entryism seems to me obviously desirable for its own sake
I agree it’s desirable for its own sake, but meant to give an additional argument why even those people who don’t agree it’s desirable for its own sake should be on board with it.
if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter
Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant “neutral” observers.
“Stand up to X by not doing anything X would be offended by” is not what I proposed. I was temporarily defining “right wing” as “the political side that the left wing is offended by” so I could refer to posts like the OP as “right wing” without setting off a debate, irrelevant to the point I was making, about how the OP actually thinks of it as more centrist. The point is that “don’t make LessWrong either about left wing politics or about right wing politics” is a pretty easy to understand criterion, and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine “1+1=2″ as right wing politics or something, but I’m advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being “maximally yielding in an ultimatum game”, because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you’re free to use to adjust your degree of yielding. Nor does “standing up to political entryism” even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in culture war.
I agree that LW shouldn’t be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I’m not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of “don’t bring up highly inflammatory topics unless they’re crucial to the site’s core interests” wouldn’t stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann’s theorem, and anyway, having him post on his own blog works fine. AI alignment was never political remotely like how the Bell Curve is political. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the “crucial to the site’s core interests” thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I’m open to hearing it.
You’d have to use a broad sense of “political” to make this true (maybe amounting to “controversial”). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of “if you can make a case that it’s genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead”. At no point could anyone have used the proposed norms to prevent discussion of AI alignment.
I think that to someone who disagrees with me, they might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
I know you haven’t implied that the someone could be me, but I thought I’d just clarify that I would vehemently oppose such an argument. My argument contra slippery slope is that I don’t see evidence for it. If we look ten years into the past, there hasn’t been another book like TBC every week; in fact there hasn’t been one ever. I would bet against there being another one in the next 10 years.
There may be some risk of a slippery slope on other issues, but honestly I want that to be a separate argument, because I estimate this post to carry a lot more risk than the other < 4 posts/year that I mentioned. I don’t know if this is true (and it’s usually bad form to accuse others of lack of knowledge), but I genuinely wonder if others who’ve participated in this discussion just don’t know how strongly many people feel about this book. (It is of course possible to acknowledge this and still, or especially, be against censorship.)
I’m fairly aware of Murray’s public image, but wanted to go a little deeper before replying.
Here’s a review from the Washington Post this year, of Murray’s latest book. Note that, while critical of his book, it does not call him a racist. Perhaps its strongest critical language is the closing sentence:
He writes as if his conclusions are just a product of cold calculus and doesn’t pause long enough to consider that perhaps it’s the assumptions in his theorem that are antithetical to the soul of America.
It actually more portrays him as out of touch with the rise of the far right than in lockstep with it. The article does not call him a racist, predict his book will cause harm, or suggest that readers avoid it. This suggests to me that there is still room for Murray’s output to be considered by a major, relatively liberal news media outlet.
The Standard-Examiner published a positive review of the same book. They are a newspaper with a circulation of about 30,000, based out of Ogden, UT.
Looking over the couple dozen other news articles from 2021 that popped up containing “Charles Murray” and “The Bell Curve”, I see several that mention protests against him, or that cite arguments over TBC as one of a handful of important examples of prominent debates about race and racism.
I also looked up protests against Murray’s speaking engagements. There have been a few major ones, most famously at Middlebury College, some minor ones, and some appearances that attracted no protest at all. My view is that for college protests, the trigger is being “close to home,” and protest organizers depend on college advertising and social ties to motivate participation.
So we are in agreement that Murray is a prominent and controversial figure on this topic, and that protests against him can provoke once-in-a-decade-level episodes of racial tension on a campus, or be viewed as arguments on par with debates over critical race theory. This isn’t just some book about a controversial topic: it was a bestseller, is still referenced 25 years later as a major source of controversy, and has motivated hundreds or even thousands of students to protest the author when he’s attempted to speak on their campus. There are many scholarly articles about the book, most of them critical.
Despite the controversy, it’s possible in 2021 for a liberal journalist to publish a critical but essentially professional review of Murray’s new work, and for a conservative journalist to publish a positive review in their newspaper.
The way I see it, Murray is a touchstone figure, but is still only very rarely prominent in the daily news cycle. Just writing about him isn’t enough to make the article newsworthy. If lsusr was a highly prominent blogger, then this review might make the news, or be alarming enough to social media activists to outcompete other tweets and shares. But he’s not a big enough figure, and this isn’t an intense enough article, to even come close to making such a big splash.
If this article poses an issue, it’s by adding one piece of evidence to the prosecutor’s exhibit that LW is a politically problematic space. Given that, as you say, this is one of the most unusually controversy-courting posts of the year, my assessment that it is “only one more piece of evidence,” rather than a potential turning point in this site’s public image, strikes me as a point of evidence against censorship. It’s just not that big a deal.
If you would care to game out for me, in a little more detail, a long-term scenario in which AGI safety becomes tainted by association with posts such as this, to the serious detriment of humanity, please do!
Agree with all of this, but my concern is not that the coupling of [worrying about AGI] and [being anti-social-justice] happens tomorrow. (I did have some separate concerns about people being put off by the post today, but I’ve been convinced somewhere in the comments under this post that the opposite is about equally likely.) It’s that this happens when AGI safety is a much bigger deal in the public discourse. (Not sure if you think this will never happen? I think there’s a chance it never happens, but that seems wildly uncertain. I would put maybe 50% on it or something? Note that even if it happens very late, say 4 years before AGI poses an existential risk, I think that’s still more than enough time for the damage to be done. EY famously argued that there is no fire alarm for AGI; if you buy this, then we can’t rely on “by this point the danger is so obvious that people will take safety seriously no matter what”.)
If your next question is “why worry about this now”, one reason is that I don’t have faith that mods will react in time when the risk increases (I’ve updated upward on how likely I think this is after talking to Ruby, but not to 100%, and who knows who’s mod in 20 years), and I have the opportunity to say something now. But even if I had full authority over how the policy changes in the future, I still wouldn’t have allowed this post, because people can dig out old material if they want to write a hit piece. This post has been archived, so from this point on there will forever be the opportunity to link LW to TBC for anyone who wants to do that. And if you applied the analog of security mindset to this problem (which I think is appropriate), this is not something you would allow to happen. There is precedent for people losing positions over things that happened decades in the past.
One somewhat concrete scenario that seems plausible (but wildly unlikely because it’s concrete) is that Elon Musk manages to make the issue mainstream in 15 years; someone does a deep dive, links this post to LW and LW to anti-social-justice (even though LW itself still doesn’t have that many more readers); this gets picked up by a lot of people who think worrying about AGI is bad; and the aforementioned coupling occurs.
The only other thing I’d say is that there is also a substantial element of randomness to what does and doesn’t create a vast backlash. You can’t look at one instance of “person with popularity level x said thing of controversy level y, nothing bad happened” and conclude that any other instance (x′,y′) with x′<x and y′<y will definitely not lead to anything bad happening.
This is a helpful addendum. I didn’t want to bust out the slippery slope argument because I didn’t have clarity on the gears-level mechanism. But in this case, we seem to have a ratchet in which X is deemed newly offensive, and a lot of attention is focused on just this particular word or phrase X. Because “it’s just this one word,” resisting the offensive-ization is made to seem petty—wouldn’t it be such a small thing to give up, in exchange for inflicting a whole lot less suffering on others?
Next week it’ll be some other X though, and the only way this ends is if you can re-establish some sort of Schelling Fence of free discourse and resist any further calls to expand censorship, even if they’re small and have good reasons to back them up.
I think that to someone who disagrees with me, they might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
My response to this objection would be that, in much the same way as phrases like “sexual preference” can be seen as offensive for their implications, or a book can be objected to for its symbolism, mild forms of censorship or “updates” in speech codes can provoke anxiety, induce fear, and restrain thought. This may not be their intention, but it is their effect, at least at times and in the present cultural climate.
So a standard of free discourse and a Schelling Fence against expansion of censorship is justified not (just) to avoid a slippery slope of ever-expanding censorship, or to attract people with certain needs or to establish a pipeline into certain roles or jobs. Its purpose is also to create a space in which we have declared that we will strive to be less timid, not just less wrong.
We might not always prioritize or succeed in that goal, but establishing that this is a space where we are giving ourselves permission to try is a feature of explicit anti-censorship norms.
Prioritizing freedom of thought and lessening timidity isn’t always the right goal. Sometimes, inclusivity, warmth, and a sense of agreeableness and safety is the right way to organize certain spaces. Different cultural moments, or institutions, might need marginally more safe spaces. Sometimes, though, they need more risky spaces. My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same. A way to protect LW’s status as a risky space is to protect our anti-censorship norms, and sometimes to exercise our privilege to post risky material such as this post.
Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.
I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.
My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining the epistemic rigor that we both think is necessary.
It is you, and others here, who are advocating a change of the status quo to create a bigger wall between x-risk topics and political controversy. I think that this would harm the goal of preventing x-risk, on current margins, as I’ve argued elsewhere here. We both have our reasons, and I’ve written down the sort of evidence that would cause me to change my point of view.
Fortunately, I enjoy the privilege of being the winner by default in this contest, since the site’s current norms already accord with my beliefs and preferences. So I don’t feel the need to gather evidence to persuade you of my position, assuming you don’t find my arguments here compelling. However, if you do choose to make the effort to gather some of the evidence I’ve elsewhere outlined, I not only would eagerly read it, but would feel personally grateful to you for making the effort. I think those efforts would be valuable for the health of this website and also for mitigating X-risk. However, they would be time-consuming, effortful, and may not pay off in the end.
I also care a lot about this; I think there are three important things to track.
First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn’t want to post something to Less Wrong can post someplace classier.)
Second is that people might not be willing to pay costs to follow taboos. The more a space is politically safe, the less people like Robin Hanson will want to be there, because many of their ideas are easier to think of if you’re not spending any of your attention on political safety.
Third is that the core topics you care about might, at some point, become political. (Certainly AI alignment was ‘political’ for many years before it became mainstream, and will become political again as soon as it stops becoming mainstream, or if it becomes partisan.)
The first is one of the reasons why LW isn’t a free speech absolutist site, even though with a fixed population of posters that would probably help us be more correct. But the second and third are why LW isn’t a zero-risk space either.
Some more points I want to make:
I don’t care about moderation decisions for this particular post, I’m just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity’s foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
Whether political pressure leaks into technical subjects mostly depends on people’s meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is “this is on the same website as something offensive, so I’m not going to listen to it”. “Let’s keep putting important things on the same website as unimportant and offensive things until they learn” is not a strategy that I expect to work here.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
I’m not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.
“Stand up to X by not doing anything X would be offended by” is obviously an unworkable strategy, it’s taking a negotiating stance that is maximally yielding in the ultimatum game, so should expect to receive as little surplus utility as possible in negotiation.
(Not doing anything X would be offended by is generally a strategy for working with X, not standing up to X; it could work if interests are aligned enough that it isn’t necessary to demand much in negotiation. But given your concern about “entryism” that doesn’t seem like the situation you think you’re in.)
steven0461 isn’t proposing standing up to X by not doing things that would offend X.
He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.
(steven0461′s comment seems to treat “standing up to left-wing political entryism” as a thing that’s desirable for its own sake, and “standing up to right-wing political entryism” as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)
If someone proposes to do A by doing B, and B by doing C, they are proposing doing A by doing C. (Here A = “stand up to left wing entryism”, B = “stand up to right wing entryism”, C = “don’t do things that left wing people are offended by”)
EDIT: Also, the situation isn’t symmetrical, since Steven is defining right-wing to mean things the left wing is offended by, and not vice versa. Hence it’s clearly a strategy for submitting to the left, as it lets the left construct the left/right dichotomy.
I’m not sure there’s a definite fact of the matter as to when something is “doing X by doing Y” in cases like this where it’s indirect, but I think either we shouldn’t use that language so broadly as to apply to such cases or it’s not obvious that it’s unworkable to “stand up to X by not doing things that offend X”, since the obvious unworkability of that is (unless I’m misunderstanding your earlier comment) predicated on the idea that it’s a sort of appeasement of X, rather than the sort of indirect thing we’re actually talking about here.
Maybe I am also being too indirect. Regardless of whether there’s some sense in which steven0461 is proposing to “stand up to X by not doing things that would offend X”, he was unambiguously not proposing “a negotiating stance that is maximally yielding in the ultimatum game”; “not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that’s so.
[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It’s not correct to say that “not doing things that would offend X” is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions (“games” if you like), and the two things you portray as inconsistent are not at all inconsistent because it’s entirely possible (whether or not it’s wise) to win one game while losing the other.
The game with “left-wing entryists” is one where they try to make LW a platform for left-wing propaganda. The game with “the left” is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to “the left” in the second game in order to resist “right-wing entryists” in the mirror-image of the first game. We might similarly make concessions to “the right” if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don’t exhibit the sort of logical inconsistency you are accusing Steven of.
The implied game is:
Step 1: The left decides what is offensively right-wing
Step 2: LW people decide what to say given this
Steven is proposing a policy for step 2 that doesn’t do anything that the left has decided is offensively right-wing. This gives the left the ability to prevent arbitrary speech.
If the left is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by talking about things without being attacked by the left)
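The ultimatum-game framing above can be made concrete with a short sketch (a minimal illustration, not from the original comments; the size of the pot and the threshold-based responder are assumptions chosen purely for the example):

```python
# Minimal ultimatum game: a proposer splits a pot with a responder.
# If the responder rejects the split, both sides get nothing.

POT = 10  # total (metaphorical) surplus to divide; arbitrary for illustration


def play(proposer_keeps, responder_min_demand):
    """Return (proposer payoff, responder payoff) for one round."""
    offer = POT - proposer_keeps
    if offer >= responder_min_demand:
        return proposer_keeps, offer  # offer accepted
    return 0, 0  # offer rejected; both lose everything


# A "maximally yielding" proposer concedes whatever is demanded, so
# their own payoff shrinks to whatever the responder chooses to leave.
for demand in [1, 5, 9]:
    payoff, _ = play(proposer_keeps=POT - demand, responder_min_demand=demand)
    print(f"responder demands {demand}: proposer keeps {payoff}")
```

The point of contention maps onto the parameters: one side reads Steven’s proposal as setting `proposer_keeps` to whatever the responder’s demand leaves over, while Steven’s reply above argues the real game has a larger pot than this one website, so yielding here is not yielding everywhere.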
I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they’re at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he’s presented with an idea that’s noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn’t help him achieve his goals, but he’ll flatten buildings anyway because that’s just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it’s not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla’s opinions.
If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:
I think “wave your cloths at home or in another field even if it’s not as good” ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted.
edit: I should clarify that when I say Godzilla flattens buildings, I’m mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.
The relevant actors aren’t consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That’s what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend’s unruly 6-year-old child who doesn’t want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called “rationalists”, the functionality of emotions as a negotiating tactic is very clear to me, even if I don’t have the same kind of deliberative control over my feelings as my speech (and the child and my mother don’t even think of themselves as doing game theory at all).
(This in itself doesn’t automatically negate your concerns, of course, but I think it’s an important modeling consideration: animals like Godzilla may be less incentivizable than Homo economicus, but they’re more like Homo economicus than a tornado or an avalanche.)
I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what’s actually going on. If you look at people of the sneer, it’s not at all clear that saying offensive things thwarts their goals. They’re pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren’t reliable), arguably de-energizes such people more than it emboldens them.
This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).
Inconveniently, that is precisely what turns out to be the case…
“Offensive things” isn’t a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they’re signaling to. It sounds like your reasoning is “if we don’t post about the Bell Curve, they’ll just start taking offense to technological forecasting, and we’ll be back where we started but with a more restricted topic space”. But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.
To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.
“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn’t (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)
Steven and his coalition are betting that it’s feasible to “hold the line” on only censoring the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don’t think it works as well in practice as they think it does, due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish—but I don’t think it’s right to dismiss the strategy as fantasy.
I’m not advocating lying. I’m advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone’s epistemic integrity.
I understand that. I cited a Sequences post that has the word “lies” in the title, but I’m claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people’s understanding of what reasoning looks like—can apply more generally to distortions that aren’t direct lies.
Omitting information can be a distortion when the information would otherwise be relevant. In “A Rational Argument”, Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting one question which would make their candidate look bad, which Yudkowsky describes as “cross[ing] the line between rationality and rationalization” (!). This is a very high standard—but what made the Sequences so valuable, is that they taught people the counterintuitive idea that this standard exists. I think there’s a lot of value in aspiring to hold one’s public reasoning to that standard.
Not infinite value, of course! If I knew for a fact that Godzilla would destroy the world if I cited a book that I otherwise would have cited as genuinely relevant, then fine, for the sake of the world, I can decline to cite the book.
Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you’re happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that’s a serious problem for the future of humanity. I think this is actually worth a fight.
With my own resources and my own name (and a pen name), I’m fighting. If someone else doesn’t want to fight with their name and their resources, I’m happy to listen to suggestions for how people with different risk tolerances can cooperate to not step on each other’s toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction isn’t strong enough, then sure, “This is on our Banned Topics list; take it to /r/TheMotte, you guys” could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy is that you guys would at least acknowledge that there is a conflict and that you’ve chosen a side.
For more on why I think not-making-false-claims is vastly too low of a standard to aim for, see “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her”.
Your posts seem to be about what happens if you filter out considerations that don’t go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn’t create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)
The people going after Steve Hsu almost entirely don’t care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that’s only 0.02 Hsu utils lost, which seems like a good trade.
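To make the arithmetic in this trade explicit, here is a minimal sketch using the comment’s stipulated numbers (the even split of damage and the util figures are the comment’s hypotheticals, not measured quantities):

```python
# Hypothetical utility accounting, per the comment's stipulated numbers.
gain = 1.0            # utils gained from adjusting the allowable topic space
total_damage = 2.0    # utils of damage from emboldened offense-takers
num_targets = 100     # 99 Sargons plus one Steve Hsu

# Assume, as the comment does, that the damage falls evenly on all targets,
# so the one target we care most about bears only a small share.
damage_per_target = total_damage / num_targets

print(damage_per_target)         # 0.02 utils lost by Hsu specifically
print(gain > damage_per_target)  # True: a good trade, if only Hsu's share counts
```

The crux, of course, is the evenness assumption: if offense-takers concentrate fire rather than spreading it, the per-target figure stops being representative.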
I don’t have a lot of verbal energy and find the “competing grandstanding walls of text” style of discussion draining, and I don’t think the arguments I’m making are actually landing for some reason, and I’m on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.
Another way this matters: Offense takers largely get their intuitions about “will taking offense achieve my goals” from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate “will taking offense specifically against LessWrong achieve my goals”, but most actors simply aren’t paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn’t care much about, like, I don’t know, fricking Sargon of Akkad.
I agree that offense-takers are calibrated against Society-in-general, not particular targets.
As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business’s files and will sell you the encryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn’t help the copy of you in this timeline)?
It’s a tough call! If your business’s files are sufficiently important, then I can definitely see why you’d want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that’s not an option …
If coordinating to resist extortion isn’t an option, that makes me very interested in trying to minimize the extent to which there is a collective “us”. “We” should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other’s throats fighting over who owns the “rationalist” brand name.
My claim was:
It’s obvious to everyone in the discussion that the model is partially false and there’s also a strategic component to people’s emotions, so repeating this is not responsive.
But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obviously and only correct action:
Kill Godzilla.
(Appreciate you spelling it out like this, the above is a clear articulation of one of the main perspectives I have on the situation.)
It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I’m not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.
I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said
but what I actually meant to say was
[EDITED to add:] No, that also isn’t quite right; my apologies; let me try again. What I actually mean is that “standing up to X” and “not doing things that would offend X” are events in two entirely separate games, and the latter is not a means to the former.
There are actually three separate interactions envisaged in Steven’s comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)
Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like “no, LW is not a place for left-wing propaganda”. He claims that in order to do this in a principled way we need also to say “LW is not a place for right-wing propaganda”, thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.
Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.
So “not doing things that would offend leftists” is not a move in any game played with left-wing entryists; “standing up to left-wing entryists” is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.
I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.
I don’t agree that it makes any sense to say, as you did, that Steven’s proposal involves “standing up to X by not saying anything that offends X”, which is the specific thing you accused him of.
Your comment above elaborates on the thing I agree about, but doesn’t address the reasons I’ve given for disagreeing with the thing I don’t agree about. That may be partly because of the screwup on my part that I mention above.
I think the distinction is important, because the defensible accusation is of the form “Steven proposes giving too much veto power over LW to certain political groups”, which is a disagreement about strategy, whereas the one you originally made is of the form “Steven proposes something blatantly self-contradictory”, which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.
The above is my main objection to what you have been saying here, but I have others which I think worth airing:
It is not true that “don’t do anything that the left considers offensively right-wing” gives the left “the ability to prevent arbitrary speech”, at least not if it’s interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it’s possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461’s proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.
It is not true that Steven proposes to “not do anything that the left has decided is offensively right-wing”. “Sufficiently offensive” was his actual wording. This doesn’t rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form “never post anything that anyone finds offensive” but one of the form “when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it”. So, again, the proposal is not to give “the left” complete veto power over what is posted on LW.
I think it is unfortunate that most of what you’ve written rounds off Steven’s references to “left/right-wing political entryism” to “the left/right”. I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see “political entryism” of whatever stripe as an enemy to be stood up to, than for LW to see “the left” or “the right” as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.
I agree it’s desirable for its own sake, but meant to give an additional argument why even those people who don’t agree it’s desirable for its own sake should be on board with it.
Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant “neutral” observers.
“Stand up to X by not doing anything X would be offended by” is not what I proposed. I was temporarily defining “right wing” as “the political side that the left wing is offended by” so I could refer to posts like the OP as “right wing” without setting off a debate, irrelevant to the point I was making, about how the OP actually thinks of it as more centrist. The point I was making is that “don’t make LessWrong either about left wing politics or about right wing politics” is a pretty easy to understand criterion, and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine “1+1=2″ as right wing politics or something, but I’m advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being “maximally yielding in an ultimatum game”, because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you’re free to use to adjust your degree of yielding. Nor does “standing up to political entryism” even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in the culture war.
I agree that LW shouldn’t be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I’m not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of “don’t bring up highly inflammatory topics unless they’re crucial to the site’s core interests” wouldn’t stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann’s theorem, and anyway, having him post on his own blog works fine. AI alignment was never remotely as political as The Bell Curve is. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the “crucial to the site’s core interests” thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I’m open to hearing it.
Not within the mainstream politics, but within academic / corporate CS and AI departments.
You’d have to use a broad sense of “political” to make this true (maybe amounting to “controversial”). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of “if you can make a case that it’s genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead”. At no point could anyone have used the proposed norms to prevent discussion of AI alignment.
I know you haven’t implied that the someone could be me, but I thought I’d just clarify that I would vehemently oppose such an argument. My argument contra slippery slope is that I don’t see evidence for it. If we look ten years into the past, there hasn’t been another book like TBC every week; in fact there hasn’t been one ever. I would bet against there being another one in the next 10 years.
There may be some risk of a slippery slope on other issues, but honestly I want that to be a separate argument, because I estimate this post to carry a lot more risk than the other < 4 posts/year that I mentioned. I don’t know if this is true (and it’s usually bad form to accuse others of lack of knowledge), but I genuinely wonder if others who’ve participated in this discussion just don’t know how strongly many people feel about this book. (It is of course possible to acknowledge this and still (or especially) be against censorship.)
I’m fairly aware of Murray’s public image, but wanted to go a little deeper before replying.
Here’s a review from the Washington Post this year, of Murray’s latest book. Note that, while critical of his book, it does not call him a racist. Perhaps its strongest critical language is the closing sentence:
It actually more portrays him as out of touch with the rise of the far right than in lockstep with it. The article does not call him a racist, predict his book will cause harm, or suggest that readers avoid it. This suggests to me that there is still room for Murray’s output to be considered by a major, relatively liberal news media outlet.
The Standard-Examiner published a positive review of the same book. They are a newspaper with a circulation of about 30,000, based out of Ogden, UT.
Looking over the other couple dozen news articles from 2021 containing “Charles Murray” and “The Bell Curve”, I see several that mention protests against him, or cite arguments over TBC as one of a handful of important examples of prominent debates about race and racism.
I also looked up protests against Murray. There have been a few major ones, most famously at Middlebury College, some minor ones, and some appearances that attracted no protests at all. My view is that for college protests, the trigger is “close to home,” and the protest organizers depend on college advertising and social ties to motivate participation.
So we are in agreement that Murray is a prominent and controversial figure on this topic, and that protests against him can provoke once-in-a-decade-level episodes of racial tension on a campus, or be viewed as arguments on par with debates over critical race theory. This isn’t just some book about a controversial topic—it was a bestseller, is still referenced 25 years later as a major source of controversy, and has motivated hundreds or even thousands of students to protest the author when he’s attempted to speak on their campuses. There are many scholarly articles written about the book, most of them critical.
Despite the controversy, it’s possible in 2021 for a liberal journalist to publish a critical but essentially professional review of Murray’s new work, and for a conservative journalist to publish a positive review in their newspaper.
The way I see it, Murray is a touchstone figure, but is still only very rarely prominent in the daily news cycle. Just writing about him isn’t enough to make the article newsworthy. If lsusr was a highly prominent blogger, then this review might make the news, or be alarming enough to social media activists to outcompete other tweets and shares. But he’s not a big enough figure, and this isn’t an intense enough article, to even come close to making such a big splash.
If this article poses an issue, it’s by adding one piece of evidence to the prosecutor’s exhibit that LW is a politically problematic space. Given that, as you say, this is one of the most unusually controversy-courting posts of the year, my assessment that it is “only one more piece of evidence,” rather than a potential turning point in this site’s public image, strikes me as a point of evidence against censorship. It’s just not that big a deal.
If you would care to game out for me, in a little more detail, a long-term scenario in which AGI safety becomes tainted by association with posts such as this, to the serious detriment of humanity, please do!
Agree with all of this, but my concern is not that the coupling of [worrying about AGI] and [being anti-social-justice] happens tomorrow. (I did have some separate concerns about people being put off by the post today, but I’ve been convinced somewhere in the comments under this post that the opposite is about equally likely.) It’s that this happens when AGI safety is a much bigger deal in the public discourse. (Not sure if you think this will never happen? I think there’s a chance it never happens, but that seems wildly uncertain. I would put maybe 50% on it or something? Note that even if it happens very late, say 4 years before AGI poses an existential risk, I think that’s still more than enough time for the damage to be done. EY famously argued that there is no fire alarm for AGI; if you buy this, then we can’t rely on “by this point the danger is so obvious that people will take safety seriously no matter what”.)
If your next question is “why worry about this now”, one reason is that I don’t have faith that mods will react in time when the risk increases (I’ve updated upward on how likely I think this is after talking to Ruby, but not to 100%, and who knows who’s a mod in 20 years), and I have the opportunity to say something now. But even if I had full authority over how the policy changes in the future, I still wouldn’t have allowed this post, because people can dig out old material if they want to write a hit piece. This post has been archived, so from this point on there will forever be the opportunity to link LW to TBC for anyone who wants to do that. And if you applied the analog of security mindset to this problem (which I think is appropriate), this is not something you would allow to happen. There is precedent for people losing positions over things that happened decades in the past.
One somewhat concrete scenario that seems plausible (but wildly unlikely because it’s concrete) is that Elon Musk manages to make the issue mainstream in 15 years; someone does a deep dive, links this to LW, and links LW to anti-social-justice (even though LW itself still doesn’t have that many more readers); this gets picked up by a lot of people who think worrying about AGI is bad; the aforementioned coupling occurs.
The only other thing I’d say is that there is also a substantial element of randomness to what does and doesn’t create a vast backlash. You can’t look at one instance of “person with popularity level x said thing of controversy level y, nothing bad happened” and conclude that any other instance (x′,y′) with x′<x and y′<y will definitely not lead to anything bad happening.