Imagine a world where having [a post mentioning the bell curve] visible on the frontpage runs a risk of destroying a lot of value. This could happen through any number of mechanisms, such as:
The site is discussed somewhere, someone claims that it’s a home for racism and points to this post as evidence. [Someone who in another universe would have become a valuable contributor to LW] sees this (but doesn’t read the post) and decides not to check LW out.
A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk related causes are unworthy of support
Someone links the article from somewhere, it gets posted on a far-right Reddit board, a bunch of people make accounts on LessWrong to make dumb comments, someone from the NYT sees it and writes a hit piece. By this time all of the dumb comments are downvoted into invisibility (and none of them ever had high karma to begin with), but the NYT reporter just deals with this by writing that the mods had to step in and censor the most outrageous comments or something.
Question: If you think this is not worth worrying about—why? What do you know, and how do you think you know it? And in what way would a world-where-it-is-worth-worrying-about look different?
To avoid repeating arguments, there have been discussions similar to this before. Here are the arguments that I remember (I’m sure this is not exhaustive).
Pro: Not allowing posts poisons our epistemic discourse; saying ‘let’s be systematically correct about everything but {x,y,z}’ is a significantly worse algorithm than ‘let’s be systematically correct’, and this can have wide-ranging effects. (Zack_M_Davis strongly argued for this point, e.g. here.) (This is also one of the posts where the discussion has happened before, in this case because I made a comment arguing the post shouldn’t be on LW.)
Contra: I’ve thought about this a lot since the discussion happened, and I increasingly just don’t buy that the negative effects are real. Especially not in this case, which seems more clear-cut than the dating post. The Bell Curve seems to be just about the single most controversial book in the world for a good chunk of people, just about any other book would be less of an issue. I assume the argument is that censorship is not proportional to the amount that is censored, but I don’t understand the mechanism here. How does this hurt discourse?
Pro: LessWrong obviously isn’t about this kind of stuff and anyone who takes an honest look at the site will notice that immediately. (Ben Pace argued this here.) He also said that he’s “pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about.”
I’m unconvinced by this for the same reasons I was then. I agree with the claim, but I don’t think assuming people are reasonable is realistic, and I don’t understand why we should just fight those fights. Where’s the cost-benefit calculation?
Follow the above links for arguments from Ben against the above.
Pro: LessWrong will get politicized anyway and we should start to practice. (Wei Dai, e.g. here)
This makes a lot more sense to me, but starting with a post on the Bell Curve is not the right way to do it. I would welcome some kind of actual plan for how this can be done from the moderators.
Until then, my position is that this post shouldn’t be on LessWrong. I’ve strong-downvoted it and would ultra-strong-downvote it if I could. However, I do think I’m open to evidence to the contrary. I would much welcome some kind of cost-benefit calculation that concludes that this is a good idea. If it’s worth doing, it’s worth doing with made-up statistics. If I were to do such a calculation, it would get a bunch of negative numbers for things like what I mentioned at the top of this comment, and almost nothing positive, because the benefit of allowing this seems genuinely negligible to me.
In my capacity as moderator, I saw this post this morning and decided to leave it posted (albeit as Personal blog with reduced visibility).
I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what’s true (a project that is overall essential to AGI outcomes, I believe) and therefore I want to minimize such limitations. That said, there are conversations that wouldn’t be worth having on LessWrong, topics that I expect would attract attention that just isn’t worth it; those I would block. However, this post didn’t feel like where I wanted to draw the line. Blocking this post feels like it would be cutting out too much for the sake of safety and giving the fear of adversaries too much control over us and our inquiries. I liked how this post gave me a great summary of controversial material, so that I now know what the backlash was in response to. I can imagine other posts where I feel differently (in fact, there was a recent post where I told an author it might be better to leave it off the site, though they missed my message and posted anyway, which ended up being fine).
It’s not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren’t okay), feel free to DM me or email me: ruby@lesswrong.com
To address something that was mentioned, I expect to change my response in the face of posting trends, if they seem fraught. There are a number of measures we could potentially take then.
Thanks for being transparent. I’m very happy to see that I was wrong in saying no one else is taking it seriously. (I didn’t notice that the post wasn’t on the frontpage, which I think proves that you did take it seriously.)
I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what’s true (a project that is overall essential to AGI outcomes, I believe) and therefore I want to minimize such limitations.
I don’t understand this concern (which I classify as the same kind of thing voiced by Zack many times and by AAB just a few comments up). We’ve had a norm against discussing politics since before LessWrong 2.0, which doesn’t seem to have had any noticeable negative effects on our ability to discuss other topics. I think what I’m advocating for is to extend this norm by a pretty moderate amount? Like, the set of interesting topics in politics seems to me to be much larger than the set of interesting [topics with the property that they risk significant backlash from people who are concerned about social justice]. (I do see how this post is useful, but The Bell Curve is literally in a class that contains a single element. There seem to be < 5 posts per year which I don’t want to have on LW for these kinds of reasons, and most of them are less useful than this one.) My gears-level prediction for how much that would degrade discussion in other areas is basically zero, but at this point I must be missing something?
A difference I can see is that disallowing this post would be done explicitly out of fear of backlash, whereas the norm against politics exists because politics is the mind-killer, but I guess I don’t see why that makes a difference (and doesn’t the mind-killer argument extend to these kinds of topics anyway?)
It’s not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren’t okay), feel free to DM me or email me: ruby@lesswrong.com
I do think that if we order all posts by where they appear on this spectrum, I would put this one farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here.
I echo anon03 in that the title is extremely provocative, but minus the claim that this is only a descriptive statement. I think it’s obviously intentionally provocative (though I will take this back if the author says otherwise), given that the author wrote this four days ago:
My favorite thing about living in the 21ˢᵗ century is that nobody can stop me from publishing whatever I want. [...] People tell me they’re worried of being cancelled by woke culture. I think this is just a convenient excuse for laziness and cowardice. What are you afraid of saying? [...] Are you afraid to say that there are significant heritable intelligence disparities between ethnic groups? It’s the obvious conclusion if you think critically about US immigration policy.
I think condemning TBC has become one of the most widely agreed on loyalty tests for many people who care about social justice. It seems clear to me that Isusr intended this post to have symbolic value, so that being provocative was an intended property. If their utility function had been to review this book because it’s very useful while minimizing risk, a very effective way to do this would have been to exclude the name from the title.
Elsewhere you write (and also ask to consolidate, so I’m responding here):
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
I think I used to endorse a model like this much more than I do now. A particular thing that I found sort of radicalizing was the “sexual preference” moment, in which a phrase that I had personally used and wouldn’t have associated with malice was overnight retconned to be a sign of bigotry, as far as I can tell primarily to score points during the nomination hearings for Amy Coney Barrett. (I don’t know anything special about Barrett’s legal decisions or whether or not she’s a bigot; I also think that sexual orientation isn’t a choice for basically anyone at the moment; I also don’t think ‘preference’ implies that it was a choice, any more than my ‘flavor preferences’ are my choice instead of being an uncontrollable fact about me.)
Supposing we agree that the taboo only covers ~10% more topics in 2020, I’m not sure I expect it will only cover 10% more topics in 2025, or 2030, or so on? And so you need to make a pitch not just “this pays for itself now” but instead something like “this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself.”
This is a helpful addendum. I didn’t want to bust out the slippery slope argument because I didn’t have clarity on the gears-level mechanism. But in this case, we seem to have a ratchet in which X is deemed newly offensive, and a lot of attention is focused on just this particular word or phrase X. Because “it’s just this one word,” resisting the offensive-ization is made to seem petty—wouldn’t it be such a small thing to give up, in exchange for inflicting a whole lot less suffering on others?
Next week it’ll be some other X though, and the only way this ends is if you can re-establish some sort of Schelling Fence of free discourse and resist any further calls to expand censorship, even if they’re small and have good reasons to back them up.
I think that to someone who disagrees with me, they might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
My response to this objection would be that, in much the same way as phrases like “sexual preference” can be seen as offensive for their implications, or a book can be objected to for its symbolism, mild forms of censorship or “updates” in speech codes can provoke anxiety, induce fear, and restrain thought. This may not be their intention, but it is their effect, at least at times and in the present cultural climate.
So a standard of free discourse and a Schelling Fence against expansion of censorship is justified not (just) to avoid a slippery slope of ever-expanding censorship, or to attract people with certain needs or to establish a pipeline into certain roles or jobs. Its purpose is also to create a space in which we have declared that we will strive to be less timid, not just less wrong.
We might not always prioritize or succeed in that goal, but establishing that this is a space where we are giving ourselves permission to try is a feature of explicit anti-censorship norms.
Prioritizing freedom of thought and lessening timidity isn’t always the right goal. Sometimes, inclusivity, warmth, and a sense of agreeableness and safety is the right way to organize certain spaces. Different cultural moments, or institutions, might need marginally more safe spaces. Sometimes, though, they need more risky spaces. My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same. A way to protect LW’s status as a risky space is to protect our anti-censorship norms, and sometimes to exercise our privilege to post risky material such as this post.
My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same.
Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.
I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.
My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining the epistemic rigor that we both think is necessary.
It is you, and others here, who are advocating a change of the status quo to create a bigger wall between x-risk topics and political controversy. I think that this would harm the goal of preventing x-risk, on current margins, as I’ve argued elsewhere here. We both have our reasons, and I’ve written down the sort of evidence that would cause me to change my point of view.
Fortunately, I enjoy the privilege of being the winner by default in this contest, since the site’s current norms already accord with my beliefs and preferences. So I don’t feel the need to gather evidence to persuade you of my position, assuming you don’t find my arguments here compelling. However, if you do choose to make the effort to gather some of the evidence I’ve elsewhere outlined, I not only would eagerly read it, but would feel personally grateful to you for making the effort. I think those efforts would be valuable for the health of this website and also for mitigating X-risk. However, they would be time-consuming, effortful, and may not pay off in the end.
Our culture is desperately in need of spaces that are correct about the most important technical issues
I also care a lot about this; I think there are three important things to track.
First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn’t want to post something to Less Wrong can post someplace classier.)
Second is that people might not be willing to pay costs to follow taboos. The more a space is politically safe, the less people like Robin Hanson will want to be there, because many of their ideas are easier to think of if you’re not spending any of your attention on political safety.
Third is that the core topics you care about might, at some point, become political. (Certainly AI alignment was ‘political’ for many years before it became mainstream, and will become political again as soon as it stops being mainstream, or if it becomes partisan.)
The first is one of the reasons why LW isn’t a free speech absolutist site, even though with a fixed population of posters that would probably help us be more correct. But the second and third are why LW isn’t a zero-risk space either.
I don’t care about moderation decisions for this particular post, I’m just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity’s foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
Whether political pressure leaks into technical subjects mostly depends on people’s meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is “this is on the same website as something offensive, so I’m not going to listen to it”. “Let’s keep putting important things on the same website as unimportant and offensive things until they learn” is not a strategy that I expect to work here.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
I’m not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
“Stand up to X by not doing anything X would be offended by” is obviously an unworkable strategy; it’s taking a negotiating stance that is maximally yielding in the ultimatum game, and so should expect to receive as little surplus utility as possible in negotiation.
(Not doing anything X would be offended by is generally a strategy for working with X, not standing up to X; it could work if interests are aligned enough that it isn’t necessary to demand much in negotiation. But given your concern about “entryism” that doesn’t seem like the situation you think you’re in.)
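The ultimatum-game claim above (that a maximally yielding stance receives as little surplus as possible) can be made concrete with a toy sketch. The $10 pot, integer offers, and the `proposer_best_offer` helper are all illustrative assumptions, not anything from the thread:

```python
# Toy ultimatum game: a proposer splits a pot, a responder accepts any
# offer at or above their (credible) rejection threshold.

def proposer_best_offer(pot, threshold):
    """A rational proposer offers the smallest amount the responder accepts.

    Returns None if the responder would reject every possible offer.
    """
    for offer in range(pot + 1):
        if offer >= threshold:
            return offer
    return None

# A maximally yielding responder (threshold 0) is offered nothing:
assert proposer_best_offer(10, 0) == 0
# A responder who credibly rejects offers below 4 receives 4:
assert proposer_best_offer(10, 4) == 4
```

The point of the sketch is only the comparative static: lowering the threshold at which you accept (i.e., raising the bar for what you're willing to fight over) monotonically lowers what a best-responding counterparty concedes to you.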
steven0461 isn’t proposing standing up to X by not doing things that would offend X.
He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.
(steven0461’s comment seems to treat “standing up to left-wing political entryism” as a thing that’s desirable for its own sake, and “standing up to right-wing political entryism” as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)
If someone proposes to do A by doing B, and B by doing C, they are proposing doing A by doing C. (Here A = “stand up to left wing entryism”, B = “stand up to right wing entryism”, C = “don’t do things that left wing people are offended by”)
EDIT: Also, the situation isn’t symmetrical, since Steven is defining right-wing to mean things the left wing is offended by, and not vice versa. Hence it’s clearly a strategy for submitting to the left, as it lets the left construct the left/right dichotomy.
I’m not sure there’s a definite fact of the matter as to when something is “doing X by doing Y” in cases like this where it’s indirect, but I think either we shouldn’t use that language so broadly as to apply to such cases or it’s not obvious that it’s unworkable to “stand up to X by not doing things that offend X”, since the obvious unworkability of that is (unless I’m misunderstanding your earlier comment) predicated on the idea that it’s a sort of appeasement of X, rather than the sort of indirect thing we’re actually talking about here.
Maybe I am also being too indirect. Regardless of whether there’s some sense in which steven0461 is proposing to “stand up to X by not doing things that would offend X”, he was unambiguously not proposing “a negotiating stance that is maximally yielding in the ultimatum game”; “not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that’s so.
[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It’s not correct to say that “not doing things that would offend X” is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions (“games” if you like), and the two things you portray as inconsistent are not at all inconsistent because it’s entirely possible (whether or not it’s wise) to win one game while losing the other.
The game with “left-wing entryists” is one where they try to make LW a platform for left-wing propaganda. The game with “the left” is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to “the left” in the second game in order to resist “right-wing entryists” in the mirror-image of the first game. We might similarly make concessions to “the right” if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don’t exhibit the sort of logical inconsistency you are accusing Steven of.
Step 1: The left decides what is offensively right-wing
Step 2: LW people decide what to say given this
Steven is proposing a policy for step 2 that doesn’t do anything that the left has decided is offensively right-wing. This gives the left the ability to prevent arbitrary speech.
If the left is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by talking about things without being attacked by the left)
I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they’re at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he’s presented with an idea that’s noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn’t help him achieve his goals, but he’ll flatten buildings anyway because that’s just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it’s not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla’s opinions.
If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:
The implied game is:
Step 1: The bull decides what is offensively red
Step 2: LW people decide what cloths to wave given this
Steven is proposing a policy for step 2 that doesn’t wave anything that the bull has decided is offensively red. This gives the bull the ability to prevent arbitrary cloth-waving.
If the bull is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by waving cloths without being gored by the bull)
I think “wave your cloths at home or in another field even if it’s not as good” ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted.
edit: I should clarify that when I say Godzilla flattens buildings, I’m mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.
The relevant actors aren’t consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That’s what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend’s unruly 6-year-old child who doesn’t want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called “rationalists”, the functionality of emotions as a negotiating tactic is very clear to me, even if I don’t have the same kind of deliberative control over my feelings as my speech (and the child and my mother don’t even think of themselves as doing game theory at all).
(This in itself doesn’t automatically negate your concerns, of course, but I think it’s an important modeling consideration: animals like Godzilla may be less incentivizable than Homo economicus, but they’re more like Homo economicus than a tornado or an avalanche.)
I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what’s actually going on. If you look at people of the sneer, it’s not at all clear that saying offensive things thwarts their goals. They’re pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren’t reliable), arguably de-energizes such people more than it emboldens them.
This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).
Inconveniently, that is precisely what turns out to be the case…
“Offensive things” isn’t a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they’re signaling to. It sounds like your reasoning is “if we don’t post about the Bell Curve, they’ll just start taking offense to technological forecasting, and we’ll be back where we started but with a more restricted topic space”. But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.
To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.
“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn’t (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)
Steven and his coalition are betting that it’s feasible to “hold the line” on only censoring the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don’t think it works as well in practice as they think it does, due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish—but I don’t think it’s right to dismiss the strategy as fantasy.
due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”
I’m not advocating lying. I’m advocating locally preferring to avoid subjects that force people to either lie, or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone’s epistemic integrity.
I understand that. I cited a Sequences post that has the word “lies” in the title, but I’m claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people’s understanding of what reasoning looks like—can apply more generally to distortions that aren’t direct lies.
Omitting information can be a distortion when the information would otherwise be relevant. In “A Rational Argument”, Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting one question which would make their candidate look bad, which Yudkowsky describes as “cross[ing] the line between rationality and rationalization” (!). This is a very high standard—but what made the Sequences so valuable, is that they taught people the counterintuitive idea that this standard exists. I think there’s a lot of value in aspiring to hold one’s public reasoning to that standard.
Not infinite value, of course! If I knew for a fact that Godzilla will destroy the world if I cite a book that I otherwise would have cited as genuinely relevant, then fine, for the sake of the world, I can not cite the book.
Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you’re happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that’s a serious problem for the future of humanity. I think this is actually worth a fight.
With my own resources and my own name (and a pen name), I’m fighting. If someone else doesn’t want to fight with their name and their resources, I’m happy to listen to suggestions for how people with different risk tolerances can cooperate to not step on each other’s toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction isn’t strong enough, then sure, “This is on our Banned Topics list; take it to /r/TheMotte, you guys” could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy, is that you guys would at least acknowledge that there is a conflict and that you’ve chosen a side.
might fill their opinion vacuum with false claims from elsewhere, or with true claims
Your posts seem to be about what happens if you filter out considerations that don’t go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn’t create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)
The people going after Steve Hsu almost entirely don’t care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that’s only 0.02 Hsu utils lost, which seems like a good trade.
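The expected-harm arithmetic in this comment can be made explicit (a toy calculation; the 1-util gain, 2-util damage, and even distribution are the comment’s illustrative figures, not real estimates):

```python
# Toy model of the trade described above: 1 util gained by adjusting
# allowable topic space, 2 utils of damage spread evenly over 100
# targets (99 Sargons + 1 Steve Hsu), of whom only Hsu matters to us.
gain = 1.0
total_damage = 2.0
num_targets = 100

hsu_loss = total_damage / num_targets  # share of damage landing on Hsu
print(hsu_loss)         # 0.02
print(gain > hsu_loss)  # True: a good trade under these numbers
```

The force of the argument, of course, depends entirely on the assumption that the damage really is spread evenly rather than concentrated on the one target we care about.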
I don’t have a lot of verbal energy and find the “competing grandstanding walls of text” style of discussion draining, and I don’t think the arguments I’m making are actually landing for some reason, and I’m on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.
Another way this matters: Offense takers largely get their intuitions about “will taking offense achieve my goals” from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate “will taking offense specifically against LessWrong achieve my goals”, but most actors simply aren’t paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn’t care much about, like, I don’t know, fricking Sargon of Akkad.
I agree that offense-takers are calibrated against Society-in-general, not particular targets.
As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business’s files and will sell you the decryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn’t help the copy of you in this timeline)?
It’s a tough call! If your business’s files are sufficiently important, then I can definitely see why you’d want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that’s not an option …
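The tension between the causal and the “timeless” analysis can be sketched as a toy expected-value comparison (all numbers are hypothetical illustrations, including the assumption that attack frequency scales with how often victims pay):

```python
# One-shot: paying recovers the files and causally dominates refusing.
FILES_VALUE = 100.0  # value of the encrypted files (hypothetical)
RANSOM = 10.0        # ransom demanded (hypothetical)

def one_shot_payoff(pay: bool) -> float:
    return FILES_VALUE - RANSOM if pay else 0.0

# Community-wide: if attackers choose targets in proportion to how
# often victims pay, a low pay rate makes attacks unprofitable and rare.
def community_payoff(pay_rate: float) -> float:
    attack_prob = 0.5 * pay_rate  # attacks scale with expected profit
    loss_per_attack = pay_rate * RANSOM + (1 - pay_rate) * FILES_VALUE
    return -attack_prob * loss_per_attack

print(one_shot_payoff(True) > one_shot_payoff(False))       # True
print(community_payoff(0.0) > community_payoff(1.0))        # True
```

The sketch only shows why “don’t pay” can be collectively rational even though paying is individually rational, which is the analogy being drawn to offense-takers; it says nothing about whether the coordination needed to hold the pay rate down is actually available.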
our behavior [...] punishment against us [...] some other entity that we shouldn’t care much about
If coordinating to resist extortion isn’t an option, that makes me very interested in trying to minimize the extent to which there is a collective “us”. “We” should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other’s throats fighting over who owns the “rationalist” brand name.
if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted
It’s obvious to everyone in the discussion that the model is partially false and there’s also a strategic component to people’s emotions, so repeating this is not responsive.
So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact.
But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obvious and only correct action:
It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I’m not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.
I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said
“not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all.
but what I actually meant to say was
“standing up to X” in his comment is unambiguously not a move in any game being played with X at all.
[EDITED to add:] No, that also isn’t quite right; my apologies; let me try again. What I actually mean is that “standing up to X” and “not doing things that would offend X” are events in two entirely separate games, and the latter is not a means to the former.
There are actually three separate interactions envisaged in Steven’s comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)
Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like “no, LW is not a place for left-wing propaganda”. He claims that in order to do this in a principled way we need also to say “LW is not a place for right-wing propaganda”, thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.
Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.
So “not doing things that would offend leftists” is not a move in any game played with left-wing entryists; “standing up to left-wing entryists” is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.
I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.
I don’t agree that it makes any sense to say, as you did, that Steven’s proposal involves “standing up to X by not saying anything that offends X”, which is the specific thing you accused him of.
Your comment above elaborates on the thing I agree about, but doesn’t address the reasons I’ve given for disagreeing with the thing I don’t agree about. That may be partly because of the screwup on my part that I mention above.
I think the distinction is important, because the defensible accusation is of the form “Steven proposes giving too much veto power over LW to certain political groups”, which is a disagreement about strategy, whereas the one you originally made is of the form “Steven proposes something blatantly self-contradictory”, which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.
The above is my main objection to what you have been saying here, but I have others which I think worth airing:
It is not true that “don’t do anything that the left considers offensively right-wing” gives the left “the ability to prevent arbitrary speech”, at least not if it’s interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it’s possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461’s proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.
It is not true that Steven proposes to “not do anything that the left has decided is offensively right-wing”. “Sufficiently offensive” was his actual wording. This doesn’t rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form “never post anything that anyone finds offensive” but one of the form “when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it”. So, again, the proposal is not to give “the left” complete veto power over what is posted on LW.
I think it is unfortunate that most of what you’ve written rounds off Steven’s references to “left/right-wing political entryism” to “the left/right”. I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see “political entryism” of whatever stripe as an enemy to be stood up to, than for LW to see “the left” or “the right” as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.
standing up to all kinds of political entryism seems to me obviously desirable for its own sake
I agree it’s desirable for its own sake, but meant to give an additional argument why even those people who don’t agree it’s desirable for its own sake should be on board with it.
if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter
Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant “neutral” observers.
“Stand up to X by not doing anything X would be offended by” is not what I proposed. I was temporarily defining “right wing” as “the political side that the left wing is offended by” so I could refer to posts like the OP as “right wing” without setting off an irrelevant debate about whether the OP actually thinks of it as more centrist. The point I was making is that “don’t make LessWrong either about left wing politics or about right wing politics” is a pretty easy-to-understand criterion, and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine “1+1=2″ as right wing politics or something, but I’m advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being “maximally yielding in an ultimatum game”, because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you’re free to use to adjust your degree of yielding. Nor does “standing up to political entryism” even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in culture war.
I agree that LW shouldn’t be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I’m not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of “don’t bring up highly inflammatory topics unless they’re crucial to the site’s core interests” wouldn’t stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann’s theorem, and anyway, having him post on his own blog works fine. AI alignment was never political remotely like how the Bell Curve is political. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the “crucial to the site’s core interests” thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I’m open to hearing it.
You’d have to use a broad sense of “political” to make this true (maybe amounting to “controversial”). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of “if you can make a case that it’s genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead”. At no point could anyone have used the proposed norms to prevent discussion of AI alignment.
I think someone who disagrees with me might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
I know you haven’t implied that the someone could be me, but I thought I’d just clarify that I would vehemently oppose such an argument. My argument contra slippery slope is that I don’t see evidence for it. If we look ten years into the past, there hasn’t been another book like TBC every week; in fact there hasn’t been one ever. I would bet against there being another one in the next 10 years.
There may be some risk of a slippery slope on other issues, but honestly I want that to be a separate argument because I estimate this post to carry a lot more risk than the other < 4 posts/year that I mentioned. I don’t know if this is true (and it’s usually bad form to accuse others of lack of knowledge), but I genuinely wonder if others who’ve participated in this discussion just don’t know how strongly many people feel about this book. (it is of course possible to acknowledge this and still (or especially) be against censorship.)
I’m fairly aware of Murray’s public image, but wanted to go a little deeper before replying.
Here’s a review from the Washington Post this year, of Murray’s latest book. Note that, while critical of his book, it does not call him a racist. Perhaps its strongest critical language is the closing sentence:
He writes as if his conclusions are just a product of cold calculus and doesn’t pause long enough to consider that perhaps it’s the assumptions in his theorem that are antithetical to the soul of America.
It actually portrays him more as out of touch with the rise of the far right than in lockstep with it. The article does not call him a racist, predict his book will cause harm, or suggest that readers avoid it. This suggests to me that there is still room for Murray’s output to be considered by a major, relatively liberal news media outlet.
The Standard-Examiner published a positive review of the same book. They are a newspaper with a circulation of about 30,000, based out of Ogden, UT.
Looking over the couple dozen other news articles from 2021 that popped up containing “Charles Murray” and “The Bell Curve”, I see several that mention protests against him, or that cite arguments over TBC as one of a handful of prominent examples of debates about race and racism.
I also looked up protests against Murray’s speaking engagements. There have been a few major ones, most famously at Middlebury College, some minor ones, and some appearances that attracted no protests at all. My view is that for college protests, the trigger is “close to home,” and the protest organizers depend on college advertising and social ties to motivate participation.
So we are in agreement that Murray is a prominent and controversial figure on this topic, and protests against him can provoke once-in-a-decade-level episodes of racial tension on a campus, or be viewed as arguments on par with debates over critical race theory. This isn’t just some book about a controversial topic—it was a bestseller, is still referenced 25 years later as a major source of controversy, and has motivated hundreds or even thousands of students to protest the author when he’s attempted to speak on their campus. There are many scholarly articles written about the book, generally critical of it.
Despite the controversy, it’s possible in 2021 for a liberal journalist to publish a critical but essentially professional review of Murray’s new work, and for a conservative journalist to publish a positive review in their newspaper.
The way I see it, Murray is a touchstone figure, but is still only very rarely prominent in the daily news cycle. Just writing about him isn’t enough to make the article newsworthy. If lsusr was a highly prominent blogger, then this review might make the news, or be alarming enough to social media activists to outcompete other tweets and shares. But he’s not a big enough figure, and this isn’t an intense enough article, to even come close to making such a big splash.
If this article poses an issue, it’s by adding one piece of evidence to the prosecutor’s exhibit that LW is a politically problematic space. Given that, as you say, this is one of the most unusually controversy-courting posts of the year, my assessment that it is “only one more piece of evidence,” rather than a potential turning point in this site’s public image, strikes me as a point of evidence against censorship. It’s just not that big a deal.
If you would care to game out for me, in a little more detail, a long-term scenario in which AGI safety becomes tainted by association with posts such as this, to the serious detriment of humanity, please do!
Agree with all of this, but my concern is not that the coupling of [worrying about AGI] and [being anti-social-justice] happens tomorrow. (I did have some separate concerns about people being put off by the post today, but I’ve been convinced somewhere in the comments under this post that the opposite is about equally likely.) It’s that this happens when AGI safety is a much bigger deal in the public discourse. (Not sure if you think this will never happen? I think there’s a chance it never happens but that seems wildly uncertain. I would put maybe 50% on it or something? Note that even if it happens very late, say 4 years before AGI poses an existential risk, I think that’s still more than enough time for the damage to be done. EY famously argued that there is no fire alarm for AGI; if you buy this then we can’t rely on “by this point the danger is so obvious that people will take safety seriously no matter what”.)
If your next question is “why worry about this now”, one reason is that I don’t have faith that mods will react in time when the risk increases (I’ve updated upward on how likely I think this is after talking to Ruby, but not to 100%, and who knows who will be a mod in 20 years), and I have the opportunity to say something now. But even if I had full authority over how the policy changes in the future, I still wouldn’t have allowed this post, because people can dig out old material if they want to write a hit piece. This post has been archived, so from this point on there will forever be the opportunity to link LW to TBC for anyone who wants to do that. And if you applied the analog of security mindset to this problem (which I think is appropriate), this is not something you would allow to happen. There is precedent for people losing positions over things that happened decades in the past.
One somewhat concrete scenario that seems plausible (but wildly unlikely because it’s concrete) is that Elon Musk manages to make the issue mainstream in 15 years; someone does a deep dive and links this to LW, and LW to anti-social-justice (even though LW itself still doesn’t have that many more readers); this gets picked up by a lot of people who think worrying about AGI is bad; the aforementioned coupling occurs.
The only other thing I’d say is that there is also a substantial element of randomness to what does and doesn’t create a vast backlash. You can’t look at one instance of “person with popularity level x said thing of controversy level y, nothing bad happened” and conclude that any other instance (x′,y′) with x′<x and y′<y will definitely not lead to anything bad happening.
And so you need to make a pitch not just “this pays for itself now” but instead something like “this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself.”
I don’t think it will be obvious, but I think we’ll be able to make an imperfect estimate of when to change the policy that’s still better than giving up on future evaluation of such tradeoffs and committing reputational murder-suicide immediately. (I for one like free speech and will be happy to advocate for it on LW when conditions change enough to make it seem anything other than pointlessly self-destructive.)
We’ve had a norm against discussing politics since before LessWrong 2.0, which doesn’t seem to have had any noticeable negative effects on our ability to discuss other topics.
I’m not sure whether that’s true, but separately, the norm against politics has definitely impacted our ability to discuss politics. Perhaps that’s a necessary sacrifice, but it’s a sacrifice. In this particular case, both the object level (why is our society the way it is) and the meta-level (what are the actual views in this piece that got severe backlash) are relevant to our modeling of the world, and I think it’d be a loss to not have this piece.
I do think that if we order all posts by where they appear on this spectrum, I would put this farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here.
I’m not sure where this post would fall in my ranking (along the dimension you’re pointing at). It’s possible I agree with you that it’s at the extreme end–but there has to be a post at the extreme end. The posts that are imo (or in other moderators’ opinions) over the line are ones you wouldn’t see.
I echo anon03 in that the title is extremely provocative, but minus the claim that this is only a descriptive statement.
I’d guess that it was intentionally provocative (to what degree, I don’t know), but I don’t feel inclined to tell the author they can’t do that in this case.
If I had written the post, I’d have named it differently and added caveats, etc. But I didn’t and wouldn’t have because of timidness, which makes me hesitant to place requirements on the person who actually did.
In this particular case, both the object level (why is our society the way it is) and the meta-level (what are the actual views in this piece that got severe backlash) are relevant to our modeling of the world and I think it’d be a loss to not have this piece.
I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not), and also that this particular post has a lot of value. But if you look at the set of all books for which (1) a largely positive review could plausibly have been written by a super smart guy like lsusr, and (2) the backlash could plausibly be really bad, I think it literally contains a single element. It’s only TBC. There are a bunch of non-bookreview posts that I also wouldn’t want, but they’re very rare. It seems like we’re talking about a much smaller set of topics than what’s covered by the norm around politics.
I feel like if we wanted to find the optimal point in the value-risk space, there’s no way it’s “ban on all politics but no restriction on social justice”. There have got to be political areas with less risk and more payoff, like just all non-US politics or something.
I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not)
A global ban on political discussion by rationalists might be a big sacrifice, but it seems to me there are no major costs to asking people to take it elsewhere.
(I just edited “would be a big sacrifice” to “might be a big sacrifice”, because the same forces that cause a ban to seem like a good idea will still distort discussions even in the absence of a ban, and perhaps make them worse than useless because they encourage the false belief that a rational discussion is being had.)
Just a short note that the title seems like the correct one so that it’s searchable by the name of the book slash author. Relatedly, all book reviews on LW are called “Book Review: <Book Name>”, this one didn’t stand out as any different to me (except it adds the author’s name, which seems pretty within reasonable bounds to me).
Okay. Maybe not the ideal goal, not sure, but I think it’s pretty within range of fine things to do. There’s a fairly good case that people will search the author’s name and want to understand their ideas because he’s well-known, so it helps as a search term.
I’ll bite, but I can’t promise to engage in a lot of back-and-forth.
The site is discussed somewhere, someone claims that it’s a home for racism and points to this post as evidence. Someone else who would have otherwise become a valuable contributor reads it and decides not to check it out
A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk related causes are unworthy of support
Let’s generalize. A given post on LW’s frontpage may heighten or diminish its visibility and appeal to potential newcomers, or the visibility/appeal of associated causes like X-risk. You’ve offered one reason why this post might heighten its visibility while diminishing its appeal.
Here’s an alternative scenario, in which this post heightens rather than diminishes the appeal of LW. Perhaps a post about the Bell Curve will strike somebody as a sign that this website welcomes free and open discourse, even on controversial topics, as long as it’s done thoughtfully. This might heighten, rather than diminish, LW’s appeal, for a person such as this. Indeed, hosting posts on potentially controversial topics might select for people like this, and that might not only grow the website, but reinforce its culture in a useful way.
I am not claiming that this post heightens the appeal of LW on net—only that it’s a plausible alternative hypothesis. I think that we should be very confident that a post will diminish the appeal of LW to newcomers before we advocate for communally-imposed censorship.
Not only do we have to worry that such censorship will impact the free flow of information and ideas, but that it will personally hurt the feelings of a contributor. Downvotes and calls for censorship pretty clearly risk diminishing the appeal of the website to the poster, who has already demonstrated that they care about this community. If successful, the censorship would only potentially bolster the website’s appeal for some hypothetical newcomer. It makes more sense to me to prioritize the feelings of those already involved. I don’t know how lsusr feels about your comment, but I know that when other people have downvoted or censored my posts and comments, I have felt demoralized.
Someone links the article from somewhere, it gets posted on far right reddit board, a bunch of people make accounts on LessWrong to make dumb comments, someone from the NYT sees it and writes a hit piece.
The reason I think this is unlikely is that the base rate of (blogs touching on politics making it into the NYT for far-right trolling)/(total blogs touching on politics) is low. Slate Star Codex had a large number of readers before the NYT wrote an article about it. I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX). LW is the collective work of a bunch of mainly-anonymous bloggers posting stuff that's largely inoffensive and ~never (recently) flagrantly attacking particular political factions. Indeed, we have some pretty strong norms against open politicization. Because its level of openly political posting and its readership are both low, I think LW is an unappealing target for a brigade or hit piece. Heck, even Glen Weyl thinks we're not worth his time!
Edit: See habryka’s stats below for a counterpoint. I still think there’s a meaningful difference between the concentrated attention given to posts on ACX vs. the diffuse attention (of roughly equal magnitude) distributed throughout the vastness of LW.
For this reason, it once again does not seem worth creating a communal norm of censorship and a risk of hurt feelings by active posters.
Note also that, while you have posited and acted upon (via downvoting and commenting) a hypothesis that the risks of this post outweigh the benefits, you've burdened respondents with supplying more rigor than you brought to your original post ("I would much welcome some kind of a cost-benefit calculation that concludes that this is a good idea"). It seems to me that a healthier norm would be that, before you publicly proclaim a post worthy of censorship, you do the more rigorous cost/benefit calculation yourself and offer it up for others to critique.
Or should I fight fire with fire, by strongly-upvoting lsusr's post to counteract your strong-downvote? In this scenario, upvotes and downvotes are being used not as a referendum on the quality of the post, but as a referendum on whether it should be censored to protect LW. Is that how we wish this debate to be decided?
As a final question, consider that you seem to view this post in particular as exceptionally risky for LW. That means you are making an extraordinary claim: that this post, unlike almost every other LW post, is worthy of censorship. Extraordinary claims require extraordinary evidence. Have you met that standard?
I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX)
LW’s readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.
Not that I don’t believe you, but that’s also really hard for me to wrap my head around. Can you put numbers on that claim? I’m not sure if ACX has a much smaller readership than I’d imagined, or if LW has a much bigger one, but either way I’d like to know!
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
It’s very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX.
So while I’m updating in favor of the site’s readership being quite a bit bigger than I’d thought, I still think there’s some disconnect here between what I’m thinking of by “readership” and the magnitude of “readership” is coming across in these stats.
Note that LW gets 1-2 OOM fewer comments on the average post, but not in total. I reckon monthly comments are the same OOM. And if you add up total word count on each site, I suspect LW is 1 OOM bigger each month. ACX is more focused and the discussion is more focused; LW is a much broader space with lots of smaller conversations.
That makes a lot of sense. I do get the feeling that, although total volume on a particular topic is more limited here, that there’s a sense of conversation and connection that I don’t get on ACX, which I think is largely due to the notification system we have here for new comments and messages.
This is weekly comments for LessWrong over the last year. Last we counted, something like 300 on a SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.
I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an unusually high open thread, but still over 1200/week if you exclude that too.)
In September it was 10350 comments, over 2400/week. I can’t be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least.
Not too far separate though, like maybe 2x but not 10x.
(Edit: to clarify, this is "comments on posts published in the relevant month", but that shouldn't particularly matter here)
I don’t think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it’s just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
I think this number can be relatively straightforwardly taken at face value. Elizabeth’s post was at the top of HN for a few hours, so a lot of people saw it. A small city’s worth seems about right for the number of people who clicked through and at least skimmed it.
Extraordinary claims require extraordinary evidence. Have you met that standard?
I think the evidence that wokeism is a powerful force in the world we live in is abundant, and my primary reaction to your comment is that it feels like everything you said could have been written in a world where this isn't so. There is an inherent asymmetry here in how many people care about which things to what degree in the real world. (As I've mentioned in the last discussion, I know a person who falls squarely into the second category I've mentioned: a committed EA, very technically smart, but who thinks all LW-adjacent things are poisonous, in her case because of sexism rather than racism, but it's in the same cluster.)
Sam Harris invited the author of the Bell Curve onto his podcast 4 years ago, and as a result has had a stream of hateful rhetoric targeted his way that lasts to this day. Where is the analogous observable effect in the opposite direction? If it doesn't exist, why is postulating the opposite effect plausible in this case?
My rough cost-benefit analysis is −5/-20/-20 for the points I've mentioned, +1 for the advantage of being able to discuss this here, and maybe +2 for the effect of attracting people who like it for the opposite symbolism (i.e., here's someone not afraid to discuss hard things), and I feel like I don't want to assign a number to how it impacts lsusr's feelings. The reason I didn't spell this out was because I thought it would come across as unnecessarily uncharitable, and it doesn't convey much new information because I already communicated that I don't see the upside.
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
“Wokeism” is powerful, and I agree that this post elevated this site’s risk of being attacked or condemned either by the right or the left. I also agree that some people have been turned off by the views on racism or sexism they’ve been exposed to by some posters on this site.
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
In fact, I am skeptical about our ability to usefully predict the effect of individual posts on driving readership to or away from this site. Which is why I don’t advocate censoring individual posts on this basis.
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
I agree that the risk of anything terrible happening right now is very low for this reason. (Though I’d still estimate it to be higher than the upside.) But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
This comment actually made me update somewhat because it’s harder than I thought to find an asymmetry here. But it’s still only a part of the story (and the part I’ve put the least amount of weight on.)
But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.
On a simple level, my framework is that we care about two classes of factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences of crime (object-level), and of creating a police force or military (enforcement-level).
What I am arguing in this case is that the negative risks × consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks × consequences of being brigaded or criticized in the news. Also, I'm uncertain enough about the balance of this post's effect on inflow vs. outflow of readership that I'm close to 50/50, and I expect the effect to be small enough either way to ignore it.
Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you're describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance—and one that still doesn't rule out the possibility that we get attacked anyway.
I’d also note that you may be contributing to the elevation of risk with your choices of language. By using terms like “wokeism,” “mob,” and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.
Your standard, if I’m reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site’s reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it’s also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?
My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don’t think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr’s, so both of them pass my test for “it’s fine to talk about it.”
After reading this, I realize that I’ve done an extremely poor job communicating with everything I’ve commented on this post, so let me just try to start over.
I think what I’m really afraid of is a sequence of events that goes something like this:
1. Every couple of months, someone on LW makes a post like the above.
2. In some (most?) cases, someone speaks up against this (in this case, we had two); there is some discussion, but the majority comes down on the side that censorship is bad and there's no need to take drastic action.
3. The result is that we never establish any kind of norm nor otherwise prepare for political backlash.
4. In ten or twenty or forty years from now, in a way that's impossible to predict because any specific scenario is extremely unlikely, the position of being worried about AGI gets coupled to being anti-social-justice in the public discourse; as a result it massively loses status, the big labs react by taking safety far less seriously, and maybe we have fewer people writing papers on alignment.
5. At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error.
After the discussion on the dating post, I made some attempts to post a follow-up but chickened out, either because I was afraid of the reaction or maybe just because I couldn't figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something, and then somehow I felt I had to say something as well; it wasn't well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense, and I felt like the above-mentioned scenario is definitely going to happen and I won't be able to do anything about it because arguing for censorship is just a lost cause. I think I then intentionally (but subconsciously) used the language you've just pointed out to signal that I don't agree with the object-level part of anything I'm arguing for (probably in the hopes of changing the reception?), even though I don't think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.
I don’t think any of this is an argument for why I’m right, but I think that’s about what really happened.
Probably it’s significantly less than 50% that anything like what I described happens, just because of the conjunction—who knows if anyone will even still care about social justice in 20 years. But it doesn’t seem nearly unlikely enough not to take seriously, and I don’t see anyone taking it seriously, and it really terrifies me. I don’t completely understand why, since I tend not to be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.
I don’t think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?
I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don’t think you’ve expressed yourself particularly poorly. I also acknowledge that it’s hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I’m on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.
The reason I don’t take it is because I don’t share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr’s is net negative for these causes, nor that such a norm of censorship would be net beneficial.
What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things; and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.
It’s also hard for me to judge. I can say that LW has played a key role for me connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and am the only point of reference that several of my loved ones have for hearing about these issues.
So, N of 1, but LW has probably improved the trajectory of AGI safety by a miniscule but nonzero amount via its influence on me. And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments and frustrations with leftwing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of ‘anything potentially controversial’ (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash of any post I remember seeing on LW, ever. (I just said the same in a reply to Ruby, and I wasn’t being hypothetical.)
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed.
I’m probably an unusual case, but I got invited into the alignment forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren’t for the way that the two forums are connected, I wouldn’t have written the sequence. The caveat is that I’m currently not pursuing a “direct” path on alignment but am instead trying to go the academia route by doing work in the intersection of [widely recognized] and [safety-relevant] (i.e. on interpretability), so you could argue that the pipeline ultimately didn’t work. But I think (not 100% sure) at least Alex Turner is a straightforward success story for said pipeline.
And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics.
I think you probably want to respond to this on my reply to Ruby so that we don’t have two discussions about the same topic. My main objection is that the amount of censorship I’m advocating for seems to me to be tiny: I think fewer than 5 posts per year, far less than what is censored by the norm against politics.
Edit: I also want to object to this:
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world. I’m saying you shouldn’t be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn’t involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don’t think has hurt our discourse.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world.
What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.
To this list, you have appended “elevating X-risk.” This is what I mean by “impact on the world.”
Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you’re afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.
An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example), with only modest censorship. For example, we might not ban sales of a certain book. We just make it library policy not to purchase them. Or we restrict purchase to a certain age group. Or major publishers make a decision not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren’t enacting bans, we’re OK with it.
On LW, one such form of soft censorship is the mod’s decision to keep a post off the frontpage.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to one such as myself, it is precisely these small marginal increases in censorship that elevate X-risk, and the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.
Your desire to marginally increase censorship feels to me a bit like a Pascal’s Mugging. You worry about a small risk of dire consequences that may never emerge, in order to justify a small but clear negative cost in the present moment. I don’t think you’re out of line to hold this belief. I just think that I’d need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
The link thing was anon03's idea; I want posts about TBC to be banned outright.
Other than that, I think you’ve understood my model. (And I think I understand yours except that I don’t understand the gears of the mechanism by which you think x-risk increases.)
1. X-risk, and AGI safety in particular, require unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let's call these people "rationalists."
2. Even in scientific and engineering communities, and the population of rational people generally, the combination of these traits I'm referring to as "rationalism" is rare.
3. Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
4. Rationalists won't be able to get their needs met in mainstream scientific or engineering communities, which prioritize a subset of the total rationalist package of traits.
5. Hence, they'll seek an alternative community in which to get those needs met.
6. Rationalists who haven't yet discovered a rationalist community won't often have advance knowledge of AGI safety. Instead, they'll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations—often with politics—that will motivate them to seek out a different community, and to stay engaged with it.
7. When these people discover a community that engages with the controversial political topics they've seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they'll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
8. By associating with this community, they'll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.
I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they’re wrestling with when they first discover it. They won’t see an open attitude of free enquiry modeled, but instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline or staying power for people to work on X-risks of all kinds, including AGI safety.
Politics continues to affect people even after they’ve come to understand why it’s so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.
When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and doing it in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around.
What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don’t think the content mattered. But I don’t have strong resistance to believing that this is how it works for other people.
I think if your model is applied to the politics ban, it would say that it’s also quite bad (maybe not as bad because most politics stuff isn’t as shunned and censored as social justice stuff)? If that’s true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai’s “it’ll go there eventually so we should practice” argument) but censor the most controversial social justice stuff. I feel like the current solution isn’t Pareto optimal in the {epistemic health} x {safety against backlash} space.
Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.
Just extrapolating from my own experience. For me, the content was important.
I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it’s retaining, or if the frustration experienced by those who’ve been troubled by the political content outweighs the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.
A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It’s a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.
It’s this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can’t participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.
This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who’d leave if that controversial content were removed were outweighed by the number of people who’d join, then I’d be open-minded about that marginal increase in censorship.
An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We’d ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don’t feel it’s something they can or want to tolerate. Then we’d try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We’d also try to understand their level of interest in participating in a version of LW with more of these limitations in place.
Here, I am hypothesizing that there’s a group of ex-LWers or marginal LWers who feel a strong affinity for most of the content, but an even stronger aversion to a minority subset of it, to such a degree that they sharply curtail their participation—such that if the offensive tiny fraction of the content were removed, they’d undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.
It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn’t participate.
At the same time, we’d engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about, why do they feel that way?
I am positing that there is a significant subset of LWers for whom the minority of posts engaging with politics are very important sources of the site’s appeal.
How is it possible that I could simultaneously be guessing—and it is just a guess—that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?
The reason is that there are abundant spaces online and in-person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of likeminded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.
For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It’s rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.
I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it’s small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.
But I do think that making the effort to thoroughly examine and gather evidence on the extent to which our political status quo serves to attract or repel people would be well worth it. Asking at what point the inherent cost of a marginal increase in censorship becomes worth paying in exchange for a more inclusive environment seems like a reasonable question. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.
The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn’t approached in that spirit, I think it would just fail.
Ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position of being worried about AGI will get coupled to being anti-social-justice in the public discourse; as a result, it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we’ll have fewer people writing papers on alignment.
So, I both think that in the past 1) people have thought the x-risk folks are weird and low-status and didn’t want to be affiliated with them, and in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like “willingness to read The Bell Curve” no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.
I think there’s a lot of value in trying to nudge your presentation to not trigger other people’s allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can’t sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
I mean, this works until someone in a position of influence bows to the pressure, and I don’t see why this can’t happen.
I think we can’t sacrifice the core to do those things.
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
(I’ve said the same here; if you have a response to this, it might make sense to all keep it in one place.)
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.
If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on, but I wouldn’t feel as strongly about it.
I think “censorship” is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness.
I think that this is completely wrong. Such a norm is definitely a norm of (self-)censorship—as has been discussed on Less Wrong already.
It is plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone, but simply as a book review / summary, just like it says. Catering, in any way whatsoever, to anyone who finds the current post “hurtful and insulting”, is an absolutely terrible idea. Doing such a thing cannot do anything but corrode Less Wrong’s epistemic standards.
Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.
What do you do?
APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.
APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.
I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually “win”, i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.
~~
Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don’t think it is “plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone”.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism, and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that’s the kind of mistake we should be sympathetic to. It doesn’t mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they’ve been trusting bad sources. Who among us hasn’t trusted bad sources at some point in our lives? I sure have!
And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say “Oh well, can’t please everyone.” But if Person A’s wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
If Person A believes this without ever having either (a) read The Bell Curve or (b) read a neutral, careful review/summary of The Bell Curve, then A is not a reasonable person.
All sorts of unreasonable people have all sorts of unreasonable and false beliefs. Should we cater to them all?
No. Of course we should not.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person.
The title, as I said before, is neutrally descriptive. Anyone who takes it as an endorsement is, once again… unreasonable.
Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
No, this would actually be wildly unreasonable behavior, unworthy of any remotely rational, sane adult. Children, perhaps, may be excused for behaving in this way—and only if they’re very young.
The bottom line is: the idea that “reasonable people” think and behave in the way that you’re describing is the antithesis of what is required to maintain a sane society. If we cater to this sort of thing, here on Less Wrong, then we completely betray our raison d’etre, and surrender any pretense to “raising the sanity waterline”, “searching for truth”, etc.
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn’t read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don’t have unlimited time.
By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with “I get it, I know what you’ve heard about this book, but hear me out, I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”, then I would certainly consider reading the piece.
Your analogy breaks down because the Bell Curve is extremely reasonable, not some forged junk like “The Protocols Of The Elders Of Zion”.
If a book reviewed here mentioned evolution and that offended some traditional religious people, would we need to give a disclaimer and potentially leave it off the site? What if some conservative religious people believe belief in evolution directly harms them? They would be regarded as insane, and so are people offended by TBC.
That’s all this is, by the way: left-wing evolution denial. How likely is it that people separated for tens of thousands of years, with different founder populations, will have equal levels of cognitive ability? It’s impossible.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Yeah.
“What do you think you know, and how do you think you know it?” never stopped being the rationalist question.
As for the rest of your comment—first of all, my relative levels of interest in reading a book review of the Protocols would be precisely reversed from yours.
Secondly, I want to call attention to this bit:
“… I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”
There is no particular reason to “give this book a chance”—to what? Convince us of its thesis? Persuade us that it’s harmless? No. The point of reviewing a book is to improve our understanding of the world. The Protocols of the Elders of Zion is a book which had an impact on global events, on world history. The reason to review it is to better understand that history, not to… graciously grant the Protocols the courtesy of having its allotted time in the spotlight.
If you think that the Protocols are insignificant, that they don’t matter (and thus that reading or talking about them is a total waste of our time), that is one thing—but that’s not true, is it? You yourself say that the Protocols had a terrible impact! All the things which we should strive our utmost to understand, how can a piece of writing that contributed to some of the worst atrocities in history not be among them? How do you propose to prevent history from repeating, if you refuse, not only to understand it, but even to bear its presence?
The idea that we should strenuously shut our eyes against bad things, that we should forbid any talk of that which is evil, is intellectually toxic.
And the notion that by doing so, we are actually acting in a moral way, a righteous way, is itself the root of evil.
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
Do you view those facts as evidence that I’m an unreasonable person?
Yeah.
Do you ever, in your life, think that things are true without checking? Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
Suppose we want to solve that problem. (We do, right?) I feel like you’re proposing a solution of “form a community of people who have never trusted anyone about anything”. But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.
We’re not born with an epistemology instruction manual. We all have to find our way, and we probably won’t get it right the first time. Splitting the world into “people who already agree with me” and “people who are forever beyond reason”, that’s the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at lesswrong are not such a group. But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
I agree completely.
But note that here we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact, effects of its publication, etc. The latter sort of thing may properly be discussed in a “discussion section” subsequent to the main body of the review, or it may simply be left up to a Wikipedia link. I would certainly not require that it preface the book review before I found that review “acceptable”, or forbore to question the author’s motives, or what have you.
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
Do you ever, in your life, think that things are true without checking?
I certainly try not to, though bounded rationality does not permit me always to live up to this goal.
Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?)
I have no beliefs about this one way or the other.
Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?)
I have no beliefs about this one way or the other.
Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
Depends on the medicine, but I am given to understand that this is often true. I have “checked” in the sense that I regularly read up on the toxicology and other pharmacokinetic properties of medications I take, those I might take, and even those I don’t plan to take. Yes, I look for skeptical sources.
My recommendation, in general, is to avoid having opinions about things that don’t affect you; aim for a neutral skepticism. For things that do affect you, investigate; don’t just stumble into beliefs. This is my policy, and it’s served me well.
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at time trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
The solution to this is to trust less, check more; decline to have any opinion one way or the other, where doing so doesn’t affect you. And when you have to, trust—but verify.
Strive always to be aware of just how much trust in sources you haven’t checked underlies any belief you hold—and, crucially, adjust the strength of your beliefs accordingly.
And when you’re given an opportunity to check, to verify, to investigate—seize it!
A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken.
The principle of charity, as often practiced (here and in other rationalist spaces), can actually be a terrible idea.
But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
We should use it only to the extent that it does not in any way reduce our own ability to seek, and find, the truth, and not one iota more.
we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
I imagine that very few people would find the title by itself insulting; it’s really “the title in conjunction with the first paragraph or two” (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).
Hmm, maybe another better way to say it is: The title plus the genre is what might insult people. The genre of this OP is “a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc.” You don’t need to read far or carefully to know that the OP belongs to this genre. It’s a very different genre from a (reasonable) book review of “Protocols of the Elders of Zion”, or a (reasonable) book review of “Mein Kampf”, or a (reasonable) book review of “Harry Potter”.
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
No, of course not (the more so because it’s a value judgment, not a statement of fact).
The rest of what you say, I have already addressed.
Approach 2 assumes that A is (a) a reasonable person and (b) coming into the situation with good faith. Usually, neither is true.
What is more, your list of two approaches is a very obvious false dichotomy, crafted in such a way as to mock the people you’re disagreeing with. Instead of either the strawman Approach 1 or the unacceptable Approach 2, I endorse the following:
APPROACH 3: Ignore the fact that A (supposedly) finds X “demeaning”. Say (or don’t say) X whenever the situation calls for it. Behave in all ways as if A’s opinion is completely irrelevant.
(Note, by the way, that Approach 2 absolutely does constitute (self-)censorship, as anything that imposes costs on a certain sort of speech—such as, for instance, requiring elaborate genuflection to supposedly “offended” parties, prior to speaking—will serve to discourage that form of speech. Of course, I suspect that this is precisely the goal—and it is also precisely why I reject your suggestion wholeheartedly. Do not feed utility monsters.)
There’s a difference between catering to an audience and proactively framing things in the least explosive way.
Maybe what you are saying is that when people try to do the latter, they inevitably end up self-censoring and catering to the (hostile) audience?
But that seems false to me. I think framing controversial topics in a non-explosive way is a strategically important, underappreciated skill. In addition, I suspect that practicing the skill improves our epistemics. It forces us to engage with a critical audience of people with ideological differences. When I imagine having to write on a controversial topic, one of the readers I mentally simulate is “person who is ideologically biased against me, but still reasonable.” I don’t cater to unreasonable people, but I want to take care to not put off people who are still “in reach.” And if they’re reasonable, sometimes they have good reasons behind at least some of their concerns, and their perspectives can be learnt from.
As I mentioned elsethread, if I’d written the book review I would have done what you describe. But I didn’t and probably never would have written it out of timidness, and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.
I was just commenting on the general norm. I haven’t read the OP and didn’t mean to voice an opinion on it.
I’m updating that I don’t understand how discussions work. It happens a lot that I object only to a particular feature of an argument, or to one particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.
FWIW, I think the “caving in” discussed/contemplated in Rafael Harth’s comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merit in it, but I suspect (and would very much like to believe) that it’s a bad strategy.
Maybe I would focus more on criticizing this caving-in mentality if I didn’t feel like I was preaching to the choir. “Open discussion” norms feel so ingrained on LessWrong that I’m more worried about other good norms getting lost or overlooked.
Maybe I would feel differently (more “under attack”) if I were more emotionally invested in the community and felt like something I had helped build was being eroded. Presently I feel more concerned about dangers from evaporative cooling, where many who care to a not-small degree about soft virtues in discussions related to tone/tact/welcomingness (but NOT in a strawmanned sense) end up becoming less active or avoiding the comment sections.
Edit: The virtue I mean is maybe best described as “presenting your side in a way that isn’t just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that’s predisposed to be suspicious of you.”
This is a moot point, because anyone who finds a post title like “Book review: The Bell Curve by Charles Murray” to be “controversial”, “explosive”, etc., is manifestly unreasonable.
Even if a theoretical author cares not one whit about appearing to endorse “bad things” #scarequotes, including preemptive disclaimers is still good practice to forestall this sort of meta-commentary and keep the comments focused on the content of the post, and not the method of delivery.
Imagine a world where having [a post mentioning the bell curve] visible on the frontpage runs a risk of destroying a lot of value. This could be through any number of mechanisms like
The site is discussed somewhere, someone claims that it’s a home for racism and points to this post as evidence. [Someone who in another universe would have become a valuable contributor to LW] sees this (but doesn’t read the post) and decides not to check LW out.
A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk related causes are unworthy of support.
Someone links the article from somewhere, it gets posted on a far-right Reddit board, a bunch of people make accounts on LessWrong to make dumb comments, and someone from the NYT sees it and writes a hit piece. By this time all of the dumb comments have been downvoted into invisibility (and none of them ever had high karma to begin with), but the NYT reporter deals with this by writing that the mods had to step in and censor the most outrageous comments, or something.
Question: If you think this is not worth worrying about—why? What do you know, and how do you think you know it? And in what way would a world-where-it-is-worth-worrying-about look different?
To avoid repeating arguments, there have been discussions similar to this before. Here are the arguments that I remember (I’m sure this is not exhaustive).
Pro: Not allowing posts poisons our epistemic discourse; saying ‘let’s be systematically correct about everything but {x,y,z}’ is a significantly worse algorithm than ‘let’s be systematically correct’, and this can have wide-ranging effects. (Zack_M_Davis strongly argued for this point, e.g. here.) (This link is also to one of the posts where the discussion has happened before, in this case because I made a comment arguing the post shouldn’t be on LW.)
Contra: But we could take it offline. (Evan Hubinger, e.g., here)
Contra: I’ve thought about this a lot since the discussion happened, and I increasingly just don’t buy that the negative effects are real. Especially not in this case, which seems more clear-cut than the dating post. The Bell Curve seems to be just about the single most controversial book in the world for a good chunk of people; just about any other book would be less of an issue. I assume the argument is that the harm of censorship is not proportional to the amount that is censored, but I don’t understand the mechanism here. How does this hurt discourse?
Pro: LessWrong obviously isn’t about this kind of stuff and anyone who takes an honest look at the site will notice that immediately. (Ben Pace argued this here.) He also said that he’s “pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about.”
I’m unconvinced by this for the same reasons I was then. I agree with the claim, but I don’t think assuming people are reasonable is realistic, and I don’t understand why we should just fight those fights. Where’s the cost-benefit calculation?
Follow the above links for arguments from Ben against the above.
Pro: LessWrong will get politicized anyway and we should start to practice. (Wei Dai, e.g. here)
This makes a lot more sense to me, but starting with a post on the Bell Curve is not the right way to do it. I would welcome some kind of actual plan from the moderators for how this could be done.
Until then, my position is that this post shouldn’t be on LessWrong. I’ve strong-downvoted it and would ultra-strong-downvote it if I could. However, I am open to evidence to the contrary. I would much welcome some kind of cost-benefit calculation that concludes that this is a good idea; if it’s worth doing, it’s worth doing with made-up statistics. If I were to do such a calculation, it would get a bunch of negative numbers for things like what I mentioned at the top of this comment, and almost nothing positive, because the benefit of allowing this seems genuinely negligible to me.
In my capacity as moderator, I saw this post this morning and decided to leave it posted (albeit as Personal blog with reduced visibility).
I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what’s true (a project that is overall essential to AGI outcomes, I believe), and therefore I want to minimize such limitations. That said, there are conversations that wouldn’t be worth having on LessWrong, topics that I expect would attract attention that’s just not worth it–those I would block. However, this post didn’t feel like where I wanted to draw the line. Blocking this post feels like it would be cutting out too much for the sake of safety and giving the fear of adversaries too much control over us and our inquiries. I liked how this post gave me a great summary of controversial material, so that I now know what the backlash was in response to. I can imagine other posts where I’d feel differently (in fact, there was a recent post I told an author it might be better to leave off the site, though they missed my message and posted anyway, which ended up being fine).
It’s not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren’t okay), feel free to DM me or email me: ruby@lesswrong.com
To address something that was mentioned: I expect to change my response if posting trends shift in ways that seem fraught. There are a number of measures we could potentially take then.
Thanks for being transparent. I’m very happy to see that I was wrong in saying no-one else is taking it seriously. (I didn’t notice that the post wasn’t on the frontpage, which I think proves that you did take it seriously.)
I don’t understand this concern (which I classify as the same kind of thing voiced by Zack many times and AAB just a few comments up). We’ve had a norm against discussing politics since before LessWrong 2.0, which doesn’t seem to have had any noticeable negative effects on our ability to discuss other topics. I think what I’m advocating for is to extend this norm by a pretty moderate amount? Like, the set of interesting topics in politics seems to me to be much larger than the set of interesting [topics with the property that they risk significant backlash from people who are concerned about social justice]. (I do see how this post is useful, but the bell curve is literally in a class that contains a single element. There seem to be < 5 posts per year which I don’t want to have on LW for these kinds of reasons, and most of them are less useful than this one.) My gears-level prediction for how much this would degrade discussion in other areas is basically zero, but at this point I must be missing something?
A difference I can see is that disallowing this post would be done explicitly out of fear of backlash, whereas the norm against politics exists because politics is the mind-killer. But I guess I don’t see why that makes a difference (and doesn’t the mind-killer argument extend to these kinds of topics anyway?).
I do think that if we ordered all posts by where they appear on this spectrum, I would put this one farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here.
I echo anon03’s view that the title is extremely provocative, though minus the claim that this is merely a descriptive statement. I think it’s obviously intentionally provocative (though I will take this back if the author says otherwise), given that the author wrote this four days ago
I think condemning TBC has become one of the most widely agreed-on loyalty tests for many people who care about social justice. It seems clear to me that lsusr intended this post to have symbolic value, so that being provocative was an intended property. If their utility function had been to review this book because it’s very useful while minimizing risk, a very effective way to do that would have been to exclude the name from the title.
Elsewhere you write (and also ask to consolidate, so I’m responding here):
I think I used to endorse a model like this much more than I do now. A particular thing that I found sort of radicalizing was the “sexual preference” moment, in which a phrase that I had personally used and wouldn’t have associated with malice was overnight retconned to be a sign of bigotry, as far as I can tell primarily to score points during the nomination hearings for Amy Coney Barrett. (I don’t know anything special about Barrett’s legal decisions or whether or not she’s a bigot; I also think that sexual orientation isn’t a choice for basically anyone at the moment; I also don’t think ‘preference’ implies that it was a choice, any more than my ‘flavor preferences’ are my choice instead of being an uncontrollable fact about me.)
Supposing we agree that the taboo only covers ~10% more topics in 2020, I’m not sure I expect it will only cover 10% more topics in 2025, or 2030, or so on? And so you need to make a pitch not just “this pays for itself now” but instead something like “this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself.”
This is a helpful addendum. I didn’t want to bust out the slippery slope argument because I didn’t have clarity on the gears-level mechanism. But in this case, we seem to have a ratchet in which X is deemed newly offensive, and a lot of attention is focused on just this particular word or phrase X. Because “it’s just this one word,” resisting the offensive-ization is made to seem petty—wouldn’t it be such a small thing to give up, in exchange for inflicting a whole lot less suffering on others?
Next week it’ll be some other X though, and the only way this ends is if you can re-establish some sort of Schelling Fence of free discourse and resist any further calls to expand censorship, even if they’re small and have good reasons to back them up.
I think that to someone who disagrees with me, they might say that what’s in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn’t I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?
My response to this objection would be that, in much the same way as phrases like “sexual preference” can be seen as offensive for their implications, or a book can be objected to for its symbolism, mild forms of censorship or “updates” in speech codes can provoke anxiety, induce fear, and restrain thought. This may not be their intention, but it is their effect, at least at times and in the present cultural climate.
So a standard of free discourse and a Schelling Fence against expansion of censorship is justified not (just) to avoid a slippery slope of ever-expanding censorship, or to attract people with certain needs or to establish a pipeline into certain roles or jobs. Its purpose is also to create a space in which we have declared that we will strive to be less timid, not just less wrong.
We might not always prioritize or succeed in that goal, but establishing that this is a space where we are giving ourselves permission to try is a feature of explicit anti-censorship norms.
Prioritizing freedom of thought and lessening timidity isn’t always the right goal. Sometimes inclusivity, warmth, and a sense of agreeableness and safety are the right way to organize certain spaces. Different cultural moments, or institutions, might need marginally more safe spaces. Sometimes, though, they need more risky spaces. My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same. A way to protect LW’s status as a risky space is to protect our anti-censorship norms, and sometimes to exercise our privilege to post risky material such as this post.
Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.
I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.
My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining the epistemic rigor that we both think is necessary.
It is you, and others here, who are advocating a change of the status quo to create a bigger wall between x-risk topics and political controversy. I think that this would harm the goal of preventing x-risk, on current margins, as I’ve argued elsewhere here. We both have our reasons, and I’ve written down the sort of evidence that would cause me to change my point of view.
Fortunately, I enjoy the privilege of being the winner by default in this contest, since the site’s current norms already accord with my beliefs and preferences. So I don’t feel the need to gather evidence to persuade you of my position, assuming you don’t find my arguments here compelling. However, if you do choose to make the effort to gather some of the evidence I’ve elsewhere outlined, I not only would eagerly read it, but would feel personally grateful to you for making the effort. I think those efforts would be valuable for the health of this website and also for mitigating X-risk. However, they would be time-consuming, effortful, and may not pay off in the end.
I also care a lot about this; I think there are three important things to track.
First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn’t want to post something to Less Wrong can post someplace classier.)
Second is that people might not be willing to pay costs to follow taboos. The more a space is politically safe, the less people like Robin Hanson will want to be there, because many of their ideas are easier to think of if you’re not spending any of your attention on political safety.
Third is that the core topics you care about might, at some point, become political. (Certainly AI alignment was ‘political’ for many years before it became mainstream, and will become political again as soon as it stops becoming mainstream, or if it becomes partisan.)
The first is one of the reasons why LW isn’t a free-speech-absolutist site, even though with a fixed population of posters that would probably help us be more correct. But the second and third are why LW isn’t a zero-risk space either.
Some more points I want to make:
I don’t care about moderation decisions for this particular post. I’m just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot—which is also my foot and humanity’s foot—for the short-term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
Whether political pressure leaks into technical subjects mostly depends on people’s meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is “this is on the same website as something offensive, so I’m not going to listen to it”. “Let’s keep putting important things on the same website as unimportant and offensive things until they learn” is not a strategy that I expect to work here.
It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.
I’m not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.
“Stand up to X by not doing anything X would be offended by” is obviously an unworkable strategy; it takes a negotiating stance that is maximally yielding in the ultimatum game, and so should expect to receive as little surplus utility as possible in negotiation.
(Not doing anything X would be offended by is generally a strategy for working with X, not standing up to X; it could work if interests are aligned enough that it isn’t necessary to demand much in negotiation. But given your concern about “entryism” that doesn’t seem like the situation you think you’re in.)
steven0461 isn’t proposing standing up to X by not doing things that would offend X.
He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.
(steven0461’s comment seems to treat “standing up to left-wing political entryism” as a thing that’s desirable for its own sake, and “standing up to right-wing political entryism” as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)
If someone proposes to do A by doing B, and B by doing C, they are proposing doing A by doing C. (Here A = “stand up to left wing entryism”, B = “stand up to right wing entryism”, C = “don’t do things that left wing people are offended by”)
EDIT: Also, the situation isn’t symmetrical, since Steven is defining right-wing to mean things the left wing is offended by, and not vice versa. Hence it’s clearly a strategy for submitting to the left, as it lets the left construct the left/right dichotomy.
I’m not sure there’s a definite fact of the matter as to when something is “doing X by doing Y” in cases like this where it’s indirect. Either we shouldn’t use that language so broadly as to apply to such cases, or it’s not obvious that it’s unworkable to “stand up to X by not doing things that offend X”—since the obvious unworkability of that is (unless I’m misunderstanding your earlier comment) predicated on the idea that it’s a sort of appeasement of X, rather than the sort of indirect thing we’re actually talking about here.
Maybe I am also being too indirect. Regardless of whether there’s some sense in which steven0461 is proposing to “stand up to X by not doing things that would offend X”, he was unambiguously not proposing “a negotiating stance that is maximally yielding in the ultimatum game”; “not doing things that would offend X” in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that’s so.
[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It’s not correct to say that “not doing things that would offend X” is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions (“games” if you like), and the two things you portray as inconsistent are not at all inconsistent because it’s entirely possible (whether or not it’s wise) to win one game while losing the other.
The game with “left-wing entryists” is one where they try to make LW a platform for left-wing propaganda. The game with “the left” is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to “the left” in the second game in order to resist “right-wing entryists” in the mirror-image of the first game. We might similarly make concessions to “the right” if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don’t exhibit the sort of logical inconsistency you are accusing Steven of.
The implied game is:
Step 1: The left decides what is offensively right-wing
Step 2: LW people decide what to say given this
Steven is proposing a policy for step 2 that doesn’t do anything that the left has decided is offensively right-wing. This gives the left the ability to prevent arbitrary speech.
If the left is offended by negotiating for more than $1 in the ultimatum game, Steven’s proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by talking about things without being attacked by the left)
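The negotiating logic in the steps above can be made concrete. Here is a minimal sketch (with hypothetical payoffs, not anyone’s actual model of the situation) of why a maximally yielding responder in the ultimatum game captures no surplus, while a responder with a credible threshold does:

```python
# Ultimatum game sketch: a proposer splits a pie of 10 units and the
# responder accepts or rejects. A proposer who knows the responder's
# acceptance threshold offers exactly that threshold, so a maximally
# yielding responder (threshold 0) receives the minimum possible surplus.

PIE = 10

def proposer_offer(responder_threshold):
    """Best response for a proposer who knows the responder's threshold:
    offer the smallest amount the responder will still accept."""
    return responder_threshold

def play(responder_threshold):
    """Return the responder's payoff when the proposer best-responds."""
    offer = proposer_offer(responder_threshold)
    accepted = offer >= responder_threshold
    return offer if accepted else 0

# A "maximally yielding" responder accepts any non-negative offer:
print(play(0))  # -> 0: yields everything, receives no surplus
# A responder credibly demanding 4 units receives 4:
print(play(4))  # -> 4
```

In the metaphor of the thread, the “pie” stands for the benefits of being able to discuss things freely; a policy that never does anything the other side has declared offensive corresponds to the threshold-0 responder.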
I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they’re at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he’s presented with an idea that’s noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn’t help him achieve his goals, but he’ll flatten buildings anyway because that’s just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it’s not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla’s opinions.
If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:
I think “wave your cloths at home or in another field even if it’s not as good” ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist “don’t give them an inch” approach is warranted.
edit: I should clarify that when I say Godzilla flattens buildings, I’m mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.
The relevant actors aren’t consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That’s what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend’s unruly 6-year-old child who doesn’t want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called “rationalists”, the functionality of emotions as a negotiating tactic is very clear to me, even if I don’t have the same kind of deliberative control over my feelings as my speech (and the child and my mother don’t even think of themselves as doing game theory at all).
(This in itself doesn’t automatically negate your concerns, of course, but I think it’s an important modeling consideration: animals like Godzilla may be less incentivizable than Homo economicus, but they’re more like Homo economicus than a tornado or an avalanche.)
I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what’s actually going on. If you look at people of the sneer, it’s not at all clear that saying offensive things thwarts their goals. They’re pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren’t reliable), arguably de-energizes such people more than it emboldens them.
This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).
Inconveniently, that is precisely what turns out to be the case…
“Offensive things” isn’t a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they’re signaling to. It sounds like your reasoning is “if we don’t post about the Bell Curve, they’ll just start taking offense to technological forecasting, and we’ll be back where we started but with a more restricted topic space”. But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.
I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.
To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.
“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn’t (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)
Steven and his coalition are betting that it’s feasible to “hold the line” on only censoring the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don’t think it works as well in practice as they think it does, due to the mechanisms described in “Entangled Truths, Contagious Lies” and “Dark Side Epistemology”—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish—but I don’t think it’s right to dismiss the strategy as fantasy.
I’m not advocating lying. I’m advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone’s epistemic integrity.
I understand that. I cited a Sequences post that has the word “lies” in the title, but I’m claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people’s understanding of what reasoning looks like—can apply more generally to distortions that aren’t direct lies.
Omitting information can be a distortion when the information would otherwise be relevant. In “A Rational Argument”, Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting one question which would make their candidate look bad, which Yudkowsky describes as “cross[ing] the line between rationality and rationalization” (!). This is a very high standard—but what made the Sequences so valuable is that they taught people the counterintuitive idea that this standard exists. I think there’s a lot of value in aspiring to hold one’s public reasoning to that standard.
Not infinite value, of course! If I knew for a fact that Godzilla will destroy the world if I cite a book that I otherwise would have cited as genuinely relevant, then fine, for the sake of the world, I can not cite the book.
Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you’re happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that’s a serious problem for the future of humanity. I think this is actually worth a fight.
With my own resources and my own name (and a pen name), I’m fighting. If someone else doesn’t want to fight with their name and their resources, I’m happy to listen to suggestions for how people with different risk tolerances can cooperate to not step on each other’s toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction isn’t strong enough, then sure, “This is on our Banned Topics list; take it to /r/TheMotte, you guys” could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy, is that you guys would at least acknowledge that there is a conflict and that you’ve chosen a side.
For more on why I think not-making-false-claims is vastly too low of a standard to aim for, see “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her”.
Your posts seem to be about what happens if you filter out considerations that don’t go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn’t create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)
The people going after Steve Hsu almost entirely don’t care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that’s only 0.02 Hsu utils lost, which seems like a good trade.
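The arithmetic in that trade can be sketched explicitly (the util figures are the comment’s hypotheticals, not measurements of anything):

```python
# Sketch of the hypothetical trade described above. All numbers are the
# comment's made-up utils, not measurements of anything real.
gain = 1.0                    # utils gained by adjusting allowable topic space
total_damage = 2.0            # utils of damage caused by doing so
people_harmed = 99 + 1        # 99 Sargons plus one Steve Hsu
damage_per_person = total_damage / people_harmed  # evenly distributed
hsu_utils_lost = damage_per_person                # Hsu's share alone

print(damage_per_person)      # 0.02
print(gain > hsu_utils_lost)  # True: on these numbers, the trade looks good
```

The point being that if you care mostly about the one Steve Hsu and hardly at all about the 99 Sargons, the relevant loss is his 0.02-util share, not the full 2 utils.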
I don’t have a lot of verbal energy and find the “competing grandstanding walls of text” style of discussion draining, and I don’t think the arguments I’m making are actually landing for some reason, and I’m on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.
Another way this matters: Offense takers largely get their intuitions about “will taking offense achieve my goals” from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate “will taking offense specifically against LessWrong achieve my goals”, but most actors simply aren’t paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn’t care much about, like, I don’t know, fricking Sargon of Akkad.
I agree that offense-takers are calibrated against Society-in-general, not particular targets.
As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business’s files and will sell you the encryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn’t help the copy of you in this timeline)?
It’s a tough call! If your business’s files are sufficiently important, then I can definitely see why you’d want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that’s not an option …
If coordinating to resist extortion isn’t an option, that makes me very interested in trying to minimize the extent to which there is a collective “us”. “We” should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other’s throats fighting over who owns the “rationalist” brand name.
My claim was:
It’s obvious to everyone in the discussion that the model is partially false and there’s also a strategic component to people’s emotions, so repeating this is not responsive.
But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obviously and only correct action:
Kill Godzilla.
(Appreciate you spelling it out like this, the above is a clear articulation of one of the main perspectives I have on the situation.)
It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I’m not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.
I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said
but what I actually meant to say was
[EDITED to add:] No, that also isn’t quite right; my apologies; let me try again. What I actually mean is that “standing up to X” and “not doing things that would offend X” are events in two entirely separate games, and the latter is not a means to the former.
There are actually three separate interactions envisaged in Steven’s comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)
Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like “no, LW is not a place for left-wing propaganda”. He claims that in order to do this in a principled way we need also to say “LW is not a place for right-wing propaganda”, thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.
Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.
So “not doing things that would offend leftists” is not a move in any game played with left-wing entryists; “standing up to left-wing entryists” is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.
I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.
I don’t agree that it makes any sense to say, as you did, that Steven’s proposal involves “standing up to X by not saying anything that offends X”, which is the specific thing you accused him of.
Your comment above elaborates on the thing I agree about, but doesn’t address the reasons I’ve given for disagreeing with the thing I don’t agree about. That may be partly because of the screwup on my part that I mention above.
I think the distinction is important, because the defensible accusation is of the form “Steven proposes giving too much veto power over LW to certain political groups”, which is a disagreement about strategy, whereas the one you originally made is of the form “Steven proposes something blatantly self-contradictory”, which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.
The above is my main objection to what you have been saying here, but I have others which I think worth airing:
It is not true that “don’t do anything that the left considers offensively right-wing” gives the left “the ability to prevent arbitrary speech”, at least not if it’s interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it’s possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461′s proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.
It is not true that Steven proposes to “not do anything that the left has decided is offensively right-wing”. “Sufficiently offensive” was his actual wording. This doesn’t rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form “never post anything that anyone finds offensive” but one of the form “when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it”. So, again, the proposal is not to give “the left” complete veto power over what is posted on LW.
I think it is unfortunate that most of what you’ve written rounds off Steven’s references to “left/right-wing political entryism” to “the left/right”. I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see “political entryism” of whatever stripe as an enemy to be stood up to, than for LW to see “the left” or “the right” as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.
I agree it’s desirable for its own sake, but meant to give an additional argument why even those people who don’t agree it’s desirable for its own sake should be on board with it.
Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant “neutral” observers.
“Stand up to X by not doing anything X would be offended by” is not what I proposed. I was temporarily defining “right wing” as “the political side that the left wing is offended by” so I could refer to posts like the OP as “right wing” without setting off a debate (irrelevant to the point I was making) about how the OP actually thinks of it as more centrist. The point I was making is that “don’t make LessWrong either about left wing politics or about right wing politics” is a pretty easy-to-understand criterion, and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine “1+1=2” as right wing politics or something, but I’m advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being “maximally yielding in an ultimatum game”, because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you’re free to use to adjust your degree of yielding. Nor does “standing up to political entryism” even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in culture war.
I agree that LW shouldn’t be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I’m not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of “don’t bring up highly inflammatory topics unless they’re crucial to the site’s core interests” wouldn’t stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann’s theorem, and anyway, having him post on his own blog works fine. AI alignment was never political remotely like how the Bell Curve is political. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the “crucial to the site’s core interests” thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I’m open to hearing it.
Not within the mainstream politics, but within academic / corporate CS and AI departments.
You’d have to use a broad sense of “political” to make this true (maybe amounting to “controversial”). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of “if you can make a case that it’s genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead”. At no point could anyone have used the proposed norms to prevent discussion of AI alignment.
I know you haven’t implied that the someone could be me, but I thought I’d just clarify that I would vehemently oppose such an argument. My argument contra slippery slope is that I don’t see evidence for it. If we look ten years into the past, there hasn’t been another book like TBC every week; in fact there hasn’t been one ever. I would bet against there being another one in the next 10 years.
There may be some risk of a slippery slope on other issues, but honestly I want that to be a separate argument, because I estimate this post to carry a lot more risk than the other < 4 posts/year that I mentioned. I don’t know if this is true (and it’s usually bad form to accuse others of lack of knowledge), but I genuinely wonder if others who’ve participated in this discussion just don’t know how strongly many people feel about this book. (It is of course possible to acknowledge this and still (or especially) be against censorship.)
I’m fairly aware of Murray’s public image, but wanted to go a little deeper before replying.
Here’s a review from the Washington Post this year, of Murray’s latest book. Note that, while critical of his book, it does not call him a racist. Perhaps its strongest critical language is the closing sentence:
It actually portrays him more as out of touch with the rise of the far right than in lockstep with it. The article does not call him a racist, predict his book will cause harm, or suggest that readers avoid it. This suggests to me that there is still room for Murray’s output to be considered by a major, relatively liberal news media outlet.
The Standard-Examiner published a positive review of the same book. They are a newspaper with a circulation of about 30,000, based out of Ogden, UT.
Looking over the couple dozen other news articles from 2021 that popped up containing “Charles Murray” and “The Bell Curve”, I see several that mention protests against him, or that cite arguments over TBC as one of a handful of prominent examples of debates about race and racism.
I also looked up protests against Murray. There have been a few major ones, most famously at Middlebury College, some minor ones, and some that did not attract protests. My view is that for college protests, the trigger is “close to home,” and the protest organizers depend on college advertising and social ties to motivate participation.
So we are in agreement that Murray is a prominent and controversial figure on this topic, and that protests against him can provoke once-in-a-decade-level episodes of racial tension on a campus, or be viewed as arguments on par with debates over critical race theory. This isn’t just some book about a controversial topic—it was a bestseller, it is still referenced 25 years later as a major source of controversy, and it has motivated hundreds or even thousands of students to protest the author when he’s attempted to speak on their campus. There are many scholarly articles written about the book, generally critical of it.
Despite the controversy, it’s possible in 2021 for a liberal journalist to publish a critical but essentially professional review of Murray’s new work, and for a conservative journalist to publish a positive review in their newspaper.
The way I see it, Murray is a touchstone figure, but is still only very rarely prominent in the daily news cycle. Just writing about him isn’t enough to make the article newsworthy. If lsusr was a highly prominent blogger, then this review might make the news, or be alarming enough to social media activists to outcompete other tweets and shares. But he’s not a big enough figure, and this isn’t an intense enough article, to even come close to making such a big splash.
If this article poses an issue, it’s by adding one piece of evidence to the prosecutor’s exhibit that LW is a politically problematic space. Given that, as you say, this is one of the most unusually controversy-courting posts of the year, my assessment that it is “only one more piece of evidence,” rather than a potential turning point in this site’s public image, strikes me as a point of evidence against censorship. It’s just not that big a deal.
If you would care to game out for me, in a little more detail, a long-term scenario in which AGI safety becomes tainted by association with posts such as this, to the serious detriment of humanity, please do!
Agree with all of this, but my concern is not that the coupling of [worrying about AGI] and [being anti-social-justice] happens tomorrow. (I did have some separate concerns about people being put off by the post today, but I’ve been convinced somewhere in the comments under this post that the opposite is about equally likely.) It’s that this happens when AGI safety is a much bigger deal in the public discourse. (Not sure if you think this will never happen? I think there’s a chance it never happens, but that seems wildly uncertain. I would put maybe 50% on it or something? Note that even if it happens very late, say 4 years before AGI poses an existential risk, I think that’s still more than enough time for the damage to be done. EY famously argued that there is no fire alarm for AGI; if you buy this, then we can’t rely on “by this point the danger is so obvious that people will take safety seriously no matter what”.)
If your next question is “why worry about this now”, one reason is that I don’t have faith that mods will react in time when the risk increases (I’ve updated upward on how likely I think this is after talking to Ruby, but not to 100%, and who knows who’s mod in 20 years), and I have the opportunity to say something now. But even if I had full authority over how the policy changes in the future, I still wouldn’t have allowed this post, because people can dig out old material if they want to write a hit piece. This post has been archived, so from this point on there will forever be the opportunity to link LW to TBC for anyone who wants to do that. And if you applied the analog of security mindset to this problem (which I think is appropriate), this is not something you would allow to happen. There is precedent for people losing positions over things that happened decades in the past.
One somewhat concrete scenario that seems plausible (but wildly unlikely because it’s concrete) is that Elon Musk manages to make the issue mainstream in 15 years; someone does a deep dive and links this to LW and LW to anti-social-justice (even though LW itself still doesn’t have that many more readers); this gets picked up by a lot of people who think worrying about AGI is bad; the aforementioned coupling occurs.
The only other thing I’d say is that there is also a substantial element of randomness to what does and doesn’t create a vast backlash. You can’t look at one instance of “person with popularity level x said thing of controversy level y, nothing bad happened” and conclude that any other instance (x′,y′) with x′<x and y′<y will definitely not lead to anything bad happening.
I don’t think it will be obvious, but I think we’ll be able to make an imperfect estimate of when to change the policy that’s still better than giving up on future evaluation of such tradeoffs and committing reputational murder-suicide immediately. (I for one like free speech and will be happy to advocate for it on LW when conditions change enough to make it seem anything other than pointlessly self-destructive.)
I’m not sure whether that’s true, but separately, the norm against politics has definitely impacted our ability to discuss politics. Perhaps that’s a necessary sacrifice, but it’s a sacrifice. In this particular case, both the object level (why is our society the way it is) and the meta level (what are the actual views in this piece that got severe backlash) are relevant to our modeling of the world, and I think it’d be a loss to not have this piece.
I’m not sure where this post would fall in my ranking (along the dimension you’re pointing at). It’s possible I agree with you that it’s at the extreme end–but there has to be a post at the extreme end. The posts that are imo (or in other moderators’ opinions) over the line are ones you wouldn’t see.
I’d guess that it was intentionally provocative (to what degree it was intended, I don’t know), but I don’t feel inclined to tell the author they can’t do that in this case.
If I had written the post, I’d have named it differently and added caveats, etc. But I didn’t and wouldn’t have because of timidness, which makes me hesitant to place requirements on the person who actually did.
I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not), and also that this particular post has a lot of value. But if you look at the set of all books for which (1) a largely positive review could plausibly have been written by a super smart guy like lsusr, and (2) the backlash could plausibly be really bad, I think it literally contains a single element. It’s only TBC. There are a bunch of non-book-review posts that I also wouldn’t want, but they’re very rare. It seems like we’re talking about a much smaller set of topics than what’s covered by the norm around politics.
I feel like if we wanted to find the optimal point in the value-risk space, there’s no way it’s “ban on all politics but no restriction on social justice”. There have got to be political areas with less risk and more payoff, like just all non-US politics or something.
A global ban on political discussion by rationalists might be a big sacrifice, but it seems to me there are no major costs to asking people to take it elsewhere.
(I just edited “would be a big sacrifice” to “might be a big sacrifice”, because the same forces that cause a ban to seem like a good idea will still distort discussions even in the absence of a ban, and perhaps make them worse than useless because they encourage the false belief that a rational discussion is being had.)
Just a short note that the title seems like the correct one so that it’s searchable by the name of the book slash author. Relatedly, all book reviews on LW are called “Book Review: <Book Name>”, this one didn’t stand out as any different to me (except it adds the author’s name, which seems pretty within reasonable bounds to me).
Fwiw, I bet adding the author’s name was an intentional move because it’d be controversial.
Okay. Maybe not the ideal goal, not sure, but I think it’s pretty within range of fine things to do. There’s a fairly good case that people will search the author’s name and want to understand their ideas because he’s well-known, so it helps as a search term.
I’ll bite, but I can’t promise to engage in a lot of back-and-forth.
Let’s generalize. A given post on LW’s frontpage may heighten or diminish its visibility and appeal to potential newcomers, or the visibility/appeal of associated causes like X-risk. You’ve offered one reason why this post might heighten its visibility while diminishing its appeal.
Here’s an alternative scenario, in which this post heightens rather than diminishes the appeal of LW. Perhaps a post about the Bell Curve will strike somebody as a sign that this website welcomes free and open discourse, even on controversial topics, as long as it’s done thoughtfully. This might heighten, rather than diminish, LW’s appeal, for a person such as this. Indeed, hosting posts on potentially controversial topics might select for people like this, and that might not only grow the website, but reinforce its culture in a useful way.
I am not claiming that this post heightens the appeal of LW on net—only that it’s a plausible alternative hypothesis. I think that we should be very confident that a post will diminish the appeal of LW to newcomers before we advocate for communally-imposed censorship.
Not only do we have to worry that such censorship will impact the free flow of information and ideas, but that it will personally hurt the feelings of a contributor. Downvotes and calls for censorship pretty clearly risk diminishing the appeal of the website to the poster, who has already demonstrated that they care about this community. If successful, the censorship would only potentially bolster the website’s appeal for some hypothetical newcomer. It makes more sense to me to prioritize the feelings of those already involved. I don’t know how lsusr feels about your comment, but I know that when other people have downvoted or censored my posts and comments, I have felt demoralized.
The reason I think this is unlikely is that the base rate of (blogs touching on politics making it into the NYT for far-right trolling)/(total blogs touching on politics) is low. Slate Star Codex had a large number of readers before the NYT wrote an article about it. I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX). LW is the collective work of a bunch of mainly-anonymous bloggers posting stuff that’s largely inoffensive and ~never (recently) flagrantly attacking particular political factions. Indeed, we have some pretty strong norms against open politicization. Because its level of openly political posting and its readership are both low, I think LW is an unappealing target for a brigade or hit piece. Heck, even Glen Weyl thinks we’re not worth his time!
Edit: See habryka’s stats below for a counterpoint. I still think there’s a meaningful difference between the concentrated attention given to posts on ACX vs. the diffuse attention (of roughly equal magnitude) distributed throughout the vastness of LW.
For this reason, it once again does not seem worth creating a communal norm of censorship and a risk of hurt feelings by active posters.
Note also that, while you have posited and acted upon (via downvoting and commenting) a hypothesis of yours that the risks of this post outweigh the benefits, you’ve burdened respondents with supplying more rigor than you brought to your original post (“I would much welcome some kind of a cost-benefit calculation that concludes that this is a good idea”). It seems to me that a healthier norm would be that, before you publicly proclaim that a post is worthy of censorship, you do the more rigorous cost/benefit calculation yourself and offer it up for others to critique.
Or should I fight fire with fire, by strongly-upvoting lsusr’s post to counteract your strong-downvote? In this scenario, upvotes and downvotes are being used not as a referendum on the quality of the post, but on whether or not it should be censored to protect LW. Is that how we wish this debate to be decided?
As a final question, consider that you seem to view this post in particular as exceptionally risky for LW. That means you are making an extraordinary claim: that this post, unlike almost every other LW post, is worthy of censorship. Extraordinary claims require extraordinary evidence. Have you met that standard?
LW’s readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.
Not that I don’t believe you, but that’s also really hard for me to wrap my head around. Can you put numbers on that claim? I’m not sure if ACX has a much smaller readership than I’d imagined, or if LW has a much bigger one, but either way I’d like to know!
https://www.similarweb.com/website/astralcodexten.substack.com/?competitors=lesswrong.com Currently shows ACX at something like 1.7x of LessWrong. At some points in the past LessWrong was slightly ahead.
LessWrong is a pretty big website. Here is a random snapshot of top-viewed pages from the last month from Google Analytics:
As you can see from the distribution, it’s a long tail of many pages getting a few hundred pageviews each month, which adds up a lot.
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
It’s very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX.
So while I’m updating in favor of the site’s readership being quite a bit bigger than I’d thought, I still think there’s some disconnect between what I mean by “readership” and the magnitude of “readership” these stats convey.
Note that LW gets 1-2 OOM fewer comments on the average post, but not in total. I reckon monthly comments are the same OOM. And if you add up total word count on each site, I suspect LW is 1 OOM bigger each month. ACX is more focused and the discussion is more focused; LW is a much broader space with lots of smaller conversations.
That makes a lot of sense. I do get the feeling that, although total volume on a particular topic is more limited here, that there’s a sense of conversation and connection that I don’t get on ACX, which I think is largely due to the notification system we have here for new comments and messages.
This is weekly comments for LessWrong over the last year. Last we counted, it was something like 300 on an SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.
I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an unusually high open thread, but still over 1200/week if you exclude that too.)
In September it was 10350 comments, over 2400/week. I can’t be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least.
Not too far separate though, like maybe 2x but not 10x.
(Edit: to clarify, this is “comments on posts published in the relevant month,” but that shouldn’t particularly matter here.)
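The per-week figures quoted above can be sanity-checked with a quick back-of-the-envelope calculation. This is just a sketch using the comment totals from the parent comment and assumed month lengths (31 days for October, 30 for September):

```python
# Rough sanity check of the ACX comment-rate figures quoted above.
# Comment totals come from the parent comment; month lengths are assumptions.

def per_week(total_comments: int, days_in_month: int) -> float:
    """Convert a monthly comment total to a weekly rate."""
    return total_comments / (days_in_month / 7)

october = per_week(7126, 31)              # all 14 October posts
october_public = per_week(7126 - 201, 31) # excluding the two private posts
september = per_week(10350, 30)

print(round(october))         # ~1609 -> "over 1600/week"
print(round(october_public))  # ~1564 -> "over 1500/week"
print(round(september))       # ~2415 -> "over 2400/week"
```

The quoted thresholds (“over 1600/week”, “over 1500/week”, “over 2400/week”) all check out against these totals.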
I don’t think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it’s just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.
I think this number can be relatively straightforwardly taken at face value. Elizabeth’s post was at the top of HN for a few hours, so a lot of people saw it. A small city’s worth seems about right for the number of people who clicked through and at least skimmed it.
I’m surprised to see how many people view the Roko’s Basilisk tag. Is that a trend over more than just the last month?
It’s the norm, alas.
https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=u7iYAQM7MkGdhyTL9
I think the evidence that wokeism is a powerful force in the world we live in is abundant, and my primary reaction to your comment is that it feels like everything you said could have been written in a world where this isn’t so. There is an inherent asymmetry here in how many people care about which things to what degree in the real world. (As I’ve mentioned in the last discussion, I know a person who falls squarely into the second category I’ve mentioned; committed EA, very technically smart, but thinks all LW-adjacent things are poisonous, in her case because of sexism rather than racism, but it’s in the same cluster.)
Sam Harris invited the author of The Bell Curve onto his podcast 4 years ago, and as a result has had a stream of hateful rhetoric targeted his way that lasts to this day. Where is the analogous observable effect in the opposite direction? If it doesn’t exist, why is postulating the opposite effect plausible in this case?
My rough cost-benefit analysis is −5/-20/-20 for the points I’ve mentioned, +1 for the advantage of being able to discuss this here, and maybe +2 for the effect of attracting people who like it for the opposite symbolism (i.e., here’s someone not afraid to discuss hard things), and I feel like I don’t want to assign a number to how it impacts lsusr’s feelings. The reason I didn’t spell this out was because I thought it would come across as unnecessarily uncharitable, and it doesn’t convey much new information because I already communicated that I don’t see the upside.
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
“Wokeism” is powerful, and I agree that this post elevated this site’s risk of being attacked or condemned either by the right or the left. I also agree that some people have been turned off by the views on racism or sexism they’ve been exposed to by some posters on this site.
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
In fact, I am skeptical about our ability to usefully predict the effect of individual posts on driving readership to or away from this site. Which is why I don’t advocate censoring individual posts on this basis.
I agree that the risk of anything terrible happening right now is very low for this reason. (Though I’d still estimate it to be higher than the upside.) But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
This comment actually made me update somewhat because it’s harder than I thought to find an asymmetry here. But it’s still only a part of the story (and the part I’ve put the least amount of weight on.)
Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.
On a simple level, my framework is that we care about two kinds of factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences from crime (object-level), and the risks and consequences from creating a police force or military (enforcement-level).
What I am arguing in this case is that the negative risks x consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks x consequences of being brigaded or criticized in the news. Also, I’m uncertain enough about the balance of this post’s effect on inflow vs. outflow of readership to be close to 50/50, and expect it to be small enough either way to ignore it.
Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you’re describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance—and one that still doesn’t rule out the possibility that we get attacked anyway.
I’d also note that you may be contributing to the elevation of risk with your choices of language. By using terms like “wokeism,” “mob,” and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.
Your standard, if I’m reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site’s reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it’s also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?
My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don’t think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr’s, so both of them pass my test for “it’s fine to talk about it.”
After reading this, I realize that I’ve done an extremely poor job communicating with everything I’ve commented on this post, so let me just try to start over.
I think what I’m really afraid of is a sequence of events that goes something like this:
Every couple of months, someone on LW makes a post like the above
In some (most?) cases, someone is going to speak up against this (in this case, we had two), there will be some discussion, but the majority will come down on the side that censorship is bad and there’s no need to take drastic action
The result is that we never establish any kind of norm nor otherwise prepare for political backlash
In ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position to be worried about AGI will get coupled to being anti social justice in the public discourse, as a result it will massively lose status and the big labs react by taking safety far less seriously and maybe we have fewer people writing papers on alignment
At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error
After the discussion on the dating post, I made some attempts to post a follow-up but chickened out, either because I was afraid of the reaction or because I couldn’t figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something, and somehow I felt I had to say something as well; it wasn’t well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense, and I felt like the above-mentioned scenario was definitely going to happen and that I wouldn’t be able to do anything about it because arguing for censorship is just a lost cause. I think I then intentionally (but subconsciously) used the language you’ve just pointed out to signal that I don’t agree with the object-level part of anything I’m arguing for (probably in the hopes of changing the reception?), even though I don’t think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.
I don’t think any of this is an argument for why I’m right, but I think that’s about what really happened.
Probably it’s significantly less than 50% that anything like what I described happens, just because of the conjunction—who knows if anyone even still cares about social justice in 20 years. But it doesn’t seem nearly unlikely enough not to take seriously, and I don’t see anyone taking it seriously, and it really terrifies me. I don’t completely understand why, since I tend not to be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.
I don’t think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?
I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don’t think you’ve expressed yourself particularly poorly. I also acknowledge that it’s hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I’m on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.
The reason I don’t take it is because I don’t share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr’s is net negative for these causes, nor that such a norm of censorship would be net beneficial.
What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things; and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.
It’s also hard for me to judge. I can say that LW has played a key role for me connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and am the only point of reference that several of my loved ones have for hearing about these issues.
So, N of 1, but LW has probably improved the trajectory of AGI safety by a miniscule but nonzero amount via its influence on me. And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments and frustrations with leftwing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.
Thanks for this comment.
I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of ‘anything potentially controversial’ (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash out of any post I remember seeing on LW, ever. (I just said the same in a reply to Ruby, and I wasn’t being hypothetical.)
I’m probably an unusual case, but I got invited into the alignment forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren’t for the way that the two forums are connected, I wouldn’t have written the sequence. The caveat is that I’m currently not pursuing a “direct” path on alignment but am instead trying to go the academia route by doing work in the intersection of [widely recognized] and [safety-relevant] (i.e. on interpretability), so you could argue that the pipeline ultimately didn’t work. But I think (not 100% sure) at least Alex Turner is a straight-forward success story for said pipeline.
I think you probably want to respond to this on my reply to Ruby so that we don’t have two discussions about the same topic. My main objection is that the amount of censorship I’m advocating for seems to me to be tiny, I think less than 5 posts per year, far less than what is censored by the norm against politics.
Edit: I also want to object to this:
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world. I’m saying you shouldn’t be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn’t involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don’t think has hurt our discourse.
What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.
To this list, you have appended “elevating X-risk.” This is what I mean by “impact on the world.”
Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you’re afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.
An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example), with only modest censorship. For example, we might not ban sales of a certain book. We just make it library policy not to purchase them. Or we restrict purchase to a certain age group. Or major publishers make a decision not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren’t enacting bans, we’re OK with it.
On LW, one such form of soft censorship is the mod’s decision to keep a post off the frontpage.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to one such as myself, I believe that it is precisely these small marginal increases in censorship that pose a risk to X-risk, and the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.
Your desire to marginally increase censorship feels to me a bit like a Pascal’s Mugging. You worry about a small risk of dire consequences that may never emerge, in order to justify a small but clear negative cost in the present moment. I don’t think you’re out of line to hold this belief. I just think that I’d need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.
The link thing was anon03's idea; I want posts about TBC to be banned outright.
Other than that, I think you’ve understood my model. (And I think I understand yours except that I don’t understand the gears of the mechanism by which you think x-risk increases.)
Sorry for conflating anon03's idea with yours!
A quick sketch at a gears-level model:
1. X-risk, and AGI safety in particular, require unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let’s call these people “rationalists.”
2. Even in scientific and engineering communities, and the population of rational people generally, the combination of these traits I’m referring to as “rationalism” is rare.
3. Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
4. Rationalists won’t be able to get their needs met in mainstream scientific or engineering communities, which prioritize a subset of the total rationalist package of traits.
5. Hence, they’ll seek an alternative community in which to get those needs met.
6. Rationalists who haven’t yet discovered a rationalist community won’t often have an advance knowledge of AGI safety. Instead, they’ll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations—often with politics—that will motivate them to seek out a different community, and to stay engaged with it.
7. When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does it in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
8. By associating with this community, they’ll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.
I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they’re wrestling with when they first discover it. They won’t see an open attitude of free enquiry modeled, but instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline or staying power for people to work on X-risks of all kinds, including AGI safety.
Politics continues to affect people even after they’ve come to understand why it’s so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.
What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don’t think the content mattered. But I don’t have strong resistance to believing that this is how it works for other people.
I think if your model is applied to the politics ban, it would say that it’s also quite bad (maybe not as bad, because most politics stuff isn’t as shunned and censored as social justice stuff)? If that’s true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai’s “it’ll go there eventually so we should practice” argument) but censor the most controversial social justice stuff. I feel like the current solution isn’t Pareto optimal in the {epistemic health} x {safety against backlash} space.
Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.
Just extrapolating from my own experience. For me, the content was important.
I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it’s retaining, or if the frustration experienced by those who’ve been troubled by the political content outweigh the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.
A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It’s a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.
It’s this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can’t participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.
This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who’d leave if that controversial content were removed were outweighed by the number of people who’d join, then I’d be open-minded about that marginal increase in censorship.
An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We’d ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don’t feel it’s something they can or want to tolerate. Then we’d try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We’d also try to understand their level of interest in participating in a version of LW with more of these limitations in place.
Here, I am hypothesizing that there’s a group of ex-LWers or marginal LW-ers who feel a strong affinity for most of the content, while an even stronger aversion for a minority subset of the content to such a degree that they sharply curtail their participation. Such that if the offensive tiny fraction of the content were removed, they’d undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.
It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn’t participate.
At the same time, we’d engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about it, why do they feel that way?
I am positing that there are a significant subset of LWers for whom the minority of posts engaging with politics are very important sources of its appeal.
How is it possible that I could simultaneously be guessing—and it is just a guess—that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?
The reason is that there are abundant spaces online and in-person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of likeminded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.
For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It’s rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.
I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it’s small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.
But I do think that thoroughly examining and gathering evidence on the extent to which our political status quo serves to attract or repel people would be well worth the effort. At what point does the inherent cost of a marginal increase in censorship become worth paying in exchange for a more inclusive environment? That seems like a reasonable question to ask. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.
The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn’t approached in that spirit, I think it would just fail.
So, I both think that in the past 1) people have thought the x-risk folks are weird and low-status and didn’t want to be affiliated with them, and in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like “willingness to read The Bell Curve” no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.
I think there’s a lot of value in trying to nudge your presentation to not trigger other people’s allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can’t sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
I mean, this works until someone in a position of influence bows to the pressure, and I don’t see why this can’t happen.
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
(I’ve said the same here; if you have a response to this, it might make sense to all keep it in one place.)
Sorry about your anxiety around this discussion :(
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.
If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on, but I wouldn’t feel as strongly about it.
I think “censorship” is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.
I think that this is completely wrong. Such a norm is definitely a norm of (self-)censorship—as has been discussed on Less Wrong already.
It is plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone, but simply as a book review / summary, just like it says. Catering, in any way whatsoever, to anyone who finds the current post “hurtful and insulting”, is an absolutely terrible idea. Doing such a thing cannot do anything but corrode Less Wrong’s epistemic standards.
Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.
What do you do?
APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.
APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.
I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually “win”, i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.
~~
Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don’t think it is “plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone”.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism, and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that’s the kind of mistake we should be sympathetic to. It doesn’t mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they’ve been trusting bad sources. Who among us hasn’t trusted bad sources at some point in our lives? I sure have!
And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say “Oh well, can’t please everyone.” But if Person A’s wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.
If Person A believes this without ever having either (a) read The Bell Curve or (b) read a neutral, careful review/summary of The Bell Curve, then A is not a reasonable person.
All sorts of unreasonable people have all sorts of unreasonable and false beliefs. Should we cater to them all?
No. Of course we should not.
The title, as I said before, is neutrally descriptive. Anyone who takes it as an endorsement is, once again… unreasonable.
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
No, this would actually be wildly unreasonable behavior, unworthy of any remotely rational, sane adult. Children, perhaps, may be excused for behaving in this way—and only if they’re very young.
The bottom line is: the idea that “reasonable people” think and behave in the way that you’re describing is the antithesis of what is required to maintain a sane society. If we cater to this sort of thing, here on Less Wrong, then we completely betray our raison d’être, and surrender any pretense to “raising the sanity waterline”, “searching for truth”, etc.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn’t read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don’t have unlimited time.
By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with “I get it, I know what you’ve heard about this book, but hear me out, I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”, then I would certainly consider reading the piece.
Your analogy breaks down because the Bell Curve is extremely reasonable, not some forged junk like “The Protocols Of The Elders Of Zion”.
If a book reviewed here mentioned evolution and that offended some traditional religious people, would we need to give a disclaimer and potentially leave it off the site? What if some conservative religious people believed that belief in evolution directly harms them? They would be regarded as insane, and so are people offended by TBC.
That’s all this is, by the way: left-wing evolution denial. How likely is it that people separated for tens of thousands of years, with different founder populations, will have equal levels of cognitive ability? It’s impossible.
Yeah.
“What do you think you know, and how do you think you know it?” never stopped being the rationalist question.
As for the rest of your comment—first of all, my relative levels of interest in reading a book review of the Protocols would be precisely reversed from yours.
Secondly, I want to call attention to this bit:
There is no particular reason to “give this book a chance”—to what? Convince us of its thesis? Persuade us that it’s harmless? No. The point of reviewing a book is to improve our understanding of the world. The Protocols of the Elders of Zion is a book which had an impact on global events, on world history. The reason to review it is to better understand that history, not to… graciously grant the Protocols the courtesy of having its allotted time in the spotlight.
If you think that the Protocols are insignificant, that they don’t matter (and thus that reading or talking about them is a total waste of our time), that is one thing—but that’s not true, is it? You yourself say that the Protocols had a terrible impact! Of all the things which we should strive our utmost to understand, how can a piece of writing that contributed to some of the worst atrocities in history not be among them? How do you propose to prevent history from repeating, if you refuse, not only to understand it, but even to bear its presence?
The idea that we should strenuously shut our eyes against bad things, that we should forbid any talk of that which is evil, is intellectually toxic.
And the notion that by doing so, we are actually acting in a moral way, a righteous way, is itself the root of evil.
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
Do you ever, in your life, think that things are true without checking? Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
Suppose we want to solve that problem. (We do, right?) I feel like you’re proposing a solution of “form a community of people who have never trusted anyone about anything”. But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.
We’re not born with an epistemology instruction manual. We all have to find our way, and we probably won’t get it right the first time. Splitting the world into “people who already agree with me” and “people who are forever beyond reason”, that’s the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at lesswrong are not such a group. But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
I agree completely.
But note that here we are talking about the book’s provenance / authorship / other “metadata”—and certainly not about the book’s impact, the effects of its publication, etc. The latter sort of thing may properly be discussed in a “discussion section” subsequent to the main body of the review, or it may simply be left up to a Wikipedia link. I would certainly not require that it preface the book review before I found that review “acceptable”, or forbore to question the author’s motives, or what have you.
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
I certainly try not to, though bounded rationality does not permit me always to live up to this goal.
I have no beliefs about this one way or the other.
I have no beliefs about this one way or the other.
Depends on the medicine, but I am given to understand that this is often true. I have “checked” in the sense that I regularly read up on the toxicology and other pharmacokinetic properties of medications I take, or those might take, or even those I don’t plan to take. Yes, I look for skeptical sources.
My recommendation, in general, is to avoid having opinions about things that don’t affect you; aim for a neutral skepticism. For things that do affect you, investigate; don’t just stumble into beliefs. This is my policy, and it’s served me well.
The solution to this is to trust less, check more; decline to have any opinion one way or the other, where doing so doesn’t affect you. And when you have to, trust—but verify.
Strive always to be aware of just how much trust in sources you haven’t checked underlies any belief you hold—and, crucially, adjust the strength of your beliefs accordingly.
And when you’re given an opportunity to check, to verify, to investigate—seize it!
The principle of charity, as often practiced (here and in other rationalist spaces), can actually be a terrible idea.
We should use it only to the extent that it does not in any way reduce our own ability to seek, and find, the truth, and not one iota more.
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
I imagine that very few people would find the title by itself insulting; it’s really “the title in conjunction with the first paragraph or two” (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).
Hmm, maybe a better way to say it is: the title plus the genre is what might insult people. The genre of this OP is “a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc.” You don’t need to read far or carefully to know that the OP belongs to this genre. It’s a very different genre from a (reasonable) book review of “Protocols of the Elders of Zion”, or a (reasonable) book review of “Mein Kampf”, or a (reasonable) book review of “Harry Potter”.
No, of course not (the more so because it’s a value judgment, not a statement of fact).
The rest of what you say, I have already addressed.
Approach 2 assumes that A is (a) a reasonable person and (b) coming into the situation with good faith. Usually, neither is true.
What is more, your list of two approaches is a very obvious false dichotomy, crafted in such a way as to mock the people you’re disagreeing with. Instead of either the strawman Approach 1 or the unacceptable Approach 2, I endorse the following:
APPROACH 3: Ignore the fact that A (supposedly) finds X “demeaning”. Say (or don’t say) X whenever the situation calls for it. Behave in all ways as if A’s opinion is completely irrelevant.
(Note, by the way, that Approach 2 absolutely does constitute (self-)censorship, as anything that imposes costs on a certain sort of speech—such as, for instance, requiring elaborate genuflection to supposedly “offended” parties, prior to speaking—will serve to discourage that form of speech. Of course, I suspect that this is precisely the goal—and it is also precisely why I reject your suggestion wholeheartedly. Do not feed utility monsters.)
There’s a difference between catering to an audience and proactively framing things in the least explosive way.
Maybe what you are saying is that when people try to do the latter, they inevitably end up self-censoring and catering to the (hostile) audience?
But that seems false to me. I not only think framing controversial topics in a non-explosive way is a strategically important, underappreciated skill; I also suspect that practicing the skill improves our epistemics. It forces us to engage with a critical audience of people with ideological differences. When I imagine having to write on a controversial topic, one of the readers I mentally simulate is “person who is ideologically biased against me, but still reasonable.” I don’t cater to unreasonable people, but I want to take care not to put off people who are still “in reach.” And if they’re reasonable, sometimes they have good reasons behind at least some of their concerns, and their perspectives can be learnt from.
As I mentioned elsethread, if I’d written the book review I would have done what you describe. But I didn’t and probably never would have written it out of timidness, and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.
I was just commenting on the general norm. I haven’t read the OP and didn’t mean to voice an opinion on it.
I’m updating that I don’t understand how discussions work. It happens a lot that I object only to a particular feature of an argument, or to one particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.
FWIW, I think the “caving in” discussed/contemplated in Rafael Harth’s comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merits for it but I suspect (and very much like to believe) that it’s a bad strategy.
Maybe I would focus more on criticizing this caving in mentality if I didn’t feel like I was preaching to the choir. “Open discussion” norms feel so ingrained on Lesswrong that I’m more worried that other good norms get lost / overlooked.
Maybe I would feel different (more “under attack”) if I was more emotionally invested in the community and felt like something I helped build was under attack with norm erosion. I feel presently more concerned about dangers from evaporative cooling where many who care a not-small degree about “soft virtues in discussions related to tone/tact/welcomingness, but NOT in a strawmanned sense” end up becoming less active or avoiding the comment sections.
Edit: The virtue I mean is maybe best described as “presenting your side in a way that isn’t just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that’s predisposed to be suspicious of you.”
This is a moot point, because anyone who finds a post title like “Book review: The Bell Curve by Charles Murray” to be “controversial”, “explosive”, etc., is manifestly unreasonable.
My comment here argues that a reasonable person could find this post insulting.
(Upvote, but disagree.)
A story I’m worried about goes something like:
LW correctly comes to believe that for an AI to be aligned, its cognitive turboencabulator needs a base plate of prefabulated amulite
the leader of an AI project tries to make the base plate out of unprefabulated amulite
another member of the project mentions off-hand one time that some people think it should be prefabulated
the project leader thinks, “prefabulation, wasn’t that one of the pet issues of those Bell Curve bros? well, whatever, let’s just go ahead”
the AI is built as planned and attains superhuman intelligence, but its cognitive turboencabulator fails, causing human extinction
Meta-meta note:
Even if a theoretical author cares not one whit about appearing to endorse “bad things” #scarequotes, including preemptive disclaimers is still good practice to forestall this sort of meta-commentary and keep the comments focused on the content of the post, and not the method of delivery.