I’ll bite, but I can’t promise to engage in a lot of back-and-forth.
The site is discussed somewhere, someone claims that it’s a home for racism and points to this post as evidence. Someone else who would otherwise have become a valuable contributor reads that claim and decides not to check the site out.
A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk-related causes are unworthy of support.
Let’s generalize. A given post on LW’s frontpage may heighten or diminish its visibility and appeal to potential newcomers, or the visibility/appeal of associated causes like X-risk. You’ve offered one reason why this post might heighten its visibility while diminishing its appeal.
Here’s an alternative scenario, in which this post heightens rather than diminishes the appeal of LW. Perhaps a post about the Bell Curve will strike somebody as a sign that this website welcomes free and open discourse, even on controversial topics, as long as it’s done thoughtfully. This might heighten, rather than diminish, LW’s appeal, for a person such as this. Indeed, hosting posts on potentially controversial topics might select for people like this, and that might not only grow the website, but reinforce its culture in a useful way.
I am not claiming that this post heightens the appeal of LW on net—only that it’s a plausible alternative hypothesis. I think that we should be very confident that a post will diminish the appeal of LW to newcomers before we advocate for communally-imposed censorship.
Not only do we have to worry that such censorship will impact the free flow of information and ideas, but that it will personally hurt the feelings of a contributor. Downvotes and calls for censorship pretty clearly risk diminishing the appeal of the website to the poster, who has already demonstrated that they care about this community. If successful, the censorship would only potentially bolster the website’s appeal for some hypothetical newcomer. It makes more sense to me to prioritize the feelings of those already involved. I don’t know how lsusr feels about your comment, but I know that when other people have downvoted or censored my posts and comments, I have felt demoralized.
Someone links the article from somewhere, it gets posted on a far-right Reddit board, a bunch of people make accounts on LessWrong to make dumb comments, someone from the NYT sees it and writes a hit piece.
The reason I think this is unlikely is that the base rate of (blogs touching on politics making it into the NYT for far-right trolling)/(total blogs touching on politics) is low. Slate Star Codex had a large number of readers before the NYT wrote an article about it. I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX). LW is the collective work of a bunch of mainly-anonymous bloggers posting stuff that’s largely inoffensive and ~never (recently) flagrantly attacking particular political factions. Indeed, we have some pretty strong norms against open politicization. Because its level of openly political posting and its readership are both low, I think LW is an unappealing target for a brigade or hit piece. Heck, even Glen Weyl thinks we’re not worth his time!
Edit: See habryka’s stats below for a counterpoint. I still think there’s a meaningful difference between the concentrated attention given to posts on ACX vs. the diffuse attention (of roughly equal magnitude) distributed throughout the vastness of LW.
For this reason, it once again does not seem worth creating a communal norm of censorship and a risk of hurt feelings by active posters.
Note also that, while you have posited and acted upon (via downvoting and commenting) a hypothesis of yours that the risks of this post outweigh the benefits, you’ve burdened respondents with supplying more rigor than you brought to your original post (“I would much welcome some kind of a cost-benefit calculation that concludes that this is a good idea”). It seems to me that a healthier norm would be that, before you publicly proclaim that a post is worthy of censorship, you do the more rigorous cost/benefit calculation and offer it up for others to critique.
Or should I fight fire with fire, by strongly-upvoting lsusr’s post to counteract your strong-downvote? In this scenario, upvotes and downvotes are being used not as a referendum on the quality of the post, but on whether or not it should be censored to protect LW. Is that how we wish this debate to be decided?
As a final question, consider that you seem to view this post in particular as exceptionally risky for LW. That means you are making an extraordinary claim: that this post, unlike almost every other LW post, is worthy of censorship. Extraordinary claims require extraordinary evidence. Have you met that standard?
I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW; in the hundreds of thousands for SSC/ACX)
LW’s readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.
Not that I don’t believe you, but that’s also really hard for me to wrap my head around. Can you put numbers on that claim? I’m not sure if ACX has a much smaller readership than I’d imagined, or if LW has a much bigger one, but either way I’d like to know!
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
It’s very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX.
So while I’m updating in favor of the site’s readership being quite a bit bigger than I’d thought, I still think there’s some disconnect between what I mean by “readership” and the magnitude of “readership” that is coming across in these stats.
Note that LW gets 1-2 OOM fewer comments on the average post, but not in total. I reckon monthly comments are the same OOM. And if you add up total word count on each site, I suspect LW is 1 OOM bigger each month. ACX is more focused and its discussion is more concentrated; LW is a much broader space with lots of smaller convos.
That makes a lot of sense. I do get the feeling that, although total volume on a particular topic is more limited here, there’s a sense of conversation and connection that I don’t get on ACX, which I think is largely due to the notification system we have here for new comments and messages.
This is weekly comments for LessWrong over the last year. Last we counted, something like 300 on an SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.
I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an unusually high open thread, but still over 1200/week if you exclude that too.)
In September it was 10350 comments, over 2400/week. I can’t be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least.
Not too far apart though, like maybe 2x but not 10x (a quick arithmetic check is below).
(Edit: to clarify, this is “comments on posts published in the relevant month”, but that shouldn’t particularly matter here.)
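For concreteness, here’s a minimal sketch of that arithmetic in Python (the per_week helper is just illustrative; the comment counts are the ones quoted above, and weeks per month is approximated as days/7):

```python
# Rough check of the ACX comments-per-week figures quoted above.
def per_week(monthly_comments, days_in_month):
    """Convert a monthly comment total into an approximate weekly rate."""
    return monthly_comments / (days_in_month / 7)

print(round(per_week(7126, 31)))        # October: ~1609/week ("over 1600/week")
print(round(per_week(7126 - 201, 31)))  # excluding the two private posts: ~1564/week
print(round(per_week(10350, 30)))       # September: ~2415/week ("over 2400/week")
```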
I don’t think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it’s just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
I think this number can be relatively straightforwardly taken at face value. Elizabeth’s post was at the top of HN for a few hours, so a lot of people saw it. A small city’s worth seems about right for the number of people who clicked through and at least skimmed it.
Extraordinary claims require extraordinary evidence. Have you met that standard?
I think the evidence that wokeism is a powerful force in the world we live in is abundant, and my primary reaction to your comment is that it feels like everything you said could have been written in a world where this isn’t so. There is an inherent asymmetry here in how many people care about which things to what degree in the real world. (As I’ve mentioned in the last discussion, I know a person who falls squarely into the second category I’ve mentioned; committed EA, very technically smart, but thinks all LW-adjacent things are poisonous, in her case because of sexism rather than racism, but it’s in the same cluster.)
Sam Harris invited the author of the Bell Curve onto his podcast 4 years ago, and as a result has a stream of hateful rhetoric targeted his way that lasts to this day. Where is the analogous observable effect in the opposite direction? If it doesn’t exist, why is postulating the opposite effect plausible in this case?
My rough cost-benefit analysis is −5/−20/−20 for the points I’ve mentioned, +1 for the advantage of being able to discuss this here, and maybe +2 for the effect of attracting people who like it for the opposite symbolism (i.e., here’s someone not afraid to discuss hard things), and I feel like I don’t want to assign a number to how it impacts lsusr’s feelings. The reason I didn’t spell this out was that I thought it would come across as unnecessarily uncharitable, and it doesn’t convey much new information, because I already communicated that I don’t see the upside.
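Summed on that same rough, arbitrary scale (and leaving lsusr’s feelings unquantified), the tally works out to roughly:

$(-5) + (-20) + (-20) + 1 + 2 = -42$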
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
“Wokeism” is powerful, and I agree that this post elevated this site’s risk of being attacked or condemned by either the right or the left. I also agree that some people have been turned off by the views on racism or sexism they’ve been exposed to by some posters on this site.
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
In fact, I am skeptical about our ability to usefully predict the effect of individual posts on driving readership to or away from this site. Which is why I don’t advocate censoring individual posts on this basis.
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
I agree that the risk of anything terrible happening right now is very low for this reason. (Though I’d still estimate it to be higher than the upside.) But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
This comment actually made me update somewhat because it’s harder than I thought to find an asymmetry here. But it’s still only a part of the story (and the part I’ve put the least amount of weight on).
But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.
On a simple level, my framework is that we care about two kinds of factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences from crime (object-level), and the risks and consequences from creating a police force or military (enforcement-level).
What I am arguing in this case is that the negative risks × consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks × consequences of being brigaded or criticized in the news. Also, I’m uncertain enough about the balance of this post’s effect on inflow vs. outflow of readership that I’m close to 50/50, and I expect it to be small enough either way to ignore it.
Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you’re describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance—and one that still doesn’t rule out the possibility that we get attacked anyway.
I’d also note that you may be contributing to the elevation of risk with your choices of language. By using terms like “wokeism,” “mob,” and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.
Your standard, if I’m reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site’s reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it’s also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?
My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don’t think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr’s, so both of them pass my test for “it’s fine to talk about it.”
After reading this, I realize that I’ve done an extremely poor job communicating with everything I’ve commented on this post, so let me just try to start over.
I think what I’m really afraid of is a sequence of events that goes something like this:
Every couple of months, someone on LW makes a post like the above
In some (most?) cases, someone is going to speak up against this (in this case, we had two), there will be some discussion, but the majority will come down on the side that censorship is bad and there’s no need to take drastic action
The result is that we never establish any kind of norm nor otherwise prepare for political backlash
Ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position that we should worry about AGI will get coupled to being anti-social-justice in the public discourse; as a result it will massively lose status, the big labs will take safety far less seriously, and maybe we’ll have fewer people writing papers on alignment
At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error
After the discussion on the dating post, I’ve made some attempts to post a follow-up but chickened out of doing it because I was afraid of the reaction or maybe just because I couldn’t figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something and then somehow I thought I had to say something as well but it wasn’t well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense and I felt like the above mentioned scenario is definitely going to happen and I won’t be able to do anything about it because arguing for censorship is just a lost cause, and I think I then intentionally (but subconsciously) used the language you’ve just pointed out to signal that I don’t agree with the object level part of anything I’m arguing for (probably in the hopes of changing the reception?) even though I don’t think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.
I don’t think any of this is an argument for why I’m right, but I think that’s about what really happened.
Probably it’s significantly less than 50% that anything like what I described happens, just because of the conjunction—who knows if anyone even still cares about social justice in 20 years. But it doesn’t seem nearly unlikely enough not to take seriously, and I don’t see anyone taking it seriously, and that really terrifies me. I don’t completely understand why, since I tend not to be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.
I don’t think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?
I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don’t think you’ve expressed yourself particularly poorly. I also acknowledge that it’s hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I’m on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.
The reason I don’t take it is because I don’t share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr’s is net negative for these causes, nor that such a norm of censorship would be net beneficial.
What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things; and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.
It’s also hard for me to judge. I can say that LW has played a key role for me connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and am the only point of reference that several of my loved ones have for hearing about these issues.
So, N of 1, but LW has probably improved the trajectory of AGI safety by a minuscule but nonzero amount via its influence on me. And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments and frustrations with leftwing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of ‘anything potentially controversial’ (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash out of any post I remember seeing on LW ever. (I just said the same in a reply to Ruby, and I wasn’t being hypothetical.)
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed.
I’m probably an unusual case, but I got invited into the alignment forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren’t for the way that the two forums are connected, I wouldn’t have written the sequence. The caveat is that I’m currently not pursuing a “direct” path on alignment but am instead trying to go the academia route by doing work in the intersection of [widely recognized] and [safety-relevant] (i.e. on interpretability), so you could argue that the pipeline ultimately didn’t work. But I think (not 100% sure) at least Alex Turner is a straightforward success story for said pipeline.
And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics.
I think you probably want to respond to this under my reply to Ruby so that we don’t have two discussions about the same topic. My main objection is that the amount of censorship I’m advocating for seems to me to be tiny: I think less than 5 posts per year, far less than what is censored by the norm against politics.
Edit: I also want to object to this:
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world. I’m saying you shouldn’t be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn’t involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don’t think has hurt our discourse.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world.
What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.
To this list, you have appended “elevating X-risk.” This is what I mean by “impact on the world.”
Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you’re afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.
An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example), with only modest censorship. For example, we might not ban sales of a certain book. We just make it library policy not to purchase them. Or we restrict purchase to a certain age group. Or major publishers make a decision not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren’t enacting bans, we’re OK with it.
On LW, one such form of soft censorship is the mod’s decision to keep a post off the frontpage.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to one such as myself, I believe that it is precisely these small marginal increases in censorship that pose a risk to X-risk, and the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.
Your desire to marginally increase censorship feels to me a bit like a Pascal’s Mugging. You worry about a small risk of dire consequences that may never emerge, and use it to justify imposing a small but clear cost in the present moment. I don’t think you’re out of line to hold this belief. I just think that I’d need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
The link thing was anon03’s idea; I want posts about TBC to be banned outright.
Other than that, I think you’ve understood my model. (And I think I understand yours except that I don’t understand the gears of the mechanism by which you think x-risk increases.)
X-risk, and AGI safety in particular, requires unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let’s call people with this combination of traits “rationalists.”
Even in scientific and engineering communities, and the population of rational people generally, the combination of these traits I’m referring to as “rationalism” is rare.
Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
Rationalists won’t be able to get their needs met in mainstream scientific or engineering communities, which prioritize a subset of the total rationalist package of traits.
Hence, they’ll seek an alternative community in which to get those needs met.
Rationalists who haven’t yet discovered a rationalist community won’t often have an advance knowledge of AGI safety. Instead, they’ll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations—often with politics—that will motivate them to seek out a different community, and to stay engaged with it.
When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
By associating with this community, they’ll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.
I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they’re wrestling with when they first discover it. They won’t see an open attitude of free enquiry modeled, but instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline or staying power for people to work on X-risks of all kinds, including AGI safety.
Politics continues to affect people even after they’ve come to understand why it’s so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.
When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around.
What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don’t think the content mattered. But I don’t have strong resistance to believing that this is how it works for other people.
I think if your model is applied to the politics ban, it would say that the ban is also quite bad (maybe not as bad, because most politics stuff isn’t as shunned and censored as social justice stuff)? If that’s true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai’s “it’ll go there eventually so we should practice” argument) but censor the most controversial social justice stuff. I feel like the current solution isn’t Pareto-optimal in the {epistemic health} × {safety against backlash} space.
Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.
Just extrapolating from my own experience. For me, the content was important.
I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it’s retaining, or if the frustration experienced by those who’ve been troubled by the political content outweigh the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.
A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It’s a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.
It’s this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can’t participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.
This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who’d leave if that controversial content were removed were outweighed by the number of people who’d join, then I’d be open-minded about that marginal increase in censorship.
An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We’d ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don’t feel it’s something they can or want to tolerate. Then we’d try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We’d also try to understand their level of interest in participating in a version of LW with more of these limitations in place.
Here, I am hypothesizing that there’s a group of ex-LWers or marginal LW-ers who feel a strong affinity for most of the content, while an even stronger aversion for a minority subset of the content to such a degree that they sharply curtail their participation. Such that if the offensive tiny fraction of the content were removed, they’d undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.
It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn’t participate.
At the same time, we’d engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about it, why do they feel that way?
I am positing that there is a significant subset of LWers for whom the minority of posts engaging with politics is a very important source of the site’s appeal.
How is it possible that I could simultaneously be guessing—and it is just a guess—that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?
The reason is that there are abundant spaces online and in-person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of likeminded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.
For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It’s rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.
I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it’s small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.
But I do think that making the effort to thoroughly examine and gather evidence on the extent to which our political status quo serves to attract or repel people would be well worth it. Asking at what point the inherent cost of a marginal increase in censorship becomes worth paying in exchange for a more inclusive environment seems like a reasonable question. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.
The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn’t approached in that spirit, I think it would just fail.
Ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position that we should worry about AGI will get coupled to being anti-social-justice in the public discourse; as a result it will massively lose status, the big labs will take safety far less seriously, and maybe we’ll have fewer people writing papers on alignment
So, I both think that in the past 1) people have thought the x-risk folks are weird and low-status and didn’t want to be affiliated with them, and in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like “willingness to read The Bell Curve” no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.
I think there’s a lot of value in trying to nudge your presentation to not trigger other people’s allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can’t sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
I mean, this works until someone in a position of influence bows to the pressure, and I don’t see why this can’t happen.
I think we can’t sacrifice the core to do those things.
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
(I’ve said the same here; if you have a response to this, it might make sense to all keep it in one place.)
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.
If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on, but I wouldn’t feel as strongly about it.
I think “censorship” is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness.
I think that this is completely wrong. Such a norm is definitely a norm of (self-)censorship—as has been discussed on Less Wrong already.
It is plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone, but simply as a book review / summary, just like it says. Catering, in any way whatsoever, to anyone who finds the current post “hurtful and insulting”, is an absolutely terrible idea. Doing such a thing cannot do anything but corrode Less Wrong’s epistemic standards.
Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.
What do you do?
APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.
APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.
I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually “win”, i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.
~~
Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don’t think it is “plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone”.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism, and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that’s the kind of mistake we should be sympathetic to. It doesn’t mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they’ve been trusting bad sources. Who among us hasn’t trusted bad sources at some point in our lives? I sure have!
And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say “Oh well, can’t please everyone.” But if Person A’s wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
If Person A believes this without ever having either (a) read The Bell Curve or (b) read a neutral, careful review/summary of The Bell Curve, then A is not a reasonable person.
All sorts of unreasonable people have all sorts of unreasonable and false beliefs. Should we cater to them all?
No. Of course we should not.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person.
The title, as I said before, is neutrally descriptive. Anyone who takes it as an endorsement is, once again… unreasonable.
Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
No, this would actually be wildly unreasonable behavior, unworthy of any remotely rational, sane adult. Children, perhaps, may be excused for behaving in this way—and only if they’re very young.
The bottom line is: the idea that “reasonable people” think and behave in the way that you’re describing is the antithesis of what is required to maintain a sane society. If we cater to this sort of thing, here on Less Wrong, then we completely betray our raison d’etre, and surrender any pretense to “raising the sanity waterline”, “searching for truth”, etc.
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn’t read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don’t have unlimited time.
By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with “I get it, I know what you’ve heard about this book, but hear me out, I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”, then I would certainly consider reading the piece.
Your analogy breaks down because the Bell Curve is extremely reasonable, not some forged junk like “The Protocols Of The Elders Of Zion”.
If a book mentioned here mentioned evolution and that offended some traditional religious people, would we need to give a disclaimer and potentially leave it off the site? What if some conservative religious people believe belief in evolution directly harms them? They would be regarded as insane, and so are people offended by TBC.
That’s all this is, by the way: left-wing evolution denial. How likely is it that people separated for tens of thousands of years, with different founder populations, will have equal levels of cognitive ability? It’s impossible.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Yeah.
“What do you think you know, and how do you think you know it?” never stopped being the rationalist question.
As for the rest of your comment—first of all, my relative levels of interest in reading a book review of the Protocols would be precisely reversed from yours.
Secondly, I want to call attention to this bit:
“… I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”
There is no particular reason to “give this book a chance”—to what? Convince us of its thesis? Persuade us that it’s harmless? No. The point of reviewing a book is to improve our understanding of the world. The Protocols of the Elders of Zion is a book which had an impact on global events, on world history. The reason to review it is to better understand that history, not to… graciously grant the Protocols the courtesy of having its allotted time in the spotlight.
If you think that the Protocols are insignificant, that they don’t matter (and thus that reading or talking about them is a total waste of our time), that is one thing—but that’s not true, is it? You yourself say that the Protocols had a terrible impact! Of all the things which we should strive our utmost to understand, how can a piece of writing that contributed to some of the worst atrocities in history not be among them? How do you propose to prevent history from repeating, if you refuse, not only to understand it, but even to bear its presence?
The idea that we should strenuously shut our eyes against bad things, that we should forbid any talk of that which is evil, is intellectually toxic.
And the notion that by doing so, we are actually acting in a moral way, a righteous way, is itself the root of evil.
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
Do you view those facts as evidence that I’m an unreasonable person?
Yeah.
Do you ever, in your life, think that things are true without checking? Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
Suppose we want to solve that problem. (We do, right?) I feel like you’re proposing a solution of “form a community of people who have never trusted anyone about anything”. But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.
We’re not born with an epistemology instruction manual. We all have to find our way, and we probably won’t get it right the first time. Splitting the world into “people who already agree with me” and “people who are forever beyond reason”, that’s the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at lesswrong are not such a group. But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
I agree completely.
But note that here we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact, effects of its publication, etc. The latter sort of thing may properly be discussed in a “discussion section” subsequent to the main body of the review, or it may simply be left up to a Wikipedia link. I would certainly not require that it preface the book review, before I found that review “acceptable”, or forebore to question the author’s motives, or what have you.
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
Do you ever, in your life, think that things are true without checking?
I certainly try not to, though bounded rationality does not permit me always to live up to this goal.
Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?)
I have no beliefs about this one way or the other.
Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?)
I have no beliefs about this one way or the other.
Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
Depends on the medicine, but I am given to understand that this is often true. I have “checked” in the sense that I regularly read up on the toxicology and other pharmacokinetic properties of medications I take, or those I might take, or even those I don’t plan to take. Yes, I look for skeptical sources.
My recommendation, in general, is to avoid having opinions about things that don’t affect you; aim for a neutral skepticism. For things that do affect you, investigate; don’t just stumble into beliefs. This is my policy, and it’s served me well.
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
The solution to this is to trust less, check more; decline to have any opinion one way or the other, where doing so doesn’t affect you. And when you have to, trust—but verify.
Strive always to be aware of just how much trust in sources you haven’t checked underlies any belief you hold—and, crucially, adjust the strength of your beliefs accordingly.
And when you’re given an opportunity to check, to verify, to investigate—seize it!
A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken.
The principle of charity, as often practiced (here and in other rationalist spaces), can actually be a terrible idea.
But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
We should use it only to the extent that it does not in any way reduce our own ability to seek, and find, the truth, and not one iota more.
we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
I imagine that very few people would find the title by itself insulting; it’s really “the title in conjunction with the first paragraph or two” (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).
Hmm, maybe another better way to say it is: The title plus the genre is what might insult people. The genre of this OP is “a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc.” You don’t need to read far or carefully to know that the OP belongs to this genre. It’s a very different genre from a (reasonable) book review of “Protocols of the Elders of Zion”, or a (reasonable) book review of “Mein Kampf”, or a (reasonable) book review of “Harry Potter”.
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
No, of course not (the more so because it’s a value judgment, not a statement of fact).
The rest of what you say, I have already addressed.
Approach 2 assumes that A is (a) a reasonable person and (b) coming into the situation with good faith. Usually, neither is true.
What is more, your list of two approaches is a very obvious false dichotomy, crafted in such a way as to mock the people you’re disagreeing with. Instead of either the strawman Approach 1 or the unacceptable Approach 2, I endorse the following:
APPROACH 3: Ignore the fact that A (supposedly) finds X “demeaning”. Say (or don’t say) X whenever the situation calls for it. Behave in all ways as if A’s opinion is completely irrelevant.
(Note, by the way, that Approach 2 absolutely does constitute (self-)censorship, as anything that imposes costs on a certain sort of speech—such as, for instance, requiring elaborate genuflection to supposedly “offended” parties, prior to speaking—will serve to discourage that form of speech. Of course, I suspect that this is precisely the goal—and it is also precisely why I reject your suggestion wholeheartedly. Do not feed utility monsters.)
There’s a difference between catering to an audience and proactively framing things in the least explosive way.
Maybe what you are saying is that when people try to do the latter, they inevitably end up self-censoring and catering to the (hostile) audience?
But that seems false to me. Not only do I think framing controversial topics in a non-explosive way is a strategically important, underappreciated skill; I also suspect that practicing the skill improves our epistemics. It forces us to engage with a critical audience of people with ideological differences. When I imagine having to write on a controversial topic, one of the readers I mentally simulate is “person who is ideologically biased against me, but still reasonable.” I don’t cater to unreasonable people, but I want to take care not to put off people who are still “in reach.” And if they’re reasonable, sometimes they have good reasons behind at least some of their concerns, and their perspectives can be learnt from.
As I mentioned elsethread, if I’d written the book review I would have done what you describe. But I didn’t, and probably never would have written it, out of timidity; and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.
I was just commenting on the general norm. I haven’t read the OP and didn’t mean to voice an opinion on it.
I’m updating that I don’t understand how discussions work. It happens a lot that I object only to a particular feature of an argument or particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.
FWIW, I think the “caving in” discussed/contemplated in Rafael Harth’s comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merits for it but I suspect (and very much like to believe) that it’s a bad strategy.
Maybe I would focus more on criticizing this caving in mentality if I didn’t feel like I was preaching to the choir. “Open discussion” norms feel so ingrained on Lesswrong that I’m more worried that other good norms get lost / overlooked.
Maybe I would feel differently (more “under attack”) if I were more emotionally invested in the community and felt like something I helped build was under attack via norm erosion. I feel presently more concerned about dangers from evaporative cooling, where many who care to a not-small degree about soft virtues in discussions related to tone/tact/welcomingness (but NOT in a strawmanned sense) end up becoming less active or avoiding the comment sections.
Edit: The virtue I mean is maybe best described as “presenting your side in a way that isn’t just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that’s predisposed to be suspicious of you.”
This is a moot point, because anyone who finds a post title like “Book review: The Bell Curve by Charles Murray” to be “controversial”, “explosive”, etc., is manifestly unreasonable.
I’ll bite, but I can’t promise to engage in a lot of back-and-forth.
Let’s generalize. A given post on LW’s frontpage may heighten or diminish its visibility and appeal to potential newcomers, or the visibility/appeal of associated causes like X-risk. You’ve offered one reason why this post might heighten its visibility while diminishing its appeal.
Here’s an alternative scenario, in which this post heightens rather than diminishes the appeal of LW. Perhaps a post about the Bell Curve will strike somebody as a sign that this website welcomes free and open discourse, even on controversial topics, as long as it’s done thoughtfully. This might heighten, rather than diminish, LW’s appeal, for a person such as this. Indeed, hosting posts on potentially controversial topics might select for people like this, and that might not only grow the website, but reinforce its culture in a useful way.
I am not claiming that this post heightens the appeal of LW on net—only that it’s a plausible alternative hypothesis. I think that we should be very confident that a post will diminish the appeal of LW to newcomers before we advocate for communally-imposed censorship.
Not only do we have to worry that such censorship will impact the free flow of information and ideas, but that it will personally hurt the feelings of a contributor. Downvotes and calls for censorship pretty clearly risk diminishing the appeal of the website to the poster, who has already demonstrated that they care about this community. If successful, the censorship would only potentially bolster the website’s appeal for some hypothetical newcomer. It makes more sense to me to prioritize the feelings of those already involved. I don’t know how lsusr feels about your comment, but I know that when other people have downvoted or censored my posts and comments, I have felt demoralized.
The reason I think this is unlikely is that the base rate of (blogs touching on politics making it into the NYT for far-right trolling)/(total blogs touching on politics) is low. Slate Star Codex had a large number of readers before the NYT wrote an article about it. I believe that LW must be have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW, in the hundreds of thousands for SSC/ACX). LW is the collective work of a bunch of mainly-anonymous bloggers posting stuff that’s largely inoffensive and ~never (recently) flagrantly attacking particular political factions. Indeed, we have some pretty strong norms against open politicization. Because its level of openly political posting and its readership are both low, I think LW is an unappealing target for a brigade or hit piece. Heck, even Glen Weyl thinks we’re not worth his time!
Edit: See habryka’s stats below for a counterpoint. I still think there’s a meaningful difference between the concentrated attention given to posts on ACX vs. the diffuse attention (of roughly equal magnitude) distributed throughout the vastness of LW.
For this reason, it once again does not seem worth creating a communal norm of censorship and a risk of hurt feelings by active posters.
Note also that, while you have posited and acted upon (via downvoting and commenting) a hypothesis of yours that the risks of this post outweigh the benefits, you’ve burdened respondents with supplying more rigor than you brought to your original post (“I would much welcome some kind of a cost-benefit calculation that concludes that this is a good idea”). It seems to me that a healthier norm would be that, before you publicly proclaim that a post is worthy of censorship, you do the more rigorous cost/benefit calculation yourself and offer it up for others to critique.
Or should I fight fire with fire, by strongly-upvoting lsusr’s post to counteract your strong-downvote? In this scenario, upvotes and downvotes are being used not as a referendum on the quality of the post, but on whether or not it should be censored to protect LW. Is that how we wish this debate to be decided?
As a final question, consider that you seem to view this post in particular as exceptionally risky for LW. That means you are making an extraordinary claim: that this post, unlike almost every other LW post, is worthy of censorship. Extraordinary claims require extraordinary evidence. Have you met that standard?
LW’s readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.
Not that I don’t believe you, but that’s also really hard for me to wrap my head around. Can you put numbers on that claim? I’m not sure if ACX has a much smaller readership than I’d imagined, or if LW has a much bigger one, but either way I’d like to know!
https://www.similarweb.com/website/astralcodexten.substack.com/?competitors=lesswrong.com Currently shows ACX at something like 1.7x of LessWrong. At some points in the past LessWrong was slightly ahead.
LessWrong is a pretty big website. Here is a random snapshot of top-viewed pages from the last month from Google Analytics:
As you can see from the distribution, it’s a long tail of many pages getting a few hundred pageviews each month, which adds up to a lot.
That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.
It’s very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX.
So while I’m updating in favor of the site’s readership being quite a bit bigger than I’d thought, I still think there’s some disconnect between what I mean by “readership” and the magnitude of “readership” that’s coming across in these stats.
Note that LW gets 1-2 OOM fewer comments on the average post, but not in total. I reckon monthly comments are the same OOM. And if you add up total word count on each site, I suspect LW is 1 OOM bigger each month. ACX is more focused and the discussion is more focused; LW is a much broader space with lots of smaller convos.
That makes a lot of sense. I do get the feeling that, although total volume on a particular topic is more limited here, there’s a sense of conversation and connection that I don’t get on ACX, which I think is largely due to the notification system we have here for new comments and messages.
This is weekly comments for LessWrong over the last year. Last we counted, something like 300 on a SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.
I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an unusually high open thread, but still over 1200/week if you exclude that too.)
In September it was 10350 comments, over 2400/week. I can’t be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least.
Not too far apart though, like maybe 2x but not 10x.
(E: to clarify this is “comments on posts published in the relevant month” but that shouldn’t particularly matter here)
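A quick arithmetic check of those per-week figures, as a minimal sketch in Python; it uses only the monthly totals quoted above plus plain calendar month lengths, and the dictionary labels are mine:

```python
# Back-of-the-envelope check of the comments-per-week figures quoted above.
# Monthly totals are the ones stated in the thread; weeks are days / 7.
month_totals = {
    "ACX October (14 posts)": (7126, 31),
    "ACX October excl. 2 private posts": (7126 - 201, 31),
    "ACX September": (10350, 30),
}

for label, (comments, days) in month_totals.items():
    per_week = comments / (days / 7)
    print(f"{label}: ~{per_week:.0f} comments/week")

# Prints roughly 1609, 1564, and 2415 comments/week, matching the
# "over 1600", "over 1500", and "over 2400" figures above.
```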
I don’t think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it’s just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day.
I think this number can be relatively straightforwardly taken at face value. Elizabeth’s post was at the top of HN for a few hours, so a lot of people saw it. A small city’s worth seems about right for the number of people who clicked through and at least skimmed it.
I’m surprised to see how many people view the Roko’s Basilisk tag. Is that a trend over more than just the last month?
It’s the norm, alas.
https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=u7iYAQM7MkGdhyTL9
I think the evidence that wokeism is a powerful force in the world we live in is abundant, and my primary reaction to your comment is that it feels like everything you said could have been written in a world where this isn’t so. There is an inherent asymmetry here in how many people care about which things to what degree in the real world. (As I’ve mentioned in the last discussion, I know a person who falls squarely into the second category I’ve mentioned; committed EA, very technically smart, but thinks all LW-adjacent things are poisonous, in her case because of sexism rather than racism, but it’s in the same cluster.)
Sam Harris invited the author of the Bell Curve onto his podcast 4 years ago, and as a result has had a stream of hateful rhetoric targeted his way that lasts to this day. Where is the analogous observable effect in the opposite direction? If it doesn’t exist, why is postulating the opposite effect plausible in this case?
My rough cost-benefit analysis is −5/−20/−20 for the points I’ve mentioned, +1 for the advantage of being able to discuss this here, and maybe +2 for the effect of attracting people who like it for the opposite symbolism (i.e., here’s someone not afraid to discuss hard things), and I feel like I don’t want to assign a number to how it impacts lsusr’s feelings. The reason I didn’t spell this out was that I thought it would come across as unnecessarily uncharitable, and it doesn’t convey much new information because I already communicated that I don’t see the upside.
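(Summing the quantified terms as given, and leaving the unquantified feelings term out: −5 − 20 − 20 + 1 + 2 = −42, i.e. clearly negative on this rough accounting.)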
Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.
“Wokeism” is powerful, and I agree that this post elevated this site’s risk of being attacked or condemned, either by the right or the left. I also agree that some people have been turned off by the views on racism or sexism they’ve been exposed to by some posters on this site.
I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.
In fact, I am skeptical about our ability to usefully predict the effect of individual posts on driving readership to or away from this site. Which is why I don’t advocate censoring individual posts on this basis.
I agree that the risk of anything terrible happening right now is very low for this reason. (Though I’d still estimate it to be higher than the upside.) But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
This comment actually made me update somewhat because it’s harder than I thought to find an asymmetry here. But it’s still only a part of the story (and the part I’ve put the least amount of weight on.)
Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.
On a simple level, my framework is that we care about two factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences from crime (object-level), and the risks and consequences from creating a police force or military (enforcement-level).
What I am arguing in this case is that the negative risks x consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks x consequences of being brigaded or criticized in the news. Also, I’m uncertain enough about the balance of this post’s effect on inflow vs. outflow of readership to be close to 50⁄50, and expect it to be small enough either way to ignore it.
Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you’re describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance—and one that still doesn’t rule out the possibility that we get attacked anyway.
I’d also note that you may be contributing to the elevation of risk with your choices of language. By using terms like “wokeism,” “mob,” and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.
Your standard, if I’m reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site’s reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it’s also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?
My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don’t think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr’s, so both of them pass my test for “it’s fine to talk about it.”
After reading this, I realize that I’ve done an extremely poor job communicating with everything I’ve commented on this post, so let me just try to start over.
I think what I’m really afraid of is a sequence of events that goes something like this:
Every couple of months, someone on LW makes a post like the above
In some (most?) cases, someone is going to speak up against this (in this case, we had two), there will be some discussion, but the majority will come down on the side that censorship is bad and there’s no need to take drastic action
The result is that we never establish any kind of norm nor otherwise prepare for political backlash
In ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position of being worried about AGI will get coupled to being anti-social-justice in the public discourse; as a result, it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we will have fewer people writing papers on alignment
At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error
After the discussion on the dating post, I’ve made some attempts to post a follow-up but chickened out of doing it because I was afraid of the reaction, or maybe just because I couldn’t figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something, and then somehow I thought I had to say something as well, but it wasn’t well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense and I felt like the above-mentioned scenario is definitely going to happen and I won’t be able to do anything about it because arguing for censorship is just a lost cause. I think I then intentionally (but subconsciously) used the language you’ve just pointed out to signal that I don’t agree with the object-level part of anything I’m arguing for (probably in the hopes of changing the reception?), even though I don’t think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.
I don’t think any of this is an argument for why I’m right, but I think that’s about what really happened.
Probably it’s significantly less than 50% that anything like what I described happens, just because of the conjunction—who knows if anyone even still cares about social justice in 20 years. But it doesn’t seem nearly unlikely enough not to take seriously, and I don’t see anyone taking it seriously, and that really terrifies me. I don’t completely understand why, since I tend not to be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.
I don’t think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?
I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don’t think you’ve expressed yourself particularly poorly. I also acknowledge that it’s hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I’m on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.
The reason I don’t take it is because I don’t share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr’s is net negative for these causes, nor that such a norm of censorship would be net beneficial.
What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or forum for AGI research to be debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things; and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.
It’s also hard for me to judge. I can say that LW has played a key role for me connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and am the only point of reference that several of my loved ones have for hearing about these issues.
So, N of 1, but LW has probably improved the trajectory of AGI safety by a minuscule but nonzero amount via its influence on me. And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments to and frustrations with left-wing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.
Thanks for this comment.
I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of ‘anything potentially controversial’ (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash out of any post I remember seeing on LW ever. (I just said the same in a reply to Ruby, and I wasn’t being hypothetical.)
I’m probably an unusual case, but I got invited into the alignment forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren’t for the way that the two forums are connected, I wouldn’t have written the sequence. The caveat is that I’m currently not pursuing a “direct” path on alignment but am instead trying to go the academia route by doing work in the intersection of [widely recognized] and [safety-relevant] (i.e., on interpretability), so you could argue that the pipeline ultimately didn’t work. But I think (not 100% sure) at least Alex Turner is a straightforward success story for said pipeline.
I think you probably want to respond to this on my reply to Ruby, so that we don’t have two discussions about the same topic. My main objection is that the amount of censorship I’m advocating for seems to me to be tiny: I think less than 5 posts per year, far less than what is censored by the norm against politics.
Edit: I also want to object to this:
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world. I’m saying you shouldn’t be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn’t involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don’t think has hurt our discourse.
What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.
To this list, you have appended “elevating X-risk.” This is what I mean by “impact on the world.”
Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you’re afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.
An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example) with only modest censorship. For example, we might not ban sales of a certain book; we just make it library policy not to purchase it. Or we restrict purchase to a certain age group. Or major publishers make a decision not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren’t enacting bans, we’re OK with it.
On LW, one such form of soft censorship is the mod’s decision to keep a post off the frontpage.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to one such as myself, I believe that it is precisely these small marginal increases in censorship that pose a risk to X-risk, and the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.
Your desire to marginally increase censorship feels to me a bit like a Pascal’s Mugging. You worry about a small risk of dire consequences that may never emerge, in order to justify a small but clear negative cost in the present moment. I don’t think you’re out of line to hold this belief. I just think that I’d need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.
The link thing was anon03’s idea; I want posts about TBC to be banned outright.
Other than that, I think you’ve understood my model. (And I think I understand yours except that I don’t understand the gears of the mechanism by which you think x-risk increases.)
Sorry for conflating anon03’s idea with yours!
A quick sketch of a gears-level model:
1. X-risk, and AGI safety in particular, require unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let’s call these people “rationalists.”
2. Even in scientific and engineering communities, and the population of rational people generally, the combination of these traits I’m referring to as “rationalism” is rare.
3. Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
4. Rationalists won’t be able to get their needs met in mainstream scientific or engineering communities, which prioritize a subset of the total rationalist package of traits.
5. Hence, they’ll seek an alternative community in which to get those needs met.
6. Rationalists who haven’t yet discovered a rationalist community won’t often have advance knowledge of AGI safety. Instead, they’ll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations—often with politics—that will motivate them to seek out a different community, and to stay engaged with it.
7. When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
8. By associating with this community, they’ll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.
I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they’re wrestling with when they first discover it. They won’t see an open attitude of free enquiry modeled, but instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline or staying power for people to work on X-risks of all kinds, including AGI safety.
Politics continues to affect people even after they’ve come to understand why it’s so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.
What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don’t think the content mattered. But I don’t have strong resistance to believing that this is how it works for other people.
I think if your model is applied to the politics ban, it would say that it’s also quite bad (maybe not as bad because most politics stuff isn’t as shunned and censored as social justice stuff)? If that’s true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai’s “it’ll go there eventually so we should practice” argument) but censor the most controversial social justice stuff. I feel like the current solution isn’t pareto optimal in the {epistemic health} x {safety against backlash} space.
Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.
Just extrapolating from my own experience. For me, the content was important.
I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it’s retaining, or if the frustration experienced by those who’ve been troubled by the political content outweigh the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.
A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It’s a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.
It’s this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can’t participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.
This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who’d leave if that controversial content were removed were outweighed by the number of people who’d join, then I’d be open-minded about that marginal increase in censorship.
An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We’d ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don’t feel it’s something they can or want to tolerate. Then we’d try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We’d also try to understand their level of interest in participating in a version of LW with more of these limitations in place.
Here, I am hypothesizing that there’s a group of ex-LWers or marginal LW-ers who feel a strong affinity for most of the content, while an even stronger aversion for a minority subset of the content to such a degree that they sharply curtail their participation. Such that if the offensive tiny fraction of the content were removed, they’d undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.
It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn’t participate.
At the same time, we’d engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about it, why do they feel that way?
I am positing that there is a significant subset of LWers for whom the minority of posts engaging with politics is a very important source of the site’s appeal.
How is it possible that I could simultaneously be guessing—and it is just a guess—that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?
The reason is that there are abundant spaces online and in-person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of likeminded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.
For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It’s rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.
I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it’s small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.
But I do think that making the effort to thoroughly examine and gather evidence on the extent to which our political status quo serves to attract or repel people would be well worthwhile. Asking at what point the inherent cost of a marginal increase in censorship becomes worth paying in exchange for a more inclusive environment seems like a reasonable question. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.
The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn’t approached in that spirit, I think it would just fail.
So, I both think that in the past 1) people have thought the x-risk folks are weird and low-status and didn’t want to be affiliated with them, and in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like “willingness to read The Bell Curve” no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.
I think there’s a lot of value in trying to nudge your presentation to not trigger other people’s allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can’t sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
I mean, this works until someone in a position of influence bows to the pressure, and I don’t see why this can’t happen.
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
(I’ve said the same here; if you have a response to this, it might make sense to all keep it in one place.)
Sorry about your anxiety around this discussion :(
I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.
If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on, but I wouldn’t feel as strongly about it.
I think “censorship” is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.
I think that this is completely wrong. Such a norm is definitely a norm of (self-)censorship—as has been discussed on Less Wrong already.
It is plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone, but simply as a book review / summary, just like it says. Catering, in any way whatsoever, to anyone who finds the current post “hurtful and insulting”, is an absolutely terrible idea. Doing such a thing cannot do anything but corrode Less Wrong’s epistemic standards.
Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.
What do you do?
APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.
APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.
I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually “win”, i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.
~~
Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don’t think it is “plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone”.
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.
Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism, and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that’s the kind of mistake we should be sympathetic to. It doesn’t mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they’ve been trusting bad sources. Who among us hasn’t trusted bad sources at some point in our lives? I sure have!
And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say “Oh well, can’t please everyone.” But if Person A’s wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.
If Person A believes this without ever having either (a) read The Bell Curve or (b) read a neutral, careful review/summary of The Bell Curve, then A is not a reasonable person.
All sorts of unreasonable people have all sorts of unreasonable and false beliefs. Should we cater to them all?
No. Of course we should not.
The title, as I said before, is neutrally descriptive. Anyone who takes it as an endorsement is, once again… unreasonable.
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
No, this would actually be wildly unreasonable behavior, unworthy of any remotely rational, sane adult. Children, perhaps, may be excused for behaving in this way—and only if they’re very young.
The bottom line is: the idea that “reasonable people” think and behave in the way that you’re describing is the antithesis of what is required to maintain a sane society. If we cater to this sort of thing, here on Less Wrong, then we completely betray our raison d’etre, and surrender any pretense to “raising the sanity waterline”, “searching for truth”, etc.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.
Do you view those facts as evidence that I’m an unreasonable person?
Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn’t read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don’t have unlimited time.
By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with “I get it, I know what you’ve heard about this book, but hear me out, I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”, then I would certainly consider reading the piece.
Your analogy breaks down because the Bell Curve is extremely reasonable, not some forged junk like “The Protocols Of The Elders Of Zion”.
If a book reviewed here mentioned evolution and that offended some traditional religious people, would we need to give a disclaimer and potentially leave it off the site? What if some conservative religious people held that belief in evolution directly harms them? They would be regarded as insane, and so are people offended by TBC.
That’s all this is, by the way: left-wing evolution denial. How likely is it that people separated for tens of thousands of years, with different founder populations, would have equal levels of cognitive ability? It’s impossible.
Yeah.
“What do you think you know, and how do you think you know it?” never stopped being the rationalist question.
As for the rest of your comment—first of all, my relative levels of interest in reading a book review of the Protocols would be precisely reversed from yours.
Secondly, I want to call attention to this bit:
There is no particular reason to “give this book a chance”—to what? Convince us of its thesis? Persuade us that it’s harmless? No. The point of reviewing a book is to improve our understanding of the world. The Protocols of the Elders of Zion is a book which had an impact on global events, on world history. The reason to review it is to better understand that history, not to… graciously grant the Protocols the courtesy of having its allotted time in the spotlight.
If you think that the Protocols are insignificant, that they don’t matter (and thus that reading or talking about them is a total waste of our time), that is one thing—but that’s not true, is it? You yourself say that the Protocols had a terrible impact! Of all the things which we should strive our utmost to understand, how can a piece of writing that contributed to some of the worst atrocities in history not be among them? How do you propose to prevent history from repeating, if you refuse, not only to understand it, but even to bear its presence?
The idea that we should strenuously shut our eyes against bad things, that we should forbid any talk of that which is evil, is intellectually toxic.
And the notion that by doing so, we are actually acting in a moral way, a righteous way, is itself the root of evil.
Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book, doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review, and even then barely mentions it, is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?
Do you ever, in your life, think that things are true without checking? Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
Suppose we want to solve that problem. (We do, right?) I feel like you’re proposing a solution of “form a community of people who have never trusted anyone about anything”. But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.
We’re not born with an epistemology instruction manual. We all have to find our way, and we probably won’t get it right the first time. Splitting the world into “people who already agree with me” and “people who are forever beyond reason” is the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at LessWrong are not such a group. But we do have the superpower of being able and willing to bring people to our side via patience and charity and good careful arguments. We should use it! :)
I agree completely.
But note that here we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact, effects of its publication, etc. The latter sort of thing may properly be discussed in a “discussion section” subsequent to the main body of the review, or it may simply be left up to a Wikipedia link. I would certainly not require that it preface the book review, before I found that review “acceptable”, or forbore to question the author’s motives, or what have you.
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
I certainly try not to, though bounded rationality does not permit me always to live up to this goal.
I have no beliefs about this one way or the other.
I have no beliefs about this one way or the other.
Depends on the medicine, but I am given to understand that this is often true. I have “checked” in the sense that I regularly read up on the toxicology and other pharmacokinetic properties of medications I take, or those I might take, or even those I don’t plan to take. Yes, I look for skeptical sources.
My recommendation, in general, is to avoid having opinions about things that don’t affect you; aim for a neutral skepticism. For things that do affect you, investigate; don’t just stumble into beliefs. This is my policy, and it’s served me well.
The solution to this is to trust less, check more; decline to have any opinion one way or the other, where doing so doesn’t affect you. And when you have to, trust—but verify.
Strive always to be aware of just how much trust in sources you haven’t checked underlies any belief you hold—and, crucially, adjust the strength of your beliefs accordingly.
And when you’re given an opportunity to check, to verify, to investigate—seize it!
The principle of charity, as often practiced (here and in other rationalist spaces), can actually be a terrible idea.
We should use it only to the extent that it does not in any way reduce our own ability to seek, and find, the truth, and not one iota more.
A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?
I imagine that very few people would find the title by itself insulting; it’s really “the title in conjunction with the first paragraph or two” (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).
Hmm, maybe another better way to say it is: The title plus the genre is what might insult people. The genre of this OP is “a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc.” You don’t need to read far or carefully to know that the OP belongs to this genre. It’s a very different genre from a (reasonable) book review of “Protocols of the Elders of Zion”, or a (reasonable) book review of “Mein Kampf”, or a (reasonable) book review of “Harry Potter”.
No, of course not (the more so because it’s a value judgment, not a statement of fact).
The rest of what you say, I have already addressed.
Approach 2 assumes that A is (a) a reasonable person and (b) coming into the situation with good faith. Usually, neither is true.
What is more, your list of two approaches is a very obvious false dichotomy, crafted in such a way as to mock the people you’re disagreeing with. Instead of either the strawman Approach 1 or the unacceptable Approach 2, I endorse the following:
APPROACH 3: Ignore the fact that A (supposedly) finds X “demeaning”. Say (or don’t say) X whenever the situation calls for it. Behave in all ways as if A’s opinion is completely irrelevant.
(Note, by the way, that Approach 2 absolutely does constitute (self-)censorship, as anything that imposes costs on a certain sort of speech—such as, for instance, requiring elaborate genuflection to supposedly “offended” parties, prior to speaking—will serve to discourage that form of speech. Of course, I suspect that this is precisely the goal—and it is also precisely why I reject your suggestion wholeheartedly. Do not feed utility monsters.)
There’s a difference between catering to an audience and proactively framing things in the least explosive way.
Maybe what you are saying is that when people try to do the latter, they inevitably end up self-censoring and catering to the (hostile) audience?
But that seems false to me. Not only do I think that framing controversial topics in a non-explosive way is a strategically important, underappreciated skill; I also suspect that practicing the skill improves our epistemics. It forces us to engage with a critical audience of people with ideological differences. When I imagine having to write on a controversial topic, one of the readers I mentally simulate is “person who is ideologically biased against me, but still reasonable.” I don’t cater to unreasonable people, but I want to take care not to put off people who are still “in reach.” And if they’re reasonable, sometimes they have good reasons behind at least some of their concerns, and their perspectives can be learnt from.
As I mentioned elsethread, if I’d written the book review I would have done what you describe. But I didn’t write it, and probably never would have, out of timidity, and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.
I was just commenting on the general norm. I haven’t read the OP and didn’t mean to voice an opinion on it.
I’m updating that I don’t understand how discussions work. It happens a lot that I object only to a particular feature of an argument, or to one particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.
FWIW, I think the “caving in” discussed/contemplated in Rafael Harth’s comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merits to it, but I suspect (and would very much like to believe) that it’s a bad strategy.
Maybe I would focus more on criticizing this caving in mentality if I didn’t feel like I was preaching to the choir. “Open discussion” norms feel so ingrained on Lesswrong that I’m more worried that other good norms get lost / overlooked.
Maybe I would feel differently (more “under attack”) if I were more emotionally invested in the community and felt like something I helped build was under attack via norm erosion. I feel presently more concerned about dangers from evaporative cooling, where many who care to a not-small degree about “soft virtues in discussions related to tone/tact/welcomingness, but NOT in a strawmanned sense” end up becoming less active or avoiding the comment sections.
Edit: The virtue I mean is maybe best described as “presenting your side in a way that isn’t just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that’s predisposed to be suspicious of you.”
This is a moot point, because anyone who finds a post title like “Book review: The Bell Curve by Charles Murray” to be “controversial”, “explosive”, etc., is manifestly unreasonable.
My comment here argues that a reasonable person could find this post insulting.