But is “let’s rely on us being too small to get noticed by the mob” really a status quo you’re comfortable with?
Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.
On a simple level, my framework is that we care about two sets of factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences from crime (object-level), and the risks and consequences from creating a police force or military (enforcement-level).
What I am arguing in this case is that the negative risks × consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks × consequences of being brigaded or criticized in the news. Also, I’m uncertain enough about the balance of this post’s effect on inflow vs. outflow of readership that I put it close to 50/50, and I expect the effect to be small enough either way to ignore.
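To make the shape of that comparison concrete, here is a toy expected-cost sketch. The probabilities and cost figures are entirely made-up placeholders of my own, not estimates anyone in this thread has offered:

```python
# Toy expected-cost comparison for the two levels in the framework above.
# All numbers are illustrative placeholders, not real estimates.

def expected_cost(probability: float, consequence: float) -> float:
    """Expected cost = risk (probability of the bad outcome) times its consequence."""
    return probability * consequence

# Object-level: the community gets brigaded or criticized in the news.
object_level = expected_cost(probability=0.05, consequence=10.0)

# Enforcement-level: a censorship norm degrades discourse a little, near-certainly.
enforcement_level = expected_cost(probability=0.9, consequence=1.0)

print(f"object-level expected cost:      {object_level:.2f}")       # 0.50
print(f"enforcement-level expected cost: {enforcement_level:.2f}")  # 0.90
# With these placeholder numbers the enforcement-level cost dominates, which is the
# shape of the claim above; the real disagreement is over the actual probabilities
# and consequence magnitudes.
```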
Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you’re describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance—and one that still doesn’t rule out the possibility that we get attacked anyway.
I’d also note that you may be contributing to the elevation of risk with your choices of language. By using terms like “wokeism,” “mob,” and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.
Your standard, if I’m reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site’s reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it’s also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?
My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don’t think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr’s, so both of them pass my test for “it’s fine to talk about it.”
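Purely as an illustrative restatement of that threshold (the function name and structure below are my own, not anything specified in the thread):

```python
# Illustrative restatement of the "at least two of: novel, convenient, detailed" test above.

def warrants_cost_benefit_justification(novel: bool, convenient: bool, detailed: bool) -> bool:
    """A post describing a way to inflict damage crosses the threshold only if it
    has at least two of the three attributes."""
    return sum([novel, convenient, detailed]) >= 2

# Per the comment, neither the parent comment nor lsusr's review has any of the three
# attributes, so both fall below the threshold and are fine to talk about.
print(warrants_cost_benefit_justification(novel=False, convenient=False, detailed=False))  # False
```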
After reading this, I realize that I’ve done an extremely poor job communicating with everything I’ve commented on this post, so let me just try to start over.
I think what I’m really afraid of is a sequence of events that goes something like this:
Every couple of months, someone on LW makes a post like the above
In some (most?) cases, someone is going to speak up against this (in this case, we had two); there will be some discussion, but the majority will come down on the side that censorship is bad and there’s no need to take drastic action
The result is that we never establish any kind of norm nor otherwise prepare for political backlash
In ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position of being worried about AGI will get coupled in public discourse to being anti-social-justice; as a result it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we will have fewer people writing papers on alignment
At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error
After the discussion on the dating post, I made some attempts to post a follow-up but chickened out, because I was afraid of the reaction or maybe just because I couldn’t figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something, and somehow I felt I had to say something as well; it wasn’t well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense, and I felt like the above-mentioned scenario was definitely going to happen and I wouldn’t be able to do anything about it because arguing for censorship is just a lost cause. I think I then intentionally (but subconsciously) used the language you’ve just pointed out to signal that I don’t agree with the object-level part of anything I’m arguing for (probably in the hopes of changing the reception?), even though I don’t think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.
I don’t think any of this is an argument for why I’m right, but I think that’s about what really happened.
Probably it’s significantly less than 50% that anything like what I described happens, just because of the conjunction—who knows if anyone even still cares about social justice in 20 years. But it doesn’t seem nearly unlikely enough not to take seriously, and I don’t see anyone taking it seriously, and it really terrifies me. I don’t completely understand why, since I tend not to be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.
I don’t think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?
I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don’t think you’ve expressed yourself particularly poorly. I also acknowledge that it’s hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I’m on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.
The reason I don’t take it is that I don’t share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr’s is net negative for these causes, nor that such a norm of censorship would be net beneficial.
What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
That said, people actually doing AGI research have other forums for their conversation, such as the Alignment Forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or a forum where AGI research gets debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things, and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.
It’s also hard for me to judge. I can say that LW has played a key role for me in connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and I am the only point of reference that several of my loved ones have for hearing about these issues.
So, N of 1, but LW has probably improved the trajectory of AGI safety by a minuscule but nonzero amount via its influence on me. And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments and frustrations with left-wing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.
It sounds like a broader perspective of yours is that there’s a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.
Thanks for this comment. I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of ‘anything potentially controversial’ (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash of any post I remember seeing on LW, ever. (I just said the same in a reply to Ruby, and I wasn’t being hypothetical.)
That said, people actually doing AGI research have other forums for their conversation, such as the Alignment Forum and various nonprofits. It’s unclear that LW is a key part of the pipeline for new AGI researchers, or a forum where AGI research gets debated and discussed.
I’m probably an unusual case, but I got invited into the Alignment Forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren’t for the way that the two forums are connected, I wouldn’t have written the sequence. The caveat is that I’m currently not pursuing a “direct” path on alignment but am instead trying to go the academia route by doing work at the intersection of [widely recognized] and [safety-relevant] (i.e., on interpretability), so you could argue that the pipeline ultimately didn’t work. But I think (not 100% sure) at least Alex Turner is a straightforward success story for said pipeline.
And I wouldn’t have stuck around on LW if there was a lot of censorship of controversial topics.
I think you probably want to respond to this on my reply to Ruby so that we don’t have two discussions about the same topic. My main objection is that the amount of censorship I’m advocating for seems to me to be tiny, I think fewer than 5 posts per year, far less than what is censored by the norm against politics.
Edit: I also want to object to this:
The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world. I’m saying you shouldn’t be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn’t involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don’t think has hurt our discourse.
I don’t think anything of what I’m saying involves judging arguments based on their impact on the world.
What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.
To this list, you have appended “elevating X-risk.” This is what I mean by “impact on the world.”
Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you’re afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.
An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example) with only modest censorship. For example, we might not ban sales of a certain book; we just make it library policy not to purchase it. Or we restrict purchases to a certain age group. Or major publishers decide not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren’t enacting bans, we’re OK with it.
On LW, one such form of soft censorship is the mod’s decision to keep a post off the frontpage.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to someone like me, it is precisely these small marginal increases in censorship that elevate X-risk, while the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.
Your desire to marginally increase censorship feels to me a bit like a Pascal’s Mugging. You worry about a small risk of dire consequences that may never emerge, and use it to justify paying a small but clear cost in the present moment. I don’t think you’re out of line to hold this belief. I just think that I’d need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.
To this list of soft censorship options, you are appending “posting it as a linkpost, rather than on the main site,” and assuring us that only 5 posts per year need to be subject even to this amount of censorship.
The link thing was anon03’s idea; I want posts about TBC to be banned outright.
Other than that, I think you’ve understood my model. (And I think I understand yours except that I don’t understand the gears of the mechanism by which you think x-risk increases.)
Sorry for conflating anon03’s idea with yours!
A quick sketch at a gears-level model:
1. X-risk, and AGI safety in particular, require unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let’s call people with this combination of traits “rationalists.”
2. Even in scientific and engineering communities, and in the population of rational people generally, the combination of traits I’m referring to as “rationalism” is rare.
3. Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
4. Rationalists won’t be able to get their needs met in mainstream scientific or engineering communities, which prioritize only a subset of the total rationalist package of traits.
5. Hence, they’ll seek an alternative community in which to get those needs met.
6. Rationalists who haven’t yet discovered a rationalist community won’t often have advance knowledge of AGI safety. Instead, they’ll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations—often with politics—that will motivate them to seek out a different community, and to stay engaged with it.
7. When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
8. By associating with this community, they’ll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.
I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they’re wrestling with when they first discover it. They won’t see an open attitude of free enquiry modeled, but will instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline, or sap the staying power, of people who would work on X-risks of all kinds, including AGI safety.
Politics continues to affect people even after they’ve come to understand why it’s so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.
When these people discover a community that engages with the controversial political topics they’ve seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they’ll take it as evidence that they should stick around.
What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don’t think the content mattered. But I don’t have strong resistance to believing that this is how it works for other people.
I think if your model is applied to the politics ban, it would say that it’s also quite bad (maybe not as bad, because most politics stuff isn’t as shunned and censored as social justice stuff)? If that’s true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai’s “it’ll go there eventually so we should practice” argument) but censor the most controversial social justice stuff. I feel like the current solution isn’t Pareto optimal in the {epistemic health} × {safety against backlash} space.
Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.
Just extrapolating from my own experience. For me, the content was important.
I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it’s retaining, or if the frustration experienced by those who’ve been troubled by the political content outweighs the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.
A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It’s a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.
It’s this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can’t participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.
This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who’d leave if that controversial content were removed were outweighed by the number of people who’d join, then I’d be open-minded about that marginal increase in censorship.
An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We’d ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don’t feel it’s something they can or want to tolerate. Then we’d try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We’d also try to understand their level of interest in participating in a version of LW with more of these limitations in place.
Here, I am hypothesizing that there’s a group of ex-LWers or marginal LWers who feel a strong affinity for most of the content but an even stronger aversion to a minority subset of it, to such a degree that they sharply curtail their participation; if the offensive tiny fraction of the content were removed, they’d undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.
It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn’t participate.
At the same time, we’d engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about, why do they feel that way?
I am positing that there is a significant subset of LWers for whom the minority of posts engaging with politics is a very important source of the site’s appeal.
How is it possible that I could simultaneously be guessing—and it is just a guess—that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?
The reason is that there are abundant spaces online and in person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of like-minded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.
For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It’s rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.
I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it’s small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.
But I do think that making the effort to thoroughly examine and gather evidence on the extent to which our political status quo serves to attract or repel people would be well worth it. Asking at what point the inherent cost of a marginal increase in censorship becomes worth paying in exchange for a more inclusive environment seems like a reasonable question. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.
The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn’t approached in that spirit, I think it would just fail.
In ten or twenty or forty years from now, in a way that’s impossible to predict because any specific scenario is extremely unlikely, the position of being worried about AGI will get coupled in public discourse to being anti-social-justice; as a result it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we will have fewer people writing papers on alignment
So, I think both that in the past 1) people have thought the x-risk folks were weird and low-status and didn’t want to be affiliated with them, and that in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like “willingness to read The Bell Curve” no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.
I think there’s a lot of value in trying to nudge your presentation to not trigger other people’s allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can’t sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.
I mean, this works until someone in a position of influence bows to the pressure, and I don’t see why this can’t happen.
I think we can’t sacrifice the core to do those things.
The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says ‘almost nothing’ since all it would take is to extend the norm “let’s not talk about politics” to “let’s not talk about politics and extremely sensitive social-justice adjacent issues”, and I feel like that would extend the set of interesting taboo topics by something like 10%.
(I’ve said the same here; if you have a response to this, it might make sense to keep it all in one place.)
Sorry about your anxiety around this discussion :(