I agree that LW shouldn’t be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. But I’m not at all persuaded by reasons 2 and 3 from your comment in the particular case of whether people should talk about Murray. A norm of “don’t bring up highly inflammatory topics unless they’re crucial to the site’s core interests” wouldn’t stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann’s theorem, and in any case, having him post on his own blog works fine. AI alignment was never remotely as political as the Bell Curve is. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment does become highly political (e.g. because people discuss it side by side with Bell Curve reviews), we can invoke the “crucial to the site’s core interests” exception and keep discussing it anyway, ideally taking some care to avoid making people stupid about it. If someone wants to argue that hosting Bell Curve discussion on r/TheMotte instead of here would cost us something similarly important, I’m open to hearing it.
Not within mainstream politics, but within academic / corporate CS and AI departments.
You’d have to use a broad sense of “political” to make this true (maybe amounting to “controversial”). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of “if you can make a case that it’s genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead”. At no point could anyone have used the proposed norms to prevent discussion of AI alignment.