Seems about right.
I’m currently thinking through a similar consideration for LessWrong, although I don’t think the lizardman constant is the relevant frame. We’re getting a ton of new people here who are eager to participate in discussion of AI, x-risk, and alignment, and who often come in with a bunch of subtle misconceptions and honestly quite reasonable first-pass opinions. I think responding to them requires a mode somewhat similar to the police officer gently but firmly de-escalating and saying “no, there isn’t anything to report here”.
In this case I think AI is a genuinely confusing topic, and I don’t expect a 5% lizardman constant but rather something like 90% of humanity coming in with a difficult-to-resolve confusion that is worth resolving a few times, but not Every Single Time, and that sucks to hear as a participant. I’m still working out how exactly to engage with it. (I’m working on improving our general onboarding/infrastructure so that the new-user experience here isn’t so shitty; e.g., in many cases there isn’t a single good writeup of an explanation, and it’d be great if there were.)
It does seem an important and useful difference that the sort of person who complains about Rainbowland is probably prone to starting and escalating fights in general, while the person who has misconceptions about AI is probably about as reasonable as the average person. In most of these cases (with some exceptions), LW finds itself not in the role of a superintendent fielding paranoid complaints, but in something more like the role of a professor who’s struggling to focus on research because there are too many undergraduates.
I’ve been trying to spend a bit more time voting in response to this, to try to help keep thread quality high; at least for now, the size of the influx strikes me as low enough that a few long-time users doing this might help a bunch.
This sounds like Eternal September to me.
Feels very closely related in my mind, as well.
The reason I didn’t run with Eternal September as the title or as a major example is that Eternal September is about cultures being changed by an influx of unacculturated people, whereas I suspect the problem I’m describing is present in every culture regardless of immigration/emigration. Like, I think that even quiet towns of 5000 people out in the middle of nowhere have their 200 lizardmen, and avoid the problems gestured at above primarily via everybody knowing who they are and discounting accordingly.
(A similarity with Eternal September is “that kind of high-context solution failing at scale.”)
EDIT: Oh, I’m dumb; I thought this was generically responding to the essay and I missed that it’s responding to Ray; the problem Ray describes is VERY Eternal September.
Yeah I agree, I think your post points at something distinct from Eternal September, but what Raemon was talking about seemed very similar.
Yes, Eternal September is basically the name of the problem I outline; the thing that made it seem relevant to this post is that the solution is sort of the same as for dealing with Lizardmen.
It’s the same problem that “this is not a feminism 101 space” was a complaint about.
It really seems like we ought to be able to set up our arsenal of paragraphs so that it’s possible to respond fairly well, with not much effort, to most new users’ questions, just by linking to a couple of pages. Then you just have to create some common knowledge that this isn’t some sort of diss or an implication that it’s a bad idea/question, just one that has been around for a while and that we have a bunch of thoughts on: please check out these explanations, and then feel free to ask more questions if you have follow-ups.
This is sort of the idea behind aisafety.info: to have good but generally brief explanations of various topics that have known answers.
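To make the “arsenal of paragraphs” idea concrete, here’s a rough sketch of what such a lookup could look like. The topic names and URLs below are made-up placeholders, not real pages, and the wording is just one way to signal “fine question, well-trodden ground” rather than a dismissal:

```typescript
// Rough sketch of a canned-response lookup (topics and URLs are placeholder
// examples, not real pages): map recurring new-user questions to canonical
// writeups, and wrap the links in a reply that signals the question is fine,
// just well-trodden.

const canonicalWriteups: Record<string, string[]> = {
  "why-not-just-unplug-it": ["https://example.com/corrigibility-intro"],
  "wont-ai-share-our-values": ["https://example.com/orthogonality-thesis"],
};

function draftReply(topic: string): string | undefined {
  const links = canonicalWriteups[topic];
  if (!links) return undefined; // no canned answer; respond by hand
  return [
    "This is a reasonable question that comes up a lot, so we keep a few writeups handy:",
    ...links.map((url) => `- ${url}`),
    "If those don't resolve it, follow-up questions are very welcome.",
  ].join("\n");
}

console.log(draftReply("why-not-just-unplug-it"));
```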
“Institutions” in this case are basically gatekeepers who try to enforce the quality of content as judged by insiders, which in turn reduces the content that wastes time. This is very similar to what editors of journals or newspapers do. However, whether people want to engage with the “misconceptions” etc. could be made their own (“self-nudging”) choice by letting each user set a Karma visibility threshold for comments and posts. Whether average users interact more or less with low-Karma comments and posts could then be influenced by changing the default threshold.
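A minimal sketch of how that self-nudging threshold could work (the names below are hypothetical, not LessWrong’s actual data model): each user can override a site-wide default, and anything below their effective threshold starts out collapsed.

```typescript
// Minimal sketch of the "self-nudging" threshold idea (hypothetical names
// throughout, not LessWrong's actual data model): each user can set their own
// karma visibility threshold, the site supplies a default, and comments below
// the effective threshold start out collapsed.

interface ThreadComment {
  id: string;
  karma: number;
  body: string;
}

interface UserSettings {
  karmaVisibilityThreshold?: number; // per-user override of the site default
}

const SITE_DEFAULT_THRESHOLD = -5; // the "standard threshold" mentioned above

function isCollapsedByDefault(comment: ThreadComment, settings: UserSettings = {}): boolean {
  const threshold = settings.karmaVisibilityThreshold ?? SITE_DEFAULT_THRESHOLD;
  return comment.karma < threshold;
}

// Example: with the site default of -5, only comment "c" starts collapsed;
// a stricter personal threshold of 0 would also collapse "b".
const thread: ThreadComment[] = [
  { id: "a", karma: 12, body: "Substantive question" },
  { id: "b", karma: -2, body: "Common misconception" },
  { id: "c", karma: -9, body: "Low-effort drive-by" },
];
console.log(thread.map((c) => ({ id: c.id, collapsed: isCollapsedByDefault(c) })));
console.log(thread.map((c) => ({ id: c.id, collapsed: isCollapsedByDefault(c, { karmaVisibilityThreshold: 0 }) })));
```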