Expanding the number of people aware of (and ideally, working on) the alignment problem is a high-leverage activity.
It’s also high-variance; I think there’s a risk here that you’re not modeling, along the lines of idea inoculation. If you do a bad job raising awareness, you can hurt your cause by making it look dumb, low-status, a thing that only cranks worry about, etc., and that could be extremely difficult to undo. I’m mostly relieved, not worried, to see that most people are not even trying, and I’m basically happy to leave this job to people like Andrew Critch, who is doing it with in-person talks and institutional backing.
There’s an opposite problem, which I’m less worried about: if working on AI safety becomes too high-status, then the people who show up to do it need to be filtered for whether they’re just trying to take advantage, so there’s a cost there. Currently, the fact that it’s still somewhat difficult to learn enough about AI safety to care acts as that filter.
My read on the situation is that I’m happy for people to reach out to their close friends, and I generally expect little harm to come from that, but I’d encourage people to be very hesitant about reaching out to large communities or the public at large.
This article struck me as asking more for advice on how to get people you’re already close to interested in AI alignment, which seems significantly less high-variance.
can confirm, that’s what I had in mind (at least in my case).
Oh, good, that seems much less dangerous. Usually when people talk about “raising awareness” they’re talking about mass awareness, protests, T-shirts, rallies, etc.
Yeah, I want to add that my initial response to the post was strongly negative (because of pattern matching), but after a closer reading (and the title change <3) I’m super happy with this post.
I agree, and I realized this a bit after leaving my keyboard. The problem, in my opinion, is that we don’t have enough people doing this kind of outreach. It might be better to get more people doing pretty good outreach than to have just a few doing great outreach.
The other question is how hard it is to find people like me—constant effort for a very low probability outcome could be suboptimal compared to just spending more time on the problem ourselves. I don’t think we’re there yet, but it’s something to consider.