I have some sympathy for this sentiment, but I want to point out that the alignment community was tiny until last year and still is small, so many opportunities that are becoming viable now were well below the bar earlier. If you were Rohin Shah, could you do better than finishing your PhD, publishing the Alignment Newsletter starting 2018 and then joining DeepMind in 2020? If you were Rob Miles, could you do better than the YouTube channel starting 2017? As Jason Matheny, could you do better than co-chairing the task force that wrote the US National AI Strategic Plan in 2016, then founding CSET in 2018? As Kelsey Piper, could you do better than writing for Vox starting 2018? Or should we have diverted some purely technical researchers, who are mostly computer science nerds with no particular talent at talking to people, to come up with a media outreach plan?
Keep in mind that it’s not clear in advance what your policy asks are or which arguments you’ll need to counter, so any such plan would go stale every couple of years; and that for the last 15 years the economy, health care, etc. have been orders of magnitude more salient to the average person than AGI risk, with no sign that this would change as suddenly as it did with ChatGPT.
To add my own thinking in particular: my view for at least a couple of years was that alignment would go mainstream at some point and that discourse quality would then fall. I didn’t really see a good way for me to make the public discourse much better—I am not as gifted at persuasive writing as (say) Eliezer, nor are my views as memetically fit. As a result, my plan has been to have more detailed, nuanced conversations with individuals and/or small groups, and especially to advise people making important decisions (and/or to make those decisions myself); that was a major reason I chose to work at an industry lab. I think that plan has fared pretty well, but you’re not going to see much evidence of that publicly.
I was, however, surprised by the suddenness with which things changed; had I concretely expected that, I would have wanted the community to have more “huge asks” ready in advance. (I was instead implicitly thinking that the strength of the community’s asks would ratchet upwards gradually as more and more people were convinced.)
I completely agree that it made no sense to divert qualified researchers away from actually doing the work. I hope my post did not come across as suggesting that.