The main concrete proposals / ideas that are mentioned here, or that I can think of, are:
1. Work to spread good knowledge regarding AGI risk / doom stuff among AI researchers.
I think everyone is in favor of this, particularly when it’s tied to the less-adversarial takeaway message “there is a big problem, and more safety research is desperately needed”. When it’s tied to the more-adversarial takeaway message “capabilities research should be slowed down”, I think that can be tactically bad, since people generally aren’t receptive to being told that they and all their scientific idols are being reckless. I think it’s good to be honest and frank, but in the context of outreach we can be strategic about what we emphasize, and emphasizing the “there is a big problem, and more safety research is desperately needed” message is generally the better approach. It also helps that that message is pretty clearly correct and easy to argue, whereas “capabilities research is harmful” has comparatively more balanced arguments on both sides; moreover, if people are sufficiently bought into the “there is a big problem, and more safety research is desperately needed” message, they can gradually propagate through to the (more complicated) implications for capabilities research.
2. Work to spread good knowledge regarding AGI risk / doom stuff among politicians, the general public, etc.
Basically ditto. Emphasizing “there is a big problem, and more safety research is desperately needed” seems good and, I think, uncontroversial. Emphasizing “capabilities research should be slowed down” seems at least uncertain to accomplish what one might hope, and seems to have many fewer advocates in the community AFAICT. For my part, I do a modest amount of outreach of this sort, with the “there is a big problem, and more safety research is desperately needed” messaging.
3. In support of 1 & 2, do research that may lead to more crisp and rigorous arguments for why AGI doom is likely, if indeed it’s likely.
Everyone agrees.
4. Don’t do capabilities research oneself, and try to discourage one’s friends who are already concerned about AGI risk / doom stuff from doing capabilities research.
Seems to be a very active ongoing debate within the community. I usually find myself on the anti-capabilities-research side of these debates, but it depends on the details.
5. Lay groundwork for possible future capability-slowing activities (whether regulatory or voluntary), like efforts to quantify AGI capabilities, track where high-end chips are going, deal with possible future antitrust issues, start industry groups, get into positions of government influence, etc.
These seem to be some of the main things that “AGI governance” people are doing, right? I’m personally in favor of all of those things, and can’t recall anyone objecting to them. I don’t work on them myself because it’s not my comparative advantage.
So for all of these things, I think they’re worth doing and indeed do them myself to an extent that is appropriate given my strengths, other priorities, etc.

Are there other things we should be thinking about besides those 5?
The researchers themselves probably don’t want to destroy the world. Many of them also actually agree that AI is a serious existential risk. So in two natural ways, pushing for caution is cooperative with many if not most AI researchers.
I think a possible crux here is that we have different assumptions about how many people think AI existential risk is real & serious versus stupid. I think that right now most people think it’s stupid, and therefore the thing to do right now is to move lots of people from the “it’s stupid” camp to the “it’s real & serious” camp. After we get a lot more people, especially in the AI community but also probably among the general public, out of the “it’s stupid” camp and into the “it’s real & serious” camp, I think a lot more options open up, both governmental and through industry groups or whatever.
I think there are currently too many people in the “it’s stupid” camp and too few in the “it’s real & serious” camp for those types of slowing-down options to be realistically available right now. I could be wrong; I could imagine having my mind changed by survey results or something. Your survey says “the median respondent’s probability of x-risk from humans failing to control AI was 10%”, but I have some concerns about response bias, and also a lot of people (even in ML) don’t think about probability the way you or I do, and may mentally treat 10% as “basically 0”, or at least not high enough to outweigh the costs of slowing AI. Maybe if there’s a next survey, you could ask directly about receptiveness to broad-based agreements to slow AI research? I wonder whether you’re in a bubble, or maybe I am, etc.
Anyway, centering the discussion around regulation would (IMO) be productive if we’re in the latter stage, where most people already believe that AI existential risk is real & serious, but at least plausibly counterproductive if we’re in the former stage, where we’re mainly trying to move lots of people from thinking it’s stupid to thinking it’s real & serious.
I’m not aware of examples of a technology not getting developed because of concerns about it, at a time when most experts and most of the general public thought those concerns were silly. For example, concerns over nuclear power are in fact mostly silly, but I think they had widespread public support before they had any impact.
There might be widespread support for regulating social media recommendation algorithms or a few other things like that, but that’s not the topic of concern here; AGI timelines are being shrunk, by and large, by non-public-facing R&D projects, IMO. There is no IRB-like tradition of getting preapproval for running code that isn’t intended to leave the four walls of the R&D lab, and I’m not sure what one could do to help such a tradition get established, at least not before catastrophic AI-lab-escape accidents, which (if we survive them) would be a different topic of discussion.
I think that can be tactically bad, since people generally aren’t receptive to being told that they and all their scientific idols are being reckless.
In many cases, they’re right, and in fact they’re working on AI (broadly construed) that’s (1) narrow, (2) pretty unlikely to contribute to AGI, and (3) potentially scientifically interesting or socially/technologically useful, and therefore good to pursue. “We” may have a tactical need to be discerning ourselves about whom, and what intentions, we criticize.
Work to spread good knowledge regarding AGI risk / doom stuff among politicians, the general public, etc. [...] Emphasizing “there is a big problem, and more safety research is desperately needed” seems good and, I think, uncontroversial.
Nitpick: My impression is that at least some versions of this outreach are very controversial in the community, as suggested by e.g. the lack of mass advocacy efforts. [Edit: “lack of” was an overstatement. But these are still much smaller than they could be.]
For example, Eliezer Yudkowsky went on the Sam Harris podcast in 2018, Stuart Russell wrote an op-ed in the New York Times, Nick Bostrom wrote a book, … I dunno, do you have examples?
Nobody is proposing to run a commercial about AGI doom during the Super Bowl or whatever, but I think that’s less “we are opposed to the general public understanding why AGI risk is real and serious” and more “buying ads would not accomplish that”.