Having read through the descriptions of most research organizations, it seems there’s way, way too little research on medium-to-long-term government policy.
Often, when reading posts on LW, it feels like AI safety researchers assume the research community is going to come up with one single AGI, and that if we make it friendly, everyone in the world will use that same friendly AGI and the world will be saved. Sometimes people pay lip service to the idea that the world is decentralized and solutions need to be economically competitive, but I see very little in-depth research on what that decentralization means for AI safety.
It seems this disparity is also found in the makeup of research organizations. In the list you mention, it feels like 90% of the research articles are about some novel alignment framework for a single AI, and virtually none of them are about government policy at all; the only outlier is GovernanceAI. This feels like the Silicon Valley stereotype of “we just need to make the technology, and the government will have to adapt to us”.
In particular, I don’t see any research papers about what policy decisions governments could make to lower the risk of an AGI takeover. There are a million things governments shouldn’t do (e.g. simply declaring “we ban AGI” is unlikely to help), and probably very few things they could do that would actually help, but that’s exactly why this space needs exploring.
(Also, I think the topic of hardening in particular needs exploring. When the US was worried about a nuclear war, it invented the internet so its communications would stay resilient even if entire cities were wiped off the map. We should have a similar mindset when it comes to “What if these systems that rely on AI suddenly stop working one day?”)