I think that trying to slow down research towards AGI through regulation would fail, because everyone (politicians, voters, lobbyists, business, etc.) likes scientific research and technological development: it creates jobs, it cures diseases, etc. etc., and you're saying we should have less of that. So the effort would fail, and it would also be massively counterproductive by making the community of AI researchers see the community of AGI safety / alignment people as their enemies, morons, weirdos, Luddites, whatever.
Also, I’m not sure how you can stop “AGI research” without also stopping “AI research” (and for that matter some fraction of neuroscience research too), because we don’t know what research direction will lead to AGI.
If anti-AGI regulations / treaties were the right thing to do (for the sake of argument), the first step would be to get the larger AI community and scientific community thinking more about planning for AGI, and gradually get them on board and caring about the issue. Only then would you have a prayer of succeeding at the second step, i.e. advocating for such a regulation / treaty. But when you think about it, after you’ve taken the first step, do you really need the second step? :-P
Oh, and even if such a law / treaty passed, it seems like it might be unenforceable. There will always be a large absolute number of AI researchers who think the rule is stupid, and they could all just move to the one random country that didn't ratify the treaty. Or maybe AGI would be invented in a secret military lab or whatever.
Just my off-the-cuff opinions :-P