The problem is that (1) the benefits of AI are large; (2) there are lots of competing actors; (3) verification is hard; (4) no one really knows where the lines are; and (5) timelines may be short.
(2) In addition to the major companies in the US, AI research is also conducted by major companies abroad, most notably in China. The US government and the Chinese government both view AI as a source of competitive advantage. So there are a lot of stakeholders, not all of whom AGI-risk-aware Americans have easy access to, who would have to agree. (And, of course, new companies can be founded at any time.) So you need a nearly universal level of agreement.
(3) Let’s say everyone relevant agrees. The incentive to cheat is enormous. Usually, the way to prevent cheating is some form of verification. How do you verify that no one is conducting AI research? Without verification, there will likely be no agreement; and even with verification, the effectiveness would be limited. (Banning GPU production might be verifiable, but note that doing so significantly increases the pool of opponents of your AI-research ban, and you now need agreement from every relevant government on this point.)
(4) There may be agreement on the risk of AGI, but people may be confident that we are still a certain distance away from AGI, or that certain forms of research don’t pose a threat. This will tend to make any agreement restricting AGI research a limited one.
(5) How long do we have to get this agreement? I am very confident that we won’t have dangerous AI within the next six years. For comparison, it took 13 years to reach general agreement on banning CFCs after the ozone hole was discovered. I don’t think we will have dangerous AI within 13 years, but other people do. And if an agreement between governments is required, even 13 years seems optimistic.