Thanks for engaging with the post and acknowledging that regulation may be a possibility we should consider and not reject out of hand.
> I don’t share your optimistic view that transnational agencies such as the IAEA will be all that effective. The history of the nuclear arms race is that those countries that could develop weapons did, leading to extremes such as the Tsar Bomba, a 50-megaton monster that was more of a dick-waving demonstration than a real weapon. The only thing that ended the unstable MAD doctrine was the internal collapse of the Soviet Union. So, while countries have agreed to allow limited inspection of their nuclear facilities and stockpiles, it’s nothing like the level of complete sharing that you envision in your description.
My position is actually not that optimistic. I don’t believe that such transnational agencies are very likely to work, or that they are a safe bet for ensuring a good future. It is more that it seems to be in our best interest to seriously consider all of the options we can put on the table, to learn from what has more or less worked in the past, and to look for creative new approaches and solutions, because the alternative is dystopia or catastrophe.
A key difference between AI and nuclear weapons is that the AI labs are not as sovereign as nation states. If the US, UK, and EU were to impose strong regulation on their companies and “force them to cooperate” along the lines I outlined, this seems (at least theoretically) possible and would already be a big win in my view. For example, more resources could be allocated to alignment work relative to capabilities work. China already seems much more inclined to regulate and control its companies anyway, so I see a chance that it would follow suit in approaching AI carefully.
> However, it seems likely that the major commercial players will fight tooth and nail to avoid that situation, and you’ll have to figure out how to apply similar restrictions worldwide.
To be honest, it is overdue that we find the guts to face up to these players and put them in their place. That is easier said than done, of course, but the first step is not to be intimidated before we have even tried. Similarly, the insistence on worldwide regulation often strikes me as a case of letting the perfect be the enemy of the good. Worldwide regulation would of course be desirable, but if only the US, UK, and EU, or even the US or EU alone, were to make some moves here, we would already be in a far better position. The threat that companies will simply turn around and set up shop in the Bahamas to pursue AGI development is a bogeyman: they would be unable to a) secure the compute necessary for development and b) sell their products in the largest markets. We do have some leverage here.
> So, I think this is an excellent discussion to have, but I’m not convinced that the regulated source model you describe is workable.
Thanks for acknowledging the issue that I am pointing to here. I see the regulated source model mostly as a general outline of a class of potential solutions, some of which could be workable and others not. Getting to specifics that are workable is certainly the hard part. For me, the important step was to start discussing such ideas more openly, to build momentum for the people interested in taking them forward. If more of us were to openly acknowledge and advocate that there should be room for discussing stronger regulation, our position would already be somewhat improved.