The difference is that AI is relatively easy to do in secret. CFCs and nukes are much harder to hide.
Also, only AGI research is dangerous (or, more precisely, self-improving AI); the other kinds are very useful. Since it’s hard to tell how far off the danger is (and many don’t believe there’s a big danger), you’ll get a reaction similar to the one emission control proposals get: some will refuse to stop, and it’s hard to convince a democratic country’s population to start a war over that; not to mention that a war risks making the AI danger moot by killing us all.
I agree that all kinds of AI research that come even close to AGI will have to be banned or strictly regulated, and that convincing all nations to enforce this is a hugely complicated political problem. (I don’t think it is more difficult than controlling carbon emissions, because of status quo bias: it is easier to convince someone not to start doing something new that sounds good than to get them to stop doing something they already view as good. But it is still hugely difficult, no question about that.) It just seems to me even more difficult (and risky) to aim to flawlessly solve all the problems of FAI.
Note that the problem is not convincing countries not to do AI; the problem is convincing countries to police their populations to prevent them from doing AI.
It’s much harder to hide a factory or a nuclear laboratory than to hide a bunch of geeks in a basement filled with computers. Note how bio-weapons are really scary not (just) because countries might be (or are) developing them, but because it’s soon becoming easy enough for someone to do it in their kitchen.