has anyone done a similar analysis for “pushing for world-wide safety regulations on AI research” or “spending money directly on building FAI”?
The closest point of comparison for safety regulations is the cryptography export regulations. I am pretty sceptical about something similar being attempted for machine intelligence. It is possible to imagine the export of smart robots to “bad” countries being banned—for fear that those countries will reverse-engineer their secrets—but it is not easy to imagine that anyone will bother. Machine intelligence will ultimately be more useful than cryptography was, which makes an effective ban pretty difficult to imagine. So far, I haven’t seen any serious proposals to attempt one.
Governments seem likely to continue promoting this kind of thing, not banning it.