Interesting point of view. I don’t think I agree with the sex-triggers section: applying this logic retroactively would predict that the internet and video games would have been banned by now (they are of course stigmatized in many instances, but nowhere near the extent that would result in a ban).
Also, the essay does not touch on the most important piece of the equation, which is the immense upside of AGI: the metaphor about nuclear weapons spitting out gold, right up until they get large enough. This means there is a huge incentive for private companies to unilaterally improve the tech, plus Moore’s law making compute cheaper every year. If you can get the AI to comprehend text a bit better (or do any other sort of “backend” task), that is very different from producing child porn, growing weed, or killing people more effectively, which are very localized sources of profit. I think only human cloning comes close as an example, but still not quite: the gains are very uncertain and temporally discounted, it’s more difficult to hide the experiments, the technology is much more specialized, compute is needed in every other part of the economy, and ‘doing AI’ is not as well-defined a category as ‘using human stem cells’.
Well, the AI industry and the pro-AI accelerationists believe that there is an ‘immense upside of AGI’, but that is a highly speculative, faith-based claim, IMHO. (The case for narrow AI having clear upsides is much stronger, I think.)
It’s worth noting that almost every R&D field that has been morally stigmatized (such as intelligence research, evolutionary psychology, and behavior genetics) also offered huge and transformative upsides to society when it first developed, until it got crushed by political demonization and its potential was strangled in the cradle, so to speak.
The public perception of likely relative costs vs. benefits is part of the moral stigmatization process. If AI gets stigmatized, the public will not believe that AGI has ‘immense upside’. And they might be right.
I think that as capabilities increase, at least one nation will come to view developing safe AGI as a requirement of its national security strategy.
Maybe. But at the moment, the US is really the only significant actor in the AGI development space. Other nations are reacting in various ways, ranging from curious concern to geopolitical horror. But if we want to minimize the risk of a nation-state AI arms race, the burden is on US companies to Just Stop Unilaterally Driving The Arms Race.