I’m unclear what the thrust of this article is intended to be. Are you predicting that such things will happen, or recommending that readers concerned with AI doom should encourage and fan the flames of such a movement?
I’m predicting that an anti-AI backlash is likely, given human moral psychology and the likely applications of AI over the next few years.
In further essays I’m working on, I’ll probably end up arguing that an anti-AI backlash may be a good strategy for reducing AI extinction risk: likely much faster, more effective, and more globally applicable than any formal regulatory regime or any AI safety tactics the AI industry is willing to adopt.