Surprised to see the downvotes on this.
I think there’s a strong case to be made that moral considerations are already noticeably slowing AI progress. There are probably many bright EAs and rationalists who would be working in AI capabilities if they didn’t believe that doing so was likely to lead to the extinction of humanity, and that the extinction of humanity is morally wrong.
As capabilities advance further, more “normies” will probably make the connection between the development of AGI and the threat of extinction, and then make the relatively smaller leap to the belief that extinction is morally wrong. And normies seem more likely than EAs and rationalists to react to this realization with backlash, loud outrage, and mood affiliation with narrow AI.
Personally, I strongly predict that any outrage or backlash will not actually result in the sufficiently drastic, correctly targeted, and globally coordinated interventions needed to slow AI progress enough to make a real difference. (In part because, as Geoffrey points out, the triggers for this backlash will likely be problems caused by misuse of narrow AI, which is not where most of the danger actually lies.) But it seems worth tracking this as an avenue where things could change, perhaps rapidly.
FWIW, the title threw a red flag for me, leading me to expect some poorly reasoned take; I’m not sure why. Reading the Overview section immediately reset my expectations, though. Possibly some of the downvotes are just from people reacting to the title without reading, or failing to update their sentiment even after reading (I’d like to believe that doesn’t happen on LessWrong, but I’m sure it does, especially on long posts that many are likely to skim or not read at all).
Gordon—I was also puzzled by the initial downvotes. But they happened so quickly that I figured the downvoters hadn’t actually read or digested my essay. Disappointing that this happens on LessWrong, but here we are.
Max—I think your observations are right. The ‘normies’, once they understand AI extinction risk, tend to have much clearer, more decisive, more negative moral reactions to AI than many EAs, rationalists, and technophiles do. (We’ve been conditioned by our EA/Rat subcultures to think we need to ‘play nice’ with the AI industry, no matter how sociopathic it proves to be.)
Whether a moral anti-AI backlash can actually slow AI progress is the Big Question. I think so, but my error bars on this issue are pretty wide. As an evolutionary psychologist, my inclination is to expect that human instincts for morally stigmatizing behaviors, traits, and people perceived as ‘evil’ have evolved to be very effective in reducing those behaviors, suppressing those traits, and ostracizing those people. But whether those instincts can be organized at a global scale, across billions of people, is the open question.
Of course, we don’t need billions to become anti-AI activists. We only need a few million of the most influential, committed people to raise the alarm—and that would already vastly outnumber the people working in the AI industry or actively supporting its hubris.