My version of Example 2 sounds more like “at some point, Watson might badly misdiagnose a human patient, or a bunch of self-driving cars might cause a terrible accident, or more inscrutable algorithms will do more inscrutable things, and this sort of thing might cause public opinion to turn against AI entirely in the same way that it turned against nuclear power.”
I think people will react more strongly to harms than to benefits of comparable size, but I would still expect the impacts of broadly infrahuman AI to be strongly skewed towards the positive. Accidents might lead to more investment in safety, but a full “turn against AI entirely” scenario seems unlikely to me.
You could have said the same about nuclear power. It’s conceivable that, with enough noise about “AI is costing jobs,” the broad positive impacts could come to be viewed as ritually contaminated, à la nuclear power. Hm, now I wonder if I should actually publish my “Why AI isn’t the cause of modern unemployment” writeup.
I don’t know about that; a lot of the people who think AI is “costing jobs” seem to view that as a positive thing.
I don’t think that’s a good analogy. The Cold War had two generations of people living under the very real prospect of nuclear apocalypse. Grant Morrison once wrote about how, at around age five, he was regularly and concretely visualizing nuclear annihilation. By his early twenties, pretty much everyone he knew figured civilization wasn’t going to make it out of the Cold War. That’s a lot of trauma, enough to power a massive ugh field. Vague complaints that “AI is costing jobs” just can’t compare to the bone-deep terror that was nearly universal during the Cold War.