Scott Aaronson wrote about it last year, and it might count as an answer, especially the last paragraph:
I suppose this is as good a place as any to say that my views on AI risk have evolved. A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game. But now that we know these things, I think intellectual honesty requires updating on them. And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.” (Related: Eliezer Yudkowsky’s There’s No Fire Alarm for Artificial General Intelligence.)
Who knows how much of the human cognitive fortress might fall to a few more orders of magnitude in processing power? I don’t—not in the sense of “I basically know but am being coy,” but really in the sense of not knowing.
To be clear, I still think that by far the most urgent challenges facing humanity are things like: resisting Trump and the other forces of authoritarianism, slowing down and responding to climate change and ocean acidification, preventing a nuclear war, preserving what’s left of Enlightenment norms. But I no longer put AI too far behind that other stuff. If civilization manages not to destroy itself over the next century—a huge “if”—I now think it’s plausible that we’ll eventually confront questions about intelligences greater than ours: do we want to create them? Can we even prevent their creation? If they arise, can we ensure that they’ll show us more regard than we show chimps? And while I don’t know how much we can say about such questions that’s useful, without way more experience with powerful AI than we have now, I’m glad that a few people are at least trying to say things.
But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later. Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI. Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.
Wow, that’s a very shallow and short-sighted reason for wanting AI to come sooner, especially since this is a global issue, not just an American one. I guess there’s a difference between intelligence and wisdom.