‘Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it’s only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.’ — Luke Muehlhauser, https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/
‘In terms of falsifiability, if you have an AGI that passes the real no-holds-barred Turing Test over all human capabilities that can be tested in a one-hour conversation, and life as we know it is still continuing 2 years later, I’m pretty shocked. In fact, I’m pretty shocked if you get up to that point at all before the end of the world.’ — Eliezer Yudkowsky, https://www.econlib.org/archives/2016/03/so_far_my_respo.html