People seem to be blurring the difference between “The human race will probably survive the creation of a superintelligent AI” and “This isn’t even something worth being concerned about.” Based on a quick Google search, Zuckerberg denies that there’s even a chance of existential risk here, whereas I’m fairly certain Hanson thinks there’s at least some.
I think it’s fairly clear that most skeptics who have engaged with the arguments to any extent at all are closer to the “probably survive” part of the spectrum than the “not worth being concerned about” part.