The answer to your initial question is that Eliezer and Luke believe that if we create AI, the default result is that it kills us all or does something else equally unpleasant. And also that creating Friendly AI would be an extraordinarily good thing, in part (and only in part) because it would be excellent protection against other risks.
That said, I think there is a limit to how confident anyone ought to be in that view, and it is worth trying to prepare for other scenarios.