“what I really don’t understand is why ‘failure to solve the problem in time’ sounds so much like ‘we’re all going to die, and that’s so certain that some otherwise sensible people are tempted to just give in to despair and stop trying at all’ ”
I agree. In this community, most people talk only of x-risk (existential risk), and most equate failure to align AI with our values to human extinction. I disagree. Classic literature offers examples of non-extinction failure, like With Folded Hands, where AI creates an unbreakable dictatorship rather than killing everyone.
I think it's for the sake of sanity (outcomes worse than extinction are even harder to accept), or to avoid scaring the normies, who are already quite scared.
It's also true that unaligned AI could lead to a somewhat positive outcome, or even a neutral one. I just personally wouldn't put much probability on that. Why? Two concepts you can look up on LessWrong: the orthogonality thesis (high intelligence doesn't necessarily come with good values) and basic AI drives (an advanced AI would naturally develop dangerous instrumental goals like self-preservation and resource acquisition). There's also the fact that it's hard to tell computers to do what we mean rather than what we say, which becomes very dangerous when scaled up.
(See Eliezer's post "Failed Utopia #4-2", where an unaligned AGI ends up creating a failed utopia that really doesn't sound THAT bad; I'd say it's even much better than the current world when you weigh all the good and bad.)
Fundamentally, we just shouldn’t take the gamble. The stakes are too high.
If you wanna have an impact, AI is the way to go. Definitely.