The spread of opinions seems narrow compared to what I would expect. OP makes some bold predictions in his post. I see more debate over less controversial claims all of the time.
Sorry, but what do aliens have to do with AI?
Part of the reason the spread seems small is that people are correctly inferring that this comment section is not a venue for debating the object-level question of Probability(doom via AI), but rather for discussing EY’s viewpoint as written in the post. See e.g. https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion for more of a debate.
Debating p(doom) here seems fine to me, unless there’s an explicit request to talk about that elsewhere.
That’s fair.
Sorry, I said it badly/unclearly. What I meant was: most ways to design powerful AI will, on my best guess, be “alien” intelligences, in the sense that they are different from us (think differently, have different goals/values, etc.).
I just want to say I don’t think that was unclear at all. It’s fair to expect people to know the wider meaning of the word ‘alien’.
There’s an analogy being drawn between the power of a hypothetical advanced alien civilization and the power of a superintelligent AI. If you agree that the hypothetical AI would be more powerful, and that an alien civilization capable of travelling to Earth would be a threat, then it follows that superintelligent AI is a threat.
I think most people here are in agreement that AI poses a huge risk, but differ on how likely it is that we’re all going to die. A 20% chance that we’re all going to die is very much worth trying to mitigate sensibly, and the OP says it’s still worth trying to mitigate a 99.9999% chance of human extinction in a similarly level-headed manner (even if the mechanics of doing the work are slightly different at that point).