The other factor here is that our AGI risk choices could affect other intelligent species. If we create an unaligned maximizer, it's likely to wipe out everything in its light cone. To be fair, soft maximizers are looking more likely, and I don't know how far such a thing would spread. Nuclear war only gets most of the species on this planet. So making this point has always felt a bit species-centric to me.
There's also the possibility that a nuclear war wouldn't wipe out the human race; even the experts seem uncertain on that point. And building an AGI in the ashes of a civilization fallen to hubris might make our second-round attempts more cautious.
I sure don’t want to die and let everyone I know die when we could’ve tried to get it right and extend our lives indefinitely. But I realize I’m biased. I don’t want to be so selfish as to kill an unimaginably large and perhaps bright future.
Why do you think AGI would necessarily be worse than us? I think we really don’t know.
If it wiped us out, it would probably wipe them out too.
But what is the probability that AGI wipes us out? Why would AGI be more aggressive than humans? Especially if we carefully nurture her to be our Queen!
That's the alignment problem, the primary topic of this site. Opinions vary and arguments are plentiful. The general consensus is that there are plenty of reasons it might wipe us out, with the informed average estimate being something like 50%, and overconfidence usually comes from ignorance of those arguments. I won't try to restate them all, and I don't know of a place where they're all collected, but they're all over this site.