Notably, the case for certain doom as proposed by Rob Bensinger et al. relies on 3 assumptions that need to be tested: A: The Singularity/AI Foom scenarios are likely. I have my problems with this assumption, but I will accept it to show why it doesn't lead to certain doom.
The next assumption is B: That AIs will all have the same goals, and that these goals all lead to destroying humanity. This is a case where I see factions forming, and naively I expect a bell curve of opinions here, with many different views. I don't expect coordination of all AIs to destroy humanity, not because of fundamental incapability, but because I don't expect unification of opinions here.
And finally, this rests on assumption C: that humanity and its descendants are narrowly defined. The nice thing about AI Foom scenarios is that, while I don't expect instant technology to come online, they also make transhumanism far easier than it would otherwise be, quickly closing the gap. That doesn't mean it's all sunshine and rainbows, but we are spared certain doom by this.
That AIs will all have the same goals, and that these goals all lead to destroying humanity.
Nope. I think AI can have any goal; by default its goal will be ‘random’; and most random goals destroy humanity. See Bostrom’s “The Superintelligent Will” for a description of my view on this.
I don’t expect coordination of all AIs to destroy humanity
I don’t know what you mean by “coordination of all AIs” here, or why you think it’s relevant.
but because I don’t expect unification of opinions here
Unification of whose opinions, about what? Are you saying you don’t expect all possible AIs to have the same “opinions”? I think more precise language would be better here; “opinion” is a very vague word.
that humanity and its descendants are narrowly defined.
Again, nope! I’m a transhumanist who wants to usher in an awesome posthuman future. I would consider it a massive existential catastrophe to lock in humanity’s current, flawed understanding of The Good.
True, I've got to be more specific in my wording when I talk about this stuff. And I'll read that link you've given me.