That AI will all have the same goals and that these goals all lead to destroying humanity.
Nope. I think AI can have any goal; by default its goal will be ‘random’; and most random goals destroy humanity. See Bostrom’s “The Superintelligent Will” for a description of my view on this.
I don’t expect coordination of all AIs to destroy humanity
I don’t know what you mean by “coordination of all AIs” here, or why you think it’s relevant.
but because I don’t expect unification of opinions here
Unification of whose opinions, about what? Are you saying you don’t expect all possible AIs to have the same “opinions”? I think more precise language would be better here; “opinion” is a very vague word.
that humanity and its descendants are narrowly defined.
Again, nope! I’m a transhumanist who wants to usher in an awesome posthuman future. I would consider it a massive existential catastrophe to lock in humanity’s current, flawed understanding of The Good.
True, I’ve got to be more specific in my wording when I talk about this stuff. And I’ll read that link you gave me.