An AI that is successfully “Friendly” poses an existential risk of a kind that other AIs don’t pose. The main risk from an unfriendly AI is that it will kill all humans. That isn’t much of a risk.
What do you mean by existential risk, then? I thought things that killed all humans were, by definition, existential risks.
humans are on the way out in any case.
What, if anything, do you value that you expect to exist in the long term?
There are arguments that [an UFAI] will inevitably take resources away from humans, but these are just that—arguments.
Pretty compelling arguments, IMO. It’s simple: the vast majority of goals can be achieved more easily with more resources, and humans currently control resources, so an entity that is able to self-improve will tend to seize control of those resources and thereby take them away from humans.
Do you have a counterargument, or something relevant to the issue that isn’t just an argument?