“But surely it’s still much better than just getting wiped out”
I think that is the key here. If “just getting wiped out” is the definition of unfriendly, then “not getting wiped out” should be the MINIMUM goal for a putative “friendly” AI.
i.e. “kill no humans”.
It starts to get complex after that. For example: Is it OK to kill all humans, but freeze their dead bodies at the point of death and then resurrect one or more of them later? Is it OK to kill all humans by destructively scanning them and then running them as software inside simulations? What about killing all humans but keeping a facility of frozen embryos to be born at a later date?