In the context of a hard-takeoff scenario (a perfectly plausible outcome, in our view), there will be no community of AIs within which any one AI has to act. The pressure to develop a compassionate utility function is therefore absent, and an AI that does not already have such a function will have no reason to produce one.
In the context of a soft takeoff, a community of AIs may come to dominate major world events in the same sense that humans do now, and that community may develop the various sorts of altruistic behavior such conditions select for (reciprocal altruism being the obvious one). However, if these AIs are never severely impeded in their actions by competition with human beings, they will have no reason to develop any compassion for human beings.
Reiterating your argument does not affect either of these problems for assumption A, and without assumption A, AdeleneDawner’s objection is fatal to your conclusion.