Are you talking about humans or AIs?
If you’re talking about AIs that care about each other’s utility functions as such, then a “true” explosion seems perfectly possible. So you might be talking about a reason not to design an AI (or multiple AIs) that way, for fairly concrete programming reasons.
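To make the “concrete programming reasons” concrete, here’s a minimal sketch (all names hypothetical, not anyone’s actual design) of two agents whose utility functions each take the other’s utility as a term. Naive evaluation never bottoms out, which is one face of the explosion:

```python
# Hypothetical sketch: two agents that value each other's utility as such.
# With full mutual internalization (WEIGHT = 1.0), the definitions are
# circular and evaluation recurses forever.

WEIGHT = 1.0  # each agent fully counts the other's utility as its own

def utility_a(world):
    # Agent A: own payoff, plus B's utility as such.
    return world["a_payoff"] + WEIGHT * utility_b(world)

def utility_b(world):
    # Agent B: own payoff, plus A's utility as such.
    return world["b_payoff"] + WEIGHT * utility_a(world)

world = {"a_payoff": 1.0, "b_payoff": 2.0}
try:
    utility_a(world)
except RecursionError:
    print("Mutual utility references never terminate.")
```

Note that if each agent instead weights the other’s utility by some w < 1, the system has a finite fixed point, U_a = (p_a + w·p_b)/(1 − w²), which is roughly why discounted mutual concern doesn’t explode the way full mutual internalization does.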
If you’re talking about empathy between humans, I suspect we don’t, and maybe even can’t, care about each other’s utility functions as such. We care about each other’s happiness and well-being, but that seems to be a different thing.