So, I can’t conceive of an agent-independent reason for acting altruistically.
If by that you mean an agent-independent cause of altruistic actions, then I agree. My life would be a lot simpler if Friendly AIs automatically emerged from fully arbitrary Bayesian decision systems.
But I fear that you misinterpret me. I’m simply (a) speaking from within my own moral frame of reference and (b) assuming that my audience is composed of human beings rather than fully arbitrary Bayesian decision systems.