Because of these shifts, a “selfish” agent using FDT can end up making choices more similar to those of an altruistic CDT agent than to those of a selfish CDT agent, for reasons closely related to the traditional moral intuition of universalizability.
Can’t you just make decisions using functions which optimize outcomes for a specific implementation? You’ll need to choose how to aggregate scores under uncertainty, but that choice doesn’t need to converge.
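As a rough illustration of what “aggregate scores under uncertainty” could look like, here is a minimal sketch (the names, outcome table, and aggregation rules below are hypothetical, not taken from the original discussion): a candidate decision function is scored per possible world/implementation, and the agent must pick some rule for combining those scores. Different rules can recommend different decision functions, which is the sense in which the choice of aggregation need not converge.

```python
# Hypothetical outcome table: score of each candidate decision function
# under each way the agent's situation/implementation might turn out.
# These numbers and names are illustrative only.
scores = {
    "defect_always": {"world_a": 10.0, "world_b": 0.0},
    "cooperate_with_copies": {"world_a": 3.0, "world_b": 3.0},
}

probs = {"world_a": 0.5, "world_b": 0.5}


def aggregate_expected(per_world, probs):
    """Aggregate by probability-weighted average of per-world scores."""
    return sum(probs[w] * s for w, s in per_world.items())


def aggregate_worst_case(per_world, probs):
    """Aggregate by the minimum per-world score, ignoring probabilities."""
    return min(per_world.values())


for rule_name, rule in [("expected value", aggregate_expected),
                        ("worst case", aggregate_worst_case)]:
    best = max(scores, key=lambda f: rule(scores[f], probs))
    print(f"{rule_name}: pick {best}")
# Expected value favors "defect_always" (5.0 vs 3.0), while worst case
# favors "cooperate_with_copies" (3.0 vs 0.0): the two aggregation
# choices recommend different decision functions.
```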