but our decision theory can just output betting outcomes instead of probabilities.
Indeed. And ADT outputs betting outcomes without any problem. It’s only when you interpret those outcomes as probabilities that the trouble starts, because to go from betting odds to probabilities you have to sort out how much you value two copies of you getting a reward, versus just one copy.
Well, if anything, that’s about your preferences, not morality.
Moral preferences are a specific subtype of preferences.
I suppose that makes sense if you’re a moral non-realist.
Also, you may care about other people for reasons of morality, or simply because you like them. Ultimately, why you care doesn’t matter; all that matters is that you have the preference. The morality aspect is inessential.
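To make the point about betting odds versus probabilities concrete, here is a minimal, hypothetical sketch (the Sleeping Beauty setup with a $1-if-tails ticket is my illustration, not something asserted in the exchange above): an agent that only ever decides whether to buy a ticket at a given price ends up with different implied “probabilities” depending on whether it sums or averages rewards across its copies.

```python
def breakeven_price(copies_if_tails: int, sum_over_copies: bool) -> float:
    """Highest price at which the agent buys a ticket paying $1 if tails.

    Fair coin; on tails there are `copies_if_tails` awakened copies, each of
    which buys a ticket and collects $1; on heads a single copy buys and loses.
    If `sum_over_copies`, the agent adds up every copy's net gain (valuing two
    rewarded copies twice as much as one); otherwise it averages, which here
    amounts to caring only about what a single copy ends up with.
    """
    def expected_utility(price: float) -> float:
        if sum_over_copies:
            tails_value = copies_if_tails * (1.0 - price)
        else:
            tails_value = 1.0 - price
        heads_value = -price
        return 0.5 * tails_value + 0.5 * heads_value

    # Binary search for the price where expected utility crosses zero.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_utility(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return lo


# Same betting machinery, different valuation of copies, different "probability":
print(breakeven_price(2, sum_over_copies=True))   # ~0.667: reads as P(tails) = 2/3
print(breakeven_price(2, sum_over_copies=False))  # ~0.5:   reads as P(tails) = 1/2
```

The agent never computes a probability anywhere; it only compares expected utilities of buying at various prices. The “probability” is something an outside observer reads off the break-even odds, and it shifts with how the agent values two rewarded copies relative to one.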