how a “duty” gets cashed out computationally, if not as a contribution to expected utility. If I’m not using some kind of moral points, how do I calculate what my “duty” is?
We humans don’t seem to act as if we’re cashing out an expected utility. Instead we act as if we had a patchwork of lexically distinct moral codes for different situations, and problems come when they overlap.
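(A rough sketch in Python of what that patchwork might look like, purely illustrative and not anyone's actual proposal: codes are consulted in strict priority order, no exchange rate between them is ever computed, and the trouble shows up exactly when two codes both claim the situation and disagree.)

    from typing import Callable, Sequence

    # Hypothetical: a "moral code" just scores an option on its own scale.
    MoralCode = Callable[[str], float]

    def lexicographic_choice(options: Sequence[str], codes: Sequence[MoralCode]) -> str:
        """Pick the option favoured by the highest-priority code; lower-priority
        codes only break ties. No trade-off between codes is ever computed."""
        remaining = list(options)
        for code in codes:
            best = max(code(o) for o in remaining)
            remaining = [o for o in remaining if code(o) == best]
            if len(remaining) == 1:
                break
        return remaining[0]

    # Overlap: two codes both claim the situation and rank the options in
    # opposite directions, so whichever code is put first wins outright, and
    # nothing inside the scheme says how to order them.
    honesty = lambda o: {"tell_truth": 1.0, "protect_friend": 0.0}[o]
    loyalty = lambda o: {"tell_truth": 0.0, "protect_friend": 1.0}[o]

    print(lexicographic_choice(["tell_truth", "protect_friend"], [honesty, loyalty]))
    # -> "tell_truth"
    print(lexicographic_choice(["tell_truth", "protect_friend"], [loyalty, honesty]))
    # -> "protect_friend"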
Since current AI is far from being intelligent, we probably shouldn’t see it as a compelling argument for how humans do or should behave.
Such questions form the basis of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
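(Something like the von Neumann–Morgenstern representation theorem, roughly:

    \text{If a preference relation } \succeq \text{ over lotteries satisfies completeness, transitivity,}
    \text{continuity and independence, then there is a utility function } u \text{ such that}
    p \succeq q \iff \mathbb{E}_p[u] \ge \mathbb{E}_q[u],
    \text{with } u \text{ unique up to positive affine rescaling } u \mapsto a\,u + b,\ a > 0.

The “consistency” at stake is exactly those axioms.)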
Sounds right. The more reliable information I get about the world, the more my moral preferences start resembling a utility function. Out of interest, do you have a link to those theorems?
But the consistency assumption is not present in humans, even morally well-rounded ones. We are always learning, intellectually and morally. The moral decisions we make affect our moral values as well as the other way round (this post touched on similar ideas). Seeing morality as a learning process may bring it closer to Paul’s queries: what sort of a person am I? What are my values?
Except here the answers to the questions come as a result of the moral action, rather than before it.
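(A toy sketch of that loop, purely illustrative: the agent chooses with its current value weights, and the act of choosing nudges those weights, so the answer to “what are my values?” partly arrives after the action. The names and update rule are made up for the example.)

    def choose(options, weights):
        # options: {action: {value_name: how strongly the action expresses that value}}
        return max(options, key=lambda o: sum(weights[v] * x for v, x in options[o].items()))

    def update(weights, chosen_profile, rate=0.1):
        # Drift each weight toward the value profile of the action just taken.
        return {v: (1 - rate) * w + rate * chosen_profile.get(v, 0.0)
                for v, w in weights.items()}

    weights = {"honesty": 0.55, "loyalty": 0.45}
    options = {"tell_truth": {"honesty": 1.0}, "protect_friend": {"loyalty": 1.0}}

    for _ in range(3):
        pick = choose(options, weights)
        weights = update(weights, options[pick])
        print(pick, weights)
    # Repeated truth-telling keeps strengthening the honesty weight: the decision
    # shapes the values, not only the other way round.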