Devil’s advocacy:

One answer to the above might be “we have a meta-preference to have a consistent morality”.
Well, fair enough, if so. However, if that is the only answer—if this is our only reason for preferring the consistent-but-inaccurate moral system to the accurate-but-inconsistent one—then we ought to get clear on this fact, first. Having our choice in such a dilemma be driven only by a meta-preference, and not by any other considerations, is a special case, and must be unambiguously identified before we attempt to resolve the issue.
We have to make choices, and it is not possible to make choices that maximize outcomes if we don’t have a consistent utility function. Since having a consistent utility function is a hard requirement of simply being an agent and having any effect on the world, I think it’s a reasonable requirement to have.
People say they have inconsistent utility functions. But then they go and make real-life decisions anyway, so their actions imply a consistent utility function. Actions speak louder than words...
having a consistent utility function is a hard requirement of simply being an agent and having any effect on the world
I don’t know what you mean by this; it seems plainly false. I have effects on the world all the time (as do most people), and I don’t, as far as I can tell, have a consistent utility function (nor do most people).
People say they have inconsistent utility functions. But then they go and make real-life decisions anyway, so their actions imply a consistent utility function.
But just the fact of making decisions doesn’t imply a utility function. What can you mean by this…?
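To make this last point concrete, here is a minimal sketch in Python (the names and numbers are purely illustrative, not anything anyone in this thread proposed) of an agent whose pairwise preferences form a cycle. It decides every choice put in front of it, so it acts in the world just fine, but no utility function can represent its preferences, and the cycle makes it exploitable by the classic “money pump”.

```python
# A minimal, purely illustrative sketch: an agent whose pairwise
# preferences form a cycle A > B > C > A. It makes a definite decision
# at every choice point, yet no real-valued utility function u can
# satisfy u(x) > u(y) exactly when it prefers x to y, because the
# cycle would require u(A) > u(B) > u(C) > u(A).

# Cyclic pairwise preferences, stored as (preferred, dispreferred) pairs.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(held: str, offered: str) -> bool:
    """The agent happily decides every pairwise choice it is offered."""
    return (offered, held) in prefers

def money_pump(start: str, fee: int, rounds: int) -> int:
    """Repeatedly offer the item the agent prefers to its current
    holding, charging a small fee per trade. An agent with acyclic
    preferences would refuse at some point; this one never does."""
    # For each holding, the item the agent strictly prefers to it.
    better_than = {"A": "C", "C": "B", "B": "A"}
    held, paid = start, 0
    for _ in range(rounds):
        offer = better_than[held]
        if accepts_trade(held, offer):  # always True, by construction
            held = offer
            paid += fee
    return paid

# The agent made 300 decisions, each one locally sensible by its own
# preferences, and paid 300 units to end up holding what it started with.
print(money_pump("A", fee=1, rounds=300))  # -> 300
```

The point of the sketch: making decisions, even making them reliably, is a much weaker condition than having a consistent utility function; what consistency actually buys you is immunity to this kind of exploitation.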