But on the other hand, consequentialism is particularly prone to value misalignment. To systematize human preferences or human happiness, it requires a metric; and in introducing a metric, it risks optimizing the metric itself rather than the actual preferences and happiness the metric was meant to track.
Yes, in consequentialism you try to figure out what values you should have, and your attempts at doing better might lead you down the Moral Landscape rather than up toward even a local maximum.
But what are the alternatives? In deontology you follow a fixed set of rules in the hope that they will keep you where you are on the landscape, effectively halting progress. Is that really preferable?
So it seems important to have the ability to step back and ask, "am I morally insane?", with a vigilance commensurate with one's degree of confidence in consequentialism's metric and method.
It seems to me that any moral agent should have this ability.