Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself—not just a being who better adheres to your current ideals, but a being with better ideals than you?
Yes I can.
If you take the view that ethics and aesthetics are one and the same, then in general it’s hard to imagine how any ideals other than my own could be better than my own, for the obvious reason that I can only measure them against my own.
What interests me about the rule I propose (circular preferences are bad!) is that it is exclusively a meta-rule—it cannot measure my behaviour, only my ideals. It provides a meta-ethic that can show flaws in my current ethical thinking, but not how to correct them—it provides no guidance on which arrow in the circle needs to be reversed. And I think it covers the way in which I’ve been persuaded of moral positions in the past (very hard to account for otherwise), and better yet allows me to imagine that I might be persuaded of moral points in the future, though obviously I can’t anticipate which ones.
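The meta-rule has a neat formal analogue: if you model preferences as a directed graph (an edge from A to B meaning "A is preferred to B"), then "circular preferences are bad" is just cycle detection. The sketch below is my own illustration, not anything from the discussion—the names and the pairwise-preference encoding are assumptions:

```python
# Hypothetical sketch: the "circular preferences are bad" meta-rule as
# cycle detection over a directed graph of pairwise preferences.
# An edge better -> worse means "better is preferred to worse";
# any cycle means the set of ideals is internally inconsistent.
# Note the rule only *detects* the flaw; like the meta-rule in the text,
# it says nothing about which edge ought to be reversed.

def has_circular_preferences(prefs):
    """prefs: iterable of (preferred, dispreferred) pairs.
    Returns True if the preference relation contains a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GRAY
        for nxt in graph[node]:
            if colour[nxt] == GRAY:  # back edge: we found a cycle
                return True
            if colour[nxt] == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)


# "A over B, B over C, C over A" is the classic inconsistent circle:
print(has_circular_preferences([("A", "B"), ("B", "C"), ("C", "A")]))  # True
# A simple transitive chain passes the meta-rule:
print(has_circular_preferences([("A", "B"), ("B", "C")]))  # False
```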
If I can imagine that through this rule I could be persuaded to take a different moral stance in the future, and see that as good, then I’m definitely elevating a different set of ideals—my imagined future ideals—over my current ideals.