Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.
Suppose you think X is what is actually moral (or is a distribution representing your moral uncertainty after doing your best to try to figure out what is actually moral) and Y is what you expect most people will recognize as moral in the future (or is a distribution representing your uncertainty about that). Are you proposing to follow Y instead of X? (It sounds that way but I want to make sure I’m not misunderstanding.)
Assuming the answer is yes, is that because you think that trying to predict what most people will recognize as moral is more likely to lead to what is actually moral than directly trying to figure it out yourself? Or is it because you want to be recognized by future people as being moral and following Y is more likely to lead to that result?
If we define Z as what most people recognize as moral today, then I think most people end up doing Z, not X. And Y is arguably a lot better than Z.
I’m also sympathetic to your second paragraph. Presumably a lot of the people I gave as examples would at least claim to be following X. Since their actions are no longer ones we consider moral, they were plausibly wrong about their X, and there’s no reason to believe we will be any more accurate. Y seems more accessible in that regard.
I’m trying to walk a pretty thin line here between taking this argument seriously and admitting to full-on moral relativism. Thus the disclaimer at the top of the post.
Why not argue it, then? The OP takes some premise along the lines of “morality gets objectively better over time” as an unstated given. Saying it’s arguable, but not arguing it, is not much of an improvement.
You don’t need to argue for moral relativism to think that change in moral norms is not always change for the better.
In the US, morals are changing to give homosexuals more rights, and in Russia they are changing to give them fewer rights.