This argument is treating “moral worth” like an unknown scientific truth that we are trying to discover, which seems incorrect to me. Moral judgements vary over time (rather than converging on some absolute truth) and are handed down from society to an individual, or from an individual to themselves. The more useful question to ask is not “is a fetus morally significant?” (in some absolute sense), but “what are the chances that I will either regret the abortion or be punished for it in the future?”. This may give a different answer.
This is a horrendous way to do ethics. It leads to concluding that ethical behavior is whatever I can get away with.
This is just confusing moral anti-realism with egoism. The point is that it makes no sense for anti-realists to worry about the probability of being mistaken about the truth of a moral fact, but it might make sense to worry about the probability of your value system evolving in a direction that causes you to regret prior decisions. Although I suspect that it only makes sense to worry about this when your uncertainty is very high (i.e. you are confused about the issue and are not sure how you will feel after you’ve had a chance to think it through).
You realize that’s an argument against moral anti-realism right?
If it is, it’s not a very good one.
Regardless, the comment that I replied to above is either confused or disingenuous. It is entirely consistent for anti-realists to agonize over ethical decisions, act with strictly altruistic motivations and all the rest of it.
With a sufficiently long-term view, “what one can get away with” (including considerations of signalling, effects on self, etc.) is not as scary as it sounds. It’s basically just near-mode utilitarianism. And to me it’s the only ethics that doesn’t seem to rely on confused notions like unknown absolute moral worth.
Most ethics discussions, even on LW, are more about signalling and bullying people into doing what you want them to do, rather than describing how decision making actually works. I’d prefer that there was somewhere we could stay descriptive instead of prescriptive, since I think there’s a lot more insight to be had that way. Separate the game-theoretical negotiation of establishing a society’s ethics (which can take place anywhere) from the theoretical basis of how it all works (elucidation of which can only be done by those sufficiently versed in rationality).
Only if you believe there is some universal force that ensures good wins in the end.