The position you describe is sensical, but it’s not what people (at least on LW) who think “moral realism” is nonsensical mean by “morality”. You’re not saying anything about ultimate ends (which I’m pretty sure is what NMJablonski, e.g., means by “preferences”); the version of “moral realism” that gets rejected is about certain terminal values being spookily metaphysically privileged.
Actually, I am saying something about ultimate ends, at least indirectly. My position only makes sense if long-term ultimate ends become somehow ‘spookily metaphysically privileged’ over short-term ultimate ends.
My position still contains ‘spookiness’, but it is perhaps a less arbitrary kind of ‘magic’: I’m talking about time-scales rather than laws inscribed on tablets.
How is this different from “if the agents under consideration care about the long run more than the short run”?
Well, I am attempting to claim here that there exists an objective moral code (moral realism) which applies to all agents—both those who care about the long term and those who don’t. Agents who mostly care about the short term will probably be more ethically challenged than agents who easily and naturally defer their gratification. But, in this thread at least, I’m arguing that both short-sighted and long-sighted agents confront the same objective moral code. So, I apparently need to appeal to some irreducible spookiness to justify that long-term bias.