I don’t think morality is objective, but I still care greatly about what a future Holden, one who has reflected more, learned more, etc., would think about the ethical choices I’m making today.
I think that an ethical theory on which baby torture is not objectively wrong is flawed. If there is no objective morality, then reflection and new information cannot guide us toward any sort of “correct” evaluation of our past actions. And I don’t think a preference should turn an anti-realist into a quasi-realist. Is someone who thinks murder is okay, but avoids it out of worry about judgement from others, any more of a realist? That seems like anti-realism plus a preference. There is already an established definition of quasi-realism that seems different from yours, unless I’m misunderstanding you [1].
I mean specifically to point to the changes that you (whoever is reading this) consider to be progress, whether because they are homing in on objective truth, or resulting from better knowledge and reasoning, or for any other good reason. Future-proof ethics is about making ethical choices that will still look good after your and/or society’s ethics have “improved” (not just “changed”).
Reasoning only helps us reach the truth if there is a truth to reach. If there are no moral facts, then it is not really reasoning, and the “knowledge” is useless. Imagine I said that I do not believe ghosts exist, but that I nonetheless hope my future self will look back on my present opinions about ghosts and find them correct, because I expect to learn much more about the qualities and nature of ghosts and become a much more knowledgeable person. The problem is that ghosts have no qualities or nature, because they do not exist.
If I recall your original post correctly, you wanted to use future-proofing as a meta-ethical justification for selecting an ethical system. My point about circularity was that if I’m using future-proofing to pick my ethical system, I can’t justify future-proofing with that very same ethical system. The whole concept of progress, whether individual, societal, or objective, relies on some measure of morality to determine what counts as progress. I don’t have that ethical system if I haven’t used future-proofing to justify it, and I don’t have future-proofing unless I’ve already got some ethical truths.
Imagine I want a good way of determining whether I am good at math. I could have my roommate check my math. How do I know my roommate is good at math? Well, in the past, I checked his math and it was good.
I am a moral realist: I believe there are moral facts, and I believe that through examination of evidence and of our intuitions, we become, generally speaking, a more moral society. I therefore think the future will be more ethical, so future-proof choices and ethical choices will correlate somewhat. I believe this because I believe that I can, in the present, determine ethical truths via my intuition. That is better than future-proofing, because it gets at the heart of what I want.
How would you justify your “quasi-realist” position? You want future Holden to look back approvingly on you. Why? Should others hold this preference? What if I instead wanted past Parrhesia to respect future Parrhesia? Should I weigh that more than future Parrhesia respecting past Parrhesia? I don’t think this is meta-ethically justified. Can you really say there is nothing objectively wrong with torturing a baby for sadistic pleasure?
I think “objective” is not quite specific enough about the axis of variation here. See [The Meaning of Right](https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right), or that entire sequence of posts if you haven’t read it yet. That post talks about a sense of “objective” that means something like “even if you changed my brain to think something else was right, that wouldn’t change what was right.” But the trick is, realists and anti-realists can both have that sort of objectivity!
You seem to want a second meaning of “objective” on top of that, something like “testable”: if we disagree about something, there’s some arbitration process we can use to tell who’s right, at least well enough to make progress over time.
One of the issues with this is that it’s hard to have your cake and eat it too. If you can get new information about morality, that information might be bad from the standpoint of your current morality! The arbitration process might say “baby torture is great”—and if it never says anything objectionable, then it’s not actually giving you new information.
Note that epistemic normativity (correct versus incorrect), pragmatic normativity (works versus fails), and ethical normativity (good versus evil) are not necessarily the same. If they are not, then “works” and “correct” can be used to arrive at “good” without circularity.
I think that morality is objective in the sense you mention in your first paragraph. I think it also has the feature your second paragraph describes, but that isn’t the definition of “objective” in my view; it is merely a consequence of the fact that we have moral intuitions.
Yes, you can get new information about morality that contradicts your current standpoint. But the process could never say anything objectionable like “baby torture is great,” because I am, as a matter of fact, correct about baby torture.
> How would you justify your “quasi-realist” position? You want future Holden to look back approvingly on you. Why? Should others hold this preference? What if I instead wanted past Parrhesia to respect future Parrhesia? Should I weigh that more than future Parrhesia respecting past Parrhesia? I don’t think this is meta-ethically justified. Can you really say there is nothing objectively wrong with torturing a baby for sadistic pleasure?
I don’t think we can justify every claim with reference to some “foundation.” At some point we have to say something like “This is how it seems to me; maybe, having articulated it, it’s how it seems to you too; if not, I guess we can agree to disagree.” That’s roughly what I’m doing with respect to a comment like “I’d like to do things that a future Holden distinguished primarily by having learned and reflected more would consider ethical.”
[1] see: https://en.wikipedia.org/wiki/Quasi-realism