Your theory of morality is certainly complex and well thought out, but I think it is based on an assertion, “persons have rights, which it is wrong to violate”, that isn’t established in any sort of traditionally realist way. Indeed, I think you agree with me that since absolutism theory is false, only those who prefer to recognize rights (or, alternatively, are caught in some regulatory scheme that enforces those rights) have a reason to recognize rights.
Alternatively, as Kaj mentioned, there are other systems of morality, like utilitarianism, that also capture a lot of what is meant by morality, and there aren’t any grounds to dismiss them as inferior. In an essay I wrote, “Too Many Moralities”, I carve reality around the word “morality” according to whether the “end” holds as its goal acting not with regard to the self alone, but with regard to the direct or indirect benefit of others. If it does, it counts as “morality”, and if it doesn’t, it does not. I don’t personally yet see any reason why a particular theory deserves the special treatment of being singled out as the “one, true theory of morality”.
I’d appreciate your thoughts on the matter because it could help me understand (and perhaps even sympathize with) the unitary perspective a lot more.
Hmm. I’m not sure I understand your perspective. I’m happy to call all sorts of incorrect moralities “things based on moral intuition”, even if I think the extrapolation is wrong. Does that help?
Why do you think their extrapolation is wrong? And what does “wrong” mean in that context?
I’m not sure I know what you mean by the first question. Regarding the second, it means that they have not arrived at the (one true unitary) morality, at least as far as I know. If someone looks at an optical illusion like, say, the Muller-Lyer, they base their conclusions about the lengths of the lines they’re looking at on their vision, but reach incorrect conclusions. I don’t think deriving moral theory from moral intuition is that straightforward, or that moral intuition is fooled in any particularly analogous way, but that’s about what I mean by someone extrapolating incorrectly from moral intuitions.
I think that he meant something like:
You seem to be saying that while different people can have different moralities, many (most?) of the moralities that people can have are wrong.
You also seem to be implying that you consider your morality to be more correct than that of many others.
Since you believe that there are moralities which are wrong, and that you have a morality which is, if not completely correct, then at least more correct than the moralities of many others, that means that you need to have some sort of rule for deciding what kind of morality is right and what kind of morality is wrong.
So what is the rule that makes you consider your morality more correct than e.g. consequentialism? What are some of the specific mistakes that e.g. consequentialism makes, and how do you know that they are mistakes?
Sorry for the long gap between this response and the previous one, but I’m still interested. With the Muller-Lyer illusion, you can demonstrate that it’s an illusion by using a ruler. Following your analogy, how would you demonstrate that an incorrect moral extrapolation was similarly in error? Is there a moral “ruler”?
Not one that you can buy at an office supply store, at any rate, but you can triangulate a little using other people, and of course checking for consistency is important.
So what is moral is what is the most popular among all internally consistent possibilities?
No, morality is not contingent on popularity.
I’m confused. Can you explain how you triangulate morality using other people?
Mostly, they’re helpful for locating hypotheses.
I’m still confused, sorry. How do you arrive at a moral principle and how do you know it’s not a moral illusion?
You can’t be certain it’s not a moral illusion; I hope I never implied that.
You’re right; you haven’t. Do you put any probability estimate on whether a certain moral principle is not an illusion? If so, how?
I don’t naturally think in numbers and decline to forcibly attach any. I could probably order a list of statements from more to less confident.
On what basis do you make that ordering?
I’m not sure what you mean by this question.
You say you can order a list of statements from more to less confident. Say you’re more confident in Moral Principle A than in Moral Principle B. But how do you know that? Why aren’t you more confident in Moral Principle B than in Moral Principle A? I imagine you have some criterion for determining your confidence in a moral principle in order to rank it, but I don’t know what that criterion is.
Someone has taken a dislike to this thread, so I’m going to tap out now.
Thanks for the conversation.