By the definitions above, I’m a unitary but not an absolutism theorist. I would describe rationally binding constraints as those that govern prudence, not morality; one can be perfectly prudent without being moral (indeed, if one does not have morality among one’s priorities, perfect prudence could require immorality). A brief sketch of my moral theory can be found here.
Why is there only one particular morality?
What would it mean for there to be several? I think morality drops out of personhood. It’s possible that other things drop out of personhood, too, or that categories other than persons produce their own special results (although I don’t know what any of that might look like), but I wouldn’t refer to such things by the same name; that would just be confusing. If there were several moralities, it’s unclear which would bind actors or how they’d interact. Of course people have all kinds of preferences, but these govern what it’s prudent for those actors to do and what axiology is likely to inform their attempts at world-steering, not what is moral.
Where does morality come from?
People. Only people are morally obliged to do or not do things. Only people have rights that make it particularly moral or immoral to do or not do things with them. (I have a secondary feature to my system that still only constrains people but doesn’t refer so specially to acting on them, of which I am less confident; it’s a patch for incompleteness, not a grounding principle.) Rights and the obligation to respect them are just a thing that happens when something complicated and persony exists.
Are moral facts contingent; could morality have been different?
Only cosmetically. There could have failed to be any people, or there could be only one person in the world who would find it practically impossible to violate their own rights, or people so far-flung that they couldn’t interact in any potentially immoral way. But given the existence of people who can interact with each other, I think morality is a necessity.
Is it possible to make it different in the future?
Only cosmetically. If there were no people—or if everyone’s preferences changed so they always waived all their rights—or something, then morality could cease to be an interesting feature of the world, but it would still be there.
Why should we care about (your) morality?
Caring is not even morally obligatory (although compliance is), let alone rationally required.
That there are many possible moral intuitions or axioms that one could base one’s morality on, with no objective criteria for saying which set of intuitions or axioms is the best one? Your basic axioms say that (to simplify a lot) personhood grants rights and morality is about respecting those rights, while a utilitarian could say that suffering is bad and pleasure is good and morality is about how to best minimize suffering and maximize pleasure. Since all morality ultimately reduces to some set of axioms that just have to be taken as given, I am in turn confused about what it would even mean to say that there is only one correct set of them. (There obviously is some set of axioms that is the only correct one for me, but moral realism seems to imply some set that would be the only correct one for everybody.)
Well, yes, I suppose this is literally what that would mean, but I don’t see much reason to call any particular thing chosen out of a grab bag “morality” instead of “prudence” or “that thing that Joe does” or “a popular action-tree-pruning algorithm”.
Your theory of morality is certainly complex and well thought out, but I think it is based on an assertion (“persons have rights, which it is wrong to violate”) that isn’t established in any sort of traditionally realist way. Indeed, I think you agree with me that since absolutism theory is false, only those who prefer to recognize rights (or, alternatively, are caught in some regulatory scheme that enforces those rights) have a reason to recognize rights.
Alternatively, as Kaj mentioned, there are other systems of morality, like utilitarianism, that also capture a lot of what is meant by morality, and there aren’t any grounds to dismiss them as inferior. In an essay I wrote, “Too Many Moralities”, the place I choose to carve reality around the word “morality” is whether the “end” holds as its goal acting not with regard only to the self, but rather with regard to the direct or indirect benefit of others. If it does, it counts as “morality”, and if it doesn’t, it does not. I don’t personally yet see any reason why a particular theory deserves the special treatment of being singled out as the “one, true theory of morality”.
I’d appreciate your thoughts on the matter because it could help me understand (and perhaps even sympathize with) the unitary perspective a lot more.
Hmm. I’m not sure I understand your perspective. I’m happy to call all sorts of incorrect moralities “things based on moral intuition”, even if I think the extrapolation is wrong; does that help?
Why do you think their extrapolation is wrong? And what does “wrong” mean in that context?
I’m not sure I know what you mean by the first question. Regarding the second, it means that they have not arrived at the (one true unitary) morality, at least as far as I know. If someone looks at an optical illusion like, say, the Muller-Lyer, they base their conclusions about the lengths of the lines they’re looking at on their vision, but reach incorrect conclusions. I don’t think deriving moral theory from moral intuition is that straightforward or that it’s fooled in any particularly analogous way, but that’s about what I mean by someone extrapolating incorrectly from moral intuitions.
I think that he meant something like:
You seem to be saying that while different people can have different moralities, many (most?) of the moralities that people can have are wrong.
You also seem to be implying that you consider your morality to be more correct than that of many others.
Since you believe that there are moralities which are wrong, and that you have a morality which is, if not completely correct then at least more correct than the moralities of many others, that means that you need to have some sort of a rule for deciding what kind of a morality is right and what kind of morality is wrong.
So what is the rule that makes you consider your morality more correct than e.g. consequentialism? What are some of the specific mistakes that e.g. consequentialism makes, and how do you know that they are mistakes?
Sorry for the long gap between this response and the previous one, but I’m still interested. With the Muller-Lyer illusion, you can demonstrate that it’s an illusion by using a ruler. Following your analogy, how would you demonstrate that an incorrect moral extrapolation was similarly in error? Is there a moral “ruler”?
Not one that you can buy at an office supply store, at any rate, but you can triangulate a little using other people and of course checking for consistency is important.
So what is moral is what is the most popular among all internally consistent possibilities?
No, morality is not contingent on popularity.
I’m confused. Can you explain how you triangulate morality using other people?
Mostly, they’re helpful for locating hypotheses.
I’m still confused, sorry. How do you arrive at a moral principle and how do you know it’s not a moral illusion?
You can’t be certain it’s not a moral illusion, I hope I never implied that.
You’re right; you haven’t. Do you put any probability estimate on whether a certain moral principle is not an illusion? If so, how?
I don’t naturally think in numbers and decline to forcibly attach any. I could probably order a list of statements from more to less confident.
By what basis do you make that ordering?
I’m not sure what you mean by this question.
You say you can order a list of statements from more to less confident. Say you’re more confident in Moral Principle A than in Moral Principle B. But how do you know that? Why aren’t you more confident in Moral Principle B than in Moral Principle A? I imagine you have some criteria for determining your confidence in moral principles, and thus their ordering, but I don’t know what those criteria are.
Someone has taken a dislike to this thread, so I’m going to tap out now.
Thanks for the conversation.
Note: What I think Alicorn is saying (and I think it makes a lot of sense) is that those “axioms” can be derived from the notion of “personhood” or “humanity”. That is, given that humans are the way they are, we can derive from that some rules about how to behave. These rules are not truly universal, as aliens would not have them, or be in any way obliged to come up with them. (Of course, they would have their own separate system, but calling that system a form of morality would be distorting the meaning of the word.)
No. Personhood ≠ humanity. If we find persony aliens I will apply the same moral system to them. Your interpretation seems to cross the cosmetic features of what I’m saying with some of the deeper principles of what Eliezer tends to say.
Ah. OK, sorry for misinterpreting you. This is just what I got from what you wrote, but of course, the illusion of transparency comes into play.
Interesting thoughts. Definitely agree that morality comes from people, and specifically their interactions with each other. Although I would additionally clarify that, in my case, I consider that morality (as opposed to a simple action decided by personal gain or benefit) comes from the interaction between sentients where one or more can act on another based on knowledge not only of their own state but also of the state of that other. This is because I consider any sentient to have some nonzero moral value to me, but am not sure if I would consider all of them persons. I am comfortable thinking of an ape or a dolphin as a person, but I think I do not give a mouse the same status. Nevertheless, I would feel some amount of moral wrongness involved in causing unnecessary pain to the mouse, since I believe such creatures to be sentient and therefore capable of suffering.
I’m not sure how the rest of my morality compares to yours, though. I don’t think there is any one morality, or indeed that moral facts exist at all. Now, this does not mean that I subscribe to multiple moralities, especially those whose actions and consequences directly contradict each other. I simply believe that if one of my highest goals is the protection of sapient life, and someone else’s highest goal is the destruction of it, I cannot necessarily expect that I can ever show them, with any facts about the world, that their morality is wrong. I could only say that it was a fact about the world that their morality is in direct contradiction with mine.
Now I don’t believe that anything I’ve said above about morality (which was mostly metaethics anyway) precludes my existence or anyone else’s existence as a moral actor. In fact, all people, by their capability to make decisions based on their knowledge of the present state of others, and their ability to extrapolate that state into the future based on their actions, are automatically moral actors in my view of things. I just don’t necessarily think they always act in accordance with their own morals or have morals mutually compatible with my morals.
Nevertheless, I think that facts are very useful in discussing morality, because sometimes people are not actually in disagreement with each other’s highest moral goals—they simply have a disagreement about facts and if that can be resolved, they can agree on a mutually compatible course of action.