A few thoughts, hopefully useful for you:
Deontological morality is simply an axiom. “You should do X!” End of discussion.
If you want to continue the discussion, for example by asking “why?” (why this specific axiom, and not any other), you are outside of its realm. The question does not make sense for a deontologist. At best they will provide you a circular answer: “You should do X, because you should do X!” An eloquent deontologist can make the circle larger than this, if you insist.
On the other hand, any other morality could be seen as an instance of deontological morality for a specific value of “X”. For example “You should maximize the utility of the consequences of your choices” = consequentialism. (If you say that we should maximize the utility of consequences because of some Y, for example because it makes people happy, again the question is: why Y?)
So every normative morality has its axioms, and any evaluation of which axioms are better must already use some axioms. Even if we say that e.g. self-consistent axioms seem better than self-contradictory axioms, even that requires some axiom, and we could again ask: “why”?
There is no such thing as a mind starting from a blank slate and ever achieving anything other than a blank state, because… seriously, what mechanism would it use to make its first step? Same thing with morality: if you say that X is a reason to care about Y, you must already care about X, otherwise the reasoning will leave you unimpressed. (Related: Created Already In Motion.)
So it could be said that all moralities are axiomatic, and in this technical sense, all of them are equal. However, some of those axioms are more compatible with a human mind, so we judge them as “better” or “making more sense”. It is a paradox that if we want to find a good normative morality, we must look at how human brains really work. And then if we find that human brains somehow prefer X, we can declare “You should do X” a good normative morality.
Please note that this is not circular. It does not mean “we should always do what we prefer”, but rather “we prefer X; so now we forever fix this X as a constant; and we should do X even if our preferences later change (unless X itself explicitly says how our actions should change with changes in our future preferences)”.
As an example, let’s suppose that my highest value is pleasure, and I currently like chocolate, but I am aware that my taste may change later. Then my current preference X is that I should eat what I like, whether that is chocolate or something else. Even if today I can’t imagine liking anything else, I still wish to keep this option open. On the other hand, let’s suppose I love other people, but I am aware that in the future I could accidentally become a psychopath who loves torturing people. Then my current preference X is that I should never torture people. I am aware of the possible change, but I disagree with it now. There is a difference between a possible development that I find morally acceptable and a possible development that I find morally unacceptable, and that difference is encoded in my morality axiom X.
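To make that distinction concrete, here is a minimal sketch in Python. The function names and the toy “tastes” dictionary are purely illustrative, not anything from the comment itself; the only point is that one axiom is written as a function of whatever I happen to like later, while the other is written as a constant that deliberately ignores such changes.

```python
# Hypothetical illustration: an axiom that defers to future tastes
# versus an axiom fixed now, regardless of how my tastes later drift.

def eating_axiom(tastes):
    # "Eat what I like": the axiom itself consults my tastes at that time.
    return f"eat {tastes['favorite_food']}"

def torture_axiom(tastes):
    # "Never torture people": fixed today, deliberately ignoring future changes.
    return "never torture people"

today = {"favorite_food": "chocolate", "enjoys_torturing": False}
later = {"favorite_food": "vanilla", "enjoys_torturing": True}  # the feared change

print(eating_axiom(today), "->", eating_axiom(later))    # tracks the change in taste
print(torture_axiom(today), "->", torture_axiom(later))  # does not track it
```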
The preferences should be examined carefully; I don’t know how to say it exactly, but even if I think I want something now, I may be mistaken. For example, I can be mistaken about some facts, which can lead me to a wrong conclusion about my preferences. So I would prefer a preference-extraction process which would correct such mistakes and would instead select the things I would prefer if I knew all the facts correctly and had enough intelligence to understand them all. (Related: Ideal Advisor Theories and Personal CEV.)
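As a rough sketch of what such a correction step might look like (everything below, including the “glass that is actually poison” example, is my own illustration rather than anything from the comment), the extraction process checks my stated choice against corrected facts instead of taking it at face value:

```python
# Hypothetical sketch of a preference-extraction step that corrects for mistaken facts.

my_beliefs   = {"glass on the table": "water"}
actual_facts = {"glass on the table": "poison"}
deeper_goal  = "drink something safe"

def naive_extraction(stated_choice):
    # Takes my current want at face value.
    return stated_choice

def corrected_extraction(stated_choice):
    # If the corrected facts disagree with what I believed, my stated choice
    # rested on a mistake, so choose again in light of the deeper goal.
    if actual_facts.get(stated_choice) != my_beliefs.get(stated_choice):
        return f"not {stated_choice} (given: {deeper_goal})"
    return stated_choice

print(naive_extraction("glass on the table"))      # glass on the table
print(corrected_extraction("glass on the table"))  # not glass on the table (given: drink something safe)
```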
Summary: To have a normative morality, we need to choose an axiom. But an arbitrary axiom could result in a morality we would consider evil or nonsensical. To consider it good, we must choose an axiom reflecting what humans already want. (Or, for an individual morality, what the individual wants.) This reflection should assume more intelligence and better information than we already have.
Deontological morality is simply an axiom. “You should do X!” End of discussion.

This is not true. Deontological systems have modes of inference, e.g.

P1) You should not kill people.
P2) Sally is a person.
C) You should not kill Sally.

would be totally legitimate to a deontologist.
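A tiny sketch of that kind of rule application (the encoding below is my own and purely illustrative; a real deontic logic would be richer than a set comprehension):

```python
# Hypothetical encoding: a general prohibition plus factual premises
# yield concrete obligations, i.e. a mode of inference.

def derive_prohibitions(rule, entities):
    # P1) You should not <act> people.  P2) x is a person.  =>  C) You should not <act> x.
    return {f"You should not {rule['act']} {e}" for e in entities if rule["is_person"](e)}

people = {"Sally", "Bob"}
rule = {"act": "kill", "is_person": lambda e: e in people}

print(derive_prohibitions(rule, {"Sally", "a rock"}))
# {'You should not kill Sally'}
```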
Viliam! Thank you!
That was very clear, except for one thing. It seems like you are conflating human desires with morality. The obvious (to me) question is: what happens if, instead of currently loving other people and being aware that I may become a psychopath later, I am a psychopath now and realize I may be disposed to become a lover of people later?
I do see how any moral theory becomes deontological at some level. But because the world is complex and the human brain is crazy, I feel like that level ought to be as high as possible, in order to allow the greatest sophistication and mental awareness of our actions and their consequences. (More on this in a second.) Perhaps I am looking at it backwards, and the simplest, most direct moral rules would be better. While that might be true, I feel like if all moral agents were to introspect and reason properly, such a paradigm would not satisfy us. Though I claim no awesome reasoning or introspection powers, it is unsatisfying to me, at least.
Above I mention consequences again. I don’t think this is question-begging, because I think I can turn your argument around. Any consequentialism says “don’t do X, because X would cause Y, and Y is bad”. Any morality, including deontological theories, can be interpreted as saying the same thing, one level down. So: “don’t do X, not because X would cause Y, but because it might and we aren’t sure, so let’s just say X is bad. Therefore, don’t do X.” I don’t think this is faulty reasoning at all. In fact, I think it is a safe bet most of the time (very much in the spirit of Eliezer’s Ethical Injunctions). What concerns me about deontology is that it seems absolute. This is why I prefer the injunctions over old-school deontology: they take into account our error-prone and biased brains.
Thanks for the discussion!
It seems like you are conflating human desires with morality.

I am probably using the words incorrectly, because I don’t know how philosophers define them, or even whether they can agree on a definition. I essentially used “morality” to mean “any system which says what you should do”, and added the observation that if you take literally any such system, most of them will not fit your intuition of morality. Why? Because they recommend things you find repulsive or just stupid. But this is a fact about you, or about humans in general, so in order to find “a system which says what you should do, and which makes sense and is not repulsive”, you must study humans. Specifically, human desires.
In other words, I define “morality” as “a system of ‘shoulds’ that humans can agree with”.
Paperclip maximizers, capable of reflexivity and knowing game theory, could derive their own “system of ‘shoulds’” they could agree with. It could include rules like “don’t destroy your neighbor’s two paperclips just to build one yourself”, which would be similar to our morality, but that’s because the game theory is the same.
But it would be game theory plus paperclip-maximizer desires. So even if it contained some concepts of friendship and non-violence (cooperating with each other in iterated Prisoner’s Dilemmas) which would make all human hippies happy, when given the choice of “sending all sentient beings into an eternal hell of maximum suffering in exchange for a machine that tiles the universe with paperclips”, that would seem to them like a great idea. Don’t ever forget it when dealing with paperclip maximizers.
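For instance, here is a minimal iterated Prisoner’s Dilemma sketch (the payoff numbers and the tit-for-tat strategy are standard textbook choices, not anything specific to this thread). The cooperative outcome falls out of the payoff structure alone, whether the points stand for human welfare or for paperclips.

```python
# Two tit-for-tat players in an iterated Prisoner's Dilemma: cooperation emerges
# from the game structure, independent of what the payoff units "mean".

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=10):
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a, b = tit_for_tat(b_hist), tit_for_tat(a_hist)
        a_hist.append(a)
        b_hist.append(b)
        pa, pb = PAYOFF[(a, b)]
        a_score += pa
        b_score += pb
    return a_score, b_score

print(play())  # (30, 30): stable mutual cooperation
```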
what happens if [...] I am a psychopath now and realize I may be disposed to become a lover of people later?

If I am a psychopath now, I don’t give a **** about morality, do I? So I decide according to whatever psychopaths consider important. (I guess it would be according to my whim at the moment.)
In other words, I define “morality” as “a system of ‘shoulds’ that humans can agree with”.

If you want a name for your position on this (which, as far as I can tell, is very well put), a suitable philosophical equivalent is Moral Contractualism, à la Thomas Scanlon in “What We Owe To Each Other.” He defines certain kinds of acts as morally wrong thus:

An act is wrong if and only if any principle that permitted it would be one that could reasonably be rejected by people moved to find principles for the general regulation of behaviour that others, similarly motivated, could not reasonably reject.