Viliam! Thank you!

That was very clear, except for one thing. It seems like you are conflating human desires with morality. The obvious (to me) question is: what happens if, instead of currently loving other people and being aware that I may become a psychopath later, I am a psychopath now and realize I may be disposed to become a lover of people later?
I do see how any moral theory becomes deontological at some level. But because the world is complex and the human brain is crazy, I feel like that level ought to be as high as possible, in order to retain the greatest possible sophistication and awareness of our actions and their consequences. (More on this in a second.) Perhaps I am looking at it backwards, and the simplest, most direct moral rules would be better. While that might be true, I feel like if all moral agents were to introspect and reason properly, such a paradigm would not satisfy us. Though I claim no awesome reasoning or introspection powers, it is unsatisfying to me, at least.
Above I mention consequences again. I don’t think this is question-begging, because I think I can turn your argument around. Any consequentialism says “don’t do X, because X would cause Y, and Y is bad”. Any morality, including deontological theories, can be interpreted as saying the same thing one level down: “don’t do X, not because X would cause Y, but because it might and we aren’t sure, so let’s just say X is bad. Therefore, don’t do X.” I don’t think this is faulty reasoning at all. In fact, I think it is a safe bet most of the time (very much in the spirit of Eliezer’s Ethical Injunctions). What concerns me about deontology is that it seems absolute. This is why I prefer the injunctions over old-school deontology: they take into account our error-prone and biased brains.

Thanks for the discussion!
It seems like you are conflating human desires with morality.
I am probably using the words incorrectly, because I don’t know how philosophers define them, or even whether they can agree on a definition. I essentially used “morality” to mean “any system which says what you should do”, and added an observation that if you consider literally all such systems, most of them will not fit your intuition of morality. Why? Because they recommend things you find repulsive or just stupid. But this is a fact about you, or about humans in general, so in order to find “a system which says what you should do, and which makes sense and is not repulsive”, you must study humans. Specifically, human desires.
In other words, I define “morality” as “a system of ‘shoulds’ that humans can agree with”.
Paperclip maximizers, capable of reflexivity and knowing game theory, could derive their own “system of ‘shoulds’” they could agree with. It could include rules like “don’t destroy your neighbor’s two paperclips just to build one yourself”, which would be similar to our morality, but that’s because the game theory is the same.
But it would be game theory plus paperclip-maximizer desires. So even if it contained some concepts of friendship and non-violence (cooperating with each other in the iterated Prisoner’s Dilemma), which would make all human hippies happy, sending all sentient beings into an eternal hell of maximum suffering in exchange for a machine that tiles the universe with paperclips would still seem to them like a great idea. Don’t ever forget that when dealing with paperclip maximizers.
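A minimal sketch of that game-theory point (a toy tit-for-tat example with standard Prisoner’s Dilemma payoffs, not anything taken from the comments above): two reciprocating agents end up cooperating purely because of the payoff structure, whether the payoff units are called “utility” or “paperclips”.

```python
# Toy illustration (hypothetical example, not from the original comments):
# two reciprocating agents in an iterated Prisoner's Dilemma. The cooperation
# that emerges depends only on the payoff structure, so it looks the same
# whether the payoffs are measured in human utility or in paperclips.

# Standard PD payoffs: (my_payoff, their_payoff) indexed by (my_move, their_move).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=10):
    a_history, b_history = [], []  # each agent sees the other agent's past moves
    a_total, b_total = 0, 0
    for _ in range(rounds):
        a_move = tit_for_tat(b_history)  # A reacts to B's past moves
        b_move = tit_for_tat(a_history)  # B reacts to A's past moves
        a_pay, b_pay = PAYOFFS[(a_move, b_move)]
        a_total += a_pay
        b_total += b_pay
        a_history.append(a_move)
        b_history.append(b_move)
    return a_total, b_total

if __name__ == "__main__":
    # Both agents settle into mutual cooperation: 3 points each per round.
    print(play())  # -> (30, 30)
```

The sketch illustrates only the narrow point that reciprocity falls out of the payoff structure; it says nothing about what the agents ultimately value, which is exactly why the resulting “friendliness” is no comfort.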
what happens if [...] I am a psychopath now and realize I may be disposed to become a lover of people later?
If I am a psychopath now, I don’t give a **** about morality, do I? So I decide according to whatever psychopaths consider important. (I guess it would be according to my whim at the moment.)
In other words, I define “morality” as “a system of ‘shoulds’ that humans can agree with”.
If you want a name for your position on this (which, as far as I can tell, is very well put), a suitable philosophical equivalent is Moral Contractualism, à la Thomas Scanlon in “What We Owe to Each Other.” He defines certain kinds of acts as morally wrong thus:
An act is wrong if and only if any principle that permitted it would be one that could reasonably be rejected by people moved to find principles for the general regulation of behaviour that others, similarly motivated, could not reasonably reject.