“No. There’s morality, and then there’s all the many things that are not morality.”
Is this only a linguistic argument about what to call morality? For example, virtue ethics claims that all areas of life are part of morality, since ethics is about human excellence, whereas you claim that ethics has to do only with obligations and rights. Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (as in virtue ethics or utilitarianism)?
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person? Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
Is this only a linguistic argument about what to call morality?
You could re-name everything, but if you renamed my deontological rules “fleeb”, I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I’m pretty sure it’s not just linguistic.
Is there a reason you prefer to limit the domain of morality?
Because there’s already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards—“prudence”, “axiology”.
Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?
Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person?
Yes; I have a secondary rule which for lack of better terminology I call “the principle of needless destruction”. It states that you shouldn’t go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.
Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
“Humanity” is the wrong word; I apply my ethics across the board to all persons regardless of species. I’m not sure I understand the question even if I substitute “personhood”.
Let’s take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, and that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won’t the different frameworks be mutually supportive, since the obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations, and justice is a virtue which emanates from the obligations not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don’t end up in a situation where we will act unjustly.
I think I am misunderstanding something in your position, since it seems to me that you don’t disagree with consequentialism on the need to calculate, but rather on what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous terminal values, such as not lying, not stealing, not being destructive, etc.).
By obligations which emerge from a person’s personhood and which are not waivable, I mean obligations that emerge from the self rather than in relation to another’s rights, and which therefore cannot be waived. To take an example (which I know you do not consider an obligation, but which will serve to illustrate the class, since many people have this belief): a person has an obligation to live out their life as a result of their personhood, and is therefore not allowed to commit suicide, since that would be unjust to the self (or nature, or god, or whatever).
What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, and that telling the truth is a terminal value which we must maximize in a consequentialist-type equation?
The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don’t fuse this prettily unless you badly misunderstand at least two of them, I’m afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don’t think this works.)
I think I am misunderstanding something in your position, since it seems to me that you don’t disagree with consequentialism on the need to calculate, but rather on what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous terminal values, such as not lying, not stealing, not being destructive, etc.).

Absolutely not. Did you read Deontology for Consequentialists?
I still don’t know what you mean by “emerge from the self”, but if I understand the class of thing you’re pointing out with the suicide example, I don’t think I have any of those.
Yes, I read that post. (Thank you for putting in all this time clarifying your view.)
I don’t think you understood my question, since “The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie” is not viewing ‘not lying’ as a terminal value but rather as an instrumental value. Treating it as a terminal value would mean that lying is bad in itself, not because of what it will lead to (as you explain in that post). But if that is the case, must I act so as not to end up in a situation where I will be forced to lie? For example, let’s say you made a promise to someone not to get fired in your first week at work, and if the boss knows that you cheered for a certain team he will fire you. Would you say that you shouldn’t watch that game, since otherwise you will be forced either to lie to the boss or to break your promise to keep your job? (Please fix any loopholes you notice, since this is only meant for illustration.)
If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values; but you would agree that you should be prudent in achieving your deontological obligations.
(We can put virtue ethics to the side if you want, but won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)
That’s a very long paragraph; I’m going to do my best, but some things may have been lost in the wall of text.
I understand the difference between terminal and instrumental values, but your conclusion doesn’t follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of “large” is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.
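(To make the trade-off above concrete, here is a minimal illustrative sketch; the function names and numbers are entirely my own assumptions, not anything specified in the discussion. It shows a two-term utility function in which lying carries a fixed terminal penalty and cake carries terminal value per piece, so the agent lies exactly when enough cake is on offer.)

```python
# Minimal sketch of "multiple terminal values"; all constants are
# arbitrary illustrative assumptions.

LIE_PENALTY = 10.0   # terminal disvalue of telling one lie
CAKE_VALUE = 0.5     # terminal value of one piece of chocolate cake

def utility(lies_told: int, cake_obtained: int) -> float:
    """Utility for an agent that terminally values both not-lying and cake."""
    return -LIE_PENALTY * lies_told + CAKE_VALUE * cake_obtained

def should_lie_for(cake_offered: int) -> bool:
    """Lie exactly when the cake on offer outweighs the terminal lie penalty."""
    return utility(1, cake_offered) > utility(0, 0)

# With these numbers, "a large amount" means more than 20 pieces of cake.
assert should_lie_for(25) and not should_lie_for(5)
```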
a deontological obligation to maximize utility
AAAAAAAAAAAH
you should be prudent in achieving your deontological obligations
It is prudent to be prudent in achieving your deontological obligations. Putting “should” in that sentence flirts with equivocation.
won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations
I think it’s possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it’s not impossible.
Thank you, I think I understand this now.

To make sure I understand you correctly: are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent)
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future)
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different
I still don’t understand your view of utilitarian consequentialism: if ‘maximizing utility’ isn’t a deontological obligation emanating from personhood or the like, where does it come from?
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.
I don’t think I understand the last paragraph. Can you rephrase?
Why don’t you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn’t deontological, where does it come from?
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it’s an important distinction.
Edit: And even more specifically, it’s total utilitarian.
Keep up the good work. Any idea where this conflation might have come from? It’s widespread enough that there might be some commonly misunderstood article in the archives.
I don’t know if it’s anything specific… classic utilitarianism is the most common form of consequentialism espoused on Lesswrong, I think, so it could be as simple as “the most commonly encountered member of a category is assumed to represent the whole category”.
It could also be because utilitarianism was the first (?) form of consequentialism to be put forth by philosophers. Certainly it predates some of the more esoteric forms of consequentialism. I’m pretty sure it’s also got more famous philosophers defending it, by rather a large margin, than any other form of consequentialism.

It’s VNM consequentialist, which is a broader category than the common meaning of “utilitarian”.
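(A small illustrative sketch of the distinction drawn in this sub-thread; the functions and numbers are my own assumptions. Each ranking below is consequentialist in that it scores an outcome only by its consequences, but only the first is total utilitarianism, so “maximize the sum of everyone’s welfare” names one member of a broader family.)

```python
# Illustrative only: three consequentialist rankings of an outcome,
# of which only the first is total utilitarianism.
from typing import List

def total_utilitarian(welfares: List[float]) -> float:
    # Total utilitarianism: add up everyone's welfare.
    return sum(welfares)

def maximin(welfares: List[float]) -> float:
    # Still consequentialist, not utilitarian: rank outcomes by the worst-off person.
    return min(welfares)

def lie_averse(welfares: List[float], lies_told: int) -> float:
    # Also consequentialist: treats lies told as a bad consequence in its own right.
    return sum(welfares) - 10.0 * lies_told
```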
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from—wanting things, appreciating the nature of cause and effect, etc.
Thank you for all of your clarifications; I think I now understand how you are viewing morality.

Could you elaborate on what this thing you call “morality” is?
To me, it seems like the “morality” that deontology aspires to be, or to represent / capture, doesn’t actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of “morality” you seem to be referring to.