Yes, I read that post. (Thank you for putting in all this time clarifying your view.)
I don’t think you understood my question, since “The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie.” does not treat ‘not lying’ as a terminal value but rather as an instrumental one. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act so as not to be forced to lie? For example, let’s say you promised someone you wouldn’t get fired in your first week at work, and the boss will fire you if he learns that you cheered for a certain team. Would you say that you shouldn’t watch that game, since doing so will force you either to lie to the boss or to break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.)
If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize it, whereas you are arguing that there are other deontological values; but you would agree that you should be prudent in achieving your deontological obligations.
(We can put virtue ethics to the side if you want, but won’t your deontological commitments dictate which virtues you must have, for example honesty or even courage, so as to act in line with your deontological obligations?)
That’s a very long paragraph; I’m going to do my best, but some things may have been lost in the wall of text.
I understand the difference between terminal and instrumental values, but your conclusion doesn’t follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of “large” is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.
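The chocolate-cake tradeoff above can be made concrete with a toy utility function. All the numbers here are invented purely for illustration; nothing is claimed about anyone’s actual values:

```python
# Toy utility function with two terminal values: not-lying and chocolate cake.
# The weights are hypothetical, chosen only to make the tradeoff visible.

LIE_PENALTY = 10.0   # terminal disvalue assigned to telling one lie
CAKE_VALUE = 0.5     # terminal value assigned to one slice of cake

def utility(lies_told: int, cake_slices: int) -> float:
    return CAKE_VALUE * cake_slices - LIE_PENALTY * lies_told

# With these weights, one lie is worth it only if it gains you more than
# LIE_PENALTY / CAKE_VALUE = 20 slices; that threshold is where "large"
# gets defined in this utility function.
print(utility(0, 0))    # 0.0: tell no lie, get no cake
print(utility(1, 25))   # 2.5: one lie for 25 slices comes out ahead
print(utility(1, 15))   # -2.5: one lie for 15 slices does not
```

The point is structural, not numerical: as soon as there are two terminal values, some exchange rate between them exists somewhere in the function.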
a deontological obligation to maximize utility
AAAAAAAAAAAH
you should be prudent in achieving your deontological obligations
It is prudent to be prudent in achieving your deontological obligations. Putting “should” in that sentence flirts with equivocation.
won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations
I think it’s possible to act in a completely morally acceptable way according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it’s not impossible.
Thank you, I think I understand this now.
To make sure I understand you correctly: are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent)
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future)
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different
I still don’t understand your view of utilitarian consequentialism. If ‘maximizing utility’ isn’t a deontological obligation emanating from personhood or the like, where does it come from?
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.
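The corner case in (c) can be spelled out as two evaluation rules side by side. This structure is my sketch of the contrast, not a claim about anyone’s exact system:

```python
# Corner case: you can lie once today to prevent n lies tomorrow.
# A lie-minimizing consequentialist compares outcome totals; a deontologist
# with a side constraint against lying checks only whether *this act* is a lie.

def consequentialist_permits(lies_if_you_lie_now: int, lies_if_you_dont: int) -> bool:
    # Pick the action whose outcome contains fewer total lies.
    return lies_if_you_lie_now < lies_if_you_dont

def deontologist_permits(act_is_a_lie: bool) -> bool:
    # The constraint binds regardless of downstream totals.
    return not act_is_a_lie

# One lie now vs. ten lies later:
print(consequentialist_permits(1, 10))  # True: lying now minimizes total lies
print(deontologist_permits(True))       # False: forbidden no matter the totals
```

Note that the consequentialist rule takes the whole outcome as input while the deontological rule takes only the act, which is exactly why the systems agree in ordinary situations and split in corner cases like this one.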
I don’t think I understand the last paragraph. Can you rephrase?
Why don’t you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn’t deontological where does it come from?
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it’s an important distinction.
Edit: And even more specifically, it’s total utilitarian.
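One way to see the distinction being insisted on here, sketched with made-up welfare numbers: consequentialism only says “rank acts by their outcomes under some evaluation function,” while total utilitarianism is the specific member of that family whose function sums welfare over everyone:

```python
# Consequentialism as a family: any function from outcomes to scores will do.
# An outcome here is a list of per-person welfare levels (numbers invented
# purely for illustration).

def best_act(outcomes: dict, score) -> str:
    # Generic consequentialist rule: pick the act whose outcome scores highest.
    return max(outcomes, key=lambda act: score(outcomes[act]))

total_utilitarian = sum   # score an outcome by summing everyone's welfare
egalitarian = min         # a different consequentialism: raise the worst-off

outcomes = {
    "act_a": [10, 10, 10],   # everyone does moderately well
    "act_b": [35, 1, 1],     # one person does great, two do badly
}

print(best_act(outcomes, total_utilitarian))  # act_b (total 37 beats 30)
print(best_act(outcomes, egalitarian))        # act_a (worst-off gets 10, not 1)
```

Both rules are consequentialist in form; only the first is total utilitarian, which is why the imperative to maximize summed utility doesn’t follow from consequentialism alone.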
Keep up the good work. Any idea where this conflation might have come from? It’s widespread enough that there might be some commonly misunderstood article in the archives.
I don’t know if it’s anything specific… classic utilitarianism is the most common form of consequentialism espoused on LessWrong, I think, so it could be as simple as “the most commonly encountered member of a category is assumed to represent the whole category”.
It could also be because utilitarianism was the first (?) form of consequentialism to be put forth by philosophers. Certainly it predates some of the more esoteric forms of consequentialism. I’m pretty sure it’s also got more famous philosophers defending it, by rather a large margin, than any other form of consequentialism.
It’s VNM consequentialist, which is a broader category than the common meaning of “utilitarian”.
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from—wanting things, appreciating the nature of cause and effect, etc.
Thank you for all of your clarifications, I think I now understand how you are viewing morality.
Could you elaborate on what this thing you call “morality” is?
To me, it seems like the “morality” that deontology aspires to be, or to represent / capture, doesn’t actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of “morality” you seem to be referring to.