For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying, or not being lied to, might well be a terminal value; why not? The you that lies or doesn’t lie is part of the world. A person may dislike being lied to and value a world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by making the outcome worse on net for other reasons, it shouldn’t be done (and some of your examples may qualify for that).
A person may dislike being lied to and value a world where such lying occurs less, irrespective of whether they know of said lying.
In my opinion, this is a lawyer’s attempt to pass off deontologism as consequentialism. You can, of course, reformulate the deontologist rule “never lie” as the consequentialist “I assign an extremely high disutility to situations where I lie”. In the same way you can recast consequentialist preferences as the deontologist rule “in any case, do whatever maximises your utility”. But in doing that, the point of the distinction between the two ethical systems is lost.
My comment is about the relationship between the concepts “make the world a better place” and “makes people happier”. cousin_it’s statement:
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I saw this as an argument, in contrapositive form, for the following: if we take a consequentialist outlook, then “make the world a better place” should be the same as “makes people happier”. However, this is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but that’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never receive information about. If we don’t make this distinction, or something similar to it, there’s no theoretical difference between deontologism and consequentialism (each one can be implemented perfectly on top of the other), and this whole discussion is pointless, and so is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one’s ontological model is a separate question from whether a given agent can trace those consequences. What if the fact of the lie being present or not was encrypted using a one-way injective function, with the original forgotten but the ciphertext retained? In principle, you can figure out which world is which (decipher it), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but the way a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since the ability to draw logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don’t need to distinguish them at all, although it doesn’t make much sense to introduce a distinction in value if no potential third-party beneficiary can distinguish the cases either (this would just be taking a quotient of the ontology by the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
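To make the “recoverable in principle, not in practice” distinction concrete, here is a minimal Python sketch of the thought experiment above (my own illustration, not part of the original exchange): the lie-bit is hashed together with a high-entropy nonce that is then forgotten, so the retained digest determines which world obtains, yet extracting the bit would require an astronomically long brute-force search.

```python
# Toy sketch of the "one-way injective function" thought experiment.
# Assumption for illustration: SHA-256 over these short records behaves
# injectively for practical purposes.
import hashlib
import secrets

def seal_fact(lied: bool) -> bytes:
    """Commit to the fact, then forget everything except the digest."""
    nonce = secrets.token_bytes(32)          # forgotten immediately after use
    record = (b"lied" if lied else b"truth") + nonce
    return hashlib.sha256(record).digest()   # the only trace the world retains

def recover_fact_in_principle(digest: bytes) -> bool:
    """What an unboundedly patient observer could do: enumerate every nonce."""
    for nonce_int in range(2 ** 256):        # possible in principle, hopeless in practice
        nonce = nonce_int.to_bytes(32, "big")
        if hashlib.sha256(b"lied" + nonce).digest() == digest:
            return True
        if hashlib.sha256(b"truth" + nonce).digest() == digest:
            return False
    raise ValueError("digest matches no sealed record")

digest = seal_fact(lied=True)  # the two worlds now differ only in this digest
```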
The condition for the difference to be observable in principle is much weaker than you seem to imply.
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
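If it helps to see that stipulation formally, here is a toy sketch (my own, not from the thread) of a value function defined over world histories rather than world states: two histories that end in macroscopically identical states receive different values when one of them contains a forgotten lie.

```python
# Toy contrast between a utility over world *states* and one over world
# *histories*.  Both worlds share the same present-day state; only the
# history-based utility can penalize the forgotten lie.
history_1 = ("A lies to B", "A forgets the lie")   # World 1
history_2 = ()                                      # World 2: the lie never happened
final_state_1 = "identical present-day state"       # what World 1 looks like now
final_state_2 = "identical present-day state"       # what World 2 looks like now

def state_utility(state: str) -> float:
    """Consequentialism over states: a function of the present only."""
    return 1.0                                       # identical states, identical value

def history_utility(history: tuple) -> float:
    """Consequentialism over histories: a forgotten lie still lowers value."""
    penalty = sum(1 for event in history if "lies" in event)
    return 1.0 - 0.1 * penalty

assert state_utility(final_state_1) == state_utility(final_state_2)  # a tie
assert history_utility(history_1) < history_utility(history_2)       # World 2 preferred
```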
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”
Right. A “third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or to cause a behavior in an existing agent, that works towards optimizing your value.
but somehow it’s okay to lie and then erase my memory of lying. Is that right?
It’s not okay, in the sense that if you introduce the concept of the you-that-decided-to-lie, existing in the past but not in the present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, the one that distinguished the states of the world containing lying from those not containing it.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contain you-lying can’t, in principle (in the strong sense described in the previous comment), be distinguished from the ones that don’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
I can’t believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
I can’t believe you took the exact cop-out I warned you against.
Not surprisingly, since I was arguing against that warning, and cited it in my comment.
restrict your attention to consequentialists whose terminal values have to be observable.
What does this mean? Consequentialist values are about the world, not about observations (though your words don’t seem to amount to a disagreement with this position, hence the ‘what does this mean?’). The consequentialist notion of values allows a third party to act for your benefit, in which case you don’t need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don’t need to know about these options in order to benefit.
But in doing that, the point of the distinction between the two ethical systems is lost.
If so, maybe we want that.
I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.
Less directly, a person may value a world where beliefs were more accurate—in such a world, both lying and bullshit would be negatives.