My comment is about the relationship between the concepts “make the world a better place” and “makes people happier”. cousin_it’s statement:
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I read this as an argument, in contrapositive form, for the following: if we take a consequentialist outlook, then “make the world a better place” should mean the same as “makes people happier”. However, this is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but that’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never ever receive information about. If we don’t make this distinction or something similar to it, there’s no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, as is a good chunk of LW. Is that the position you take?
Whether the consequences are distinct according to one’s ontological model is a separate question from whether a given agent can trace those consequences. What if the fact of whether or not the lie took place was encrypted using a one-way injective function, with the original forgotten but the ciphertext retained? In principle, you can figure out which is which (decipher it), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but the way a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
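To make the “observable in principle, but not in practice” case concrete, here is a toy sketch of my own (not from the original exchange), using SHA-256 as a stand-in for the one-way encoding of the lie; `seal`, `recover`, and the 40-bit nonce are illustrative choices, and collisions are ignored since the domain is tiny:

```python
import hashlib
import secrets

def seal(lie_happened: bool, nonce_bits: int = 40) -> bytes:
    """Forget the plaintext; keep only a one-way digest of (bit, nonce)."""
    nonce = secrets.randbits(nonce_bits)
    return hashlib.sha256(f"{lie_happened}:{nonce}".encode()).digest()

def recover(digest: bytes, nonce_bits: int = 40) -> bool:
    """In-principle recovery: enumerate every possible preimage.

    This loop terminates eventually, so which world you are in is decipherable
    in principle, but for a large enough nonce nobody will actually do it.
    """
    for lie_happened in (True, False):
        for nonce in range(2 ** nonce_bits):  # infeasible in practice
            if hashlib.sha256(f"{lie_happened}:{nonce}".encode()).digest() == digest:
                return lie_happened
    raise ValueError("no preimage found")
```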
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since the ability to draw logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that the distinction need not be practically recoverable at all. On the other hand, it doesn’t make much sense to introduce a distinction in value if no potential third-party beneficiary can distinguish the worlds either (this would just be taking a quotient of the ontology by the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
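For what it’s worth, a minimal sketch of the “quotient of ontology” idea, with made-up names (`quotient`, `observable_signature`), assuming value is only allowed to depend on what some potential observer could in principle distinguish:

```python
from collections import defaultdict

def quotient(states, observable_signature):
    """Group world-states by everything any potential observer could, in principle, distinguish."""
    classes = defaultdict(list)
    for state in states:
        classes[observable_signature(state)].append(state)
    return classes

def value_on_quotient(states_in_class, raw_value):
    """A value defined on the quotient must agree across each class; check that and return it."""
    values = {raw_value(s) for s in states_in_class}
    assert len(values) == 1, "value distinguishes observationally identical states"
    return values.pop()

# Illustrative use: if only the weather is observable, the lie/no-lie worlds
# fall into the same equivalence class.
states = [("lie", "sunny"), ("no_lie", "sunny")]
classes = quotient(states, observable_signature=lambda s: s[1])
```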
The condition for the difference to be observable in principle is much weaker than you seem to imply.
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
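A minimal sketch of this proposal, with illustrative events and numbers (not from the thread): value is a function of the whole history, so two worlds that end in the same present state can still differ in value.

```python
def history_value(history: list[str]) -> float:
    """Value attaches to the whole history, not just its final state."""
    base = 10.0
    penalty = 1.0 if "A lies to B" in history else 0.0
    return base - penalty

world_1 = ["A lies to B", "A forgets the lie"]  # same present state...
world_2 = ["A tells B the truth"]               # ...but a different history
assert history_value(world_1) < history_value(world_2)
```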
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”
Right. “Third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
but somehow it’s okay to lie and then erase my memory of lying. Is that right?
It’s not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in the present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, the one that distinguishes between the states of the world that contain lying and those that don’t.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contain you-lying can’t in principle (in the strong sense described in the previous comment) be distinguished from the ones that don’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and yet lead to exactly the same decisions, but that would be a non-standard model ;-)
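To illustrate that closing remark with a toy sketch (the action names and payoffs are assumed, not anything from the thread): two valuations that disagree only on a bit no available action can affect prescribe exactly the same choices.

```python
def best_action(actions, outcome, value):
    """Pick the action with the highest-valued outcome."""
    return max(actions, key=lambda a: value(outcome(a)))

actions = ["apologize", "stay silent"]

def outcome(action):
    # No available action touches the hidden past-lie bit; it is fixed at 1.
    return {"happiness": 3 if action == "apologize" else 1, "past_lie": 1}

standard     = lambda s: s["happiness"] - 5 * s["past_lie"]  # past lying is worse
non_standard = lambda s: s["happiness"] + 5 * s["past_lie"]  # the "non-standard model"

# Because the hidden bit is constant across all reachable outcomes,
# both models prescribe the same action.
assert best_action(actions, outcome, standard) == best_action(actions, outcome, non_standard)
```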