We often imagine a “consequence” as a state of the world at a particular time. But we could also include processes that stretch out in time under the label “consequence”. More generally, we could allow the truth of any proposition as a potential consequence. That wouldn’t be restricted to a single state, or even to a single process.
I think this is intuitive. Generally, when we want something, we do wish for something to be true. E.g. I want to climb a mountain: I want it to be true that I climb a mountain.
Yeah, you can say something like “I want the world to be such that I follow deontology” and then consequentialism includes deontology. Or you could say “it’s right to follow consequentialism” and then deontology includes consequentialism. Understood this way, the systems become vacuous and don’t mean anything at all. When people say “I’m a consequentialist”, they usually mean something more: that their wishes are naturally expressed in terms of consequences. That’s what my post is arguing against. I think some wishes are naturally consequentialist, but there are other equally valid wishes that aren’t, and expressing all wishes in terms of consequences isn’t especially useful.
This reminds me of the puzzle: why is death bad? After all, when you are dead, you won’t be around to suffer from it. Or why worry about not being alive in the future when you weren’t alive before birth either? Simple response: We just don’t want to be dead in the future, for evolutionary reasons. Organisms that hated death had higher rates of reproduction. What matters for us is not a fact about the consequence of dying, but what we happen to want or not want. (Related: this, but also this.)
I think consequentialism is the robust framework for achieving goals, and I think my top goal is the flourishing of most human values (the ones compatible with mine).
That uses consequentialism as the ultimate lever to move the world but refers to consequences that are (almost) entirely the results of our biology-driven thinking and desiring and existing, at least for now.
On reflection, I would rewrite this a bit: What we care about is things being true. Potential facts. So we care about the truth of propositions. And both actions (“I do X.”) and consequences of those actions (“Y happens.”) can be expressed as propositions. But an action is not itself a consequence of an action; it’s directly caused by a decision. So consequentialism is “wrong” insofar as it doesn’t account for the possibility that one can care about actions for themselves, not just for their consequences.
Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). And this created a bit of a blind spot, where people started thinking that goals not natively formulated in terms of end states (“play with this toy”, “respect this person’s wishes” and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest, I still go back and forth on whether that works; my post was a bit polemical. But it feels like there’s something to the idea of keeping some goals in our “internal language”, not rewriting them into the language of consequences.