On reflection, I would rewrite this a bit: What we care about is things being true. Potential facts. So we care about the truth of propositions. And both actions (“I do X.”) and consequences of those actions (“Y happens.”) can be expressed as propositions. But an action is not itself a consequence of an action; it’s directly caused by a decision. So consequentialism is “wrong” insofar as it doesn’t account for the possibility that one can care about actions for their own sake, not just for their consequences.
Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). But this created a bit of a blind spot, where people started thinking that goals not natively formulated in terms of end states (“play with this toy”, “respect this person’s wishes” and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest, I still go back and forth on whether that works—my post was a bit polemical. But it feels like there’s something to the idea of keeping some goals in our “internal language” rather than rewriting them into the language of consequences.