@Zach Stein-Perlman I’m not really sure why you gave a thumbs-down. Probably you’re not trying to communicate that you think there shouldn’t be deontological injunctions against genocide. I think someone renouncing any deontological injunctions against such devastating and irreversible actions would be both pretty scary and reprehensible. But I couldn’t come up with a different hypothesis for what you are communicating with a thumbs-down on that statement (to be clear, I wouldn’t be surprised if you provided one).
Suppose you can take an action that decreases net P(everyone dying) but increases P(you yourself kill everyone), and leaves all else equal. I claim you should take it; everyone is better off if you take it.
I deny “deontological injunctions.” I want you and everyone else to take the actions that lead to the best outcomes, not the ones that keep your own hands clean. I’m puzzled by your expectation that I’d endorse “deontological injunctions.”
This situation seems identical to the trolley problem in the relevant ways. I think you should avoid letting people die, not just avoid killing people.
[Note: I roughly endorse heuristics like “if you’re contemplating crazy-sounding actions for strange-sounding reasons, you should suspect that you’re confused about your situation or the effects of your actions, and you should be more cautious than your naive calculations suggest.” But that’s very different from deontology.]
I think I have a different overall take than Ben here, but the frame I think makes sense here is: “Deontological injunctions are guardrails. There are hypothetical situations (and some real situations) where it’s correct to override them, but the guardrail should have some weight, and for more important guardrails you need clearer reasoning for why overriding it actually helps.”
I don’t know what I think about this in the case of a country passing laws. Countries aren’t exactly agents, and passing novel laws is different from following existing laws. But I observe:
it’s really hard to be confident about the long-term consequences of things. Consequentialism just isn’t compute-efficient enough to be what you use most of the time for making decisions. (This includes, but isn’t limited to, “you’re contemplating crazy-sounding actions for strange-sounding reasons,” although I think it has a similar generator.)
it matters not just what you-in-particular-in-a-vacuum do, in one particular timeslice. It matters how complicated the world is to reason about. If everyone is doing pure consequentialism all the time, you have to model the way each person is going to interpret consequences through their own special-snowflake worldview. Having to model “well, Alice and Bob and Charlie and thousands of other people might decide to steal from me, or from my friends, if the benefits were high enough and they thought they could get away with it” adds a tremendous amount of overhead.
You should be looking for moral reasoning that makes you simple to reason about, and that performs well in most cases. That’s a lot of what deontology is for.