Consider these two facts about me:
(1) It is NOT CLEAR to me that saving 1 person with certainty is morally equivalent to saving 2 people when a fair coin lands heads in a one-off deal.
(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.
Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.
Why should we do that?
In a world sufficiently replete with aspiring rationalists there will be not just one chance to save lives probabilistically but, over the centuries, many. By the law of large numbers, we can be confident that if the expected-value strategy is followed consistently (even if any particular person faces such a choice only once, or never), more total lives will be saved.
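To make the law-of-large-numbers point concrete, here is a minimal Monte Carlo sketch in Python. The payoffs (400 people saved with certainty versus a 90% chance of saving 500) are my reading of the general’s dilemma discussed below; the function name, seed, and trial count are illustrative choices of my own.

```python
import random

def total_lives_saved(num_decisions: int, seed: int = 0) -> tuple[int, int]:
    """Simulate many independent life-saving decisions and compare two policies:
      - 'certain' policy: always save 400 people for sure
      - 'expected-value' policy: always take a 90% chance of saving 500
        (and a 10% chance of saving no one)
    Returns (total saved by the certain policy, total saved by the EV policy)."""
    rng = random.Random(seed)
    certain_total = 0
    ev_total = 0
    for _ in range(num_decisions):
        certain_total += 400
        ev_total += 500 if rng.random() < 0.9 else 0
    return certain_total, ev_total

if __name__ == "__main__":
    certain, ev = total_lives_saved(num_decisions=10_000)
    print(f"always-certain policy:  {certain:>10,} lives saved")
    print(f"expected-value policy:  {ev:>10,} lives saved")
    # By the law of large numbers, the EV policy's per-decision average
    # converges to 0.9 * 500 = 450 > 400, so it saves more lives overall.
```

Over many decisions the gamble’s per-decision average settles near 0.9 × 500 = 450 lives against the certain policy’s 400, which is all the “more total lives will be saved” claim amounts to.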
Some people believe that “being virtuous” (or suchlike) matters more than achieving a better society-level outcome. To that view I can offer no better reply than Eliezer’s: “A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan.”
I see a problem with Eliezer’s strategy that is psychological rather than moral: if all 500 people die, you may be devastated, especially if you find out later that the chance of failure was, say, 50% rather than 10%. Consequentialism asks us to take this into account. If you are a general making battle decisions, which would weigh on you more: the death of all 500 (lost in your gamble to save the last 100), or abandoning those 100 to die at enemy hands, knowing you had a roughly 90% chance of saving them? Could that weight adversely affect your future decisions? (In specific scenarios we must also consider other factors, e.g. whether the attempt is worth its cost in resources; military leaders know, or should know, that resources can be equated with lives as well.)
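For what it’s worth, a consequentialist can fold that psychological cost back into the same expected-value comparison. The sketch below (Python again) is purely illustrative: the morale_penalty term, measured in expected future lives lost to degraded decision-making, is a hypothetical quantity I am introducing, not anything from Eliezer’s post.

```python
def adjusted_expected_lives(p_success: float,
                            lives_if_success: int,
                            lives_if_certain: int,
                            morale_penalty: float) -> tuple[float, float]:
    """Expected lives saved by each option, where the gamble's failure branch
    also charges a (hypothetical) cost for degraded future decision-making."""
    gamble = p_success * lives_if_success - (1 - p_success) * morale_penalty
    certain = float(lives_if_certain)
    return gamble, certain

# With no morale cost the gamble wins (450 vs 400 expected lives); a penalty
# worth more than 500 expected future lives flips the ordering.
print(adjusted_expected_lives(0.9, 500, 400, morale_penalty=0))    # (450.0, 400.0)
print(adjusted_expected_lives(0.9, 500, 400, morale_penalty=600))  # (390.0, 400.0)
```

With no penalty the gamble wins; a penalty larger than 500 flips the ordering, which is one way to make “take this into account” precise.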
Note: I’m pretty confident Eliezer wouldn’t object to you using your moral sense as a tiebreaker if you had the choice between saving one person with certainty and two people with 50% probability.