This probably sounds horrible, but “saving human lives” in some contexts is an applause light. We should be able to think beyond that.
As a textbook example, saving Hitler’s life at a specific moment of history in an alternate universe would create more harm than good, regardless of how much or how little money it cost.
Even if we value all human lives as intrinsically equal, we can still ask what the expected consequences of saving this specific human will be. Is he or she more likely to help other people, or perhaps to harm them? That is a multiplier on my intervention, and the consequences of the consequences of my actions are still consequences of my actions, even when I am not aware of them.
Don’t just tell me that I saved a hypothetical person from malaria. Tell me whether that person is likely to live a happy life and contribute to the happy lives of their neighbors, or whether I have most likely provided another soldier for the next genocide.
Even in areas with frequent wars and human rights violations, curing malaria does more good than harm. (To counter status quo bias: imagine healthy people already suffering from war or genocide. Would sending in tons of malaria-infected mosquitoes make the situation better or worse?) But perhaps something else, like education or a change of government that could reduce war, would be better in the long term, even if in the short term it yields fewer “lives saved per dollar”.
Of course, as is the usual problem with consequentialism, it is pretty difficult to predict the consequences of our actions.