Okay. Imagine two versions of you. In one, you were born into a society in which, owing to nuclear war, the country you live in is the only one remaining; stipulate that it is just as wealthy as our own current society, for the sake of the point this hypothetical is leading to.
The other version of you exists in a society much more like the one we live in, where poor people are starving to death.
I’ll observe that, strictly in terms of ethical obligations, the person in the scenario in which the poor people don’t exist is ethically superior, because fewer ethical obligations are going unmet, even though their actions are exactly the same.
Outside the hypothetical: I agree wholeheartedly the world in which poor people don’t starve is better than the one in which they do. That’s the world I’d prefer exist. I simply fail to see it as an ethical issue, as I regard ethics as being the governance of one’s own behavior rather than the governance of the world.
Hmm. You’re getting close to Repugnant Conclusion territory here, which I tend to resolve by rejecting the redistribution argument rather than the addition argument.
In my view, in terms of world-preference, the smaller world with no poverty is inferior, as there are fewer net-positive lives. If you’re claiming that near-starving impoverished people are leading lives of negative value, I understand your position but do not agree with it.
What’s your reason for not agreeing with that position?
I ask because my own experience is that I feel strongly inclined to disagree with it, but when I look closer I think that’s because of a couple of confusions.
Confusion #1. Here are two questions we can ask about a life. (1) “Would it be an improvement to end this life now?” (2) “Would it be an improvement if this life had simply never been?”. The question relevant to the Repugnant Conclusion is #2 (almost—see below), but there’s a tendency to conflate it with #1. (Imagine tactlessly telling someone that the answer to #2 in their case is yes. I think they would likely respond indignantly with something like “So you’d prefer me dead, would you?”—question #1.) And, because people value their own lives a lot and people’s preferences matter, a life has to be much much worse to make the answer to #1 positive than to make the answer to #2 positive. So when we try to imagine lives that are just barely worth having (best not to say “worth living” because again this wrongly suggests #1) we tend to think about ones that are borderline for #1. I think most human lives are well above the threshold for saying no to #1, but quite a lot might be below the threshold for #2.
Confusion #2. People’s lives matter not only to themselves but to other people around them. Imagine (ridiculously oversimple toy model alert) a community of people, all with lives to which the answer to question 2 above is (all things considered) yes and who care a lot about the people around them; let’s have a scale on which the borderline for question 2 is at zero, and suppose that someone with N friends scores −1/(N^2+1). Suppose everyone has 10 friends; then the incremental effect of removing someone with N friends is to improve the score by about 0.01 for their life and reduce it by 10(1/82-1/101) or about 0.023. In other words, this world would be worse off without any individual in the community—if what you imagine when assessing that is that that individual is gone and no one else takes their place in others’ social relationships. But everyone in the community has a life that, all told, is negative, the world would be better off if none of them had ever lived, and it would be better off if any individual one had never lived and their place in others’ lives had been taken by someone else*.
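The arithmetic in that toy model is easy to check with a few lines of Python (just a sketch of the comment’s own numbers: `life_score` is the −1/(N²+1) scoring rule stated above, and the variable names are mine):

```python
# Toy model: each person's life scores -1/(N^2 + 1), where N is their
# number of friends; the question-2 borderline sits at zero.
def life_score(n_friends):
    return -1 / (n_friends ** 2 + 1)

# Everyone in the community has 10 friends.
gain_from_removal = -life_score(10)  # removing someone deletes their own
                                     # negative score: about +0.0099

# ...but each of their 10 friends drops from 10 friends to 9.
loss_total = 10 * (life_score(9) - life_score(10))  # about -0.0229

net = gain_from_removal + loss_total  # about -0.013: the world is worse
                                      # off without this person
print(gain_from_removal, loss_total, net)
```

So removing any one individual makes this world worse on net, even though every life in it scores below the question-2 borderline, which is exactly the tension the toy model is built to exhibit.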
(By the way—do you feel that sense of outrage as if I’m proposing dropping bombs on this hypothetical community? That’s the difference between question 1 and question 2, again. For the avoidance of doubt, I feel it too.)
This second effect, like the first one, tends to make us overestimate how bad a life has to be before the world would have been better off without it, because even if we’re careful not to confuse question 1 with question 2 we’re still liable to think of a “borderline” life as one for which the world would be neither better nor worse off if it were simply deleted, which accounts for social relationships in the wrong way.
There are two problems.

In the first scenario, in which ethics is an obligation (i.e., your ethical standing decreases for not fulfilling ethical obligations), you’re ethically a worse person in a world with poverty, because there are ethical obligations you cannot meet. The idea of ethical standing being independent of your personal activities is, to me, contrary to the nature of ethics.
In the second scenario, in which ethics is additive (you’re not a worse person for not doing good; instead, the good you do adds to some sort of ethical “score”), your ethical standing is limited by how horrible the world you are in is—that is, the most ethical people can only exist in worlds in which suffering is sufficiently frequent that they can constantly act to avert it. The idea of ethical standing being dependent upon other people’s suffering is also, to me, contrary to the nature of ethics.
It’s not a matter of which world you’d prefer to live in, it’s a matter of how the world you live in changes your ethical standing.
ETA: Although the “additive” model of ethics, come to think of it, solves the theodicy problem. Why is there evil? Because otherwise people couldn’t be good.
I suspect I’m more confused than even this implies. I don’t think there’s any numerical ethical standing measurement, and I think that cross-universe comparisons are incoherent. Ethics is solely and simply about decisions—which future state, conditional on current choice, is preferable.
I’m not trying to compare a current world with poverty against a counterfactual current world without—that’s completely irrelevant and unhelpful. In a world with experienced pain (including some forms of poverty), an agent is ethically superior if it makes decisions that alleviate such pain, and ethically inferior if it fails to do so.
Ethics is solely and simply about decisions—which future state, conditional on current choice, is preferable.
From my perspective, we have a word for that, and it isn’t ethics. It’s preference. Ethics are the rules governing how preference conflicts are mediated.
I’m not trying to compare a current world with poverty against a counterfactual current world without—that’s completely irrelevant and unhelpful.
Then imagine somebody living an upper-class life who is unaware of suffering. Are they ethically inferior because they haven’t made decisions to alleviate pain they don’t know about? Does informing them of the pain change their ethical status—does it make them ethically worse-off?
Ethics are the rules governing how preference conflicts are mediated.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
upper-class life who is unaware of suffering.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
Less about two outcomes your preferences conflict on, and more about, say, your preferences and mine.
Insofar as your internal preferences conflict, I’m not certain ethics are the correct approach to resolve the issue.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
This leads to a curious metaethics problem: I can construct a society of more ethically perfect people just by constructing it so that other people’s suffering is an unknown unknown. Granted, that probably makes me something of an ethical monster, but given that I’m making ethically superior people, is it worth the ethical cost to me?
Once you start treating ethics like utility—that is, a comparable, in some sense ordinal, value—you produce meta-ethical issues identical to the ethical issues with utilitarianism.
You’re still treating ethical values as external, summable properties. You just can’t compare the ethical value of people in radically different situations. You can compare the ethical value of two possible decisions in a single situation.
If there’s no suffering, that doesn’t make people more or less ethical than if there is suffering—that comparison is meaningless. If an entity chooses to avoid knowledge of suffering, that choice is morally objectionable compared to the same entity seeking knowledge of such.
You can get away with this to some extent by generalizing and treating agents in somewhat similar situations as somewhat comparable—to the degree that you think A and B are facing the same decision points, you can judge the choices they make as comparable. But this is always less than 100%.
In fact, I think the same about utility—it’s bizarre and incoherent to treat it as comparable or additive. It’s ordinal only within a decision, and has no ordering across entities. This is my primary reason for being consequentialist but not utilitarian—those guys are crazy.