There are two problems.

In the first scenario, in which ethics is an obligation (i.e., your ethical standing decreases for not fulfilling ethical obligations), you’re ethically a worse person in a world with poverty, because there are ethical obligations you cannot meet. The idea that your ethical standing is determined by factors independent of your personal activities is, to me, contrary to the nature of ethics.
In the second scenario, in which ethics is additive (you’re not a worse person for not doing good, but instead the good you do adds to some sort of ethical “score”), your ethical standing is capped by how horrible your world is—that is, the most ethical people can exist only in worlds where suffering is frequent enough that they can constantly act to avert it. The idea of your ethical standing depending on other people’s suffering is also, to me, contrary to the nature of ethics.
It’s not a matter of which world you’d prefer to live in; it’s a matter of how the world you live in changes your ethical standing.
ETA: Although the “additive” model of ethics, come to think of it, solves the theodicy problem. Why is there evil? Because otherwise people couldn’t be good.
I suspect I’m more confused than even this implies. I don’t think there’s any numerical measure of ethical standing, and I think that cross-universe comparisons are incoherent. Ethics is solely and simply about decisions—which future state, conditional on current choice, is preferable.
I’m not trying to compare a current world with poverty against a counterfactual current world without—that’s completely irrelevant and unhelpful. In a world with experienced pain (including some forms of poverty), an agent is ethically superior if it makes decisions that alleviate such pain, and ethically inferior if it fails to do so.
Ethics is solely and simply about decisions—which future state, conditional on current choice, is preferable.
From my perspective, we have a word for that, and it isn’t ethics. It’s preference. Ethics are the rules governing how preference conflicts are mediated.
I’m not trying to compare a current world with poverty against a counterfactual current world without—that’s completely irrelevant and unhelpful.
Then imagine somebody living an upper-class life who is unaware of suffering. Are they ethically inferior because they haven’t made decisions to alleviate pain they don’t know about? Does informing them of the pain change their ethical status—does it make them ethically worse off?
Ethics are the rules governing how preference conflicts are mediated.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
upper-class life who is unaware of suffering.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
Less about two outcomes your preferences conflict on, and more about, say, your preferences and mine.
Insofar as your internal preferences conflict, I’m not certain ethics are the correct approach to resolve the issue.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
This leads to a curious metaethics problem: I can construct a society of more ethically perfect people just by constructing it so that other people’s suffering is an unknown unknown. Granted, that probably makes me something of an ethical monster, but given that I’m making ethically superior people, is it worth the ethical cost to me?
Once you start treating ethics like utility—that is, a comparable, in some sense ordinal, value—you produce meta-ethical issues identical to the ethical issues with utilitarianism.
You’re still treating ethical values as external, summable properties. You just can’t compare the ethical value of people in radically different situations. You can compare the ethical value of two possible decisions in a single situation.
If there’s no suffering, that doesn’t make people more or less ethical than if there is suffering—that comparison is meaningless. If an entity chooses to avoid knowledge of suffering, that choice is morally objectionable compared to the same entity seeking that knowledge.
You can get away with this to some extent by generalizing and treating agents in somewhat similar situations as somewhat comparable—to the degree that you think A and B are facing the same decision points, you can judge the choices they make as comparable. But that comparability is always less than 100%.
In fact, I think the same about utility—it’s bizarre and incoherent to treat it as comparable or additive. It’s ordinal only within a decision, and has no ordering across entities. This is my primary reason for being consequentialist but not utilitarian—those guys are crazy.