Ethics is solely and simply about decisions—which future state, conditional on current choice, is preferable.
From my perspective, we have a word for that, and it isn’t ethics. It’s preference. Ethics are the rules governing how preference conflicts are mediated.
I’m not trying to compare a current world with poverty against a counterfactual current world without it; that’s completely irrelevant and unhelpful.
Then imagine somebody living an upper-class life who is unaware of suffering. Are they ethically inferior because they haven’t made decisions to alleviate pain they don’t know about? Does informing them of the pain change their ethical status? Does it make them ethically worse off?
Ethics are the rules governing how preference conflicts are mediated.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
upper-class life who is unaware of suffering.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
Absolutely agreed. But it’s about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.
Less about two outcomes your preferences conflict on, and more about, say, your preferences and mine.
Insofar as your internal preferences conflict, I’m not certain ethics are the correct approach to resolve the issue.
If they’re unaware because there’s no reasonable way for them to be aware, it’s hard for me to hold them to blame for not acting on that. Ought implies can. If they’re unaware because they’ve made choices to avoid the truth, then they’re ethically inferior to the version of themselves that does learn and act.
This leads to a curious metaethics problem: I can construct a society of more ethically perfect people just by constructing it so that other people’s suffering is an unknown unknown. Granted, that probably makes me something of an ethical monster, but given that I’m making ethically superior people, is it worth the ethical cost to me?
Once you start treating ethics like utility (that is, as a comparable and in some sense ordinal value), you produce meta-ethical issues identical to the ethical issues with utilitarianism.
You’re still treating ethical values as external, summable properties. You just can’t compare the ethical value of people in radically different situations. You can compare the ethical value of two possible decisions within a single situation.
If there’s no suffering, that doesn’t make people more or less ethical than if there is suffering; that comparison is meaningless. If an entity chooses to avoid knowledge of suffering, that choice is morally objectionable compared to the same entity seeking out that knowledge.
You can get away with such comparisons to some extent by generalizing and treating agents in somewhat similar situations as somewhat comparable: to the degree that you think A and B are facing the same decision points, you can judge the choices they make as comparable. But the comparability is always less than 100%.
In fact, I think the same about utility—it’s bizarre and incoherent to treat it as comparable or additive. It’s ordinal only within a decision, and has no ordering across entities. This is my primary reason for being consequentialist but not utilitarian—those guys are crazy.
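To make that concrete, here is one minimal way to formalize the view (a sketch, not a standard construction): posit, for each agent $i$ and each decision $D$ with option set $O_D$, nothing more than a preference ordering over that decision’s own options,

$$\succeq_{i,D} \;\subseteq\; O_D \times O_D \qquad \text{(a total preorder on } O_D\text{)}.$$

No utility function $u_i : \text{Worlds} \to \mathbb{R}$ appears anywhere in this picture, so cross-entity sums like $u_i(x) + u_j(x)$ and cross-entity comparisons like $u_i(x) > u_j(y)$ are not false; they are undefined. The only well-formed judgments have the shape $a \succeq_{i,D} b$, for options $a$ and $b$ of the same decision faced by the same agent, and the partial comparability described above amounts to identifying $\succeq_{A,D}$ with $\succeq_{B,D'}$ exactly to the degree that $D$ and $D'$ are the same decision.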