I’ve had similar thoughts; the working title that I jotted down at some point is “Two Aspects of Morality: Do-Gooding and Coordination.” A quick summary of those thoughts:
Do-gooding is about seeing some worlds as better than others, and steering towards the better ones. Consequentialism, basically. A widely held view is that what makes some worlds better than others is how good they are for the beings in those worlds, and so people often contrast do-gooding with selfishness because do-gooding requires recognizing that the world is full of moral patients.
Coordination is about recognizing that the world is full of other agents, who are trying to steer towards (at least somewhat) different worlds. It’s about finding ways to arrange the efforts of many agents so that they add up to more than the sum of their parts, rather than less. In other words, try for: many agents combine their efforts to get to worlds that are better (according to each agent) than the world that that agent would have reached without working together. And try to avoid: agents stepping on each other’s toes, devoting lots of their efforts to undoing what other agents have done, or otherwise undermining each other’s efforts. Related: game theory, Moloch, decision theory, contractualism.
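To make the "more than the sum of their parts" point concrete, here is a minimal sketch of a Stag Hunt-style payoff matrix in Python (the specific payoff numbers are my own illustrative assumption, not anything claimed in this discussion): each agent ends up better off when both coordinate than either would have been steering alone, while one-sided effort is wasted or undone.

```python
# Hypothetical Stag Hunt payoffs illustrating coordination vs. going it alone.
# payoffs[(a_choice, b_choice)] = (payoff_to_A, payoff_to_B)
payoffs = {
    ("coordinate", "coordinate"): (4, 4),  # joint effort: better for both than acting alone
    ("coordinate", "solo"):       (0, 3),  # A's effort is wasted / undone by B
    ("solo", "coordinate"):       (3, 0),  # B's effort is wasted / undone by A
    ("solo", "solo"):             (3, 3),  # each steers alone, no synergy
}

for a in ("coordinate", "solo"):
    for b in ("coordinate", "solo"):
        pa, pb = payoffs[(a, b)]
        print(f"A={a:11s} B={b:11s} -> A gets {pa}, B gets {pb}")
```

The interesting feature is that mutual coordination beats what either agent could reach alone, but attempting to coordinate with a non-cooperator is the worst outcome, which is roughly the toe-stepping / undermining failure mode described above.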
These both seem like aspects of morality because:
“moral emotions”, “moral intuitions”, and other places where people use words like “moral” arise from both sorts of situations
both aspects involve some deep structure related to being an agent in the world, neither seems like just messy implementation details for the other
a person who is trying to cultivate virtues or become a more effective agent will work on both
Indeed. Specifically, “right” and “good” are not synonyms.
“Right” and “wrong”, that is, praiseworthiness and blameworthiness, are concepts that belong to deontology. A good outcome in the consequentialist sense, one that is generally desired, is a different concept from a deontologically right action.
Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.
That’s not how consequentialism works. The consequentialist answer would be to punish the plant manager if and only if doing so would cause the world to become a better place.
I think I like “Do-Gooding” in place of where I currently have “altruism” in my title. I used “altruism” even though it’s more specific than I wanted, because I couldn’t think of a succinct enough alternative phrase.