Hmm, you cross-posted to the EA forum, so I guess I’ll reply in both places, since each might be seen by different folks.
I think this distinction is left implicit in most discussions of morality/ethics/what people should do. It seems common for people to conflate “actions that are bad because they ruin our ability to coordinate” with “actions that are bad because empathy and/or principles tell me they are.”
I think it’s worth challenging the idea that this conflation is actually an issue with ethics.
It’s true that coordination mechanisms and compassion are not literally the same thing, and that they can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so things that are bad because they break coordination mechanisms and things that are bad because they fail to express compassion are not bad for exactly the same reasons. But this need not mean there isn’t something deeper going on that ties them together.
I think this is why philosophers of ethics tend to focus on meta-ethics rather than directly trying to figure out what people should do, even setting meta-ethical uncertainty aside. There’s some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, so they are different expressions of the same underlying phenomenon. We can reasonably tie the two approaches together by asking what makes something seem good or bad to us, and then simply treat coordination and compassion as different domains in which we try to make good things happen and prevent bad ones.
As to what good and bad mean, well, that’s a larger discussion. My best theory is that in humans it’s rooted in prediction error plus some evolved affinities, but this is an area where folks are still trying to figure out what good and bad mean beyond our intuitive sense that something is one or the other.
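To gesture at what I mean, here’s a toy sketch; the function, weights, and features are entirely my own hypothetical framing, not a settled theory. It caricatures “prediction error plus evolved affinities” as a single valence signal:

```python
# Toy sketch (hypothetical framing, not a settled theory): treat "good/bad"
# as a valence signal combining innate, evolved affinities for features of
# an outcome with a penalty for prediction error (surprise).

def valence(predicted, observed, affinities, outcome_features, w_error=1.0):
    """Crude good/bad signal: innate appeal minus surprise."""
    prediction_error = abs(observed - predicted)
    innate = sum(affinities.get(f, 0.0) * strength
                 for f, strength in outcome_features.items())
    return innate - w_error * prediction_error  # higher = feels "better"

# An expected outcome that hits an evolved affinity feels good;
# a surprising outcome with no innate appeal feels bad.
print(valence(1.0, 1.0, {"social_warmth": 0.9}, {"social_warmth": 1.0}))  # 0.9
print(valence(1.0, 3.0, {}, {}))                                          # -2.0
```

The point of the sketch is just that one mechanism can power both the coordination-flavored and compassion-flavored judgments, with the domains differing only in which features get weighted.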
The issue isn’t just the conflation, but a missing gear about how the two relate.
The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.
Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it’s also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.
In particular, I was concretely assuming “torturing people to death is generally worse than lying.” But that’s specifically comparing within alike circles. It now seems quite plausible to me that lying (or even exaggeration/filtered evidence) among the groups of people I actually have to coordinate with might be worse than allowing the torture-killing of others whom I don’t have the ability to coordinate with. (Or it might not – it depends a lot on the weightings, as the sketch below illustrates. But it is not the straightforward question I assumed at first.)
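To make the “it depends on the weightings” point concrete, here’s a minimal sketch; the `weighted_badness` form and every number in it are placeholders of mine, not actual moral estimates:

```python
# Minimal sketch (placeholder numbers, not real moral estimates): badness of
# an act is its direct harm plus its coordination damage, scaled by how much
# we rely on coordinating with the affected circle.

def weighted_badness(direct_harm, coordination_damage, w_coord):
    return direct_harm + w_coord * coordination_damage

for w_coord in (0.5, 5.0):
    lie = weighted_badness(direct_harm=1, coordination_damage=50, w_coord=w_coord)
    allow_torture = weighted_badness(direct_harm=100, coordination_damage=0,
                                     w_coord=w_coord)
    verdict = "lying is worse" if lie > allow_torture else "allowing torture is worse"
    print(f"w_coord={w_coord}: lie={lie}, allow_torture={allow_torture} -> {verdict}")
```

Under a low coordination weight the severe direct harm dominates; under a high one the lie does. The conclusion flips purely on a parameter the naive comparison never made explicit.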
Thanks. (I honestly think the EA forum needs to see this more than LessWrong does, so I appreciate some commenting there. I’ll probably reply in both places for lack of a better option.)
Crossposted on EA forum (I think this particular convo is more valuable over there)