There was a particular mistake I made over in this thread. Noticing the mistake didn’t change my overall position (and also my overall position was even weirder than I think people thought it was). But, seemed worth noting somewhere.
I think most folk morality (or at least my own folk morality) generally ranks the following crimes in ascending order of badness:
Lying
Stealing
Killing
Torturing people to death (I’m not sure whether torture-without-death is generally considered better than, worse than, or about the same as killing)
But this conflates a few different things. One axis I was ignoring was “morality as coordination tool” vs “morality as ‘doing the right thing because I think it’s right.’” And these are actually quite different. And, importantly, you don’t get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of morality-as-coordination-tool.
There’s actually a 4x3 matrix here: you can plot each of lying/stealing/killing/torture-killing against three categories of target:
harming the ingroup
harming the outgroup (who you may benefit from trading with)
harming powerless people who don’t have the ability to trade or collaborate with you
And you basically need to tackle these in this order. If you live in a world where even people in your tribe backstab each other all the time, you won’t have spare resources to spend on the outgroup or the powerless until your tribe has gotten its basic shit together and figured out that lying/stealing/killing each other sucks.
If your tribe has its basic shit together, then maybe you have the slack to ask the question: “hey, that outgroup over there, who we regularly raid and steal their sheep and stuff, maybe it’d be better if we traded with them instead of stealing their sheep?” and then begin to develop cosmopolitan norms.
If you eventually become a powerful empire (or similar), eventually you may notice that you’re going around exploiting or conquering and… maybe you just don’t actually want to do that anymore? Or maybe, within your empire, there’s an underclass of people who are slaves or slave-like instead of being formally traded with. And maybe this is locally beneficial. But… you just don’t want to do that anymore, because empathy or because you’ve come to believe in principles that say not to, or something. Sometimes this is because the powerless people would actually be more productive if they were free builders/traders, but sometimes it just seems like the right thing to do.
Avoiding harming the ingroup and the productive outgroup are things you’re locally incentivized to do, because cooperation is very valuable. In an iterated strategy game, these are things you’re incentivized to do all the way along.
Avoiding harming the powerless is something that you are limited in your ability to do until the point where it starts making sense to cash in your victory points.
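A toy way to see the asymmetry between those last two points (all payoff numbers here are made up for illustration, not anything from the thread): in an iterated game, exploiting a partner who can retaliate costs you every future round, while exploiting someone who can’t retaliate costs you nothing inside the game.

```python
# Toy iterated-game sketch. Assumed, illustrative payoffs only.
ROUNDS = 20
COOPERATE_PAYOFF = 3    # value to you of one round of mutual cooperation
EXPLOIT_PAYOFF = 5      # one-off gain from exploiting someone this round
CUT_OFF_PAYOFF = 0      # what you get once a partner stops dealing with you

def total_payoff(partner_can_retaliate: bool, exploit: bool) -> int:
    """Your summed payoff over ROUNDS for one fixed strategy."""
    total = 0
    cut_off = False  # has the partner stopped cooperating with you?
    for _ in range(ROUNDS):
        if not exploit:
            total += COOPERATE_PAYOFF
        elif cut_off:
            total += CUT_OFF_PAYOFF
        else:
            total += EXPLOIT_PAYOFF
            cut_off = partner_can_retaliate  # the powerless can't cut you off
    return total

print(total_payoff(partner_can_retaliate=True,  exploit=True))   # 5: exploitation backfires
print(total_payoff(partner_can_retaliate=True,  exploit=False))  # 60: cooperation pays
print(total_payoff(partner_can_retaliate=False, exploit=True))   # 100: nothing here protects the powerless
```

In this toy setup, ordinary self-interest already protects partners you’ll interact with again; nothing in the payoffs protects the powerless, which is the sense in which treating them well only happens once you cash in victory points.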
I think this is all pretty non-explicit in most discussions of morality/ethics/what-people-should-do, and conflation of “actions that are bad because they ruin our ability to coordinate” and “actions that are bad because empathy and/or principles tell me they are” is common.
On the object level, the three levels you described are extremely important:
harming the ingroup
harming the outgroup (who you may benefit from trading with)
harming powerless people who don’t have the ability to trade or collaborate with you
I’m basically never talking about the third thing when I talk about morality or anything like that, because I don’t think we’ve done a decent job at the first thing. I think there’s a lot of misinformation out there about how well we’ve done the first thing, and I think that in practice utilitarian ethical discourse tends to raise the message length of making that distinction, by implicitly denying that there’s an outgroup.
I don’t think ingroups should be arbitrary affiliation groups. Or, more precisely, “ingroups are arbitrary affiliation groups” is one natural supergroup which I think is doing a lot of harm, and there are other natural supergroups following different strategies, of which “righteousness/justice” is one that I think is especially important. But pretending there’s no outgroup is worse than honestly trying to treat foreigners decently as foreigners who can’t be counted on to trust us with arbitrary power or share our preferences or standards.
Sometimes we should be thinking about what internal norms to coordinate around (which is part of how the ingroup is defined), and sometimes we should be thinking about conflicts with other perspectives or strategies (how we treat outgroups). The Humility Argument for Honesty and Against Neglectedness Considerations are examples of an idea about what kinds of norms constitute a beneficial-to-many supergroup, while Should Effective Altruism be at war with North Korea? was an attempt to raise the visibility of the existence of outgroups, so we could think strategically about them.
I’m basically never talking about the third thing when I talk about morality or anything like that, because I don’t think we’ve done a decent job at the first thing.
Wait, why do you think these have to be done in order?
Some beliefs of mine (which I assume differ from Ben’s, but which I think are still relevant to this question):
At the very least, your ability to accomplish anything re: helping the outgroup or helping the powerless is dependent on having spare resources to do so.
There are many clusters of actions which might locally benefit the ingroup and leave the outgroup or the powerless in the cold, but which then enable future generations of the ingroup to take more useful actions to help them. E.g., if you’re a tribe in the wilderness, I’d much rather you invent capitalism and build supermarkets than try to help the poor. The helping of the poor is nice but barely matters in the grand scheme of things.
I don’t personally think you need to halt *all* helping of the powerless until you’ve solidified your treatment of the ingroup/outgroup. But I could imagine future me changing my mind about that.
A major suspicion/confusion I have here is that the two frames:
“Help the ingroup, so that the ingroup eventually has the bandwidth and slack to help the outgroup and the powerless”, and
“Help the ingroup, because it’s convenient and they’re the ingroup”
Look very similar.
Or, alternately: even within “help the ingroup,” optimizing for the ingroup’s welfare vs. its long-term productive power are fairly different things. For example, say that income inequality leads to less welfare (because what people really care about is relative status), but that capitalism long-term yields way more resources, using mechanisms that specifically depend on income inequality.
An argument someone once made to me [I’m not sure if the actual facts here check out but the thought experiment was sufficient to change my outlook] was “look, 100 years ago Mexico made choices that optimized for more equality at the expense of 1% economic growth. Trading 1% economic growth for a lot of equality might sound like a good trade, but it means that 100 years later people in Mexico are literally dying to try to get into the US.”
(This fits into the ingroup/outgroup/powerless schema if you think of the “trade 1% growth for equality” choice as one that elites (the rich/wealthy/well-connected/intelligentsia) might make, as a pseudo-ingroup, in order to help the less fortunate in their own country, who are a pseudo-relative-outgroup.)
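For what it’s worth, the force of that thought experiment is just compound growth. A back-of-the-envelope sketch (using the hypothetical 1%-for-100-years numbers from the quote, not any claim about actual Mexican history):

```python
# Compounding the hypothetical growth gap from the quote above.
years = 100
growth_gap = 0.01  # one percentage point of annual growth given up

shortfall = (1 + growth_gap) ** years
print(f"After {years} years, the economy ends up ~{shortfall:.1f}x smaller "
      f"than it otherwise would have been")  # ~2.7x
```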
Attention is scarce and there are lots of optimization processes going on, so if you think the future is big relative to the present, interventions that increase the optimization power serving your values are going to outperform direct interventions. This doesn’t imply that we should just do infinite meta, but it does imply that the value of direct object-level improvements will nearly always be via how they affect different optimizing processes.
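A hedged sketch of that last claim, with made-up numbers: over a long horizon, a small permanent increase in the growth rate of whatever optimization process is producing value beats a much larger one-time object-level boost.

```python
# Illustrative only: every number here is an assumption, not an estimate of anything real.
YEARS = 100
BASE_GROWTH = 1.02     # growth rate of the existing optimization process
DIRECT_BOOST = 10.0    # one-time object-level improvement, in "value units"
META_BOOST = 0.005     # small permanent bump to the growth rate

def future_value(initial: float, growth: float, years: int) -> float:
    return initial * growth ** years

baseline = future_value(100.0, BASE_GROWTH, YEARS)
direct = future_value(100.0 + DIRECT_BOOST, BASE_GROWTH, YEARS)
meta = future_value(100.0, BASE_GROWTH + META_BOOST, YEARS)

print(f"baseline:              {baseline:.0f}")
print(f"one-time direct boost: +{direct - baseline:.0f}")  # ~ +72
print(f"growth-rate boost:     +{meta - baseline:.0f}")    # ~ +457
```

If the future is big relative to the present (i.e. YEARS is large), the growth-rate intervention keeps pulling ahead, which is the sense in which object-level improvements matter mostly via how they affect the optimizing processes.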
A lot of this makes sense. Some of it feels like I haven’t quite understood the frame you’re using (and unfortunately I can’t specify which parts those are, because it’s a bit confusing).
One thing that seems relevant: my preference to “declare staghunts first and get explicit buy-in before trying to do anything cooperatively challenging” feels quite related to the “ambiguity over who is in the ingroup causes problems” thing.
This feels like the most direct engagement I’ve seen from you with what I’ve been trying to say. Thanks! I’m not sure how to describe the metric on which this is obviously to-the-point and trying-to-be-pin-down-able, but I want to at least flag an example where it seems like you’re doing the thing.