I’m basically never talking about the third thing when I talk about morality or anything like that, because I don’t think we’ve done a decent job at the first thing.
Wait, why do you think these have to be done in order?
Some beliefs of mine, which I assume differ from Ben’s but which I think are still relevant to this question, are:
At the very least, your ability to accomplish anything re: helping the outgroup or helping the powerless is dependent on having spare resources to do so.
There are many clusters of actions which might locally benefit the ingroup and leave the outgroup or the powerless in the cold, but which then give future generations of the ingroup more ability to take useful actions to help them. E.g. if you’re a tribe in the wilderness, I would much rather you invent capitalism and build supermarkets than try to help the poor. Helping the poor is nice, but it barely matters in the grand scheme of things.
I don’t personally think you need to halt *all* helping of the powerless until you’ve solidified your treatment of the ingroup/outgroup. But I could imagine future me changing my mind about that.
A major suspicion/confusion I have here is that the two frames:
“Help the ingroup, so that the ingroup eventually has the bandwidth and slack to help the outgroup and the powerless”, and
“Help the ingroup, because it’s convenient and they’re the ingroup”
Look very similar.
Or, alternately: even within the ingroup, optimizing for its welfare and optimizing for its long-term productive power are fairly different things. For example, suppose that income inequality leads to less welfare (because what people really care about is relative status), but capitalism yields way more resources in the long run, using mechanisms that specifically depend on income inequality.
An argument someone once made to me [I’m not sure if the actual facts here check out but the thought experiment was sufficient to change my outlook] was “look, 100 years ago Mexico made choices that optimized for more equality at the expense of 1% economic growth. Trading 1% economic growth for a lot of equality might sound like a good trade, but it means that 100 years later people in Mexico are literally dying to try to get into the US.”
(This fits into the ingroup/outgroup/powerless schema if you think of the “trade 1% growth for equality” choice as one that elites (rich/wealthy/well-connected/intelligentsia) might make, as a pseudo-ingroup, in order to help the less fortunate in their own country, who are a pseudo-relative-outgroup.)
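To put rough numbers on that thought experiment, here’s a quick back-of-the-envelope sketch; the baseline growth rate and the 1% figure are illustrative assumptions, not actual Mexican or US data:

```python
# Toy compounding sketch: what a 1-percentage-point difference in annual
# growth adds up to over a century. All numbers are illustrative assumptions,
# not historical data.

years = 100
baseline_growth = 0.02   # assumed annual growth rate if the 1% is kept
forgone_growth = 0.01    # the hypothetical 1% of growth traded for equality

without_trade = (1 + baseline_growth) ** years
with_trade = (1 + baseline_growth - forgone_growth) ** years

print(f"Kept the 1%:          {without_trade:.1f}x starting output")   # ~7.2x
print(f"Traded it away:       {with_trade:.1f}x starting output")      # ~2.7x
print(f"Gap after {years} years: {without_trade / with_trade:.1f}x")   # ~2.7x
```

The exact figures don’t matter; the point is that a seemingly small growth difference compounds into a several-fold gap in resources over a century.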
Attention is scarce and there are lots of optimization processes going on, so if you think the future is big relative to the present, interventions that increase the optimization power serving your values are going to outperform direct interventions. This doesn’t imply that we should just do infinite meta, but it does imply that the value of direct object-level improvements will nearly always come via how they affect the various optimizing processes.
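As a toy illustration of that claim (all parameters here are made up, and the sketch only shows the compounding logic, not any real intervention), compare a one-time object-level benefit against a tiny permanent increase in the growth rate of the process producing value:

```python
# Toy comparison: a one-time "direct" benefit vs. a small permanent bump to
# the growth rate of the value-producing process. All parameters are made up.

horizon_years = 200
annual_value = 1.0     # value produced per year at the start
growth_rate = 0.02     # baseline growth of the value-producing process
direct_boost = 50.0    # one-time object-level benefit, in the same units
meta_boost = 0.001     # tiny permanent increase in the growth rate

def total_value(rate: float, years: int, one_off: float = 0.0) -> float:
    """Sum of annually compounding value over the horizon, plus any one-off."""
    return sum(annual_value * (1 + rate) ** t for t in range(years)) + one_off

direct = total_value(growth_rate, horizon_years, one_off=direct_boost)
meta = total_value(growth_rate + meta_boost, horizon_years)

print(f"Direct intervention total: {direct:,.0f}")  # ~2,624 with these numbers
print(f"Meta intervention total:   {meta:,.0f}")    # ~2,994 with these numbers
```

With a long enough horizon, even a 0.1-percentage-point rate change swamps a sizable one-time transfer, which is the sense in which object-level improvements mostly matter through their effects on the optimizing processes.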