Two arguments against longtermist thought experiments

Epistemic status: shower thoughts.

I am currently going through the EA Introductory Course, and we discussed two arguments against longtermist thought experiments which I have not seen elsewhere.

So goes a thought experiment: imagine you have toxic waste at hand, which you can either process right now at the cost of 100 lives, or bury so that it has no immediate effect but poisons the land, at the cost of 1000 lives in 100 years. Which tradeoff should you make?
The basic intuition of longtermism is that clearly, the 1000 lives matter more than the 100, regardless of their position in time.

From Introduction to longtermism:

Imagine burying broken glass in a forest. In one possible future, a child steps on the glass in 5 years’ time, and hurts herself. In a different possible future, a child steps on the glass in 500 years’ time, and hurts herself just as much. Longtermism begins by appreciating that both possibilities seem equally bad: why stop caring about the effects of our actions just because they take place a long time from now?

Faced with this tradeoff, I’d save the 100 immediate lives. More importantly, longtermism-as-assigning-significant-value-to-far-future-things has almost nothing to do with this thought experiment.

The first reason is a matter of practical mindset: it does not undermine longtermist principles, but I feel it is overlooked.

The second reason is more central: it is an argument for deprioritizing actions that directly target the far future in general.

My criticisms basically don’t matter for practical caring-about-far-future-people, but I still find it annoying that the thought experiments used to build longtermist intuitions are so unrelated to the central reasons why I care about influencing the far future.

Choose actions, not outcomes

The first reason is that in practice, we face not a direct choice between outcomes (100 vs 1000 lives) but a choice between actions (processing vs burying the waste). By abstracting away the causal pathways through which your action has an impact, the hypothetical smuggles in the assumption that indirect consequences are irrelevant.

  • For example, the 100 people we save now will have a lot of impact over the next 100 years, which will plausibly compound to more than 10 future lives saved per person alive today (see the sketch after this list).

    • This could make sense if the number of people who live over the next 100 years is more than 10 times today’s population, and you assign an equal share of responsibility for those future lives to everyone alive today (assuming you save 100 random lives).

    • This is very speculative, but the point is that it’s not obvious that 1000 lives in 100 years is more total impact than 100 lives now.
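
To make the compounding claim concrete, here is a minimal back-of-envelope sketch in Python. The figures are illustrative assumptions, not predictions; the only quantity that matters is the ratio between the number of people living over the next century and today’s population.

```python
# Back-of-envelope sketch of the compounding argument above.
# All figures are illustrative assumptions, not predictions.

current_population = 8e9  # roughly today's population
lives_saved_now = 100     # immediate option in the thought experiment
lives_lost_later = 1000   # delayed cost in the thought experiment

# Break-even point: each life saved today must "account for" this many
# future lives for the two options to have equal total impact.
breakeven_ratio = lives_lost_later / lives_saved_now  # = 10

# If responsibility for everyone living over the next 100 years is shared
# equally among everyone alive today, saving 100 random lives now beats
# saving 1000 lives later as soon as the number of people living over the
# next century exceeds this threshold.
required_future_people = breakeven_ratio * current_population

print(f"Break-even ratio: {breakeven_ratio:.0f}x today's population")
print(f"People living over the next century needed: {required_future_people:.1e}")
```

Whether that threshold is actually crossed is, of course, the speculative part.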

Someone could also say that our ability to de-poison the land will improve in the future, or find other ways to reject the hypothetical. One could argue that the thought experiment demands we disregard such considerations: assume all things are equal except for the number of lives saved, in which case you can validly derive that no parameter other than the number of lives saved is relevant… but then it doesn’t feel like such a strong result, does it?

The strength of longtermism as a novel idea is its counterintuitiveness; it is the extent to which sharp arguments support unprecedented conclusions, because that is how much it will change our behavior.[1]

In practice, longtermism informs how we want to think about far-reaching actions such as creating seed banks or managing existential risk. Framing these actions as tradeoffs between current and future lives throws away important information about the impact of saving a life.

Future lives are cheaper

More specifically, I think that (contrary to what is often stated) saving future lives is not a neglected problem, and that it’s relatively intractable, because sometimes the right comparison is not between [current efforts to save future lives] and [current efforts to save current lives], but between [current efforts to save future lives] and [future efforts to save future lives].

  • The first comparison makes sense if you want to reallocate today’s efforts between today’s and tomorrow’s causes. (“Should I buy malaria nets or build a seed bank?”)

  • The second makes sense if you want to decide whether tomorrow’s causes should be handled by today’s efforts or by tomorrow’s. (“Should I endanger future lives by burying the toxic waste, effectively outsourcing its processing to the future?”)

First, an assumption: if the future world is broadly the same as or worse than today’s in terms of population, technology, economy, etc., then something has gone extremely wrong, and preventing that is a priority regardless of longtermist considerations.[2]
So I’m now assuming the thought experiment is about a far future which is stupendously big compared to our present, and very probably much better.

So the people of the future will have an easier time replacing lost lives (so our marginal effort is less impactful now than it will be then), and they will have more resources to devote to charity overall (so problems will be less neglected).[3]
It’s not infinitely easier to save a life in the future than now, but it’s probably an order of magnitude easier.

Longtermism says that future lives have as much value as present lives; I say that the relative price of future lives is much lower than that of current lives. The two are not incompatible, but in practice I’m often exposed to longtermism as a guide to cause prioritization, where price matters as much as value.

Conclusion

I like to think of thought experiments the same way I think of made-up statistics: you should dutifully follow the counterintuitive reasoning through to the end in order to build your intuition, then throw away the explicit result and avoid relying too much on auxiliary hot takes.

So, outsource your causes to the future. Future people will take care of them more effectively than you would.

  1. ^

    I am implicitly adopting a consequentialist position here: I care about making my altruistic actions effective, not about the platonic truth or virtue of longtermism.

  2. ^

    I assume the far future is overwhelmingly likely to be either very futuristic or not futuristic at all. Even if you don’t think future lives are comparable to current lives in any significant manner, you probably still don’t want the kind of events that would be necessary to make Earth barren or stagnant in a few centuries.

  3. ^

    According to the first predictions that show up in a Google search, global population will be around 11B and world GDP will have grown by a factor of 25 in 100 years, so assuming resources are allocated similarly to today, I’d take 25*8/11 ~= 18 as a first approximation of how many times more resources will be devoted to saving lives per capita (a quick sketch of this arithmetic follows the note below).
    (If my argument does not hold up to scrutiny, I think this is the most likely point of failure.)

    Note: The population could be much higher due to emulation or space travel (without necessarily large economic growth per capita if ems and colonists are basically slaves or very low-class, which would undermine my argument), and economic growth could be much higher due to AI (which would strengthen my argument; remember we’re assuming away extinction risks). Consider other transformative technologies of your liking as appropriate.
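
    To make this explicit, here is a minimal sketch of the arithmetic, assuming (as above) that resources devoted to saving lives scale with GDP and are spread evenly over the population; the figures are the rough, unvetted predictions quoted in this footnote.

    ```python
    # Reproducing the footnote's per-capita resource estimate.
    # Figures are the rough predictions quoted above, not vetted data.

    population_today = 8e9     # ~8B people alive today
    population_in_100y = 11e9  # ~11B people (quoted prediction)
    gdp_growth_factor = 25     # world GDP ~25x larger in 100 years (quoted prediction)

    # If life-saving resources scale with GDP and are spread over the whole
    # population, per-capita resources grow by the GDP factor divided by the
    # population growth factor: 25 * 8 / 11 ~= 18.
    per_capita_resource_ratio = gdp_growth_factor * population_today / population_in_100y
    print(f"Future per-capita resources vs. today: ~{per_capita_resource_ratio:.0f}x")
    ```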