One frame I have for ‘maximizing altruism’ is that it’s something like a liquid: it’s responsive to its surroundings, taking on their shape, flowing to the lowest point available. It rapidly conforms to new surroundings when things change; turn a bottle on its side and the liquid inside will quickly resettle into the new best configuration.
This has both upsides and downsides: the flexibility and capacity for rapid shifts mean that as new concerns become the most prominent, they can be quickly addressed. The near-continuous nature of liquids means that as you accumulate more maximizing-altruist capacity, you can smoothly extend the ‘shoreline’ of problems being covered.
Many other approaches seem solid rather than liquid, in a way that promotes robustness and specialization (while being less flexible and responsive). If the only important resources are fungible commodities, then the liquid model seems optimal; but if the skills and resources needed to tackle one challenge differ from those needed to tackle another, or if switching costs dominate the relative differences between projects, then the solid model has the advantage. Reality has a surprising amount of detail, and it takes time and effort to build up the ability to handle that detail effectively.
I think there’s something important here for the broader EA/rationalist sphere, though I haven’t crystallized it well yet. It’s something like this: the ‘maximizing altruism’ thing, which I think of as the heart of EA, is important but also a ‘sometimes food’ in some ways; it is pretty good for thinking about how to allocate money (with some caveats) but much less good for thinking about how to allocate human effort. It makes sense for generalists, but that’s not what most people are or should be. This isn’t to say we should abandon maximizing altruism, or all of its precursors, but… somehow build a thing that makes good use of both that and less redirectable resources.