> “...expansionism violates … Pareto over agents...”
I don’t think this statement makes sense: Expansionism specifies no mapping between equivalent agents. Pareto must specify a mapping identifying equivalent pairs of agents. For a given pair of worlds, expansionism will usually violate Pareto for some mappings and not others—because it must: Pareto gives different answers with different mappings.
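To make the mapping-relativity explicit, here's one natural way to state Pareto relative to a bijection f between the two populations (my formalization of the point above, not Askell's wording; notation is my own):

```latex
% Pareto relative to an agent bijection f : Agents(W1) -> Agents(W2):
% W2 is weakly f-better iff no one is worse off under f,
% and strictly f-better iff, in addition, someone is better off.
W_2 \succeq_f W_1 \iff \forall a \in \mathrm{Agents}(W_1):\; u_{W_2}(f(a)) \ge u_{W_1}(a)
W_2 \succ_f W_1 \iff W_2 \succeq_f W_1 \;\wedge\; \exists a:\; u_{W_2}(f(a)) > u_{W_1}(a)
```

Different choices of f give different relations, so "Pareto" without an f is underspecified whenever the verdict turns on the choice.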
[I believe I’m disagreeing with Askell 2018 here; I’m genuinely confused that she seems to be making a simple error—so it’s entirely possible that I’m just genuinely confused :)]
E.g. in the Balmy/Blustery case, Pareto tells us that with the mapping taking X/Y/Z in Balmy to X/Y/Z in Blustery, we should prefer Blustery (call this the XYZ mapping).
However, with the mirror mapping (draw a line between Balmy and Blustery on the page, map each person to the person in their reflected position), Pareto tells us we should prefer Balmy: the left-siders are better off, and the right-siders are no worse off. (with many mappings Pareto doesn’t tell us to prefer either world)
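Here's a toy version of this flip that can be checked mechanically. It's my own construction (shift bijections on an integer line, rather than the letters and mirror of the actual example, whose exact welfare numbers I won't reproduce here), but the structural point is the same: the Pareto verdict reverses with the mapping.

```python
# Toy worlds (my own numbers, not Askell's): agents sit at integer
# positions n; welfare is 1 or 0. World B is world A's welfare pattern
# shifted one step to the right.

def w_A(n):  # welfare 1 at every negative position, 0 from 0 onwards
    return 1 if n <= -1 else 0

def w_B(n):  # the same pattern shifted right: welfare 1 up to and including 0
    return 1 if n <= 0 else 0

def pareto_verdict(f, window=range(-50, 51)):
    """Compare A's agent n with B's agent f(n) over a finite window.
    The welfare patterns are eventually constant, so the window verdict
    matches the verdict over all the integers."""
    diffs = [w_B(f(n)) - w_A(n) for n in window]
    if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
        return "B Pareto-dominates A"
    if all(d <= 0 for d in diffs) and any(d < 0 for d in diffs):
        return "A Pareto-dominates B"
    return "no Pareto verdict"

print(pareto_verdict(lambda n: n))      # match by position: B dominates A
print(pareto_verdict(lambda n: n + 2))  # shift mapping:     A dominates B
```

Note that this reversal needs infinitely many agents: with a finite population, total welfare is the same under every bijection, so strict Pareto verdicts can't point both ways.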
Now one can say that obviously we should use the XYZ mapping (otherwise the letters don’t match!), and another can say that obviously we should use the mirror mapping (otherwise the mirrored positions don’t match!). We can’t say “Just look for the person that’s the same in the other world”: equivalent persons won’t be the same in all respects, since in general they’ll have different welfare levels.
It’s true that the XYZ mapping was chosen as part of the example setup. However, this doesn’t make it any less arbitrary—and importantly this mapping is not an argument that expansionism can ‘see’: expansionism takes two worlds and a space-time mapping.
When making claims like “...we can see that Blustery is better than Balmy by Pareto...” (Askell p84), we ought to specify the mapping. Expansionism must disagree here with one of Pareto_XYZ or Pareto_mirror. This disagreement can reasonably be called a violation of Pareto_XYZ, but not of Pareto.
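To see the forced disagreement concretely, here is a minimal sketch of the expanding-regions idea over the same toy worlds as above: sum welfare over regions [-r, r] under a fixed spatial correspondence, with a crude finite check standing in for "eventually" (names and setup are mine):

```python
# Reusing the toy welfare patterns from the sketch above.
def w_A(n): return 1 if n <= -1 else 0
def w_B(n): return 1 if n <= 0 else 0

def expansionism_verdict(radii=range(1, 101)):
    """A is better iff the welfare difference over [-r, r] is positive
    for all sufficiently large r (here: the tail of a finite range)."""
    diffs = [sum(w_A(n) - w_B(n) for n in range(-r, r + 1)) for r in radii]
    tail = diffs[len(diffs) // 2:]  # crude stand-in for "eventually"
    if all(d > 0 for d in tail):
        return "A better"
    if all(d < 0 for d in tail):
        return "B better"
    return "no verdict"

print(expansionism_verdict())  # "B better": agrees with Pareto under the
                               # position mapping, violates it under the shift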
None of this is to say that we need to throw out Pareto—only that we need to be clear on what it is/isn’t saying. Being guided by Pareto means bringing in a kind of path-dependence.
To take another example, consider the trade T = [all-planets-closer-together] for [any finite suffering]. Pareto_Obvious (Pareto under the obvious person-to-person mapping) says that the original planet people are no better off, while the new suffering people are clearly worse off, so T is a bad trade. However, expansionism doesn’t see the Obvious agent mapping for T. From expansionism’s point of view, it’s as if we left the planet people at approximately their original world positions, and created an infinite number of new happy people to increase the population density as necessary. (and note that we could have done something like this, to get to a final world similar in all important respects)
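A positions-only sketch of that indistinguishability (again a toy of my own: I track occupied positions, ignore welfare labels, and use finite truncations of infinite worlds):

```python
# People initially at the even integers; compare 'move everyone closer
# together' with 'keep everyone in place and create people in the gaps'.
originals = {2 * n for n in range(-1000, 1001)}                  # even positions
moved     = set(range(-1000, 1001))                              # person at 2n moved to n
created   = originals | {2 * n + 1 for n in range(-1000, 1000)}  # gaps filled

# Away from the truncation edges (and everywhere in the infinite limit),
# the two final worlds occupy exactly the same positions.
window = set(range(-50, 51))
print(moved & window == created & window)  # True
```

Only the agent mapping back to the original world tells these apart, and that mapping is exactly what expansionism doesn't take as an argument.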
For this reason, it seems a mistake to say “No one’s thanking you for pulling those planets closer together. In fact, no one noticed.” First, people’s noticing isn’t the point. You could create an infinite number of happy people who believed they’d always existed, and none of them need notice. In either case, what matters is the moral significance of creating them / moving them.
Second, moving an infinite number of people can get you to the same state as creating an infinite number of people. It’s not clear that moving infinite numbers of people should have little moral significance. (the [moving-shouldn’t-matter-much] intuition is based on moving finite numbers of things; in the infinite case, various important invariants cease to hold (e.g. invariant total density; invariant number of occupied (/empty) positions)) [I’m not at all sure what I think here; intuitively, it seems that creating one happy person in exchange for dividing happy-person-density by Graham’s number is a terrible trade (so I’m picking expansionism over Pareto); however, I worry I might be focusing on what it’s like for an observer to examine some region of that world, rather than on the inherent welfare of the world itself]
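For what it's worth, the expansionist half of that intuition is simple arithmetic (my own toy setup: one spatial dimension, everyone at welfare w):

```latex
% Happy people at linear density rho, versus density rho/G plus one
% extra happy person (G large, e.g. Graham's number). Welfare in [-r, r]:
S_{\text{dense}}(r) \approx 2 \rho r w
\qquad
S_{\text{sparse}}(r) \approx \frac{2 \rho r w}{G} + w
% For all sufficiently large r, S_dense(r) > S_sparse(r) whenever G > 1,
% so expansionism prefers the dense world no matter the one extra person.
```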
That said, I think it’s fine to take the stance that it does matter whether we [moved people] or [created people] to reach a given world state. Expansionism doesn’t have to care, but for Pareto it can be important.
>We can’t say “Just look for the person that’s the same in the other world”: equivalent persons won’t be the same in all respects, since in general they’ll have different welfare levels.
Askell argues extensively in the first part of her thesis for why you should be able to do that. For one thing, it’s highly implausible that differing welfare levels alone necessarily imply an alteration in personal identity. It seems obvious that my life could have been happier or sadder, at least to some extent, without my being a different person. For another, your condition for transworld identity more broadly is way too strong: no philosopher that I know of thinks that I need to be the same in every respect in some other world in order for the person in that other world to be “me.”
Excellent post. Thanks for writing this.