If each dollar gives the same amount of utility, then one person with $0 and one person with $1,000,000 would be just as good as two people with $500,000. That’s how utility is defined. If Bob doesn’t consider these choices just as good, then they do not give the same utility according to his PVG.
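As a concrete check on the quoted premise, here is a minimal sketch (the numbers and the log function are my own stand-ins, not from the original argument): a linear utility-of-money function makes the two distributions tie, while a concave one makes the equal split come out ahead.

```python
import math

unequal = [0, 1_000_000]      # one person with $0, one with $1,000,000
equal   = [500_000, 500_000]  # two people with $500,000 each

def total_utility(wealths, u):
    return sum(u(w) for w in wealths)

linear = lambda w: w                 # "each dollar gives the same utility"
log    = lambda w: math.log(1 + w)   # diminishing marginal utility

print(total_utility(unequal, linear), total_utility(equal, linear))  # tie
print(total_utility(unequal, log),    total_utility(equal, log))     # equal split wins
```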
I think this argument is unclear because there are two different senses of “utility” in play.
First, there is the sense from decision theory: your utility function encodes your preferences over different worlds. So if we were talking about Bob’s utility function, he would indeed be indifferent between these states by definition.
The other sense is from (naive?) utilitarianism, which states something like: “In order to decide which state of the world I prefer, I should take into account the preferences/happiness/something of other beings. In particular, I prefer states that maximize the sum of the utilities of everyone involved” (because that best agrees with everyone’s preferences?). This argument that we should prefer dust specks in effect says that our utility functions should have this particular form.
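A toy sketch of that aggregation rule, with entirely made-up numbers (the per-speck and torture disutilities and the count are mine): once utilities are summed, the comparison reduces to multiplying a tiny per-person harm by a very large count.

```python
# Sum-utilitarian aggregation with made-up numbers (illustration only).
SPECK_DISUTILITY   = -1e-6    # tiny harm per dust speck
TORTURE_DISUTILITY = -1e9     # one enormous harm
N_SPECKS           = 3 ** 33  # stand-in for a very large number of people

specks_world  = N_SPECKS * SPECK_DISUTILITY
torture_world = TORTURE_DISUTILITY

# With these numbers, the summed disutility of the specks
# exceeds that of the torture.
print(specks_world < torture_world)  # True
```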
But that is a rather strong statement! In particular, if you find Rawls’s veil of ignorance appealing, your utility function does not have that form (it would seem to be the minimum rather than the sum of the other individuals’ utilities). So many actual humans are not that kind of utilitarian.
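A minimal sketch (my own toy numbers) of how the two aggregators can disagree: the sum prefers a world with a higher total even when its worst-off member is worse off, while a Rawlsian maximin rule prefers the other world.

```python
# Two toy worlds, each a list of individual utilities (made-up numbers).
world_a = [1, 10, 10]   # higher total (21), worst-off member at 1
world_b = [3, 4, 5]     # lower total (12), worst-off member at 3

prefer_sum = max([world_a, world_b], key=sum)  # sum-utilitarian choice
prefer_min = max([world_a, world_b], key=min)  # Rawlsian maximin choice

print(prefer_sum)  # [1, 10, 10] -- maximizes the total
print(prefer_min)  # [3, 4, 5]   -- maximizes the worst-off position
```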
your utility function does not have that form (it would seem to be the minimum rather than the sum of the other individuals’ utilities).
The average, rather, if the people behind the veil expect to get a level of utility randomly sampled from the population distribution. The original position gives you total utilitarianism if the parties also face the possibility of there “not being enough slots” for all.
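One way to make both halves of this precise, under assumptions I am adding here (uniform random assignment, and nonexistence counted as utility 0):

```latex
% n actual people, assignment uniform over them: expected utility
% for a party behind the veil is the population average.
\mathbb{E}[u] = \frac{1}{n}\sum_{i=1}^{n} u_i

% N parties but only n "slots" (lives); a party exists with
% probability n/N and gets utility 0 otherwise:
\mathbb{E}[u] = \frac{n}{N}\cdot\frac{1}{n}\sum_{i=1}^{n} u_i
             = \frac{1}{N}\sum_{i=1}^{n} u_i
```

With N fixed, maximizing the second expression is exactly maximizing total utility, which is one reading of the “not enough slots” condition.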