That is true, but I think that the discrepancy arises from me foolishly using a deontologically loaded word like “obligation” in a consequentialist discussion.
I’ll try to recast the language in a more consequentialist style:
Instead of saying that, from a person-affecting perspective:
“Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.”
We can instead say:
“An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them.”
Instead of saying: “It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.”
We can instead say:
“It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action.”
If you accept these premises, then A+ is worse than A, at least from a person-affecting perspective. I don’t think that the second premise is at all controversial, but the first one might be.
I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between (1) creating the slaves and the Invincible Slaver AI and (2) doing nothing. You do not get the choice to create only the slaves; it’s a package deal: slaves and Slaver AI, or nothing at all. Would you do it? I know I wouldn’t.