It does recommend against creating humans with lives barely worth living, and, by the same logic, it recommends painlessly killing such people. If your population is a single person with utility 1000 and γ = 0.99, then it would recommend against creating a person with utility 1.
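To make the arithmetic concrete, here is a minimal sketch. It assumes the post's sum sorts lifetime utilities worst-first and weights the person at rank r by γ^r (my reading of the formula; adjust if the post orders differently):

```python
def rank_discounted_total(utilities, gamma=0.99):
    """Sum of gamma**rank * utility, with ranks assigned worst-first."""
    return sum(gamma**r * u for r, u in enumerate(sorted(utilities)))

before = rank_discounted_total([1000])     # 1000.0
after = rank_discounted_total([1000, 1])   # 1 + 0.99 * 1000 = 991.0
print(after < before)                      # True: adding the utility-1 life lowers the total
```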
EDIT: I realised I wasn't clear that the sum is over everyone who has ever lived. I've clarified that in the post.
Actually, it recommends killing only people whose future lifetime utility is going to be negative, since the sum is over all humans who ever live.
You’re correct on the “not creating” incentives.
Now, this doesn’t represent what I’d endorse (I prefer more asymmetry between life and death), but it’s good enough as an example for most cases that come up.