If rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a larger number of them be created?
If you're a utilitarian, this is the delicate point: at what point are two sentients with a certain happiness level worth one sentient with a higher happiness level?
Should a starving man steal bread to feed his family? This turns into: should we legitimize stealing from the baker to feed as many of the poor as we can?
No, the theft problem is much easier than the aggregate problem.
If the only thing in our power to change is the one man's behavior, we probably would allow him to steal; it's worse to let his family die. But if we start letting everyone steal whenever they can't afford things, we would collapse our economy, and soon there wouldn't be enough goods left to steal. So if it's within our power to change the whole system, we wouldn't let the man steal; instead we would eliminate poverty so that no one ever has to steal. This is clearly the optimal long-run, large-scale decision, and the trick is really getting there from here (the goal itself is essentially undisputed).
The aggregate problem is a whole lot harder, because the goals themselves are in dispute. Which world is better, a world of 1,000 ultimately happy people, or a world of 1 billion people whose lives are just barely worth living?
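To make the disagreement concrete, here's a rough sketch in Python, with per-person welfare numbers I made up purely for illustration, of how a total utilitarian and an average utilitarian score those two worlds differently:

```python
# Hypothetical numbers only: the point is that the two aggregation rules
# can rank the same pair of worlds in opposite orders.

def total_utility(population, welfare_per_person):
    # Total view: sum welfare across everyone.
    return population * welfare_per_person

def average_utility(population, welfare_per_person):
    # Average view: per-person welfare (everyone is assumed identical here).
    return welfare_per_person

small_world = (1_000, 100.0)          # 1,000 ultimately happy people
large_world = (1_000_000_000, 0.01)   # 1 billion lives barely worth living

print(total_utility(*small_world))    # 100000.0
print(total_utility(*large_world))    # 10000000.0 -> total view prefers the big world
print(average_utility(*small_world))  # 100.0      -> average view prefers the small world
print(average_utility(*large_world))  # 0.01
```

With these made-up figures the total view favors the billion barely-happy people and the average view favors the thousand very happy ones, which is exactly why the goals themselves, and not just the means, are in dispute.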