People want different things, and the possible disagreement-resolving mechanisms include the various varieties of utilitarianism.
In this view, the fundamental issue is whether you want the new entity to be counted directly in the disagreement-resolving mechanism. If the new entity is ignored (except for its impact on the utility functions of pre-existing entities, including their moral viewpoints if preference utility is used*), then there’s no need to be concerned with average vs. total utilitarianism.
A general policy of always including the new entity in the disagreement-resolving mechanism would be extremely dangerous (utility monsters). Maybe it can be considered safe to include them under limited circumstances, but the Repugnant Conclusion indicates to me that new entities being similar to existing entities is NOT sufficient to make it safe to always include them; a toy worked example follows below.
(*) Hedonic utility is extremely questionable, imo: if you were the only entity in the universe and immortal, would it be evil not to wirehead?
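To make the total-vs.-average contrast concrete, here is a toy worked example; the populations and welfare numbers are mine, purely illustrative, not from the original discussion:

```latex
% Toy Repugnant Conclusion arithmetic (illustrative numbers).
% Population A: 100 entities, each at welfare 50.
% Population Z: 10^6 entities, each at welfare 0.01.
U_{\mathrm{total}}(A) = 100 \cdot 50 = 5000,
\qquad
U_{\mathrm{total}}(Z) = 10^{6} \cdot 0.01 = 10000
U_{\mathrm{avg}}(A) = 50,
\qquad
U_{\mathrm{avg}}(Z) = 0.01
```

Total utilitarianism ranks Z above A (the Repugnant Conclusion: a vast population of barely-worth-living lives beats a small, very happy one), while average utilitarianism ranks A above Z. Which ranking you get depends entirely on whether the new entities are counted in the mechanism, which is the point above.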
Right: it’s a little misleading to cast the decision procedure as if it were some person-independent thing. If you make a decision based on how happy you think the puppy will be, it’s not because some universal law forced you to against your will; it’s because you care how happy the puppy will be.
If there’s some game-theory thing going on where you cooperate with puppies in exchange for them cooperating back (much like how Bertrand and Cedric are cooperating with each other), that’s another reason to care about the puppy’s preferences, but I don’t think actual puppies are that sophisticated.
Sure, but there’s still a meaningful question of whether you’d prefer many moderately happy puppies or a few very happy puppies to exist. Maybe tomorrow you’ll think of a compelling intuition one way or the other.
Sure. But it will be my intuition, and not some impersonal law. This means it’s okay for me to want things like “there should be some puppies, but not too many,” which makes perfect sense as a preference about the universe, but practically no sense in terms of population ethics.