Total utilitarianism does imply the repugnant conclusion, very straightforwardly.
For example, imagine that world A has 1,000,000,000,000,000,000 people each with 10,000,000 utility and world Z has 10,000,000,000,000,000,000,000,000,000,000,000,000,000 people each with 0.0000000001 utility. Which is better?
Total utilitarianism says that you just multiply. World A has 10^18 people × 10^7 utility per person = 10^25 total utility. World Z has 10^40 people × 10^-10 utility per person = 10^30 total utility. World Z is way better.
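To make the arithmetic concrete, here's a minimal Python sketch that just multiplies the numbers above (the `total_utility` helper is mine, purely for illustration):

```python
def total_utility(population: int, utility_per_person: float) -> float:
    """Total utilitarianism's score for a world: people times per-person utility."""
    return population * utility_per_person

world_a = total_utility(10**18, 10**7)    # 10^25
world_z = total_utility(10**40, 10**-10)  # 10^30

print(f"World A: {world_a:.0e}")                       # 1e+25
print(f"World Z: {world_z:.0e}")                       # 1e+30
print("Z beats A by a factor of", world_z / world_a)   # 100000.0
```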
This seems repugnant; intuitively world Z is much worse than world A.
Parfit went through cleverer steps because he wanted his argument to apply more generally, not just to total utilitarianism: his "mere addition" argument walks from A to Z in small steps (A → A+ → B → …), each of which looks acceptable on its own. Even much weaker assumptions than total utilitarianism can get you to the repugnant-seeming conclusion that a world like Z is better than a world like A, as the sketch below illustrates.
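Here's a toy Python sketch of that stepwise structure. The two principles it encodes, a "mere addition" step and an "equalize up" step, are my loose paraphrase of the shape of Parfit's argument, not his exact premises, and the starting numbers are arbitrary:

```python
# Two hedged assumptions (my paraphrase, not Parfit's exact premises):
#   1. "Mere addition": adding extra people with positive utility, without
#      hurting anyone who already exists, does not make a world worse.
#   2. "Equalize up": replacing unequal utilities with their average plus a
#      small bonus for everyone makes a world better.
# Iterating 1 then 2, and appealing to transitivity of "at least as good as",
# walks from an A-like world to a Z-like world.

population, utility = 10**9, 100.0  # an A-like world (illustrative numbers)

for step in range(20):
    # Step 1: mere addition -- double the population; newcomers get half the utility.
    # Step 2: equalize up -- everyone moves to the new average (0.75 * utility) plus 1%.
    population *= 2
    utility = (utility + utility / 2) / 2 * 1.01

print(f"{population:.1e} people at utility {utility:.4f}")
# -> roughly 1.0e+15 people at utility ~0.39: vastly more people, each barely above zero,
#    yet total utility grew at every step (each step multiplies it by 2 * 0.7575 > 1).
```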
The point is that lots of people are confused about axiology. When they try to give opinions about population ethics, judging scenario by scenario whether one hypothetical world is better than another, they wind up making judgments that are inconsistent with each other.