You could just not be a utilitarian, specifically by declining to maximize a metaphysical quantity like “happy experience”. That leaves you with no moral obligations to counterfactual (i.e., nonexistent) people, which eliminates the Mere Addition Paradox.
OK, I know that, given the chemistry involved in “happy”, it’s not exactly a metaphysical or non-natural quantity, but it bugs me that utilitarianism says to “maximize Happy” even when, precisely as in the Mere Addition Paradox, no individual consciousness will actually experience the magnitude of Happy attained via utilitarian policies. How can a numerical measure of a subjective state of consciousness be valuable if nobody experiences the total numerical measure? It seems more sensible to moralize only about people who already exist, which lands you closer to post-hoc consequentialism than to traditional utilitarianism.
How can a numerical measure of a subjective state of consciousness be valuable if nobody experiences the total numerical measure?
The Mere Addition Paradox also manifests for a single person. Imagine the state you are in. Now imagine it can be (subjectively) improved by some means (e.g. fame, company, drugs, …). Keep going. Odds are, you would not find a maximum, not even a local one. After a while, you might notice that, despite the incremental improvements, the state you are in is actually inferior to the original when you compare the two directly. Mathematically, one might model this as the improvement drive being non-conservative, like a vector field with nonzero curl: there is no map from states to scalar utility that every “improvement” step increases. Whether it is worth pushing this analogy any further, I am not sure.
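To make the non-conservative point concrete, here is a small sketch (my own illustration, not from the comment above): if each step feels like an improvement but the chain of steps loops back below where you started, no scalar utility function can be consistent with all the pairwise judgments. A consistent utility exists exactly when the directed graph of improvements is acyclic, which we can check with Kahn’s topological-sort algorithm. The state names are hypothetical.

```python
from collections import defaultdict

def utility_exists(improvements):
    """improvements: list of (a, b) pairs meaning 'b feels better than a'.
    Returns True iff some scalar utility u with u(b) > u(a) for every pair
    exists, i.e. the improvement graph has no directed cycle."""
    succ, indeg, nodes = defaultdict(set), defaultdict(int), set()
    for a, b in improvements:
        nodes.update((a, b))
        if b not in succ[a]:
            succ[a].add(b)
            indeg[b] += 1
    # Kahn's algorithm: repeatedly remove nodes with no incoming edges.
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    # If a cycle remains, some nodes are never removed.
    return seen == len(nodes)

# Each step is a subjective improvement, yet the chain loops back
# below the starting state:
cycle = [("original", "fame"), ("fame", "company"),
         ("company", "drugs"), ("drugs", "original")]
print(utility_exists(cycle))       # False: no scalar utility fits
print(utility_exists(cycle[:-1]))  # True: the acyclic chain is fine
```

The cyclic case is the single-person analogue of the paradox: locally each move looks like progress, yet globally there is nothing being maximized.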
The Mere Addition Paradox also manifests for a single person. Imagine the state you are in. Now imagine it can be (subjectively) improved by some means (e.g. fame, company, drugs, …). Keep going. Odds are, you would not find a maximum, not even a local one.
Hill climbing always finds a local maximum, but that maximum may well look disappointing, wasteful, and downright stupid compared with spending the same effort on a smarter search for a way to live a better life.
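A toy sketch of that failure mode (my example; the landscape and function names are made up): greedy hill climbing on a 1-D landscape with a small bump near the start terminates on the bump and never reaches the much higher peak farther away.

```python
def hill_climb(f, x, step=1, max_iters=1000):
    """Greedy ascent: move one step left or right while it improves f."""
    for _ in range(max_iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x  # local maximum: no neighbor is better
        x = best
    return x

def landscape(x):
    # Small hill of height 3 at x=2; big hill of height 10 at x=10,
    # separated by a valley the greedy climber will not cross.
    return max(3 - abs(x - 2), 10 - 2 * abs(x - 10), 0)

print(hill_climb(landscape, 0))             # 2 -- stuck on the small hill
print(landscape(hill_climb(landscape, 0)))  # 3, far below the peak of 10
```

Starting near the big hill instead (e.g. at x = 9) does reach the global peak, which is the point: where you start, and how you search, matters more than the guarantee of “some” local maximum.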