I agree that prioritarianism has the problems you mention. I note that negative-leaning utilitarianism (though not strict negative utilitarianism) has analogous problems: just as there are infinitely many ways to draw a concave welfare function, so there are infinitely many exchange rates between positive and negative experience.
Right, and I suspect the same holds for classical utilitarianism too, because there seems to be no obvious way to normalize happiness-units with suffering-units. But I know you think differently.
because there seems to be no obvious way to normalize happiness-units with suffering-units
They’re the same size when you’re indifferent between the status quo and a gamble with equal chances of getting one more happiness unit or one more suffering unit. Duh.

Am I missing something?
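A minimal sketch of that indifference criterion, in an expected-value framing (the symbols h and s are illustrative, not part of the original exchange): set the status quo at 0, let one happiness unit be worth +h and one suffering unit be worth −s, and suppose you are indifferent between the status quo and the 50/50 gamble. Then

\[
\tfrac{1}{2}\,h + \tfrac{1}{2}\,(-s) = 0 \;\Longrightarrow\; h = s,
\]

i.e. the two units come out equal in size by construction.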
Classical utilitarians usually argue for their view from an impartial, altruistic perspective. If they had atypical intuitions about particular cases, they would discard them if it could be shown that those intuitions don’t correspond to what is effectively best for the interests of all sentient beings. So in order to qualify as genuinely other-regarding/altruistic, the procedure one uses for coming up with a suffering/happiness exchange rate would have to produce the same output for all persons who apply it correctly; otherwise it would not be a procedure for an objective exchange rate.
The procedure you propose leads different people to give answers that differ by orders of magnitude. If I would accept ten hours of torture for a week of vacation on the beach, and someone else would only accept ten seconds of torture for the same thing, then either of us will have a hard time justifying forcing such trades onto other sentient beings for the greater good. It goes both ways, of course: if classical utilitarianism is correct, too low an exchange rate would be just as bad as one that is too high (by the same margin).
Since human intuitions differ so much on the subject, one would have to either (a) establish that most people are biased and that there is in fact an exchange rate everyone would agree on if they were rational and knew enough, or (b) find some other way to arrive at an objective exchange rate, plus a good enough justification for why it should be relevant. I’m very skeptical about the feasibility of this.
Preference utilitarianism is not the same thing as hedonistic utilitarianism (they reach different conclusions), so you can’t use one to define the other.

(Googles for “classical utilitarianism”.)

Oops.

Note to self: never comment on anything unless I’m sure about the meaning of each word in it.
If you don’t mind, I’d be interested in knowing why you think this is so. If you conceive of happiness and suffering as states that instantiate some phenomenal property (pleasantness and unpleasantness, respectively), then an obvious normalization of the units is in terms of felt intensity: a given instance of suffering corresponds to some instance of happiness just when one realizes the property of unpleasantness to the same degree as the other realizes the property of pleasantness. And if you, instead, conceive of happiness and suffering as states that are the objects of some intentional property (say, desiring and desiring-not), then the normalization could be done in terms of the intensity of the desires: a given instance of suffering corresponds to some instance of happiness just when the state that one desires not to be in is desired with the same intensity as the state one desires to be in.
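A rough way to make the two proposals explicit (the notation is mine, added only for illustration): let v(x) be the hedonic value of an episode x, and let i+ and i− denote felt intensity of pleasantness and of unpleasantness. The phenomenal reading then normalizes the units by stipulating

\[
v(x) = -\,v(y) \quad \text{whenever} \quad i_{+}(x) = i_{-}(y),
\]

and the intentional reading substitutes intensities of desire for felt intensities in the same condition.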
How do you measure the intensity of desires, if not by introspectively comparing them (see below)? If you do measure something objectively, on what grounds do you justify its ethical relevance? I mean, we could measure all kinds of things, such as the amount of neurotransmitters released or the activity of the brain regions involved, and so on, but just because there are parameters that turn out to be comparable for both pleasure and pain doesn’t mean that they automatically constitute whatever we care about.
If you instead take an approach analogous to revealed preferences (which is what introspective comparison of hedonic valence seems to come down to), you have to look at decision situations where people make conscious welfare tradeoffs. And merely being able to visualize pleasure doesn’t necessarily provoke the same reaction in all beings; it depends on contingencies of brain wiring. We can imagine beings that are only very slightly moved by the prospect of intense, long pleasures and that wouldn’t undergo even small amounts of suffering to get there.
How do you measure the intensity of desires, if not by introspectively comparing them (see below)?
You need intensity of desire to compare pains and pleasures, but also to compare pains of different intensities. So if introspection raises a problem for one type of comparison, it should raise a problem for the other type, too. Yet you think we can make comparisons within pains. So whatever reasons you have for thinking that introspection is reliable in making such comparisons, these should also be reasons for thinking that introspection is reliable for making comparisons between pains and pleasures.
Let’s assume we use memory to rank pains in terms of how much we don’t want to undergo them. Likewise, we may rank pleasures in terms of how much we want to have them now (or according to other measurable features). The result is two scales, each with comparability within itself. Now how do you normalize the two scales? Isn’t there an extra source of arbitrariness? People may all rank the pains the same way, and likewise all the pleasures, but when it comes to trading some pain for some pleasure, some people might be very eager to do it, whereas others might not be. Convergence in how we compare pains doesn’t necessarily imply convergence in exchange rates; you’d be comparing two separate dimensions.
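A toy illustration of this point (the linear form and the numbers are mine, purely for exposition): suppose two people rank pains p1 < p2 < p3 and pleasures h1 < h2 < h3 in exactly the same way, and each evaluates outcomes by something like

\[
W = \sum_{j} h_{j} \;-\; k \sum_{i} p_{i},
\]

where k is the pain-to-pleasure exchange rate. One person may have k = 100 and the other k = 0.01; both make perfectly consistent trades within pains and within pleasures, yet their cross-scale trades differ by four orders of magnitude.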
As I said above, this could be done either in terms of felt intensity or intensity of desire.
This exchange seems to have proceeded as follows:
Lukas: You can’t normalize the pleasure and pain scales.
Pablo: Yes, you can, by considering either the intensity of the experience or the intensity of the desire.
Lukas: Ah, but you need to rely on introspection to do that.
Pablo: Yes, but you also need to rely on introspection to make comparisons within pains.
Lukas: But you can’t normalize the pleasure and pain scales.
As my reconstruction of the exchange indicates, I don’t think you are raising a valid objection here, since I believe I have already addressed that problem. Once you leave out worries about introspection, what are your reasons for thinking that classical utilitarians cannot make non-arbitrary comparisons between pleasures and pains, while thinking that negative utilitarians can make non-arbitrary comparisons within pains?
If you write down everything we can know about pleasures (in the moment), and everything we can know about pains, you may find parameters to compare (like “intensity”, or the amount of neurotransmitters, or something else), but there would be no reason why people need to choose an exchange rate corresponding to some measured properties. I believe your point is that we “have reason” to pick intensity here, but I don’t see why it is rationally required of beings to care about it; empirically, I believe many people do not care about it, and you could certainly construct artificial minds that don’t.
Pleasure is not what makes decisions for us; it is the desire/craving for pleasure, and there is no reason why a craving for a specific amount of pleasure needs to always come with the same force in different minds, even if the circumstances are otherwise equal. The same holds for suffering, of course, and for the corresponding desire not to suffer. People who value many other things strongly and who have a strong desire to stay alive, for instance, would not kill themselves even if their lives mostly consisted of suffering. And yet they would still be making perfectly consistent trades between different intensities and durations of suffering.
My general point is that whatever property you rely upon to make comparisons within pains you can also rely upon to make comparisons between pains and pleasures.
It seems to me that you are using intensity of desire to make comparisons within pains. If so, you can also use intensity of desire to make comparisons between pleasures and pains. That “there would be no reason why people need to choose an exchange rate corresponding to some measured properties” seems inadequate as a reply, since you could analogously argue that there is no reason why people should rely on those measured properties to make comparisons within pains.
However, if intensity of desire is not the property you are using to make comparisons within pains, just ignore the previous paragraph. My general point still stands: the property you are using, whichever it is, is also a property that you could use to make comparisons between pains and pleasures.
Preference utilitarianism is not the same thing as hedonistic utilitarianism (they reach different conclusions), so you can’t use one to justify the other.