Congratulations!
I thought negative utilitarianism generally doesn’t endorse the creation of new life (assuming we can’t guarantee some standard of well-being for it). Are you foreseeing that Stuart’s baby will eventually make a positive impact by reducing the suffering of others?
“The one with the power to vanquish the Dark Lord approaches … born as the seventh month dies …”
And she has a little scar on her forehead, from the forceps!
All else being equal, creating new lives would be a bad thing, but I don’t believe that trying to discourage people from having children on those grounds would be particularly useful (its net effect would probably just be bad PR for negative utilitarianism), nor that it would be good to have a general policy of smart and ethical people having fewer children than people not belonging to that class. Also, I count Stuart as my friend, and I didn’t want him to experience suffering due to his happy event being tainted by a uniformly negative response to this thread.
It depends on the counterfactual. If it consists of donating all the resources a kid would cost to the best cause, then that likely trumps everything. Especially if you take the haste consideration into account: it takes forever to raise a child, and good altruists are likely cheaper to create by other means. And the part about altruists losing ground in Darwinian terms can be counteracted by sperm donations (or egg donations if there is demand for that).
Having said that, it is important to emphasize that personal factors and preferences need to be taken into account in the expected value calculation, since it would be bad/irrational if someone ends up unhappy and burned out after trying too hard to be a perfect utility-maximizer.
Prioritarianism seems more coherent.
Coherent in what sense? Prioritarianism is likely more intuitive, but a problem is that there are infinitely many ways to draw a concave welfare function and no good reasons to choose one over the other. I don’t think negative utilitarianism is necessarily incoherent by itself; it depends on the way it is formalized.
I agree that prioritarianism has the problems you mention. I note that negative-leaning utilitarianism (though not strict negative utilitarianism) has analogous problems: just as there are infinitely many ways to draw a concave welfare function, so there are infinitely many exchange rates between positive and negative experience.
Right, and I suspect the same holds for classical utilitarianism too, because there seems to be no obvious way to normalize happiness-units with suffering-units. But I know you think differently.
They’re the same size when you’re indifferent between the status quo and equal chances of getting one more happiness unit or one more suffering unit. Duh.
Am I missing something?
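To spell out the indifference test a little more explicitly (this is just one way of formalizing it, with $u$ as a hypothetical cardinal value function and the status quo normalized to zero):

\[
\tfrac{1}{2}\,u(+1 \text{ happiness unit}) + \tfrac{1}{2}\,u(+1 \text{ suffering unit}) = 0
\quad\Longleftrightarrow\quad
u(+1 \text{ happiness unit}) = -\,u(+1 \text{ suffering unit}).
\]

Indifference to the 50/50 gamble holds exactly when the value assigned to one extra happiness unit equals the magnitude of the disvalue assigned to one extra suffering unit, which is the sense in which the units are “the same size”.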
Classical utilitarians usually argue for their view from an impartial, altruistic perspective. If they had atypical intuitions about particular cases, they would discard them if it could be shown that the intuitions don’t correspond to what is effectively best for the interests of all sentient beings. So in order to qualify as genuinely other-regarding/altruistic, the procedure one uses for coming up with a suffering/happiness exchange rate would have to produce the same output for all persons who apply it correctly; otherwise it would not be a procedure for an objective exchange rate.
The procedure you propose leads to different people giving answers that differ by orders of magnitude. If I would accept ten hours of torture for a week of vacation on the beach, and someone else would only accept ten seconds of torture for the same thing, then either of us will have a hard time justifying forcing such trades onto other sentient beings for the greater good. It goes both ways, of course: if classical utilitarianism is correct, too low an exchange rate would be just as bad as one that is too high (by the same margin).
Since human intuitions differ so much on the subject, one would have to either (a) establish that most people are biased and that there is in fact an exchange rate that everyone would agree on if they were rational and knew enough, or (b) find some other way to arrive at an objective exchange rate, plus a good enough justification for why it should be relevant. I’m very skeptical concerning the feasibility of this.
Preference utilitarianism is not the same thing as hedonistic utilitarianism (they reach different conclusions), so you can’t use one to define the other.
Googles for classical utilitarianism
Oops.
Note to self: Never comment on anything unless I’m sure about the meaning of each word in it.
If you don’t mind, I’d be interested in knowing why you think this is so. If you conceive of happiness and suffering as states that instantiate some phenomenal property (pleasantness and unpleasantness, respectively), then an obvious normalization of the units is in terms of felt intensity: a given instance of suffering corresponds to some instance of happiness just when one realizes the property of unpleasantness to the same degree as the other realizes the property of pleasantness. And if you, instead, conceive of happiness and suffering as states that are the objects of some intentional property (say, desiring and desiring-not), then the normalization could be done in terms of the intensity of the desires: a given instance of suffering corresponds to some instance of happiness just when the state that one desires not to be in is desired with the same intensity as the state one desires to be in.
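For what it’s worth, here is a compact restatement of the two proposed normalizations (the notation is mine, not the original commenter’s): writing $I_{\text{unpleasant}}$ and $I_{\text{pleasant}}$ for felt intensity, and $D_{\text{avoid}}$ and $D_{\text{have}}$ for intensity of desire, a suffering-instance $s$ is matched with a happiness-instance $h$ just when

\[
I_{\text{unpleasant}}(s) = I_{\text{pleasant}}(h)
\qquad\text{or, on the intentional reading,}\qquad
D_{\text{avoid}}(s) = D_{\text{have}}(h).
\]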
How do you measure the intensity of desires, if not by introspectively comparing them (see below)? If you do measure something objectively, on what grounds do you justify its ethical relevance? I mean, we could measure all kinds of things, such as the amount of neurotransmitters released or the activity of the brain regions involved and so on, but just because there are parameters that turn out to be comparable for both pleasure and pain doesn’t mean that they automatically constitute whatever we care about.
If you instead take an approach analogous to revealed preferences (what introspective comparison of hedonic valence seems to come down to), you have to look at decision-situations where people make conscious welfare-tradeoffs. And merely being able to visualize pleasure doesn’t necessarily provoke the same reaction in all beings—it depends on contingencies of brain-wiring. We can imagine beings that are only very slightly moved by the prospect of intense, long pleasures and that wouldn’t undergo small amounts of suffering to get there.
You need intensity of desire to compare pains and pleasures, but also to compare pains of different intensities. So if introspection raises a problem for one type of comparison, it should raise a problem for the other type, too. Yet you think we can make comparisons within pains. So whatever reasons you have for thinking that introspection is reliable in making such comparisons, these should also be reasons for thinking that introspection is reliable for making comparisons between pains and pleasures.
Let’s assume we make use of memory to rank pains in terms of how much we don’t want to undergo them. Likewise, we may rank pleasures in terms of how much we want to have them now (or according to other measurable features). The result is two scales with comparability within the same scale. Now how do you normalize the two scales? Is there not an extra source of arbitrariness? People may rank the pains the same way among themselves, and the same for all the pleasures too, but when it comes to trading some pain for some pleasure, some people might be very eager to do it, whereas others might not be. Convergence of comparability of pains doesn’t necessarily imply convergence of exchange rates. You’d be comparing two separate dimensions.
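As a toy illustration of this “two separate dimensions” worry (a sketch with hypothetical rankings and a made-up acceptance rule, not anything proposed in the thread): two agents can agree completely on the within-scale rankings and still reach opposite verdicts on the very same pain-for-pleasure trade, simply because they apply different exchange rates.

```python
# Toy sketch (hypothetical numbers): two agents share the same within-scale
# rankings of pains and of pleasures, but apply different exchange rates,
# so they disagree about the very same pain-for-pleasure trade.

pains = {"stubbed toe": 1, "sunburn": 3, "migraine": 10}           # agreed ranking
pleasures = {"good meal": 1, "beach day": 3, "great concert": 10}  # agreed ranking

def accepts_trade(pain: str, pleasure: str, exchange_rate: float) -> bool:
    """Accept the trade iff the pleasure, converted into pain units via the
    agent's exchange rate, at least offsets the pain."""
    return pleasures[pleasure] * exchange_rate >= pains[pain]

# Same trade, different exchange rates, opposite decisions:
print(accepts_trade("sunburn", "beach day", exchange_rate=2.0))  # True
print(accepts_trade("sunburn", "beach day", exchange_rate=0.5))  # False
```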
As I said above, this could be done either in terms of felt intensity or intensity of desire.
This exchange seems to have proceeded as follows:
Lukas: You can’t normalize the pleasure and pain scales.
Pablo: Yes, you can, by considering either the intensity of the experience or the intensity of the desire.
Lukas: Ah, but you need to rely on introspection to do that.
Pablo: Yes, but you also need to rely on introspection to make comparisons within pains.
Lukas: But you can’t normalize the pleasure and pain scales.
As my reconstruction of the exchange indicates, I don’t think you are raising a valid objection here, since I believe I have already addressed that problem. Once you leave out worries about introspection, what are your reasons for thinking that classical utilitarians cannot make non-arbitrary comparisons between pleasures and pains, while thinking that negative utilitarians can make non-arbitrary comparisons within pains?
If you write down all we can know about pleasures (in the moment), and all we can know about pains, you may find parameters to compare (like “intensity”, or amount of neurotransmitters, or something else), but there would be no reason why people need to choose an exchange rate corresponding to some measured properties. I believe your point is that we “have reason” to pick intensity here, but I don’t see why it is rationally required of beings to care about it; I believe that, empirically, many people do not care about it, and certainly you could construct artificial minds that don’t care about it.
Pleasure is not what makes decisions for us. It is the desire/craving for pleasure, and there is no reason why a craving for a specific amount of pleasure needs to always come with the same force in different minds, even if the circumstances are otherwise equal. There is also no reason why this has to be true of suffering, of course, and the corresponding desire not to suffer. People who value many other things strongly and who have a strong desire to stay alive, for instance, would not kill themselves even if their lives mostly consist of suffering. And yet they would still be making perfectly consistent trades within different intensities and durations of suffering.
My general point is that whatever property you rely upon to make comparisons within pains you can also rely upon to make comparisons between pains and pleasures.
It seems to me that you are using intensity of desire to make comparisons within pains. If so, you can also use intensity of desire to make comparisons between pleasures and pains. That “there would be no reason why people need to choose an exchange rate corresponding to some measured properties” seems inadequate as a reply, since you could analogously argue that there is no reason why people should rely on those measured properties to make comparisons within pains.
However, if intensity of desire is not the property you are using to make comparisons within pains, just ignore the previous paragraph. My general point still stands: the property you are using, whichever it is, is also a property that you could use to make comparisons between pains and pleasures.
Preference utilitarianism is not the same thing as hedonistic utilitarianism (they reach different conclusions), so you can’t use one to justify the other.
Negative utilitarians are notoriously bad at making sense of how humans (and other animals) behave in real life (e.g., most people are willing to endure the pain of walking over hot sand for the pleasure of swimming in the sea). And I suspect this extends to the behavior of negative utilitarians themselves.
And classical utilitarians are notoriously prone to making quick judgments when tradeoffs are concerned. :)
Perhaps people walk over hot sand because they really want to go swimming in the sea and would suffer otherwise. If there were absolutely nothing that subjectively bothered you about the current state you’re in, you would not act at all, so it’s highly non-obvious that people are generally trading pleasure and suffering in their daily lives. Whenever we consciously make a trade involving pleasure, the counterfactual alternative always seems to include some suffering as well, in the form of unfulfilled longings/cravings.
Having said that, you can always ask why suicide rates are so low if negative utilitarian axiology is right, and that would be a good point. But there the negative utilitarians would reply that a massive life bias is to be expected for evolutionary reasons.
Why not? The obvious reply is that, even if there is nothing that bothers you about your current state, you might still be motivated to act in order to move to an even better state. In any case, your attempt to make sense of the example from a negative utilitarian framework simply doesn’t do justice to what people take themselves to be doing in these situations. Just ask people around (not antecedently committed to a particular moral theory), and you’ll see.
Introspection is not particularly trustworthy. If you consciously (as opposed to acting on auto-pilot) decide to move to “an even better state”, you have in fact evaluated your current conscious state and concluded that it is not the one you want to be in, i.e. that something (at least the fact that you’d rather want to be in some other state) bothers you about it. And that—wanting to get out of your current conscious state (or not), or changing some aspect about it—is what constitutes suffering, or whatever (some) negative utilitarians consider to be morally relevant.
If you accept this (Buddhist) axiology, it becomes a conceptual truth that conscious welfare-tradeoffs always include counterfactual suffering. And there are good reasons to accept this view. For instance, it implies that there is nothing intrinsically bad about pain asymbolia, which makes sense because the alternative implies having to go to great lengths to help people who assert that they are not bothered by the “pain” at all. Additionally, this view on suffering isn’t vulnerable to an inverted-qualia thought experiment applied to crosswiring pleasure and pain, where all the behavioral dispositions are left intact. Those who think there is an “intrinsic valence” to hedonic qualia, regardless of behavioral dispositions and other subjective attitudes, are faced with the inconvenient conclusion that you couldn’t reliably tell whether you’re undergoing agony or ecstasy.
My point was that your favorite theory cannot make sense of what people take themselves to be doing in situations such as those discussed above. You may argue that we shouldn’t trust these people because introspection is not trustworthy, but then you’d be effectively biting the bullet.
You may, of course, use the verb ‘to be bothered’ to mean ‘judging a state to be inferior to some alternative.’ However, I thought you were using the verb to mean, instead, the experiencing of some negative hedonic state. I agree that there is something that “bothers you”, in the former sense, about the above situation, but I disagree that this must be so if the term is used in the latter sense—which is the sense relevant for discussions of negative utilitarianism.
I think that wanting to change your current state is identical with what we generally mean by being in a negative hedonic state. For reasons outlined above, I suspect that qualia aren’t independent of all the other stuff that is going on (attitudes, dispositions, memories, etc.).
Studies by Kent Berridge have established that ‘wanting’ can be dissociated from ‘liking’. This research finding (among others; Guy Kahane discusses some of these) undermines the claim that affective qualia are inextricably linked to intentional attitudes, as you seem to suggest.
I’m aware of these findings; I think there are different forms of “wanting”, and we might have semantic misunderstandings here. There is pleasure that would cause immediate cravings if it were stopped, and there is pleasure that would not. So pleasure would usually cause you to want it again, but not always. I would not say that only the former is “real” pleasure. Instead I’m arguing that a frustrated craving due to the absence of some desired pleasure constitutes suffering. I’m only committed to the claim that “disliking” implies (or means) “wanting to get out” of the current state. And I think this makes perfect sense given the arguments about inverted qualia / against epiphenomenalism and my intuitive response to the case of pain asymbolia.
Negative utilitarianism is a normative theory, not a descriptive one.
This is true. But some descriptive facts may provide evidence against a normative theory. The implicit argument was:
(1) People often believe that they are justified in undergoing some pain in order to experience greater pleasure.
(2) Negative utilitarianism implies that these people are fundamentally mistaken.
(3) If (2), then this provides some reason to reject negative utilitarianism.
Of course, the argument is by no means decisive. In fact, I think there are much stronger objections to NU.
I’m not sure what this even means. Negative utilitarianism implies one set of preferences, which not everyone shares. People who have different preferences aren’t mistaken in any sense; they just want different things.
Thanks!