This, in my opinion, is by itself a decisive argument against utilitarianism.
You mean against preference-utilitarianism.
The vast majority of utilitarians I know are hedonistic utilitarians, where this criticism doesn’t apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I’ve noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.
I agree. I’m fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).
Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as “more pleasure” or “less pleasure.” If evaluations of pleasure happen over 10x as many neurons, is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically?
A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating “psychophysical laws” and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it’s not going to be “read out” of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.
Sure, but I don’t think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small.
I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).
OK, I’ve got to ask: what, in detail, is your confidence based on? It’s not clear to me that “sum hedons” even means anything.
Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn’t seem to be the case.
Is intrapersonal comparison possible? Personal boundaries don’t matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not?
One common argument against hedonistic utilitarianism is that there are “different kinds of pleasures”, and that they are “incommensurable”. But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures “differ in kind”, there’d be an empirical trade-off value based on how pleasant the hedonic states actually are.
Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from e.g. some types of food. The more we understand about how different types of pleasure are implemented by the brain, the more precisely we’d be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations, these comparisons might get arbitrarily precise.
You make it sound as if there is some signal or register in the brain whose value represents “pleasure” in a straightforward way. To me it seems much more plausible that “pleasure” reduces to a multitude of variables that can’t be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.
That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.
Or the utilitronium shockwave, rather. Which doesn’t even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I’ve come to think of it as a perfectly reasonable thing.
AFAICT, it only does so if we assume that the wireheading environment can somehow be relied upon to maintain itself optimally even though everyone is wireheading.
Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)
I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.
Hedonistic utilitarianism (“what matters is the aggregate happiness”) runs into the same repugnant conclusion.
But this happens exactly because interpersonal (hedonistic) utility comparison is possible.
Right, if you cannot compare utilities, you are safe from the repugnant conclusion.
On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.
Yes, but it doesn’t have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.
As I mentioned in the other reply, I’m not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.
Hedonistic utilitarianism is not about preferences at all. It’s about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.
Maybe I misunderstand how total hedonistic utilitarianism works. Don’t you ever construct an aggregate utility function?
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant; in fact, many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that’s preference utilitarianism or something else entirely.
How is that not an aggregate utility function?
Utilons aren’t hedons. You have one simple utility function that states you should maximize happiness minus suffering. That’s similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.
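Spelled out, the objective would be something like the following (a minimal formalization of my own, assuming for the sake of argument that momentary happiness and suffering are cardinally measurable):

    \max \; U \;=\; \sum_{i} \int_{0}^{T_i} \bigl( h_i(t) - s_i(t) \bigr) \, dt

where h_i(t) and s_i(t) are the momentary happiness and suffering intensities of experience-stream i over its duration T_i, and the sum ranges over all experience-streams, with no weighting by whose experiences they are.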
You still seem to be claiming that (a) you can calculate a number for hedons, and (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
I don’t claim that I, or anyone else, can do that right now. I’m saying there doesn’t seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?
As for (b), I don’t even see the problem. If (a) works, then you just do simple math after that. In case you’re worried about torture and dust specks not working out, check out section VI of this paper.
And regarding (a), here’s an example that approximates the kind of solutions we seek: in antidepressant drug trials, both the treatment group and the control group fill out self-assessments of their subjective experiences, while at the same time their brain activity and behavior are observed. The self-reports correlate with the physical data.
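For a toy illustration of the kind of check I mean (invented numbers, not data from any actual trial), in Python:

    from statistics import correlation  # available in Python 3.10+

    # Invented data: self-reported mood (1-10 scale) and an arbitrary
    # physiological proxy measurement for the same eight subjects.
    self_report = [2, 3, 5, 4, 7, 8, 6, 9]
    brain_proxy = [0.9, 1.2, 2.1, 1.8, 3.0, 3.4, 2.5, 3.9]

    # Pearson r comes out close to 1 for these numbers, i.e. the
    # reports and the physical measurements track each other.
    print(round(correlation(self_report, brain_proxy), 2))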
I can’t speak for David (or, well, I can’t speak for that David), but for my own part, I’m willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I’m willing to accept for the sake of argument that people have “terminal values” which express what they really value, or that there exist “utilons” that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I’m also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument.
But the thing about accepting a claim for the sake of argument is that the argument I’m accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude “hedonic utilitarianism is better than all other moral philosophies.” To me, that payoff doesn’t seem worth the bullet you’re biting by assuming the existence of intersubjectively commensurable hedons.
If someone were to demonstrate a scanning device whose output could be used to calculate a “hedonic score” for a given brain across a wide range of real-world brains and brainstates without first being calibrated against that brain’s reference class, and that hedonic score could be used to reliably predict the self-reports of that brain’s happiness in a given moment, I would be surprised and would change my mind about both the degree of variation of cognitive experience and the viability of intersubjectively commensurable hedons.
If you’re claiming this has actually been demonstrated, I’d love to see the study; everything I’ve ever read about has been significantly narrower than that.
If you’re merely claiming that it’s in principle possible that we live in a world where this could be demonstrated, I agree that it’s in principle possible, but see no particular evidence to support the claim that we do.
Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.
But what makes you think you can just do simple math on the results? And which simple math—addition, adding the logarithms, taking the average or what? What adds up to normality?
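To make the question concrete, here is a toy sketch in Python (all numbers invented) showing that the rules just listed can disagree about which of two hedon distributions is better:

    import math

    # Two hypothetical hedon distributions: many lives barely worth
    # living versus a few very happy lives.
    large_modest = [1.0] * 100
    small_happy = [10.0] * 5

    def total(hedons):
        return sum(hedons)

    def average(hedons):
        return sum(hedons) / len(hedons)

    def log_total(hedons):
        return sum(math.log(h) for h in hedons)

    for rule in (total, average, log_total):
        print(rule.__name__, rule(large_modest) > rule(small_happy))
    # total True; average False; log_total False: the choice of
    # "simple math" decides which outcome counts as better.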
Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren’t, why a mixed utilon/hedon maximizer isn’t susceptible to Dutch booking. Maybe I’ll look through the logic again.
Hedonism doesn’t specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how “happy.” Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?
But aren’t those empirical difficulties, not fundamental ones? Don’t you think there’s a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can’t bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.
What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.