Hedonistic utilitarianism is not about preferences at all. It’s about maximizing happiness, whatever the reason for it or the substrate it runs on. The utilitronium shockwave would be the best-case scenario for total hedonistic utilitarianism.
Maybe I misunderstand how total hedonistic utilitarianism works. Don’t you ever construct an aggregate utility function?
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant; in fact, many hedonistic utilitarians think the concept of personal identity is an illusion anyway. If you’re aggregating individual utility functions, that’s preference utilitarianism or something else entirely.
How is that not an aggregate utility function?
Utilons aren’t hedons. You have one simple utility function that says you should maximize happiness minus suffering. That’s structurally similar to maximizing paperclips, and it avoids the problem discussed above for preference utilitarianism, namely how interpersonally differing utility functions should be compared with each other.
You still seem to be claiming that (a) you can calculate a number for hedons, and (b) you can do arithmetic on this number. This seems problematic to me for the same reasons as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
I don’t claim that I, or anyone else, can do that right now. I’m saying there doesn’t seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?
As for (b), I don’t even see the problem. If (a) works, then you just do simple math after that. In case you’re worried about torture and dust specks not working out, check out section VI of this paper.
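To make “simple math” concrete, here’s a minimal sketch of the aggregation step, assuming (a) were already solved. The subjects and hedon values are invented for illustration; nothing here addresses the measurement problem itself:

```python
# Toy aggregation for total hedonistic utilitarianism, assuming some
# solved measurement step has already assigned each experience a signed
# hedon value. All numbers and names here are hypothetical.

experiences = [
    {"subject": "A", "hedons": 5.0},   # a pleasant afternoon
    {"subject": "B", "hedons": -2.5},  # a headache
    {"subject": "C", "hedons": 0.3},   # mild contentment
]

# Total utility is just the signed sum; interpersonal boundaries
# play no role in the arithmetic.
total = sum(e["hedons"] for e in experiences)
print(total)  # 2.8 -- the quantity a total hedonistic utilitarian maximizes
```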
And regarding (a), here’s an example that approximates the kind of solution we’re after: in antidepressant drug trials, both the treatment group and the control group fill out self-assessments of their subjective experience while their brain activity and behavior are observed. The self-reports correlate with the physical data.
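Schematically, the kind of correlation I mean looks like this (the numbers are invented, not data from any actual trial):

```python
# Hypothetical illustration of the kind of correlation reported in
# antidepressant trials: self-assessed mood scores vs. some
# physiological measure. Data are invented for the example.
from scipy.stats import pearsonr

self_reports = [2, 4, 5, 7, 8, 9]               # e.g. mood questionnaire scores
brain_measure = [1.1, 2.0, 2.4, 3.5, 3.9, 4.6]  # e.g. some activity index

r, p = pearsonr(self_reports, brain_measure)
print(f"r = {r:.2f}, p = {p:.3f}")  # a high r is what the argument relies on
```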
I can’t speak for David (or, well, I can’t speak for that David), but for my own part, I’m willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I’m willing to accept for the sake of argument that people have “terminal values” which express what they really value, or that there exist “utilons” that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I’m also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument.
But the thing about accepting a claim for the sake of argument is that the argument I’m accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude “hedonic utilitarianism is better than all other moral philosophies.” To me, that payoff doesn’t seem worth the bullet you’re biting by assuming the existence of intersubjectively commensurable hedons.
> The self-reports correlate with the physical data.

If someone were to demonstrate a scanning device whose output could be used to calculate a “hedonic score” for a given brain, across a wide range of real-world brains and brainstates, without first being calibrated against that brain’s reference class, and if that hedonic score could be used to reliably predict the self-reports of that brain’s happiness in a given moment, I would be surprised, and I would change my mind about both the degree of variation in cognitive experience and the viability of intersubjectively commensurable hedons.
If you’re claiming this has actually been demonstrated, I’d love to see the study; everything I’ve ever read about has been significantly narrower than that.
If you’re merely claiming that it’s in principle possible that we live in a world where this could be demonstrated, I agree that it’s in principle possible, but see no particular evidence to support the claim that we do.
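For concreteness, the test I have in mind is essentially leave-one-subject-out evaluation: fit the scan-to-score mapping on some brains, then predict the self-reports of a brain it was never calibrated on. A schematic sketch, where the “scans” and the linear scoring model are pure stand-ins:

```python
# Schematic of the "no per-brain calibration" test. Everything here
# (the scans, the scoring model) is hypothetical; the point is only
# the evaluation protocol: the model never sees the held-out subject.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_subjects, n_features = 10, 4
scans = rng.normal(size=(n_subjects, n_features))  # stand-in brain data
reports = scans @ np.array([1.0, -0.5, 0.2, 0.0]) + rng.normal(0, 0.1, n_subjects)

errors = []
for held_out in range(n_subjects):
    train = [i for i in range(n_subjects) if i != held_out]
    # Fit a linear scan -> self-report map on the other subjects only.
    w, *_ = lstsq(scans[train], reports[train], rcond=None)
    pred = scans[held_out] @ w
    errors.append(abs(pred - reports[held_out]))

print(np.mean(errors))  # low error across all held-out brains would surprise me
```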
Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.
But what makes you think you can just do simple math on the results? And which simple math—addition, adding the logarithms, taking the average, or what? What adds up to normality?
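To make the worry concrete, here’s a toy comparison (hedon values invented, kept strictly positive so the logarithm is defined) in which each candidate rule crowns a different world as best:

```python
# Three toy "worlds" of strictly positive hedon values (invented),
# showing that sum, mean, and sum-of-logs each rank a different
# world as best -- so "just do simple math" underdetermines the answer.
import math

worlds = {
    "A": [6.0],             # one very happy person
    "B": [3.0, 3.0, 3.0],   # several moderately happy people
    "C": [10.0, 0.1, 0.1],  # one ecstatic person, two barely-positive lives
}

for rule, f in {
    "sum":  sum,
    "mean": lambda h: sum(h) / len(h),
    "log":  lambda h: sum(math.log(x) for x in h),
}.items():
    best = max(worlds, key=lambda w: f(worlds[w]))
    print(rule, "->", best)  # prints C, A, B respectively
```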
Thanks for the link. I still can’t figure out why utilons aren’t convertible to hedons, and even if they aren’t, why a mixed utilon/hedon maximizer isn’t susceptible to Dutch booking. Maybe I’ll look through the logic again.
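As I understand it, the standard Dutch-book worry is that inconsistent utilon/hedon exchange rates let someone cycle you through trades you accept individually while pumping value out of you. A toy sketch, with exchange rates invented for illustration:

```python
# Toy money pump against an agent with inconsistent utilon<->hedon
# exchange rates (rates invented for illustration). In context 1 it
# will sell a utilon for 1 hedon; in context 2 it will buy a utilon
# back for 2 hedons. Each trade looks acceptable to it in isolation.

agent = {"utilons": 10.0, "hedons": 10.0}

def sell_utilon(a, price_hedons):
    a["utilons"] -= 1
    a["hedons"] += price_hedons

def buy_utilon(a, price_hedons):
    a["utilons"] += 1
    a["hedons"] -= price_hedons

for _ in range(3):            # run the pump a few times
    sell_utilon(agent, 1.0)   # context 1: parts with a utilon for 1 hedon
    buy_utilon(agent, 2.0)    # context 2: pays 2 hedons for a utilon back

print(agent)  # utilons unchanged, hedons down 1 per cycle: a sure loss
```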