When I’m cozily in bed half-asleep and cuddled up next to my soulmate and I’m feeling perfectly fulfilled in life in this moment, the fact that my brain’s molecules aren’t being used to generate even more hedons is not a problem whatsoever.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states (and then integrate over your anthropic prior, as in UDASSA). But I think that function is extremely complex, dependent on one’s entire lifetime, and not simply reducible to basic proxies like pleasure or pain.
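To sketch the shape of that view (a rough gloss of my own, not a formalization I’d commit to): letting $E$ range over whole experience-histories, $m(E)$ be the measure your anthropic prior (e.g. UDASSA’s universal prior) assigns to $E$, and $g(E)$ be that goodness function, overall value comes out looking something like
$$V \;\approx\; \sum_{E} m(E)\, g(E),$$
with essentially all of the hard content hidden in how complex $g$ is.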
I think I would also go a bit further and claim that, while both pain and pleasure should be components of what makes a life experience good or bad, neither should be a very large component on its own. Like I said above, I tend to think that things like meaning and fulfillment are more important.
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about “meaning, fulfillment, love”—not just suffering, and not just pleasure either.
That seems like a misunderstanding – I didn’t mean to be saying anything about your particular views!
I only brought up classical hedonistic utilitarianism because it’s a view that many EAs still place a lot of credence in (it seems more popular than negative utilitarianism?). Your comment seemed to me to be unfairly singling out (strongly/exclusively) suffering-focused ethics, so I wanted to point out that there are other EA-held views (not yours) to which the same criticism applies just as much or (arguably) even more.
Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states
Isn’t this incompatible with caring about genuine meaning and fulfillment, rather than just feelings of them? On such a view, it’s better for you to feel like you’re doing good than to actually do good. It’s better to be put into an experience machine, even against your own wishes, and be systematically mistaken about everything you care about, e.g. about whether the people you love even exist (are conscious, etc.) at all, as long as it feels more meaningful and fulfilling (and you never find out it’s all fake, or that discovery can be outweighed). You could also have what you find meaningful changed against your wishes, e.g. be made to find counting blades of grass more meaningful than caring for your loved ones.
FWIW, this is also an argument for non-experientialist “preference-affecting” views, similar to person-affecting views. On common accounts of how we weigh or aggregate, if there are subjective goods, then they can be generated to outweigh the violation and abandonment of your prior values, even against your own wishes, so long as they’re strong enough.
The way you describe it, you make it sound awful, but actually I think simulations are great and that you shouldn’t think that there’s a difference between being in a simulation and being in base reality (whatever that means). Simple argument: if there’s no experiment that you could ever possibly do to distinguish between two situations, then I don’t think that those two situations should be morally distinct.
Well, there could be ways to distinguish, but it could be like a dream, where much of your reasoning is extremely poor, but you’re very confident in it anyway. Like maybe you believe that your loved ones in your dream saying the word “pizza” is overwhelming evidence of their consciousness and love for you. But if you investigated properly, you could find out they’re not conscious. You just won’t, because you’ll never question it. If value is totally subjective and the accuracy of beliefs doesn’t matter (as would seem to be the case on experientialist accounts), then this seems to be fine.
Do you think simulations are so great that it’s better for people to be put into them against their wishes, as long as they perceive/judge it as more meaningful or fulfilling, even if they wouldn’t find it meaningful/fulfilling with accurate beliefs? Again, we can make it so that they don’t find out.
Similarly, would involuntary wireheading or drugging to make people find things more meaningful or fulfilling be good for those people?
Or what about something like a “meaning” shockwave, similar to a hedonium shockwave: quickly killing and replacing everyone with conscious systems that take no outside input and have no sensations (or only the bare minimum) beyond what’s needed to generate feelings or judgements of meaning, fulfillment, or love? (Some person-affecting views could avoid this while still matching the rest of your views.)
Of course, I think there are good practical reasons not to do things to people against their wishes, even when it’s apparently in their own best interests, but those reasons don’t capture my objections. I just think it would be wrong, except possibly in limited cases, e.g. to prevent foreseeable regret. The point is that people really do often want their beliefs to be accurate, and what they value is, by their own statements, intended to be pointed at something out there, not just the contents of their experiences. Experientialism seems like an example of Goodhart’s law to me, much as hedonism might (?) seem like one to you.
I don’t think people and their values are in general replaceable, and if they don’t want to be manipulated, it’s worse for them (in one way) to be manipulated. And that should only be compensated for in limited cases. As far as I know, the only way to fundamentally and robustly capture that is to care about things other than just the contents of experiences and to take a kind of preference/value-affecting view.
Still, I don’t think it’s necessarily bad or worse for someone not to care about anything but the contents of their experiences. And if the state of the universe were already hedonium or just experiences of meaning, that wouldn’t be worse. What does the work here is the fact that people do specifically care about things beyond just the contents of their experiences. If they didn’t, and also didn’t care about being manipulated, then it seems like it wouldn’t necessarily be bad to manipulate them.