Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from, e.g., certain types of food. The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. Once we get to brain simulations, these comparisons might become arbitrarily precise.
You make it sound as if there is some signal or register in the brain whose value represents “pleasure” in a straightforward way. To me it seems much more plausible that “pleasure” reduces to a multitude of variables that can’t be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.
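To make the aggregation point concrete, here is a toy sketch (the dimensions and weights are entirely made up for illustration): if "pleasure" is really a vector of underlying variables, then two equally defensible weighting conventions can rank the same pair of experiences in opposite orders, so any single-number index smuggles in a convention.

```python
# Toy illustration: "pleasure" as a vector of hypothetical variables.
# The dimension names and weights below are invented for the example.

experience_a = {"intensity": 0.9, "duration": 0.3, "breadth": 0.2}
experience_b = {"intensity": 0.4, "duration": 0.8, "breadth": 0.6}

def aggregate(exp, weights):
    """Collapse a multi-dimensional pleasure profile to one number."""
    return sum(exp[k] * weights[k] for k in exp)

# Convention 1: weight raw intensity heavily.
w1 = {"intensity": 0.7, "duration": 0.2, "breadth": 0.1}
# Convention 2: weight duration and breadth heavily.
w2 = {"intensity": 0.2, "duration": 0.4, "breadth": 0.4}

# Under w1, A ranks above B; under w2, B ranks above A.
# Neither convention is privileged by the underlying variables themselves.
print(aggregate(experience_a, w1), aggregate(experience_b, w1))  # 0.71 0.50
print(aggregate(experience_a, w2), aggregate(experience_b, w2))  # 0.38 0.64
```

The ranking flips purely because of the choice of weights, which is the sense in which the index is "arbitrary convention" rather than a fact about the experiences.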
That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.
The main problem I see with it is that it implies wireheading as the optimal outcome.
Or the utilitronium shockwave, rather, which doesn't even require minds to wirehead anymore but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking through all the absurd implications of valuing preferences instead of actual states of the world, I've come to regard it as a perfectly reasonable outcome.
The main problem I see with it is that it implies wireheading as the optimal outcome.
AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading.
Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, devoted instead to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)
I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.