David Pearce on Hedonic Moral Realism
(posted with David’s permission)
Many thanks for the Shulman et al paper. Insightful and incisive, just as I’d expect!
Have I any reservations?
Well, maybe one or two...:-)
One of my worries is that the authors want to “lock in” to an AMA our primitive human moral psychology, with its anthropocentric biases and self-serving rationalizations. We could try to strip these out; but an ever more universalistic moral psychology becomes progressively less human. Either way, I’m not convinced that history shows the intuitions of folk morality are any more trustworthy than the intuitions of folk physics, i.e. not at all. Do we really want an AMA that locks in even an idealised version of the moral psychology that maximised the inclusive fitness of selfish DNA in the ancestral environment? IMO posthuman life could be so much better, for all sentient beings.
Contra Shulman et al, I think we do have a good idea what utility function to endow an explicit moral agent with. An AMA should seek to maximise the cosmic abundance of subjectively hypervaluable states. [defined below] This prospect doesn’t sound very exciting, any more than sex as explained by adults to a toddler. But that’s because, like the toddler, we don’t have the primitive experiences to know what we’re missing. In one sense, becoming posthuman isn’t “human-friendly”; but equally, becoming an adult isn’t “toddler-friendly” either. In neither case would the successor choose to regress to the ancestral state.
If we want to maximise the cosmic abundance of subjectively hypervaluable states, first we will need to identify the neural signature of subjectively valuable states via improved neuroscanning technology, etc. This neural signature can then be edited and genetically amplified to create hypervaluable states: states of mind far more sublime than today’s peak experiences, which can provide background hedonic tone for everyday life. Unlike weighing up the merits of competing moral values, investigating the neural mechanisms of value-creation is an empirical question rather than a philosophical one. True, we may ask: could a maximally valuable cosmos, where value is defined empirically in terms of what seems valuable to the subject, really not be valuable at all? I accept that’s a deeper philosophical question that I won’t adequately explore here.
Compare how severe depressives today may often be incapable of valuing anything beyond the relief of suffering. Quite possibly they will be nihilistic, suicidal and/or negative utilitarians. Effective mood-brightening treatment gives them the ability to value aspects of life, without specifying the particular propositional content of those values. Effective treatment generates an effective value-creation mechanism. So analogously, why not genetically recalibrate our own hedonic treadmill to induce gradients of superhappiness? We can (potentially) create the substrates for supervalues without specifying, or taking any kind of stance on, the propositional content of those values. The actual content of posthuman values is in any case presumably unknowable to us. Here at least, we needn’t take a realist or anti-realist position on whether that content is true or false or truth-valueless. As you know, I happen to believe we live in a world where certain states on the pleasure-pain axis (e.g. bliss or agony) are intrinsically normative, and the value judgements and decision-procedures that spring from these intrinsically normative states can be potentially true or false. But their reality or otherwise is not essential to the argument that we can maximise the abundance of empirically valuable states.
Might instead a “human-friendly SuperIntelligence” be parochial and reflect the interests of one particular ancestral species that seeded its prototype? Or if It’s really a SuperIntelligence, won’t It have a “God’s-eye view”, just as we aspire to do in natural science: an impartial perspective? IMO a truly impartial perspective dictates creating the maximum density of the substrates of hypervaluable states within our Hubble volume. Just as there is a finite number of perfect games of chess, and it makes no sense for a superintelligent chess-player to pass outside this state space of ideal states, why aim now to “freeze in” a recipe for sub-optimal or mediocre states of mind? Or freeze into the AGI even the idealised preference architecture of one hominid species? Note that this claim isn’t intended as a challenge to the moral anti-realist: it’s not to say a world of blissfully fulfilled posthumans is truly more valuable than a pain-racked world or a world tiled with insentient paperclips. But everyone in such a blissful world would agree it is empirically more valuable.
On this basis, neuroscience needs to understand the molecular mechanisms by which subjectively valuable states are created in the mind/brain, and the mechanisms which separate the merely subjectively pleasurable [e.g. porn or crack-cocaine] from the subjectively valuable. This sounds horrendously difficult, as Shulman et al lay out in their review of different consequentialisms. Billions of different people can have billions of inconsistent or incommensurable desires and preferences. How on earth can their different utility functions be reconciled? But critically, IMO, there is a distinction between dopamine-mediated desires, as verbalized in preferences expressing all manner of propositional content, and mu-opioid-mediated hedonic tone. Positive hedonic tone doesn’t by itself specify any particular propositional content. But it’s the “engine” of value-creation in organic robots. Without it, everything in life seems valueless and meaningless, as depressives will attest. Moreover, a classical [“hedonistic”] utilitarian can argue that the substrates of pure bliss are objectively measurable. They are experimentally manipulable via everything from mu-opioid knockout “animal models” to opioid antagonists to genetic “over”-expression. The neural substrates of hedonic tone are objectively quantifiable, and comparable both intrapersonally and interpersonally. Moreover, they are effectively identical across members of all vertebrate species.
These are strong claims, I know; but it’s notable that the pleasure-pain axis, and its neurological underpinnings (see below), are strongly conserved in the vertebrate line. Simplifying a bit, intensity of pure bliss correlates with full mu agonist binding and with mu opioid receptor density in the brain’s two ultimate “hedonic hotspots”: one in the ventral pallidum and the other in the medium spiny neurons of the rostromedial shell of the nucleus accumbens; a mere cubic millimeter in size in the rat, approximately a cubic centimeter in humans. See e.g.
http://www.lsa.umich.edu/psych/research&labs/berridge/research/affectiveneuroscience.html
A convergence of neuroscanning, behavioural and microelectrode evidence supports this hypothesis.
Here is a further test of interpersonal agreement of utility:
Administer to a range of drug-naive volunteers various mu opioid agonists of differing potency, selectivity and specificity. Correlate self-reported degree of “liking”. Just like opioid users, drug-naive subjects will consistently report that codeine is less rewarding than methadone, which is less rewarding than heroin, etc. A high dose of a full mu agonist reliably induces euphoria; a high dose of an inverse agonist reliably induces dysphoria.
[Just don’t try this experiment at home!]
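As a minimal sketch of how interpersonal agreement in such an experiment might be scored, consider the following toy analysis. The subjects, drugs and “liking” scores are invented placeholders, not real data:

```python
# Sketch: scoring interpersonal agreement of utility from self-report data.
# All names and numbers below are illustrative placeholders.

from itertools import combinations

# Hypothetical 0-10 "liking" scores from drug-naive volunteers.
reports = {
    "subject_1": {"codeine": 3, "methadone": 6, "heroin": 9},
    "subject_2": {"codeine": 2, "methadone": 5, "heroin": 8},
    "subject_3": {"codeine": 4, "methadone": 6, "heroin": 9},
}

def ranking(scores):
    """Order drugs from least to most 'liked' for one subject."""
    return tuple(sorted(scores, key=scores.get))

rankings = {subj: ranking(scores) for subj, scores in reports.items()}

# Interpersonal agreement: fraction of subject pairs with identical orderings.
pairs = list(combinations(rankings.values(), 2))
agreement = sum(a == b for a, b in pairs) / len(pairs)

print(rankings)  # every subject: ('codeine', 'methadone', 'heroin')
print(f"pairwise agreement: {agreement:.0%}")  # 100% on this toy data
```

On the hypothesis in the text, agreement on the ordering of mu agonists should approach unanimity, in contrast to the diversity of verbalized preferences.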
Contrast such unanimity of response to activation of our mu opioid receptors with the diversity of propositional content expressed in our preferences and desires across different times and cultures.
In short, I’d argue the claim of Shulman et al that “all utilitarianisms founder on interpersonal comparison of utility” isn’t correct—indeed it is empirically falsifiable.
Of course most of us aren’t heroin addicts. But we are all [endogenous] opioid-dependent. Thus, using futuristic neuroscanning, we could measure from birth how much someone (dis)values particular persons, cultural practices, paintings, music, ideologies, jokes, etc., as a function of activation of their reward pathways. Rather than just accepting our preferences as read, and then trying to reconcile the irreconcilable, we can explain the mechanism that creates value itself and then attempt to maximise its substrates.
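A toy sketch of the kind of measurement this imagines, assuming (hypothetically) that neuroscanning could yield a signed reward-pathway activation value per stimulus; all stimuli and numbers are invented:

```python
# Sketch: ranking stimuli by a hypothetical measured reward-pathway
# activation signal. A real version would need the futuristic neuroscanning
# the text imagines; the values here are invented for illustration.

activation = {
    "favourite music": 0.8,   # strongly valued
    "rival ideology": -0.4,   # disvalued
    "in-joke": 0.3,           # mildly valued
}

# Most-valued first; negative scores mark disvalued stimuli.
for stimulus, a in sorted(activation.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stimulus:>16}: {a:+.1f}")
```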
Non-human animals? They can’t verbalize their feelings, so how can one test and compare how much they (dis)like a stimulus? Well, we can test how hard they will work to obtain or avoid the stimulus in question, as sketched below. Once again, in the most direct test, non-human vertebrates behaviourally show the same comparative fondness for different full-agonist opioid drugs that activate the mu receptors as do humans with relevant drug exposure [e.g. heroin is more rewarding than codeine, as shown by the fact that non-human animals will work harder for it].
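Here is a minimal sketch of the standard progressive-ratio “breakpoint” measure of how hard an animal will work for a stimulus; the schedule and lever-press counts are invented for illustration:

```python
# Sketch: a progressive-ratio measure of how hard an animal will "work"
# for a stimulus. Breakpoint = highest response requirement completed
# before responding stops. The response counts are invented.

def breakpoint(responses_per_trial, ratio_step=2):
    """Return the highest response requirement the animal completed.

    Under a progressive-ratio schedule the required number of lever
    presses grows each trial (here: ratio_step, 2*ratio_step, ...).
    """
    bp = 0
    for trial, presses in enumerate(responses_per_trial, start=1):
        required = trial * ratio_step
        if presses < required:
            break  # animal gave up: the previous ratio is the breakpoint
        bp = required
    return bp

# Hypothetical lever-press counts per trial for two drugs:
heroin_presses = [2, 4, 6, 8, 10, 12, 14]   # completes every ratio
codeine_presses = [2, 4, 6, 5]              # gives up at the 4th trial

print(breakpoint(heroin_presses))   # 14 -> works harder for heroin
print(breakpoint(codeine_presses))  # 6  -> lower breakpoint for codeine
```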
In principle, by identifying the molecular signature of bliss, it should be possible to multiply the cellular density of “pleasure neurons”, insert multiple extra copies of the mu opioid receptor, insert and express mu opioid receptor genes in every single neuron, and upregulate their transcription [etc] so as to engineer posthuman intensities of well-being. We thereby create the potential for hypervaluable states: states of mind valuable beyond the bounds of normal human experience.
Once we gain mastery over our reward circuitry, Derek Parfit’s “Repugnant Conclusion”, as noted by Shulman et al, is undercut. This is because world-wide maximal packing density of mind/brains [plus interactive immersive VR] doesn’t entail any tradeoff with quality of life. In principle, life abounding in hypervaluable experience can be just as feasible with a posthuman global population of 150 billion as with 15 billion.
Anyhow, critically for our pub discussion: one needn’t be a classical utilitarian to recognize we should maximise the cosmic abundance of hypervaluable states. For example, consider a community of fanatical pure mathematicians. The “peak experiences” which they strive to maximise all revolve around beautiful mathematical equations. They scorn anything that sounds like wireheading. They say they don’t care about pleasure or happiness; and indeed they sincerely don’t care about pleasure or happiness under that description, just mathematical theorem-proving, contemplating the incredible awesomeness of Euler’s identity, or whatever. Even so, with the right hedonic engineering, their baseline of well-being, their sense of the value of living as mathematicians, can be orders of magnitude richer than their previous peak experiences. Just as some depressives today can’t imagine the meaning of happiness, they (and we) can’t imagine superhappiness, even though there are strong theoretical grounds to believe it exists. If we tasted it, we’d never want to lose it. [Does this count as “coherent extrapolated volition” within the current Eliezer-inspired sense of the term?] Even our old peak experiences would seem boring if we ever troubled to recall them. [Why bother?] I predict posthuman life with redesigned reward circuitry will be unimaginably richer and unimaginably more valuable (“from the inside”) than human life, which will be discarded and forgotten.
However, in a sense the mathematicians are too conservative an example. Is there a risk that pursuing “coherent extrapolated volition” will effectively lock in mediocrity and a poverty of imagination? By analogy, what would the “coherent extrapolated volition” of Neanderthals be? Or of a bat? Or a mouse? Or a community of congenitally blind tribesmen who lack any visual concepts? By contrast, posthuman desires and preferences may transcend our human conceptual scheme altogether.
Either way, I hope you’ll grant that aiming for the most [subjectively if not objectively] valuable universe isn’t really “unfriendly” in any objective sense. True, one worries: Will a posthuman really be “me”? Well, if a chrysalis could think, should it wonder: “Will a butterfly really be me?” Should it be worrying about the nature of lepidopteran identity over time? Presumably not...
Note that the maximum feasible cosmic abundance of subjectively hypervaluable states could be realized via the actions of a SuperAsperger AGI, since the substrates of (super)value can be objectively determined. No “theory of mind” or capacity for empathetic understanding on the part of the AGI is needed. As you know, I’m sceptical that classical serial computers with a von Neumann architecture will ever be conscious, let alone have an empathetic appreciation of other conscious minds. If this architectural limitation holds (I know you disagree), creating an AMA that captured human moral psychology would be an even more formidable technical challenge.
Added: David’s websites: (H/T Tim Tyler)
http://www.wireheading.com/
http://www.utilitarianism.com/
It seems the key point Mr. Pearce is missing is that he is judging this utility function to be better than his “provincial” values by… his “provincial” values.
If he truly believes this is the best way to organize the universe, why doesn’t he believe that a CEV would spit it out?
It actually sounds to me like CEV will indeed spit it out. It will explain how a better understanding of what we are will lead us to abandon the constraints of the human experience in the search for maximizing the goodness of the universe: a scenario that we would understand if we were smarter, had grown closer together, and had a better grasp of the nature of identity, consciousness and subjective reward.
Where are you getting David’s views on the output of CEV from?
The neurology, while detailed, seems a little confused. In particular, adding mu-opioid receptors to every neuron in the brain sounds more like a recipe for epilepsy than for superhappiness.
I actually want to know that. Does anyone know if there is any Neanderthal DNA remaining? I’d like to see an FAI go all ‘Jurassic Park’ on it and recreate one.
Yes there is. http://en.wikipedia.org/wiki/Neanderthal_genome_project
Some of David’s many web sites:
http://www.hedweb.com/
http://www.wireheading.com/
http://www.utilitarianism.com/
http://www.abolitionist.com/
http://paradise-engineering.com/
If an AMA does not embody an extrapolation of ‘primitive human moral psychology’, how can we be sure that it will attribute value to what someone who does have that primitive psychology judges to be valuable, in this case a ‘cosmic abundance of hypervaluable states’?
I don’t think David Pearce argued for moral realism here. Judging from this sentence:
it’s not clear that he is even a moral realist at all. Does he talk about his meta-ethics somewhere else?
David has indicated in conversation that he is a “realist” in the sense that there are right and wrong answers about what to do with the universe.
You really need to offer an argument for at least one of these two things to make your point:
A utilitarian aiming to maximize subjectively hypervaluable states will not tile orgasmium.
It is good to tile orgasmium.
Here’s one very identifiable point of contention for me. If this quasi-wireheading is all “inwardly” directed (meaning there’s no sensory augmentation inherent in it), then it’s not clear to me why my current experiences should be any more boring than those of loqi[bliss].
Being interested in reality means my old peak experiences still have plenty of information worth recalling. Being interested in reality means I don’t swallow anti-boredom pills unless I anticipate at the very least a net gain in my entanglement with reality.
I don’t think an article with statements like this belongs on this site.
I don’t think it’s terribly useful as a throwaway comment, distracting from the rest of the argument. It’s certainly clear that the consensus on this site is the opposite (as is my opinion), but that’s not quite enough of a good reason for a blanket “it shouldn’t be on this site”. If it had a good argument behind it (not necessarily enough to overcome the good reasons to disbelieve it, but enough to at least reasonably engage those reasons), it seems vital that it be on this site.
As it happens, I don’t expect to see an argument of that form. But I’m willing to be surprised.
The content seems relevant. Can the font be fixed?