But (as far as I can tell) such a definition doesn’t explain why we aren’t micro-experiential zombies. Compare another fabulously complicated information-processing system, the enteric nervous system (“the brain in the gut”). Even if its individual membrane-bound neurons are micro-pixels of experience, there’s no phenomenally unified subject. The challenge is to explain why the awake mind-brain is different—to derive the local and global binding of our minds and the world-simulations we run (ultimately) from physics.
I wish the binding problem could be solved so simply. Information flow alone isn’t enough. Compare Eric Schwitzgebel (“If Materialism Is True, the United States Is Probably Conscious”). Even if 330 million skull-bound American minds reciprocally communicate by fast electromagnetic signalling and implement any computation you can think of, a unified continental subject of experience doesn’t somehow switch on—or at least, not without spooky “strong” emergence.
The mystery is why 86-billion-odd membrane-bound, effectively decohered classical nerve cells should be any different. Why aren’t we merely aggregates of what William James christened “mind dust”, rather than unified subjects of experience supporting local binding (individual perceptual objects) and global binding (the unity of perception and the unity of the self)?
Science doesn’t know.
What we do know is that the phenomenal binding of organic minds is insanely computationally powerful, as rare neurological deficit syndromes (akinetopsia, integrative agnosia, simultanagnosia, etc.) illustrate.
I could now speculate on possible explanations.
But if you don’t grok the mystery, they won’t be of any interest.
Forgive me, but how do “information flows” solve the binding problem?
Just a note about “mind uploading”. On pain of “strong” emergence, classical Turing machines can’t solve the phenomenal binding problem. Their ignorance of phenomenally-bound consciousness is architecturally hardwired. Classical digital computers are zombies or (if consciousness is fundamental to the world) micro-experiential zombies, not phenomenally-bound subjects of experience with a pleasure-pain axis. Speed of execution or complexity of code makes no difference: phenomenal unity isn’t going to “switch on”. Digital minds are an oxymoron.
Like the poster, I worry about s-risks. I just don’t think this is one of them.
Homunculi are real. Consider a lucid dream. When lucid, you can know that your body-image is entirely internal to your sleeping brain. You can know that the virtual head you can feel with your virtual hands is entirely internal to your sleeping brain too. Sure, the reality of this homunculus doesn’t explain how the experience is possible. Yet such an absence of explanatory power doesn’t mean that we should disavow talk of homunculi.
Waking consciousness is more controversial. But (I’d argue) even in waking life you experience only a homunculus—one that (normally) causally co-varies with the behaviour of an extra-cranial body.
It’s good to know we agree on genetically phasing out the biology of suffering!
Now for your thought-experiments. Quantitatively, given a choice between a tiny amount of suffering X plus everyone and everything else being great, or everyone dying, NUs would choose omnicide no matter how small X is?
To avoid status quo bias, imagine you are offered the chance to create a type-identical duplicate, New Omelas—again a blissful city of vast delights dependent on the torment of a single child. Would you accept or decline? As an NU, I’d say “no”—even though the child’s suffering is “trivial” compared to the immensity of pleasure to be gained. Likewise, I’d painlessly retire the original Omelas too. Needless to say, our existing world is a long way from Omelas. Indeed, if we include nonhuman animals, then our world may contain more suffering than happiness. Most nonhuman animals in Nature starve to death at an early age; and factory-farmed nonhumans suffer chronic distress. Maybe the CU should press a notional OFF button and retire life too.
A separate but related question: What if we also make it so that X doesn’t happen for sure, but rather happens with some probability? How low does that probability have to be before NUs would take the risk, instead of choosing omnicide? Is any probability too low?
You pose an interesting hypothetical that I’d never previously considered. If I could be 100% certain that NU is ethically correct, then the slightest risk of even trivial amounts of suffering is too high. However, prudence dictates epistemic humility. So I’d need to think some more before answering.
Back in the real world, I believe (on consequentialist NU grounds) that it’s best to enshrine in law the sanctity of human and nonhuman animal life. And (like you) I look forward to the day when we can get rid of suffering—and maybe forget NU ever existed.
It wasn’t a rhetorical question; I really wanted (and still want) to know your answer.
Thanks for clarifying. NU certainly sounds a rather bleak ethic. But NUs want us all to have fabulously rich, wonderful, joyful lives—just not at the price of anyone else’s suffering. NUs would “walk away from Omelas”. Reading JDP’s post, one might be forgiven for thinking that the biggest x-risk was from NUs. However, later this century and beyond, if (1) “omnicide” is technically feasible, and if (2) suffering persists, then there will be intelligent agents who would bring the world to an end to get rid of it. You too would end the world rather than undergo some kinds of suffering. By contrast, genetically engineering a world without suffering, populated only by fanatical life-lovers, will be safer for the future of sentience—even if you think the biggest threat to humanity comes from rogue AGI/paperclip-maximizers.
Do they also seek to create and sustain a diverse variety of experiences above hedonic zero?
Would the prospect of being unable to enjoy a rich diversity of joyful experiences sadden you? If so, then (other things being equal) any policy to promote monotonous pleasure is anti-NU.
Secular Buddhists, like NUs, seek to minimise and ideally get rid of all experience below hedonic zero. So does any policy option cause you even the faintest hint of disappointment? Well, other things being equal, that policy option isn’t NU. May all your dreams come true!
Anyhow, I hadn’t intended here to mount a defence of NU ethics—just to counter the poster JDP’s implication that NU is necessarily more of an x-risk than CU.
Many thanks for an excellent overview. But here’s a question. Does an ethic of negative utilitarianism or classical utilitarianism pose a bigger long-term risk to civilisation?
Naively, the answer is obvious. If granted the opportunity, NUs would e.g. initiate a vacuum phase transition, program seed AI with an NU utility function, and do anything humanly possible to bring life and suffering to an end. By contrast, classical utilitarians worry about x-risk and advocate Longtermism (cf. https://www.hedweb.com/quora/2015.html#longtermism).
However, I think the answer is more complicated. Negative utilitarians (like me) advocate creating a world based entirely on gradients of genetically programmed well-being. In my view, phasing out the biology of mental and physical pain in favour of a new motivational architecture is the most realistic way to prevent suffering in our forward light-cone. By contrast, classical utilitarians are committed, ultimately, to some kind of apocalyptic “utilitronium shockwave” – an all-consuming cosmic orgasm. Classical utilitarianism says we must maximize the cosmic abundance of pure bliss. Negative utilitarians can uphold complex life and civilisation.
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone’s lights, but good enough? So long as we don’t create more experience below “hedonic zero” in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards—perhaps even radically upwards—if recalibration is done safely, intelligently and conservatively—a big “if”, for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.
Is this too rosy a scenario?
Eli, sorry, could you elaborate? Thanks!
Eli, fair point.
Eli, it’s too quick to dismiss placing moral value on all conscious creatures as “very warm-and-fuzzy”. If we’re psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren’t going to win any Fields Medals—though chickens can recognise logical relationships and perform transitive inferences (cf. the “pecking order”). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness—illustrating how the most intense forms of consciousness don’t involve sophisticated meta-cognition.
Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well—or at the very least, not actively harming them.
“Health is a state of complete [sic] physical, mental and social well-being”: the World Health Organization definition of health. Knb, I don’t doubt that sometimes you’re right. But is phasing out the biology of involuntary suffering really too “extreme”—any more than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I also try to make the most compelling case I can for radical superlongevity and extreme superintelligence—biological, Kurzweilian and MIRI conceptions alike. Yet for a large minority of people—stretching from Buddhists to wholly secular victims of chronic depression and chronic pain disorders—dealing with suffering in one guise or another is the central issue. Recall how for hundreds of millions of people in the world today, time hangs heavy—and the prospect of intelligence-amplification without improved subjective well-being leaves them cold. So your worry cuts both ways.
Anyhow, IMO the makers of the BIOPS video have done a fantastic job. Kudos. I gather future episodes of the series will tackle different conceptions of posthuman superintelligence—not least from the MIRI perspective.
This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children’s charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. I’m deeply pessimistic that it will.
Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of “blind” panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to claim that such panic-ridden states aren’t themselves important—only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness is restored? A pig, for example, or a prelinguistic human toddler, doesn’t have the meta-cognitive capacity to self-reflect on such states. But I don’t think we are ethically entitled to induce them—any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence, in human and nonhuman animals alike.
Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the “mirror test” (cf. “Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition”, http://www.plosbiology.org/article/fetchObject.action?representation=PDF&uri=info:doi/10.1371/journal.pbio.0060202). Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as the higher primates (chimpanzees, orang-utans, bonobos, gorillas), members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.
Lumifer, should the charge of “mind-killers” be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)
You remark that “A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state.” You can stipulatively define a unified mental state in this way. But this definition is not what I (or most people) mean by “unified mental state”. Science doesn’t currently know why we aren’t (at most) just 86 billion membrane-bound pixels of experience.