A very nice post. Perhaps you might also discuss Felipe De Brigard’s “Inverted Experience Machine Argument” (http://www.unc.edu/~brigard/Xmach.pdf). To what extent does our response to Nozick’s Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?
If we really do want to “stay in touch” with reality, then we can’t wirehead or plug into an “Experience Machine”. But this constraint does not rule out radical superhappiness. By genetically recalibrating the hedonic treadmill, we could in principle enjoy rich, intelligent, complex lives based on information-sensitive gradients of bliss—eventually, perhaps, intelligent bliss orders of magnitude richer than anything physiologically accessible today. Optionally, genetic recalibration of our hedonic set-points could in principle leave much if not all of our existing preference architecture intact—defanging Nozick’s Experience Machine Argument—while immensely enriching our quality of life. Radical hedonic recalibration is also easier than, say, the idealised logical reconciliation of Coherent Extrapolated Volition, because hedonic recalibration doesn’t entail choosing between mutually inconsistent values—unless, of course, one’s values are bound up with inflicting or undergoing suffering.
IMO one big complication with discussions of “wireheading” is that our understanding of intracranial self-stimulation has changed since Olds and Milner discovered the “pleasure centres”. Taking a mu opioid agonist like heroin is in some ways the opposite of wireheading, because heroin induces pure bliss without desire (shades of Buddhist nirvana?), whereas intracranial self-stimulation of the mesolimbic dopamine system involves a frenzy of anticipation rather than pure happiness. So it’s often convenient to think of mu opioid agonists as mediating “liking” and dopamine agonists as mediating “wanting”. We have two ultimate “hedonic hotspots”, each about a cubic centimetre in size, in the rostral shell of the nucleus accumbens and the ventral pallidum (http://www.lsa.umich.edu/psych/research%26labs/berridge/publications/Berridge%202003%20Brain%20%26%20Cog%20Pleasures%20of%20brain.pdf), where mu opioid agonists play a critical signalling role. But anatomical location is critical: the mu opioid agonist remifentanil actually induces dysphoria (http://www.ncbi.nlm.nih.gov/pubmed/18801832), the opposite of what one might naively suppose.
To what extent does our response to Nozick’s Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?
I think the argument that people don’t really want to stay in touch with reality but rather want to stay in touch with their past makes a lot of sense. After all, we construct our model of reality from our past experiences. One could argue that this is another example of a substitute measure used to save computational resources: instead of caring about reality itself, we care about our memories making sense and being meaningful.
On the other hand I assume I wasn’t the only one mentally applauding Neo for swallowing the red pill.
Perhaps you might also discuss Felipe De Brigard’s “Inverted Experience Machine Argument” (http://www.unc.edu/~brigard/Xmach.pdf). To what extent does our response to Nozick’s Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?
I would argue that the reason people find the experience machine repellent is that, under Nozick’s original formulation, the machine fails to fulfill several basic human desires to which “staying in touch with reality” is usually instrumental.
The most obvious of these is social interaction with other people. Most people don’t just want an experience that is sensorily identical to interacting with other people; they want to actually interact with other people, form friendships, fall in love, and make a difference in people’s lives. If we made the experience machine multiplayer, so that a person’s friends and relatives could plug into the machine together and interact with each other, I think a much more significant percentage of the human race would want to plug in.
Other desires Nozick’s machine doesn’t fulfill include the desire to learn about the world’s history and science, the desire to have children, the desire to have an accurate memory of one’s life, and the desire to engage in contests where it is possible one will lose. If the experience machine were further “defanged” to satisfy these desires as well, I think most people would take it.
In fact, the history of human progress could be regarded as an attempt to convert the entire universe into a “defanged experience machine.”
Is simulating the experience of understanding mathematics a coherent concept?
I don’t know. I’ve had a lot of dreams where I’ve felt I understood some really cool concept, woke up, told it to someone, and when my head cleared the person told me I’d just spouted gibberish at them. So the feeling of understanding can definitely be simulated without actual understanding, but I’m not sure that’s the same thing as simulating the experience of understanding.
I wonder if thinking you understand mathematics without actually doing so counts as “simulating the understanding of mathematics.” When I was little there was a period when I thought I understood quadratic equations but had it totally wrong; is that “simulating”?
Maybe the reason it’s not really coherent is that many branches of math can be worked out and understood entirely in your head if you have a good enough memory, so an experience machine couldn’t add anything to the experience (except maybe having virtual paper to make notes on).