They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it’s like to become a mother.)
you would allow a third party to tasp you and get you addicted to wireheading
No, I wouldn’t. I required the third party to pay attention to my preferences, not just my happiness, and I’ve already stated my preference to not be wireheaded.
I can’t help but get the feeling that you have some preconceived notions about my personal views which are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one’s brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
Well, no, maybe you just believe remote stimulation of the pleasure centers of one’s brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
Well, I figure wireheading is either intrinsically addicting, by definition (what else could addiction be motivated by but pleasure?), or so close to it as to make little practical difference. There are a number of rat and mouse studies that involve sticking electrodes into the pleasure center and gaining complete control over the animal’s behavior, and the researchers never mention any rat or mouse heroically defying the stimulus through sheer force of will, which suggests very bad things for any human so situated.
There are a number of rat and mouse studies that involve sticking electrodes into the pleasure center and gaining complete control over the animal’s behavior, and the researchers never mention any rat or mouse heroically defying the stimulus through sheer force of will, which suggests very bad things for any human so situated.
Perhaps the sheer-force-of-will meters were malfunctioning in these experiments.
More seriously, let’s create a series of thought experiments, all involving actions by “Friendly” AI. (FAI. Those were scare quotes. I won’t use them again. You have been warned!) In each case, the question in the thought experiment is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.
Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping; it is strictly voluntary.
Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.
Thought experiment #3: The request for addiction treatment is irrevocable; once treated, humans do not have the option of becoming rewired.
Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.
Thought experiment #5: This eugenic intervention is in conflict with various other possible eugenic interventions which the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self-actualized happiness and against rationality.
Please, carry on with the scare quotes. Or maybe don’t use a capital F.
Apparently: “Friendly Artificial Intelligence” is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word “friendly”. However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.
As an aside, I wonder how Eliezer’s FAI is going to decide whether to use eugenics. Using the equivalent of worldwide vote doesn’t look like a good idea to me.
How about purely voluntary choice of ‘designer babies’ for your own reproduction, within guidelines set by worldwide vote? Does that sound any more like a good idea? Frankly, it doesn’t seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.
I agree that eugenics is far from the scariest thing FAI could do.
Not sure about designer babies; I don’t have any gut reaction to the issue, and a serious elicitation effort will likely just cause me to make stuff up.
Yvain wrote:
Only now are neuroscientists starting to recognize a difference between “reward” and “pleasure”, or call it “wanting” and “liking”… A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for “wanting” and “liking”, and were able to knock out either circuit without affecting the other (it was actually kind of cute: they measured the number of times the rats licked their lips as a proxy for “liking”, though of course they had a highly technical rationale behind it). When they knocked out the “liking” system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn’t show up in the MRI. Knock out “wanting”, and the rats seem to enjoy the food as much when they get it but not be especially motivated to seek it out.
That’s interesting. Hadn’t seen that. So you are suggesting that addiction as we know it for drugs etc. goes through the ‘wanting’ circuit, but wireheading would go through the ‘liking’ circuit, and so wouldn’t resemble the former?
Yvain’s post suggested it; I just stuck it in my cache.
what else could addiction be motivated by but pleasure?
Wanting is not the same thing as pleasure. The experiments that created the popular conception of wireheading were not actually stimulating the rats’ pleasure center, only the anticipation center.
Consider that there are probably many things you enjoy doing when you do them, but which you are not normally motivated to do. (Classic example: I live in Florida, but almost never go to the beach.)
Clearly, pleasure in the sense of enjoying something is not addictive. If you stimulated the part of my brain that enjoys the beach, it would not result in me perpetually pushing the button in order to continue having the pleasure.
Frankly, I suspect that if somebody invented a way to use TMS or ultrasonics to actually stimulate the pleasure center of the brain, most people would either use them once or twice and put them on the shelf, or else just use them to relax a bit after work.
Weirdly enough, most true pleasures aren’t really addictive, because you need some sort of challenge to seize the interest of your dopamine reward system. Chaotic relationships, skill development (incl. videogames), gambling… these things are addictive precisely because they’re not purely pleasurable, and this stimulates the same parts of the brain that get hit by wireheading and some drugs.
To put it another way, the rats kept pushing the button not because it gave them pleasure, but simply because it stimulated the part of their brain that made them want to push the button more. The rats probably died feeling like they were “just about to” get to the next level in a video game, or finally get back with their estranged spouse, or some other just-out-of-reach goal, rather than in orgasmic bliss.
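To make the wanting/liking split concrete, here is a minimal toy simulation (my own illustrative sketch with made-up numbers, not a model of the actual Michigan experiment) of an agent whose two circuits can be knocked out independently:

```python
import random

# Toy agent with separable "wanting" (drive to seek food) and "liking"
# (pleasure once food is obtained) circuits. The probabilities are
# arbitrary; the point is only that lesioning one circuit leaves the
# other's behavior unchanged, as in the rat study quoted above.

def run_trials(wanting_intact, liking_intact, trials=100):
    eats, lip_licks = 0, 0
    for _ in range(trials):
        # "Wanting" controls how motivated the agent is to seek the food.
        seek_prob = 0.9 if wanting_intact else 0.2
        if random.random() < seek_prob:
            eats += 1
            # "Liking" registers enjoyment (the lip-lick proxy) only
            # after the food is actually obtained.
            if liking_intact:
                lip_licks += 1
    return eats, lip_licks

for label, wanting, liking in [
    ("both intact", True, True),
    ("liking knocked out", True, False),   # eats as much, shows no enjoyment
    ("wanting knocked out", False, True),  # enjoys it, rarely seeks it
]:
    eats, licks = run_trials(wanting, liking)
    print(f"{label}: ate {eats}/100, lip-licked {licks} times")
```

On this toy model, classic wireheading (and the rats’ button-pressing) corresponds to hijacking the seek probability, not the enjoyment signal, which is why the behavior can be compulsive without being pleasurable.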