The basic idea about parents and hedonic psychology, as I understand it, is that your moment-to-moment happiness is not typically very high when you have kids, but your “tell me a story” medium/long term reflective happiness may be quite high.
Neither of those is privileged. Have you ever spent a day doing nothing but indulging yourself (watching movies, eating your favourite foods, relaxing)? If you're anything like me, you find that even though most moments during the day were pleasant, the overall experience of the day was nasty and depressing.
Basically, happiness is not an integral of moment-to-moment pleasure, so while it’s naive to say parenting is an unqualified joy, it’s not so bleak as to be only a good thing after the memories are distorted by time.
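To make the "not an integral" claim concrete, here is a toy sketch; the numbers are invented, and the peak-end weighting is used only as one illustrative alternative aggregation, not as a claim about how reflective happiness actually works:

```python
# Two ways of scoring the same day of momentary pleasure readings (0-10).
# The point is only that the two scores can disagree, not that either is "right".
indulgent_day = [6, 6, 6, 6, 6, 6]   # uniformly pleasant moments
parenting_day = [2, 3, 1, 9, 2, 4]   # mostly dull, with one great moment

def integral(moments):
    # Average (equivalently, total) of moment-to-moment pleasure.
    return sum(moments) / len(moments)

def peak_end(moments):
    # Kahneman-style peak-end rule: remembered quality tracks the best
    # moment and the final moment rather than the running total.
    return (max(moments) + moments[-1]) / 2

for name, day in [("indulgent", indulgent_day), ("parenting", parenting_day)]:
    print(name, "integral:", integral(day), "peak-end:", peak_end(day))
# indulgent integral: 6.0 peak-end: 6.0
# parenting integral: 3.5 peak-end: 6.5  <- worse integral, better reflective score
```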
As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child.
But then, my child is indisputably the most lovable child on the planet.
(welcome thread link not necessary)
Then let me just say, welcome!
I’m inclined to believe you, but note that what you said doesn’t quite contradict the hypothesis, which is that if you were not a parent, your day-wise maximum (from any source) would probably be higher.
Also, beware of attributing more power to introspection than it deserves, especially when the waters are already muddied by the normativity of parents’ love for their children. You say your happiest moments are with your child, but a graph of dopamine vs. time might (uninspiringly) show bigger spikes whenever you ate sugar. Or it might not. My point is that I’m not sure how much we should trust our own reflections on our happiness.
Fair point. So let me just state that as far as I can tell, the average of my DWMM2M (day-wise maximum moment-to-moment) happiness is higher than it was before my child was born, and I expect that in a counterfactual world where my spouse and I didn't want a child and consequently didn't have one, my DWMM2M happiness would not be as great as in this one. It's just that knowing what I know (including what I've learned from this site) and having been programmed by evolution to love a stupendous badass (and that stupendous badass having been equally programmed to love me back), I find that watching that s.b. unfold into a human before my eyes causes me happiness of a regularity and intensity that I personally have never experienced before.
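As a minimal sketch of the quantity being averaged here (all numbers invented): for each day take the maximum moment-to-moment happiness sample, then average those daily maxima.

```python
# "DWMM2M" made explicit: per day, take the maximum moment-to-moment
# happiness sample, then average those daily maxima. Numbers are invented.
samples = {
    "mon": [3, 5, 8, 4],
    "tue": [2, 9, 3, 3],
    "wed": [4, 4, 7, 5],
}
daily_maxima = [max(day) for day in samples.values()]     # [8, 9, 7]
average_dwmm2m = sum(daily_maxima) / len(daily_maxima)    # 8.0
print(average_dwmm2m)
```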
I would mischievously point out that things like the oxytocin released after childbirth ought to make us especially wary of bias when it comes to kids. After all, there is no area of our life that evolution could be more concerned about than the kids. (Even your life is worth less than a kid or two, arguably, from its POV.)
That oxytocin &c. causes us to bond with and become partial to our children does not make any causally subsequent happiness less real.
So, then, you would wirehead? It seems to me to be the same position.
I wouldn’t: I have preferences about the way things actually are, not just how they appear to me or what I’m experiencing at any given moment.
So that use of oxytocin (and any other fun little biases and sticks and carrots built into us) is a ‘noble lie’, justified by its results?
In keeping with the Niven theme: you would not object, then, to being tasped by a third party solicitous of your happiness?
Er, what? Please draw a clearer connection between the notion of having preferences over the way things actually are and the notion that our evolutionarily constructed bias/carrot/stick system is a ‘noble lie’.
I’m not categorically against being tasped by a third party, but I’d want that third party to pay attention to my preferences, not merely my happiness. I’d also require the third party to be more intelligent than the most intelligent human who ever existed, and not by a small margin either.
Alright, I'll put it another way. You seem very cavalier about having your utility-function/preferences modified without your volition. You defend a new mother's utility-function/preferences being modified by oxytocin, and in this comment you would allow a third party to tasp you and get you addicted to wireheading. When exactly are such involuntary manipulations permitted?
They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it’s like to become a mother.)
No, I wouldn’t. I required the third party to pay attention to my preferences, not just my happiness, and I’ve already stated my preference to not be wireheaded.
I can’t help but get the feeling that you have some preconceived notions about my personal views which are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one’s brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
Well, I figure wireheading is either intrinsically addicting by definition (what else could addiction be motivated by but pleasure?) or so close to it as to make little practical difference. There are a number of rat and mouse studies that involve sticking electrodes into the pleasure center and gaining complete control over the animal, and the researchers don't mention any rat or mouse ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any human so situated.
Perhaps the sheer-force-of-will meters were malfunctioning in these experiments.
More seriously, let's create a series of thought experiments, all involving actions by "Friendly" AI. (FAI. Those were scare quotes. I won't use them again. You have been warned!) In each case, the question in the thought experiment is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.
Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping; it is strictly voluntary.
Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.
Thought experiment #3: The request for addiction treatment is irrevocable; once treated, humans do not have the option of becoming rewired.
Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.
Thought experiment #5: This eugenic intervention is in conflict with various other possible eugenic interventions which the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self-actualized happiness and against rationality.
Please, carry on with the scare quotes. Or maybe don’t use a capital F.
Apparently: “Friendly Artificial Intelligence” is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word “friendly”. However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.
As an aside, I wonder how Eliezer's FAI is going to decide whether to use eugenics. Using the equivalent of a worldwide vote doesn't look like a good idea to me.
How about a purely voluntary choice of 'designer babies' for your own reproduction, within guidelines set by a worldwide vote? Does that sound any more like a good idea? Frankly, it doesn't seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.
I agree that eugenics is far from the scariest thing FAI could do.
Not sure about designer babies; I don't have any gut reaction to the issue, and a serious elicitation effort will likely cause me to just make stuff up.
Yvain wrote:

Only now neuroscientists are starting to recognize a difference between "reward" and "pleasure", or call it "wanting" and "liking"… A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute—they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn't show up in the MRI. Knock out "wanting", and the rats seem to enjoy the food as much when they get it but not be especially motivated to seek it out.
That’s interesting. Hadn’t seen that. So you are suggesting that addiction as we know it for drugs etc. is going through the ‘wanting’ circuit, but wireheading would go through the ‘liking’ circuit, and so wouldn’t resemble the former?
Yvain’s post suggested it; I just stuck it in my cache.
Wanting is not the same thing as pleasure. The experiments that created the popular conception of wireheading were not actually stimulating the rats’ pleasure center, only the anticipation center.
Consider that there are probably many things you enjoy doing when you do them, but which you are not normally motivated to do. (Classic example: I live in Florida, but almost never go to the beach.)
Clearly, pleasure in the sense of enjoying something is not addictive. If you stimulated the part of my brain that enjoys the beach, it would not result in me perpetually pushing the button in order to continue having the pleasure.
Frankly, I suspect that if somebody invented a way to use TMS or ultrasonics to actually stimulate the pleasure center of the brain, most people would either use them once or twice and put them on the shelf, or else just use them to relax a bit after work.
Weirdly enough, most true pleasures aren't really addictive, because you need some sort of challenge to seize the interest of your dopamine reward system. Chaotic relationships, skill development (incl. videogames), gambling… these things are addictive precisely because they're not purely pleasurable, and they stimulate the same parts of the brain that get hit by wireheading and some drugs.
To put it another way, the rats kept pushing the button not because it gave them pleasure, but simply because it stimulated the part of their brain that made them want to push the button more. The rats probably died feeling like they were “just about to” get to the next level in a video game, or finally get back with their estranged spouse, or some other just-out-of-reach goal, rather than in orgasmic bliss.
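A toy cartoon of that dissociation, with "wanting" and "liking" as two separate signals and the lever wired only to "wanting"; the parameters are invented and this is not a model of the cited studies, just an illustration of how pressing can persist without any enjoyment:

```python
import random

# Cartoon of the wanting/liking dissociation: the lever drives "wanting"
# (motivation to press again) without touching "liking" (enjoyment).
# Parameters are invented; this is an illustration, not a model of the studies.
def run_rat(lever_boosts_wanting=True, lever_boosts_liking=False, steps=1000):
    wanting, liking_total, presses = 0.5, 0.0, 0
    for _ in range(steps):
        if random.random() < wanting:            # pressing tracks "wanting" only
            presses += 1
            if lever_boosts_wanting:
                wanting = min(1.0, wanting + 0.05)
            if lever_boosts_liking:
                liking_total += 1.0               # enjoyment, if the lever provided any
        wanting = max(0.05, wanting - 0.01)       # motivation decays between stimulations
    return presses, liking_total

random.seed(0)
print(run_rat())                                  # many presses, zero accumulated "liking"
print(run_rat(lever_boosts_wanting=False, lever_boosts_liking=True))
# far fewer presses, but each press actually enjoyed
```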
Hm… not obviously so. Any reductionist explanation of happiness from any source is going to end up mentioning hormones & chemicals in the brain, but it doesn’t follow that wanting happiness (& hence wanting the attendant chemicals) = wanting to wirehead.
I struggle to articulate my objection to wireheading, but it has something to do with the shallowness of pleasure that is totally non-contingent on my actions and thoughts. It is definitely not about some false dichotomy between “natural” and “artificial” happiness; after all, Nature doesn’t have a clue what the difference between them is (nor do I).
Certainly not, but we do need to understand utility functions and their modification; if we don't, then bad things might happen. For example (I steal this example from EY), a 'FAI' might decide to be Friendly by rewiring our brains to simply be really, really happy no matter what, and paperclip the rest of the universe. To most people this would be a bad outcome, which is an intuitive argument that there are good and bad kinds of happiness, and that the distinctions probably have something to do with properties of the external world.
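A toy sketch of that last distinction (outcomes and numbers invented for illustration): an agent whose preferences refer to the external world ranks the "maximally happy no matter what" outcome below the status quo, even though a pure happiness signal ranks it above.

```python
# Contrast: preferences over how the world actually is vs. a reported-happiness
# signal that could simply be maxed out. Outcomes and numbers are invented.
outcomes = {
    "status quo":            {"world_intact": True,  "reported_happiness": 6},
    "wirehead + paperclips": {"world_intact": False, "reported_happiness": 10},
}

def reported_happiness(o):
    return o["reported_happiness"]

def my_utility(o):
    # Preferences that refer to the external world: a maxed-out happiness
    # dial does not compensate for losing everything else out there.
    return o["reported_happiness"] + (100 if o["world_intact"] else 0)

print(max(outcomes, key=lambda k: reported_happiness(outcomes[k])))  # wirehead + paperclips
print(max(outcomes, key=lambda k: my_utility(outcomes[k])))          # status quo
```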