The researchers wrote up their findings on the lottery winners and the accident victims in the Journal of Personality and Social Psychology. The paper is now considered one of the founding texts of happiness studies, a field that has yielded some surprisingly morose results. It’s not just hitting the jackpot that fails to lift spirits; a whole range of activities that people tend to think will make them happy—getting a raise, moving to California, having kids—do not, it turns out, have that effect. (Studies have shown that women find caring for their children less pleasurable than napping or jogging and only slightly more satisfying than doing the dishes.)
http://www.newyorker.com/arts/critics/books/2010/03/22/100322crbo_books_kolbert?currentPage=all
(Glad I kept this citation; knew at some point I would run into someone claiming parenthood is a joy. Wish I had the one that said parenthood was a net gain in happiness only years/decades later, after the memories have been distorted enough.)
The basic idea about parents and hedonic psychology, as I understand it, is that your moment-to-moment happiness is not typically very high when you have kids, but your “tell me a story” medium/long term reflective happiness may be quite high.
Neither of those is privileged. Have you ever spent a day doing nothing but indulging yourself (watching movies, eating your favourite foods, relaxing)? If you’re anything like me, you find that even though most moments during the day were pleasant, the overall experience of the day was nasty and depressing.
Basically, happiness is not an integral of moment-to-moment pleasure, so while it’s naive to say parenting is an unqualified joy, it’s not so bleak as to be only a good thing after the memories are distorted by time.
As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child.
But then, my child is indisputably the most lovable child on the planet.
(welcome thread link not necessary)
Then let me just say, welcome!
I’m inclined to believe you, but note that what you said doesn’t quite contradict the hypothesis, which is that if you were not a parent, your day-wise maximum (from any source) would probably be higher.
Also, beware of attributing more power to introspection than it deserves, especially when the waters are already muddied by the normativity of parents’ love for their children. You say your happiest moments are with your child, but a graph of dopamine vs. time might (uninspiringly) show bigger spikes whenever you ate sugar. Or it might not. My point is that I’m not sure how much we should trust our own reflections on our happiness.
note that what you said doesn’t quite contradict the hypothesis
Fair point. So let me just state that as far as I can tell, the average of my DWMM2M happiness is higher than it was before my child was born, and I expect that in a counterfactual world where my spouse and I didn’t want a child and consequently didn’t have one, my DWMM2M happiness would not be as great as in this one. It’s just that knowing what I know (including what I’ve learned from this site) and having been programmed by evolution to love a stupendous badass (and that stupendous badass having been equally programmed to love me back), I find that watching that s.b. unfold into a human before my eyes causes me happiness of a regularity and intensity that I personally have never experienced before.
My point is that I’m not sure how much we should trust our own reflections on our happiness.
I would mischievously point out that things like the oxytocin released after childbirth ought to make us especially wary of bias when it comes to kids. After all, there is no area of our life that evolution could be more concerned about than the kids. (Even your life is worth less than a kid or two, arguably, from its POV.)
That oxytocin &c. causes us to bond with and become partial to our children does not make any causally subsequent happiness less real.
So, then, you would wirehead? It seems to me to be the same position.
I wouldn’t: I have preferences about the way things actually are, not just how they appear to me or what I’m experiencing at any given moment.
So that use of oxytocin (and any other fun little biases and sticks and carrots built into us) is a ‘noble lie’, justified by its results?
In keeping with the Niven theme, so, then you would not object to being tasped by a third party solicitous of your happiness?
Er, what? Please draw a clearer connection between the notion of having preferences over the way things actually are and the notion that our evolutionarily constructed bias/carrot/stick system is a ‘noble lie’.
I’m not categorically against being tasped by a third party, but I’d want that third party to pay attention to my preferences, not merely my happiness. I’d also require the third party to be more intelligent than the most intelligent human who ever existed, and not by a small margin either.
Alright, I’ll put it another way. You seem very cavalier about having your utility-function/preferences modified without your volition. You defend a new mother’s utility-function/preferences being modified by oxytocin, and in this comment you would allow a third party to tasp you and get you addicted to wireheading. When exactly are such involuntary manipulations permitted?
They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it’s like to become a mother.)
you would allow a third party to tasp you and get you addicted to wireheading
No, I wouldn’t. I required the third party to pay attention to my preferences, not just my happiness, and I’ve already stated my preference to not be wireheaded.
I can’t help but get the feeling that you have some preconceived notions about my personal views which are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one’s brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
Well, no, maybe you just believe remote stimulation of the pleasure centers of one’s brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
Well, I figure wireheading is either intrinsically addicting, by definition (what else could addiction be motivated by but pleasure?), or so close to it as to make little practical difference; there are a number of rat and mouse studies which entail sticking electrodes into the pleasure center and gaining complete control over the animal, and the researchers don’t mention any mouse or rat ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any human so situated.
there are a number of rat and mouse studies which entail sticking electrodes into the pleasure center and gaining complete control over the animal, and the researchers don’t mention any mouse or rat ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any human so situated.
Perhaps the sheer-force-of-will meters were malfunctioning in these experiments.
More seriously, let’s create a series of thought experiments, all involving actions by “Friendly” AI. (FAI. Those were scare quotes. I won’t use them again. You have been warned!). In each case, the question in the thought experiment is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.
Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping, it is strictly voluntary.
Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.
Thought experiment #3: The request for addiction treatment is irrevocable: once treated, humans do not have the option of becoming rewired.
Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.
Thought experiment #5: This eugenic intervention is in conflict with various other possible eugenic interventions which the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self-actualized happiness and against rationality.
Please, carry on with the scare quotes. Or maybe don’t use a capital F.
Apparently: “Friendly Artificial Intelligence” is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word “friendly”. However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.
As an aside, I wonder how Eliezer’s FAI is going to decide whether to use eugenics. Using the equivalent of worldwide vote doesn’t look like a good idea to me.
How about purely voluntary choice of ‘designer babies’ for your own reproduction, within guidelines set by worldwide vote? Does that sound any more like a good idea? Frankly, it doesn’t seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.
I agree that eugenics is far from the scariest thing FAI could do.
Not sure about designer babies, I don’t have any gut reaction to the issue, and a serious elicitation effort will likely cause me to just make stuff up.
Yvain wrote:
Only now neuroscientists are starting to recognize a difference between “reward” and “pleasure”, or call it “wanting” and “liking”… A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for “wanting” and “liking”, and were able to knock out either circuit without affecting the other (it was actually kind of cute—they measured the number of times the rats licked their lips as a proxy for “liking”, though of course they had a highly technical rationale behind it). When they knocked out the “liking” system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn’t show up in the MRI. Knock out “wanting”, and the rats seem to enjoy the food as much when they get it but not be especially motivated to seek it out.
That’s interesting. Hadn’t seen that. So you are suggesting that addiction as we know it for drugs etc. is going through the ‘wanting’ circuit, but wireheading would go through the ‘liking’ circuit, and so wouldn’t resemble the former?
Yvain’s post suggested it; I just stuck it in my cache.
what else could addiction be motivated by but pleasure?
Wanting is not the same thing as pleasure. The experiments that created the popular conception of wireheading were not actually stimulating the rats’ pleasure center, only the anticipation center.
Consider that there are probably many things you enjoy doing when you do them, but which you are not normally motivated to do. (Classic example: I live in Florida, but almost never go to the beach.)
Clearly, pleasure in the sense of enjoying something is not addictive. If you stimulated the part of my brain that enjoys the beach, it would not result in me perpetually pushing the button in order to continue having the pleasure.
Frankly, I suspect that if somebody invented a way to use TMS or ultrasonics to actually stimulate the pleasure center of the brain, most people would either use them once or twice and put them on the shelf, or else just use them to relax a bit after work.
Weirdly enough, most true pleasures aren’t really addictive, because you need some sort of challenge to seize the interest of your dopamine reward system. Chaotic relationships, skill development (incl. videogames), gambling… these things are addictive precisely because they’re not purely pleasurable, and this stimulates the same parts of the brain that get hit by wireheading and some drugs.
To put it another way, the rats kept pushing the button not because it gave them pleasure, but simply because it stimulated the part of their brain that made them want to push the button more. The rats probably died feeling like they were “just about to” get to the next level in a video game, or finally get back with their estranged spouse, or some other just-out-of-reach goal, rather than in orgasmic bliss.
Hm… not obviously so. Any reductionist explanation of happiness from any source is going to end up mentioning hormones & chemicals in the brain, but it doesn’t follow that wanting happiness (& hence wanting the attendant chemicals) = wanting to wirehead.
I struggle to articulate my objection to wireheading, but it has something to do with the shallowness of pleasure that is totally non-contingent on my actions and thoughts. It is definitely not about some false dichotomy between “natural” and “artificial” happiness; after all, Nature doesn’t have a clue what the difference between them is (nor do I).
It is definitely not about some false dichotomy between “natural” and “artificial” happiness; after all, Nature doesn’t have a clue what the difference between them is (nor do I).
Certainly not, but we do need to understand utility functions and their modification; if we don’t, then bad things might happen. For example (I steal this example from EY), a ‘FAI’ might decide to be Friendly by rewiring our brains to simply be really really happy no matter what, and paperclip the rest of the universe. To most people, this would be a bad outcome, and is an intuitive argument that there are good and bad kinds of happiness, and the distinctions probably have something to do with properties of the external world.
I’m not going to claim having children is “rational”, but to judge it by the happiness of “caring for children” is about the same as judging the quality of food by the enjoyment of doing the dishes. This is very one-dimensional.
Moreover, I actually think it’s foolish to use any kind of logical process (such as reading this study) to make decisions in this area except in extreme circumstances, such as not having enough money or having genetic diseases.
The reason for my attitude is that I think, besides the positive upsides to having kids (there are many, if you’re lucky), there is a huge aspect of regret minimization involved; it seems to me Nature chose a stick rather than a carrot here.
ETA: I should perhaps say a short-term carrot and a long-term stick.
Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the “decide to pay”/”decide to care for children” if it had the right decision theory before the “rescue”/”copy to next generation”.
I see the parallelism. If you ask me, though, I would say that it’s not a Parfitian filter, but a prototypical example of a filter to demonstrate that the idea of a filter is valid.
Perhaps I am being obtuse. Let me try to articulate a third filter, and get your reasoning on whether it is Parfitian or not.
As it happens, there exist certain patterns in nature which may be reliably counted upon to correlate with decision-theory-relevant properties. One example is the changing color of ripening fruit. Now, species with decision theories that attribute significance to these patterns will be more successful at propagating than those that do not, and therefore will be more widespread. This is a filter. Is it Parfitian?
No, because a self-interested agent could regard it as optimal to judge based on that pattern by only looking at causal benefits (CaMELs) to itself. In contrast, an agent could only regard it as optimal to care for offspring (to the extent we observe in parents) based on considering SAMELs, or having a utility function contorted to the point that its actions could more easily be explained by reference to SAMELs.
Let me try to work this out again, from scratch. A Parfit’s Hitchhiker scenario involves the following steps, in order:
1. Omega examines the agent.
2. Omega offers the agent the deal.
3. The agent accepts the deal.
4. Omega gives the agent utility.
5. The agent gives Omega utility.
Parenthood breaks this chain in two ways: first, the “Omega” in step 2 is not the “Omega” in step 4, and neither of these are the “Omega” in step 5; and second, step 1 never occurs. Remember, “natural selection” isn’t an agent—it’s a process, like supply and demand, that necessarily happens.
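For concreteness, here is a minimal Python sketch (my own construction, not anything from the thread) of the five-step chain above, with a perfect predictor standing in for Omega; the names Agent, will_pay and so on are illustrative assumptions.

```python
# Toy model of the five steps listed above; Omega's examination in step 1
# looks only at the agent's "paying" disposition.
class Agent:
    def __init__(self, will_pay: bool):
        self.will_pay = will_pay   # the agent's disposition / decision theory
        self.rescued = False
        self.money = 100

class Omega:
    def examine(self, agent: "Agent") -> bool:   # step 1: Omega examines the agent
        return agent.will_pay                    # perfect prediction of step 5

    def offer_and_rescue(self, agent: "Agent") -> None:
        if self.examine(agent):                  # steps 2-3: deal offered and accepted
            agent.rescued = True                 # step 4: Omega gives the agent utility

def settle(agent: "Agent") -> None:
    if agent.rescued and agent.will_pay:
        agent.money -= 10                        # step 5: the agent gives Omega utility

payer, refuser = Agent(True), Agent(False)
for a in (payer, refuser):
    Omega().offer_and_rescue(a)
    settle(a)

print(payer.rescued, refuser.rescued)  # True False: only the paying disposition
                                       # ever makes it out of the desert
```

On this toy model the coupling between the rescue and the later payment runs entirely through the examination in step 1, which is the step the comment above argues has no analogue in parenthood.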
Consider, for contrast, division of labor. (Edit: The following scenario is malformed. See followup comment, below.) Let’s say that we have Ag, the agent, and Om, the Omega, in the EEA. Om wants to hunt, but Om has children.
1. Om examines Ag and comes to the conclusion that Ag will cooperate.
2. Om asks Ag to watch Om’s children while on the hunt, in exchange for a portion of the proceeds.
3. Ag agrees.
4. Ag watches Om’s children while Om hunts.
5. Om returns successful, and gives Ag a share of the bounty.
Here, all five steps occur in order, Om is Om throughout and Ag is Ag throughout, and both Om and Ag gain utility (meat, in this case) by the exchange.
Why does it matter that the Omegas are different? (I dispute that they are, but let’s ignore that for now.) The parallel only requires functional equivalence to “whatever Omega would do”, not Omega’s identity persistence. (And indeed Parfit’s other point was that the identity distinction is less clear than we might think.)
Why does it matter that natural selection isn’t an agent? All that’s necessary is that it be an optimization process—Omega’s role in the canonical PH would be no different if it were somehow specified to “just” be an optimization process rather than an agent.
What is the purpose of the EEA DoL example? It removes a critical aspect of PH and Parfitian filters—that optimality requires recognition of SAMELs. Here, if Ag doesn’t watch the children, Om sees this and can withhold the share of the bounty. If Ag could only consider CaMELs (and couldn’t have anything in its utility function that sneaks in recognition of SAMELs), Ag would still see why it should care for the children.
First: yes, I have the scenario wrong—correct would be to switch Ag and Om, and have:
1. Om examines Ag and comes to the conclusion that Ag will cooperate.
2. Om offers to watch Ag’s children while Ag hunts, in exchange for a portion of the proceeds.
3. Ag agrees.
4. Om watches Ag’s children while Ag hunts.
5. Ag returns successful, and gives Om a share of the bounty.
In this case, Om has already given Ag utility—the ability to hunt—on the expectation that Ag will give up utility—meat—at a later time. I will edit in a note indicating the erroneous formulation in the original comment.
Second: what we are comparing are cases where an agent gives no utility to cooperating with Omega, but uses a decision theory that does so because it boosts the agent’s utility (e.g. the prototypical case) and cases where the agent gives positive utility to cooperating with Omega (e.g. if the agent and Omega were the same person and the net change is sufficiently positive). What we need to do to determine if the isomorphism with Parfit’s hitchhiker is sufficient is to identify a case where the agent’s actions will differ.
It seems to me that in the latter case, the agent will give utility to Omega even if Omega never gives utility to the agent. Parfit’s hitchhikers do not give money to Nomega, the predictor agent who wasn’t at the scene and never gave them a ride—they only give money when the SAMEL is present. Therefore: if a parent is willing to make sacrifices when their parent didn’t, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
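To make the proposed test concrete, here is a tiny Python sketch (my own framing, not the commenters') of how the two explanations come apart on the Nomega case:

```python
# The discriminating case: a beneficiary who never provided the rescue / SAMEL.
def sacrifices(values_beneficiary: bool, samel_present: bool) -> bool:
    if values_beneficiary:       # Theory-2a-style agent: terminal value for the other
        return True              # sacrifices whether or not a SAMEL exists
    return samel_present         # SAMEL-driven agent: sacrifices only when
                                 # "if it did not regard this as optimal, it would not be here"

for samel in (True, False):
    print(samel, sacrifices(True, samel), sacrifices(False, samel))
# True  True True   -- both explanations predict paying Omega / caring for children
# False True False  -- only the terminal-value agent pays Nomega, or sacrifices
#                      for a child when its own parent never did
```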
I’m not sure I understand all the steps in your reasoning, but I think I can start by responding to your conclusion:
Therefore: if a parent is willing to make sacrifices when their parent didn’t, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
As best I can understand you, yes. If there’s, e.g., a species that does not care for its young, and then one day one of them does, that action would not be best explained by its recognition (or acting as if it had recognition) of a SAMEL (because there was no “AM”) -- it would have to be chalked up to some random change in its psychology.
However—and this is the important part—by making that choice, and passing the genes partly responsible for that choice into the next generation, it opens up the possibility of exploring a new part of the “organism design space”: the part which is improved by modifications predicated on some period of parent-child care [1].
If that change, and further moves into that attractor [2], improve fitness, then future generations will care for their children, with the same psychological impetus as the first one. They feel as if they just care about their children, not that they have to act on a SAMEL. However, 2b remains a superior explanation because it makes fewer assumptions (aside from requiring the organism to have the mutation in the first place, which is part of the filter); 2b needn’t assume that the welfare of the child is a terminal value.
And note that the combined phenomena do produce functional equivalence to recognition of a SAMEL. If the care-for-children mode enhances fitness, then it is correct to say, “If the organism in the n-th generation after the mutation did not regard it as optimal to care for the (n+1)-th generation, it would not be here”, and it is correct to say that that phenomenon is responsible for the organism’s decision (such as it is a decision) to care for its offspring. Given these factors, an organism that chooses to care for its offspring is acting equivalently to one motivated by the SAMEL. Thus, 2b can account for the same behavior with fewer assumptions.
As for the EEA DoL arrangement (if the above remarks haven’t screened off the point you were making with it): Om can still, er, withhold the children. But let’s ignore that possibility on grounds of Least Convenient Possible World. Even so, there are still causal benefits to Ag keeping up its end—the possibility of making future such arrangements. But let’s assume that Ag can still come out ahead by stiffing Om.
In that case, yes, Ag would have to recognize SAMELs to justify paying Om. I’d go on to make the normal point about Ag having already cleaved itself off into the world where there are fewer Om offers if it doesn’t see this SAMEL, but honestly, I forgot the point behind this scenario so I’ll leave it at that.
(Bitter aside: I wish more of the discussion for my article was like this, rather than being 90% hogged by unrelated arguments about PCT.)
[1] Jaron Lanier refers to this replication mode as “neoteny”, which I don’t think is the right meaning of the term, but I thought I’d mention it because he discussed the importance of a childhood period in his manifesto that I just read.
[2] I maybe should have added in the article that the reasoning “caring for children = good for fitness” only applies to certain path-dependent domains of attraction in the design space, and doesn’t hold for all organisms.
This may not be my true objection (I think it is abundantly clear at this point that I am not adept at identifying my true objections), but I just don’t understand your objection to 2a. As far as I can tell, it boils down to “never assume that an agent has terms in its utility functions for other agents”, but I’m not assuming—there is an evolutionary advantage to having a term in your utility function for your children. By the optimization criteria of evolution, the only reason not to support a child is if you are convinced that the child is either not related or an evolutionary dead-end (at which point it becomes “no child of mine” or some such). In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.
(Regarding my hypothetical, I was merely trying to demonstrate that I understood the nature of the hypothetical—it has no further significance.)
your objection to 2a. As far as I can tell, it boils down to “never assume that an agent has terms in its utility functions for other agents”,
No, my objection is: “never assume more terminal values (terms in UF) than necessary”, and I’ve shown how you can get away with not assuming that parents terminally value their children—just as a theoretical exercise of course, and not to deny the genuine heartfelt love that parents have for their children.
but I’m not assuming—there is an evolutionary advantage to having a term in your utility function for your children.
There is an evolutionary advantage to having a cognitive system that outputs the action “care for children even at cost to self”. At a psychological level, this is accomplished by the feelings of “caring” and “love”. But is that love due to a utility function weighting, or to a decision theory that (acts as if it recognizes) SAMELs? The mere fact of the psychology, and of the child-favoring acts, does not settle this. (Recall the problem of how an ordering of outcomes can be recast as any combination of utility weightings and probabilities.)
You can account for the psychological phenomenon more parsimoniously [1] by assuming the action results from choice-machinery that implicitly recognizes SAMELs—and on top of that, get a bonus explanation of why a class of reasoning (moral reasoning) feels different—it’s the kind that mustn’t be convinced by the lack of a causal benefit to the self.
In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.
My version is precisely written to exclude contracts—the ideal PH inferences still go through, and so natural selection (which I argue is a PF) is sufficiently similar. If genes don’t “attach” themselves to a child-favoring decision theory, they simply don’t get “rescued” into the n-th generation of their existence. No need to find an isomorphism to a contract.
[1] Holy Shi-ite—that’s three p-words with a different initial consonant sound!
Why does the cognitive system that identifies SAMELs fire when you have a child? The situation is not visibly similar to that of Parfit’s hitchhiker. Unless you are suggesting that parenthood simply activates the same precommitment mechanism that the decision theory uses when Parfit-hitchhiking...?
I don’t understand the point of these questions. You’re stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?
A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities and the birth of a child is merely a change in the state of the world.
Nonetheless, utility mapping functions can change as a result of information which doesn’t betoken a change in the state-of-the-world, but merely in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observation of their baby’s first smile. The world has not changed, but somehow your place within it has.
See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
I wasn’t trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, “How does the birth of a child trigger a particular special cognitive function?”. My answer was that it doesn’t. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.
If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
“Perplexed’s tweezers” suggests that I shouldn’t put too much trust in explanations (SAMELs, in this case) that I don’t really understand.
Okay, but if your preferred explanatory framework is strictly worse per the MML formalism (equivalent to rationalist Occam’s razor), then that would be a reason that my explanation is preferred.
You claim that my explanation fails by this metric:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn’t penalize it under Occam’s Razor, because that must be assumed in both cases, so there’s no net penalty for 2b—implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
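As a purely illustrative aside, the claim that 2a and 2b can yield the same observed actions can be shown with a toy model in Python; the numbers and the crude stand-in for natural selection are my own assumptions, not anything from the article.

```python
import random

def agent_2a(cost_to_self: float, benefit_to_child: float, child_weight: float = 2.0) -> bool:
    # Theory 2a: the utility function itself weights both self and child.
    return child_weight * benefit_to_child - cost_to_self > 0

def make_agent_2b():
    # Theory 2b: the utility function is selfish; the decision *rule* is whatever
    # survived selection. Rules whose bearers never cared for offspring are
    # filtered out before any agent gets to "decide" anything.
    candidate_rules = [
        lambda cost, benefit: False,                      # never care for the child
        lambda cost, benefit: 2.0 * benefit - cost > 0,   # care when fitness-positive
    ]
    surviving = [rule for rule in candidate_rules if rule(1.0, 1.0)]
    return random.choice(surviving)

agent_2b = make_agent_2b()
for cost, benefit in [(1.0, 1.0), (3.0, 1.0), (1.0, 5.0)]:
    print(agent_2a(cost, benefit), agent_2b(cost, benefit))  # identical outputs
```

Both agents output the same care/don't-care decisions on every input, which is the sense in which watching behavior alone can't tell you whether the child sits in the utility function (2a) or only in the filter on decision theories (2b).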
But to be honest, I’m losing track of the point being established by your objections (for which I apologize), so I’d appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.
[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it’s either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
You claim that my explanation fails by this metric:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
Without Occam, I have to fall back on my second objection, the one I facetiously named “Perplexed’s tweezers”. I simply don’t understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?
If you made this clear already and I failed to pick up on it, I apologize.
Ok, I accept your argument that Occam is neutral between you and me.
Hold on—that’s not what I said. I said that it was neutral on the issue of including “they can only use decision theories that could survive natural selection”. I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.
SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
It doesn’t matter. They (inclusive fitness and Hamilton’s rule) have to be assumed (or implied by something that has to be assumed) anyway, because we’re dealing with people, so they’ll add the same complexity to both explanations.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
As I’ve explained to you several times, looking at actions does not imply a unique utility function, so you can’t claim that you’ve measured it just by looking at their actions. The utility functions “I care about myself and my child” and “I care about myself” can produce the same actions, as I’ve demonstrated, because certain (biologically plausible) decision theories can output the action “care for child at expense of self”, even in the absence of a causal benefit to the self.
I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn’t see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.
You more than made up for it with the Parfit’s robot idea, though :-)
We are clearly talking past each other, and it does not seem to me that it would be productive to continue.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
… You more than made up for it with the Parfit’s robot idea, though.
It wasn’t my idea. It was timtyler’s. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
Too many SAMELs and CAMELs for me. I didn’t even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn’t sound very interesting; I can’t be bothered. Retrospectively, I do now get the bit in the summary—if that is what it is all about. I could probably weigh in on how parental care works in mammals—but without absorbing all the associated context, I doubt I would be contributing positively.
Thanks for the robot credit. It doesn’t feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence—so substituting in a machine seems very natural.
Anyway, we don’t want you on too different a page—even if it does produce nice stories about the motivations of stranded hitch-hikers.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, as I just defined responsive.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
Nothing I’ve described requires doing anything differently than Pearl’s kind of counterfactual surgery. For example, see EY’s exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.
And that’s really the crux of it: The trick in TDT—and explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent’s decision ranking, and claim it was from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:
they prefer the apple to the orange, and believe they have 100% chance of getting what they reach for (pure value-based decision)
are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
or anything in between.
TDT, then, doesn’t need to posit additional values (like “honor”) -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
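A small numerical illustration of the apple/orange point, with numbers invented purely for the example:

```python
def expected_utility(p_success: float, utility: float) -> float:
    return p_success * utility

# Story 1: pure values -- the apple is preferred, success is certain either way.
story1 = {"apple": expected_utility(1.0, 10.0), "orange": expected_utility(1.0, 5.0)}

# Story 2: pure beliefs -- the fruits are valued equally, but reaching for the
# apple is believed more likely to succeed.
story2 = {"apple": expected_utility(0.9, 7.0), "orange": expected_utility(0.45, 7.0)}

for story in (story1, story2):
    print(max(story, key=story.get))  # "apple" both times: the observed reach
                                      # cannot separate values from beliefs
```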
The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like “love”) that is psychologically responsible for the agent’s decision. My point is that this “love” action is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short, the agent feels the love, the love motivates the behavior (psychologically); and, as a group, the set of feelings explainable through SAMELs feel different than other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding “revealed preference”, you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding “revealed preference”, I find that not only do we disagree as to what the phrase means, I suspect we also are both wrong. This “revealed preference” dispute is such a mess that I really don’t want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.
As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
I’m asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other’s writing. It was this last possibility I was probing with that last question.
As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
Okay, but by the same token, there’s no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)
None of the things you’re pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.
So you’re agreeing with me in this one respect? (I don’t mean to sound confrontational, I just want to make sure you didn’t reverse something by accident.)
The pattern of “not wanting children, but then wanting to spend resources to care for the children” is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn’t mean that the adaptation works on SAMEL patterns—the ability of Parfit’s hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.
How does “not wanting children, but then wanting to spend resources to care for the children” involve SAMELs in a way that wanting to have children does not?
Yes, you can explain people’s pursuit of goals by the reasons they give. The problem is that this isn’t the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without these—and I think I’ve shown you can—you’re left with a superior explanation.
The fact that it feels like “pursuing a legacy” on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside—like the Omega has some otherwise inherent deservedness of receiving the payment. But in both cases, “If the survivor did not regard it as optimal to pay, the survivor would not be here”, and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay—they are still making a choice, even though selection has been applied on the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery’s having alerted them to the optimality of doing so—which feels like gratefulness.
My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
See above.
I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
This is the crux of the matter—desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn’t feel like such a recognition—it feels like an “otherwise-ungrounded inherent deservedness of others of being treated well” (or badly).
Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
Okay, reviewing your point, I have to partially agree—general desire to act on SAMELs need not be (and probably isn’t) the same choice machinery that motivates specific child-bearing acts. The purpose of the situation was to show how you can account for behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation.
In both cases, we have to rely on “if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there”, but only in 2a must we elevate this caring to a terminal value for purposes of explanation.
It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link.
This is good, but
It does not feel like quickly gathering energy.
is still hiding some confusion (in me, anyway.) Why say that it doesn’t feel like quickly gathering energy? What would feel like quickly gathering energy?
I’m now imagining a sucking-in-lines-qualia, (warning tvtropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I’m kinda jealous.
is still hiding some confusion (in me, anyway.) Why say that it doesn’t feel like quickly gathering energy?
Getting a nutrient feed via IV doesn’t feel like sweetness, but does involve quickly getting energy.
What would feel like quickly gathering energy?
If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as “feeling like gathering energy”. But that requires a whole different architecture.
Including about my claim that it provides a more parsimonious explanation of parents’ actions not to include concern for their children as a terminal value?
Yes—if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don’t think I’ve quite worked out your position on Parfitian hitchhiking, but I don’t see any difference between what you claim and what I claim regarding parenthood.
Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the “decide to pay”/”decide to care for children” if it had the right decision theory before the “rescue”/”copy to next generation”.
You should put that in the article. (True, it’s a causal iteration rather than an acausal prediction. But it’ll still make the article clearer.)
Consider this situation: You are given the choice between personally receiving a small prize or giving your children a much larger prize. Whatever you choose, it is possible that your children will one day face a similar choice. Being your children, they resemble you in many ways and are more likely than not to choose similarly to you. It’s not quite a Parfit’s Hitchhiker even from your children’s perspective—the consequences of their choice are in the past, not the future—but it’s close, and the result is the same.
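For what it's worth, here is a rough Monte Carlo sketch of that situation from the child's point of view; the prize sizes and the probability that a child chooses like its parent are invented for the illustration.

```python
import random

SMALL_PRIZE, LARGE_PRIZE, COPY_PROB = 3, 10, 0.9

def average_payoff(i_choose_generously: bool, trials: int = 100_000) -> float:
    total = 0
    for _ in range(trials):
        # My parent probably, but not certainly, chose the way I do.
        parent_generous = (i_choose_generously if random.random() < COPY_PROB
                           else not i_choose_generously)
        payoff = LARGE_PRIZE if parent_generous else 0  # what my parent passed down
        if not i_choose_generously:
            payoff += SMALL_PRIZE                       # I keep the small prize myself
        total += payoff
    return total / trials

print(average_payoff(True), average_payoff(False))  # roughly 9.0 vs 4.0
# Generous choosers come out ahead on average, even though their own generous
# choice only ever "pays off" in the past, via the parent who resembles them.
```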
Parenthood doesn’t look like a Parfait’s Hitchhiker* to me—are you mentioning it for some other reason?
* Err, Parfit’s Hitchhiker. Thanks, Alicorn!
Edit: I have updated my position downthread.
I'm not going to claim having children is "rational", but to judge it by the happiness of "caring for children" is about like judging the quality of food by the enjoyment of doing the dishes. It's very one-dimensional.
Moreover, I actually think it's foolish to use any kind of logical process (such as reading this study) to make decisions in this area, except in extreme circumstances such as not having enough money or having genetic diseases.
The reason for my attitude is that I think that, besides the upsides of having kids (there are many, if you're lucky), there is a huge aspect of regret minimization involved; it seems to me Nature chose a stick rather than a carrot here.
ETA: I should perhaps say a short-term carrot and a long-term stick.
I wasn’t proposing that parenthood is a joy—I may have misunderstood what SilasBarta meant by “utility function places positive weight”.
“Utility function of agent A places positive weight on X” is equivalent to “A regards X as a terminal value”.
Now I’m trying to figure out how a parfait could drive a car.
Deliciously.
From the Simpsons: “We would also have accepted ‘snacktacularly’.”
(For our non-native readers: snacktacular = snack + spectacular.)
Very well, thank you.
Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" stage if it had the right decision theory before the "rescue"/"copy to the next generation".
Does it look similar now?
I see the parallelism. If you ask me, though, I would say that it’s not a Parfitian filter, but a prototypical example of a filter to demonstrate that the idea of a filter is valid.
What’s the difference?
Perhaps I am being obtuse. Let me try to articulate a third filter, and get your reasoning on whether it is Parfitian or not.
As it happens, there exist certain patterns in nature which may be reliably counted upon to correlate with decision-theory-relevant properties. One example is the changing color of ripening fruit. Now, species with decision theories that attribute significance to these patterns will be more successful at propagating than those that do not, and therefore will be more widespread. This is a filter. Is it Parfitian?
No, because a self-interested agent could regard it as optimal to judge based on that pattern by only looking at causal benefits (CaMELs) to itself. In contrast, an agent could only regard it as optimal to care for offspring (to the extent we observe in parents) based on considering SAMELs, or having a utility function contorted to the point that its actions could more easily be explained by reference to SAMELs.
Let me try to work this out again, from scratch. Parfit's hitchhiking involves the following steps, in order:
Omega examines the agent.
Omega offers the agent the deal.
The agent accepts the deal.
Omega gives the agent utility.
The agent gives Omega utility.
Parenthood breaks this chain in two ways: first, the "Omega" in step 2 is not the "Omega" in step 4, and neither of these is the "Omega" in step 5; and second, step 1 never occurs. Remember, "natural selection" isn't an agent—it's a process, like supply and demand, that necessarily happens.
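To make that concrete, here is a minimal sketch (the step labels and the "clean chain" test are just my own framing of the two properties above, not anything from the original thought experiment):

    # Illustrative sketch only: which of the five steps hold under two mappings.
    from dataclasses import dataclass

    @dataclass
    class Step:
        description: str
        omega_role: object  # who plays "Omega" at this step; None if the step never occurs
        occurs: bool

    def clean_parfitian_chain(steps):
        """True only if every step occurs and one party plays Omega throughout."""
        omegas = {s.omega_role for s in steps if s.occurs}
        return all(s.occurs for s in steps) and len(omegas) == 1

    classic_hitchhiker = [
        Step("Omega examines the agent", "Omega", True),
        Step("Omega offers the deal", "Omega", True),
        Step("The agent accepts the deal", "Omega", True),
        Step("Omega gives the agent utility", "Omega", True),
        Step("The agent gives Omega utility", "Omega", True),
    ]

    parenthood = [
        Step("examination of the agent", None, False),              # step 1 never occurs
        Step("the deal is 'offered'", "natural selection", True),   # a process, not an agent
        Step("the agent 'accepts'", "natural selection", True),
        Step("the agent receives utility", "the agent's parents", True),
        Step("the agent gives utility", "the agent's children", True),
    ]

    print(clean_parfitian_chain(classic_hitchhiker))  # True
    print(clean_parfitian_chain(parenthood))          # False: step 1 missing, "Omega" changes identity

The division-of-labor scenario below is meant to be a case where both checks pass (in its corrected form, further down).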
Consider, for contrast, division of labor. (Edit: The following scenario is malformed. See followup comment, below.) Let’s say that we have Ag, the agent, and Om, the Omega, in the EEA. Om wants to hunt, but Om has children.
Om examines Ag and comes to the conclusion that Ag will cooperate.
Om asks Ag to watch Om’s children while on the hunt, in exchange for a portion of the proceeds.
Ag agrees.
Ag watches Om’s children while Om hunts.
Om returns successful, and gives Ag a share of the bounty.
Here, all five steps occur in order, Om is Om throughout and Ag is Ag throughout, and both Om and Ag gain utility (meat, in this case) by the exchange.
Does that clarify our disagreement?
Somewhat, but I’m confused:
Why does it matter that the Omegas are different? (I dispute that they are, but let’s ignore that for now.) The parallel only requires functional equivalence to “whatever Omega would do”, not Omega’s identity persistence. (And indeed Parfit’s other point was that the identity distinction is less clear than we might think.)
Why does it matter that natural selection isn’t an agent? All that’s necessary is that it be an optimization process—Omega’s role in the canonical PH would be no different if it were somehow specified to “just” be an optimization process rather than an agent.
What is the purpose of the EEA DoL example? It removes a critical aspect of PH and Parfitian filters—that optimality requires recognition of SAMELs. Here, if Ag doesn’t watch the children, Om sees this and can withhold the share of the bounty. If Ag could only consider CaMELs (and couldn’t have anything in its utility function that sneaks in recognition of SAMELs), Ag would still see why it should care for the children.
(Wow, that’s a lot of abbreviations...)
Taking your objections out of order:
First: yes, I have the scenario wrong—correct would be to switch Ag and Om, and have:
Om examines Ag and comes to the conclusion that Ag will cooperate.
Om offers to watch Ag’s children while Ag hunts, in exchange for a portion of the proceeds.
Ag agrees.
Om watches Ag’s children while Ag hunts.
Ag returns successful, and gives Om a share of the bounty.
In this case, Om has already given Ag utility—the ability to hunt—on the expectation that Ag will give up utility—meat—at a later time. I will edit in a note indicating the erroneous formulation in the original comment.
Second: what we are comparing are cases where an agent assigns no utility to cooperating with Omega, but uses a decision theory that cooperates because doing so boosts the agent's utility (e.g. the prototypical case), and cases where the agent assigns positive utility to cooperating with Omega (e.g. if the agent and Omega were the same person and the net change is sufficiently positive). What we need to do to determine whether the isomorphism with Parfit's hitchhiker is sufficient is to identify a case where the agent's actions will differ.
It seems to me that in the latter case, the agent will give utility to Omega even if Omega never gives utility to the agent. Parfit's hitchhikers do not give money to Nomega, the predictor agent who wasn't at the scene and never gave them a ride—they only give money when the SAMEL is present. Therefore: if a parent is willing to make sacrifices when their parent didn't, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
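In sketch form (the function names are made up, and the behavior encoded is only what I've asserted above, so treat it as an illustration rather than anything definitive):

    # Illustrative sketch only: the proposed test. "2a" terminally values the
    # other party; "2b" is purely selfish but pays when the acausal,
    # selection-mediated link (SAMEL) is present.

    def pays_2a(samel_present: bool) -> bool:
        # Terminal value for the beneficiary: pays whether or not any
        # selection-mediated link exists.
        return True

    def pays_2b(samel_present: bool) -> bool:
        # Selfish utility function plus a SAMEL-recognizing decision theory:
        # pays exactly when the link is present.
        return samel_present

    # Omega actually rescued you (SAMEL present); Nomega never gave you a ride.
    for predictor, samel in [("Omega", True), ("Nomega", False)]:
        print(predictor, "2a pays:", pays_2a(samel), "| 2b pays:", pays_2b(samel))
    # The theories come apart only in the Nomega row: 2a pays, 2b does not.

The parental analogue of the Nomega row is the parent who sacrifices even though their own parent didn't.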
I’m not sure I understand all the steps in your reasoning, but I think I can start by responding to your conclusion:
As best I can understand you, yes. If there's, e.g., a species that does not care for its young, and then one day one of them does, that action would not be best explained by its recognition (or acting as if it had recognition) of a SAMEL (because there was no "AM") -- it would have to be chalked up to some random change in its psychology.
However—and this is the important part—by making that choice and passing the genes partly responsible for it into the next generation, it opens up the possibility of exploring a new part of the "organism design space": the part which is improved by modifications predicated on some period of parent-child care [1].
If that change, and further moves into that attractor [2], improve fitness, then future generations will care for their children, with the same psychological impetus as the first one. They feel as if they just care about their children, not that they have to act on a SAMEL. However, 2b remains a superior explanation because it makes fewer assumptions (apart from requiring the organism to first have the mutation, which is part of the filter); 2b needn't assume that the welfare of the child is a terminal value.
And note that the combined phenomena do produce functional equivalence to recognition of a SAMEL. If the care-for-children mode enhances fitness, then it is correct to say, "If the organism in the n-th generation after the mutation did not regard it as optimal to care for the (n+1)th generation, it would not be here", and it is correct to say that that phenomenon is responsible for the organism's decision (such as it is a decision) to care for its offspring. Given these factors, an organism that chooses to care for its offspring is acting equivalently to one motivated by the SAMEL. Thus, 2b can account for the same behavior with fewer assumptions.
As for the EEA DoL arrangement (if the above remarks haven’t screened off the point you were making with it): Om can still, er, withhold the children. But let’s ignore that possibility on grounds of Least Convenient Possible World. Even so, there are still causal benefits to Ag keeping up its end—the possibility of making future such arrangements. But let’s assume that Ag can still come out ahead by stiffing Om.
In that case, yes, Ag would have to recognize SAMELs to justify paying Om. I’d go on to make the normal point about Ag having already cleaved itself off into the world where there are fewer Om offers if it doesn’t see this SAMEL, but honestly, I forgot the point behind this scenario so I’ll leave it at that.
(Bitter aside: I wish more of the discussion for my article was like this, rather than being 90% hogged by unrelated arguments about PCT.)
[1] Jaron Lanier refers to this replication mode as “neoteny”, which I don’t think is the right meaning of the term, but I thought I’d mention it because he discussed the importance of a childhood period in his manifesto that I just read.
[2] I maybe should have added in the article that the reasoning “caring for children = good for fitness” only applies to certain path-dependent domains of attraction in the design space, and doesn’t hold for all organisms.
This may not be my true objection (I think it is abundantly clear at this point that I am not adept at identifying my true objections), but I just don’t understand your objection to 2a. As far as I can tell, it boils down to “never assume that an agent has terms in its utility functions for other agents”, but I’m not assuming—there is an evolutionary advantage to having a term in your utility function for your children. By the optimization criteria of evolution, the only reason not to support a child is if you are convinced that the child is either not related or an evolutionary dead-end (at which point it becomes “no child of mine” or some such). In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.
(Regarding my hypothetical, I was merely trying to demonstrate that I understood the nature of the hypothetical—it has no further significance.)
No, my objection is: “never assume more terminal values (terms in UF) than necessary”, and I’ve shown how you can get away with not assuming that parents terminally value their children—just as a theoretical exercise of course, and not to deny the genuine heartfelt love that parents have for their children.
There is an evolutionary advantage to having a cognitive system that outputs the action "care for children even at cost to self". At a psychological level, this is accomplished by the feelings of "caring" and "love". But is that love due to a utility function weighting, or to a decision theory that recognizes (or acts as if it recognizes) SAMELs? The mere fact of the psychology, and of the child-favoring acts, does not settle this. (Recall the problem of how an ordering of outcomes can be recast as any combination of utility weightings and probabilities.)
You can account for the psychological phenomenon more parsimoniously [1] by assuming the action results from choice-machinery that implicitly recognizes SAMELs—and on top of that, get a bonus explanation of why a class of reasoning (moral reasoning) feels different—it’s the kind that mustn’t be convinced by the lack of a causal benefit to the self.
My version is precisely written to exclude contracts—the ideal PH inferences still go through, and so natural selection (which I argue is a PF) is sufficiently similar. If a gene doesn't "attach" itself to a child-favoring decision theory, it simply doesn't get "rescued" into the n-th generation of its existence. No need to find an isomorphism to a contract.
[1] Holy Shi-ite—that’s three p-words with a different initial consonant sound!
Why does the cognitive system that identifies SAMELs fire when you have a child? The situation is not visibly similar to that of Parfit’s hitchhiker. Unless you are suggesting that parenthood simply activates the same precommitment mechanism that the decision theory uses when Parfit-hitchhiking...?
I don’t understand the point of these questions. You’re stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?
A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities and the birth of a child is merely a change in the state of the world.
Nonetheless, utility mapping functions can change as a result of information which doesn't betoken a change in the state-of-the-world, but merely in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observing their baby's first smile. The world has not changed, but somehow your place within it has.
See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
I wasn’t trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, “How does the birth of a child trigger a particular special cognitive function?”. My answer was that it doesn’t. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.
If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
“Perplexed’s tweezers” suggests that I shouldn’t put too much trust in explanations (SAMELs, in this case) that I don’t really understand.
Okay, but if your preferred explanatory framework is strictly worse per the MML formalism (equivalent to rationalist Occam’s razor), then that would be a reason that my explanation is preferred.
You claim that my explanation fails by this metric:
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn't penalize 2b under Occam's Razor: that limitation must be assumed in both cases, so there's no net penalty—implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
But to be honest, I’m losing track of the point being established by your objections (for which I apologize), so I’d appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.
[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it’s either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren't involved at decision time in 2b, just as "inclusive fitness" and "Hamilton's rule" aren't involved at decision time in 2a.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
Without Occam, I have to fall back on my second objection, the one I facetiously named “Perplexed’s tweezers”. I simply don’t understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?
If you made this clear already and I failed to pick up on it, I apologize.
Hold on—that’s not what I said. I said that it was neutral on the issue of including “they can only use decision theories that could survive natural selection”. I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.
It doesn’t matter. They (inclusive fitness and Hamilton’s rule) have to be assumed (or implied by something that has to be assumed) anyway, because we’re dealing with people, so they’ll add the same complexity to both explanations.
As I’ve explained to you several times, looking at actions does not imply a unique utility function, so you can’t claim that you’ve measured it just by looking at their actions. The utility functions “I care about myself and my child” and “I care about myself” can produce the same actions, as I’ve demonstrated, because certain (biologically plausible) decision theories can output the action “care for child at expense of self”, even in the absence of a causal benefit to the self.
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
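As a rough sketch of that claim (the numbers are made up; the only point is that the two stories output the same observable action):

    # Illustrative sketch only: 2a extends the utility function; 2b keeps a purely
    # selfish utility function but lets the decision theory count acausal
    # (selection-mediated) benefits on par with causal ones.

    COST_TO_SELF = 1.0        # hypothetical cost of caring, in utils
    BENEFIT_TO_CHILD = 3.0    # hypothetical benefit to the child

    def choose_2a(child_weight: float) -> str:
        # Causal decision theory over a child-inclusive utility function.
        u_care = -COST_TO_SELF + child_weight * BENEFIT_TO_CHILD
        return "care" if u_care > 0.0 else "ignore"

    def choose_2b(samel_weight: float) -> str:
        # Selfish utility function; the decision theory adds an acausal term for
        # "if I did not regard caring as optimal, I would not be here".
        u_care = -COST_TO_SELF + samel_weight
        return "care" if u_care > 0.0 else "ignore"

    print(choose_2a(child_weight=0.5))  # care
    print(choose_2b(samel_weight=2.0))  # care
    # Identical observable behavior, so the actions alone cannot pin down
    # whether the child appears in the utility function.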
However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn’t see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.
You more than made up for it with the Parfit’s robot idea, though :-)
We are clearly talking past each other, and it does not seem to me that it would be productive to continue.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
It wasn’t my idea. It was timtyler’s. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
Too many SAMELs and CAMELs for me. I didn’t even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn’t sound very interesting; I can’t be bothered. Retrospectively, I do now get the bit in the summary—if that is what it is all about. I could probably weigh in on how parental care works in mammals—but without absorbing all the associated context, I doubt I would be contributing positively.
Thanks for the robot credit. It doesn’t feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence—so substituting in a machine seems very natural.
Anyway, we don't want you on too different a page—even if it does produce nice stories about the motivations of stranded hitch-hikers.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, as I just defined responsive.
Nothing I’ve described requires doing anything differently than Pearl’s kind of counterfactual surgery. For example, see EY’s exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.
And that’s really the crux of it: The trick in TDT—and explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent's decision ranking and claim it arose from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation (there's a short sketch after this list), that:
they prefer the apple to the orange, and believe they have 100% chance of getting what they reach for (pure value-based decision)
are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
or anything in between.
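Here is that example with made-up numbers (just a sketch; nothing hangs on the particular values):

    # Illustrative sketch only: the same observed choice (reaching for the apple)
    # rationalized by two different value/belief combinations.

    def expected_utility(p_success: float, u_fruit: float) -> float:
        return p_success * u_fruit

    # Pure value-based story: apple strictly preferred, success certain either way.
    values_story = {
        "apple": expected_utility(p_success=1.0, u_fruit=2.0),
        "orange": expected_utility(p_success=1.0, u_fruit=1.0),
    }

    # Pure belief-based story: indifferent between the fruits, but reaching for
    # the apple is believed more likely to succeed.
    beliefs_story = {
        "apple": expected_utility(p_success=0.9, u_fruit=1.0),
        "orange": expected_utility(p_success=0.6, u_fruit=1.0),
    }

    print(max(values_story, key=values_story.get))    # apple
    print(max(beliefs_story, key=beliefs_story.get))  # apple
    # Both stories predict the same observed action; only the value/belief
    # split differs.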
TDT, then, doesn’t need to posit additional values (like “honor”) -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like "love") that is psychologically responsible for the agent's decision. My point is that the action this "love" produces is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short, the agent feels the love, and the love motivates the behavior (psychologically); and, as a group, the set of feelings explainable through SAMELs feels different from other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding "revealed preference", you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding "revealed preference", I find that not only do we disagree as to what the phrase means, I suspect we are also both wrong. This "revealed preference" dispute is such a mess that I really don't want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.
I like the tweezers, but would like a better name for it.
As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
I’m asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other’s writing. It was this last possibility I was probing with that last question.
Okay, but by the same token, there’s no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)
None of the things you’re pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.
That’s a test that favors the SAMEL explanation, I think.
So you’re agreeing with me in this one respect? (I don’t mean to sound confrontational, I just want to make sure you didn’t reverse something by accident.)
Right—here’s what I’ve got.
The pattern of “not wanting children, but then wanting to spend resources to care for the children” is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn’t mean that the adaptation works on SAMEL patterns—the ability of Parfit’s hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.
I’m still not following:
How does “not wanting children, but then wanting to spend resources to care for the children” involve SAMELs in a way that wanting to have children does not?
Yes, you can explain people's pursuit of goals by the reasons they give. The problem is that this isn't the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without them—and I think I've shown you can—you're left with a superior explanation.
The fact that it feels like “pursuing a legacy” on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside—like the Omega has some otherwise inherent deservedness of receiving the payment. But in both cases, “If the survivor did not regard it as optimal to pay, the survivor would not be here”, and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay—they are still making a choice, even though selection has been applied on the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery’s having alerted them to the optimality of doing so—which feels like gratefulness.
My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
See above.
I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
This is the crux of the matter—desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn’t feel like such a recognition—it feels like an “otherwise-ungrounded inherent deservedness of others of being treated well” (or badly).
Okay, reviewing your point, I have to partially agree—general desire to act on SAMELs need not be (and probably isn’t) the same choice machinery that motivates specific child-bearing acts. The purpose of the situation was to show how you can account for behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation.
In both cases, we have to rely on “if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there”, but only in 2a must we elevate this caring to a terminal value for purposes of explanation.
This is good, but
It does not feel like quickly gathering energy.
is still hiding some confusion (in me, anyway). Why say that it doesn't feel like quickly gathering energy? What would feel like quickly gathering energy?
I'm now imagining a sucking-in-lines quale (warning: TV Tropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I'm kinda jealous.
Getting a nutrient feed via IV doesn’t feel like sweetness, but does involve quickly getting energy.
If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as “feeling like gathering energy”. But that requires a whole different architecture.
It sounds like we agree.
Including about my claim that it provides a more parsimonious explanation of parents’ actions not to include concern for their children as a terminal value?
Yes—if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don’t think I’ve quite worked out your position on Parfitian hitchhiking, but I don’t see any difference between what you claim and what I claim regarding parenthood.
I spoke correctly—I didn’t express agreement on the broader issue because I don’t want to update too hastily. I’m still thinking.
You should put that in the article. (True, it’s a causal iteration rather than an acausal prediction. But it’ll still make the article clearer.)
Thanks for the suggestion, I’ve added it.
Now I want a parfait.
Consider this situation: You are given the choice between personally receiving a small prize or giving your children a much larger prize. Whatever you choose, it is possible that your children will one day face a similar choice. Being your children, they resemble you in many ways and are more likely than not to choose similarly to you. It's not quite a Parfit's Hitchhiker even from your children's perspective—the consequences of their choice are in the past, not the future—but it's close, and the result is the same.
I see what you mean, but I think the parallel is pretty weak.