Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the “decide to pay”/“decide to care for children” step if it had the right decision theory before the “rescue”/“copy to next generation”.
You should put that in the article. (True, it’s a causal iteration rather than an acausal prediction. But it’ll still make the article clearer.)
Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the “decide to pay”/“decide to care for children” step if it had the right decision theory before the “rescue”/“copy to next generation”.
Does it look similar now?
I see the parallelism. If you ask me, though, I would say that it’s not a Parfitian filter, but a prototypical example of a filter to demonstrate that the idea of a filter is valid.
What’s the difference?
Perhaps I am being obtuse. Let me try to articulate a third filter, and get your reasoning on whether it is Parfitian or not.
As it happens, there exist certain patterns in nature which may be reliably counted upon to correlate with decision-theory-relevant properties. One example is the changing color of ripening fruit. Now, species with decision theories that attribute significance to these patterns will be more successful at propagating than those that do not, and therefore will be more widespread. This is a filter. Is it Parfitian?
No, because a self-interested agent could regard it as optimal to judge based on that pattern by only looking at causal benefits (CaMELs) to itself. In contrast, an agent could only regard it as optimal to care for offspring (to the extent we observe in parents) based on considering SAMELs, or having a utility function contorted to the point that its actions could more easily be explained by reference to SAMELs.
Let me try to work this out again, from scratch. A Parfit’s hitchhiker scenario involves the following steps, in order (a toy sketch follows the list):
Omega examines the agent.
Omega offers the agent the deal.
The agent accepts the deal.
Omega gives the agent utility.
The agent gives Omega utility.
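To make the ordering concrete, here is a minimal toy model of those five steps (my own illustrative sketch, not anything from the thread; the payoff numbers and the `policy` representation are made up): Omega’s step-1 examination amounts to simulating the agent’s policy, so only policies that would pay at step 5 ever reach step 4.

```python
# Toy Parfit's hitchhiker. Payoffs are illustrative: a rescue is worth 100,
# paying Omega afterwards costs 10.

RESCUE_VALUE = 100
PAYMENT_COST = 10

def omega_predicts_payment(policy):
    # Step 1: Omega "examines" the agent by running the agent's own policy
    # on the post-rescue decision it would face.
    return policy(rescued=True) == "pay"

def play(policy):
    # Steps 2-3 (offer and acceptance) are folded into the prediction here.
    if not omega_predicts_payment(policy):
        return 0                       # never rescued; left in the desert
    utility = RESCUE_VALUE             # step 4: Omega gives the agent utility
    if policy(rescued=True) == "pay":  # step 5: the agent gives Omega utility
        utility -= PAYMENT_COST
    return utility

def refuses_once_rescued(rescued):
    # Sees no causal benefit to paying once the rescue is already secured.
    return "refuse"

def pays_once_rescued(rescued):
    return "pay"

print(play(refuses_once_rescued))  # 0  -- filtered out at step 1
print(play(pays_once_rescued))     # 90 -- rescued, then pays
```

The point the steps are meant to capture: by the time the payment decision arrives, only agents whose decision theory already endorsed paying are around to make it.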
Parenthood breaks this chain in two ways: first, the “Omega” in step 2 is not the “Omega” in step 4, and neither of these are the “Omega” in step 5; and second, step 1 never occurs. Remember, “natural selection” isn’t an agent—it’s a process, like supply and demand, that necessarily happens.
Consider, for contrast, division of labor. (Edit: The following scenario is malformed. See followup comment, below.) Let’s say that we have Ag, the agent, and Om, the Omega, in the EEA. Om wants to hunt, but Om has children.
Om examines Ag and comes to the conclusion that Ag will cooperate.
Om asks Ag to watch Om’s children while on the hunt, in exchange for a portion of the proceeds.
Ag agrees.
Ag watches Om’s children while Om hunts.
Om returns successful, and gives Ag a share of the bounty.
Here, all five steps occur in order, Om is Om throughout and Ag is Ag throughout, and both Om and Ag gain utility (meat, in this case) by the exchange.
Does that clarify our disagreement?
Somewhat, but I’m confused:
Why does it matter that the Omegas are different? (I dispute that they are, but let’s ignore that for now.) The parallel only requires functional equivalence to “whatever Omega would do”, not Omega’s identity persistence. (And indeed Parfit’s other point was that the identity distinction is less clear than we might think.)
Why does it matter that natural selection isn’t an agent? All that’s necessary is that it be an optimization process—Omega’s role in the canonical PH would be no different if it were somehow specified to “just” be an optimization process rather than an agent.
What is the purpose of the EEA DoL example? It removes a critical aspect of PH and Parfitian filters—that optimality requires recognition of SAMELs. Here, if Ag doesn’t watch the children, Om sees this and can withhold the share of the bounty. If Ag could only consider CaMELs (and couldn’t have anything in its utility function that sneaks in recognition of SAMELs), Ag would still see why it should care for the children.
(Wow, that’s a lot of abbreviations...)
Taking your objections out of order:
First: yes, I have the scenario wrong—correct would be to switch Ag and Om, and have:
Om examines Ag and comes to the conclusion that Ag will cooperate.
Om offers to watch Ag’s children while Ag hunts, in exchange for a portion of the proceeds.
Ag agrees.
Om watches Ag’s children while Ag hunts.
Ag returns successful, and gives Om a share of the bounty.
In this case, Om has already given Ag utility—the ability to hunt—on the expectation that Ag will give up utility—meat—at a later time. I will edit in a note indicating the erroneous formulation in the original comment.
Second: what we are comparing are cases where an agent assigns no utility to cooperating with Omega, but uses a decision theory that cooperates anyway because doing so boosts the agent’s utility (e.g. the prototypical case), and cases where the agent assigns positive utility to cooperating with Omega (e.g. if the agent and Omega were the same person and the net change is sufficiently positive). What we need to do to determine if the isomorphism with Parfit’s hitchhiker is sufficient is to identify a case where the agent’s actions will differ.
It seems to me that in the latter case, the agent will give utility to Omega even if Omega never gives utility to the agent. Parfit’s hitchhikers do not give money to Nomega, the predictor agent who wasn’t at the scene and never gave them a ride—they only give money when the SAMEL is present. Therefore: if a parent is willing to make sacrifices when their parent didn’t, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
I’m not sure I understand all the steps in your reasoning, but I think I can start by responding to your conclusion:
As best I can understand you, yes. If there’s, e.g., a species that does not care for its young, and then one day one of them does, that action would not be best explained by its recognition (or acting as if it had recognition) of a SAMEL (because there was no “AM”) -- it would have to be chalked up to some random change in its psychology.
However—and this is the important part—by making that choice, and passing the genes partly responsible for that choice into the next generation, it opens up the possibility of exploring a new part of the “organism design space”: the part which is improved by modifications predicated on some period of parent-child care [1].
If that change, and further moves into that attractor [2], improve fitness, then future generations will care for their children, with the same psychological impetus as the first one. They feel as if they just care about their children, not that they have to act on a SAMEL. However, 2b remains a superior explanation because it makes fewer assumptions (apart from assuming the organism first has the mutation, which is part of the filter); 2b needn’t assume that the welfare of the child is a terminal value.
And note that the combined phenomena do produce functional equivalence to recognition of a SAMEL. If the care-for-children mode enhances fitness, then it is correct to say, “If the organism in the n-th generation after the mutation did not regard it as optimal to care for the (n+1)-th generation, it would not be here”, and it is correct to say that that phenomenon is responsible for the organism’s decision (insofar as it is a decision) to care for its offspring. Given these factors, an organism that chooses to care for its offspring is acting equivalently to one motivated by the SAMEL. Thus, 2b can account for the same behavior with fewer assumptions.
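Here is a minimal simulation of that “it would not be here” claim (the single-trait model and all the numbers are my own simplifying assumptions, not anything from the article): organisms whose choice-machinery outputs “care for offspring at a cost to self” leave more surviving descendants, so after enough generations essentially every organism present is one of which the quoted counterfactual is true.

```python
import random

# Each organism carries one boolean trait: does its choice-machinery output
# "care for offspring even at a cost to self"? Parameters are illustrative.
CARE_SURVIVAL = 0.8      # offspring survival probability with parental care
NO_CARE_SURVIVAL = 0.3   # offspring survival probability without it
OFFSPRING = 4
GENERATIONS = 20
random.seed(0)

population = [False] * 990 + [True] * 10   # the caring trait starts rare

for _ in range(GENERATIONS):
    next_gen = []
    for cares in population:
        survival = CARE_SURVIVAL if cares else NO_CARE_SURVIVAL
        next_gen.extend(cares for _ in range(OFFSPRING)
                        if random.random() < survival)
    # cap the population at a fixed size so the loop stays cheap
    population = random.sample(next_gen, min(1000, len(next_gen)))

print(sum(population) / len(population))  # close to 1.0: the filter has run
```

Nothing in the loop gives any organism a terminal value for its offspring; the caring output is simply the only decision-theory output that survives the filter, which is the sense in which 2b needs the weaker assumption.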
As for the EEA DoL arrangement (if the above remarks haven’t screened off the point you were making with it): Om can still, er, withhold the children. But let’s ignore that possibility on grounds of Least Convenient Possible World. Even so, there are still causal benefits to Ag keeping up its end—the possibility of making future such arrangements. But let’s assume that Ag can still come out ahead by stiffing Om.
In that case, yes, Ag would have to recognize SAMELs to justify paying Om. I’d go on to make the normal point about Ag having already cleaved itself off into the world where there are fewer Om offers if it doesn’t see this SAMEL, but honestly, I forgot the point behind this scenario so I’ll leave it at that.
(Bitter aside: I wish more of the discussion for my article was like this, rather than being 90% hogged by unrelated arguments about PCT.)
[1] Jaron Lanier refers to this replication mode as “neoteny”, which I don’t think is the right meaning of the term, but I thought I’d mention it because he discussed the importance of a childhood period in his manifesto that I just read.
[2] I maybe should have added in the article that the reasoning “caring for children = good for fitness” only applies to certain path-dependent domains of attraction in the design space, and doesn’t hold for all organisms.
This may not be my true objection (I think it is abundantly clear at this point that I am not adept at identifying my true objections), but I just don’t understand your objection to 2a. As far as I can tell, it boils down to “never assume that an agent has terms in its utility function for other agents”, but I’m not assuming—there is an evolutionary advantage to having a term in your utility function for your children. By the optimization criteria of evolution, the only reason not to support a child is if you are convinced that the child is either not related or an evolutionary dead end (at which point it becomes “no child of mine” or some such). In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism.
(Regarding my hypothetical, I was merely trying to demonstrate that I understood the nature of the hypothetical—it has no further significance.)
No, my objection is: “never assume more terminal values (terms in UF) than necessary”, and I’ve shown how you can get away with not assuming that parents terminally value their children—just as a theoretical exercise of course, and not to deny the genuine heartfelt love that parents have for their children.
There is an evolutionary advantage to having a cognitive system that outputs the action “care for children even at cost to self”. At a psychological level, this is accomplished by the feelings of “caring” and “love”. But is that love due to a utility-function weighting, or to a decision theory that recognizes (or acts as if it recognizes) SAMELs? The mere fact of the psychology, and of the child-favoring acts, does not settle this. (Recall the problem of how an ordering of outcomes can be recast as any combination of utility weightings and probabilities.)
You can account for the psychological phenomenon more parsimoniously [1] by assuming the action results from choice-machinery that implicitly recognizes SAMELs—and on top of that, get a bonus explanation of why a class of reasoning (moral reasoning) feels different—it’s the kind that mustn’t be swayed by the lack of a causal benefit to the self.
My version is precisely written to exclude contracts—the ideal PH inferences still go through, and so natural selection (which I argue is a PF) is sufficiently similar. If minds don’t “attach” themselves to a child-favoring decision theory, they simply don’t get “rescued” into the n-th generation of that gene’s existence. No need to find an isomorphism to a contract.
[1] Holy Shi-ite—that’s three p-words with a different initial consonant sound!
Why does the cognitive system that identifies SAMELs fire when you have a child? The situation is not visibly similar to that of Parfit’s hitchhiker. Unless you are suggesting that parenthood simply activates the same precommitment mechanism that the decision theory uses when Parfit-hitchhiking...?
I don’t understand the point of these questions. You’re stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?
A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities and the birth of a child is merely a change in the state of the world.
Nonetheless, utility mapping functions can change as a result of information which doesn’t betoken a change in the state of the world, but merely a change in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observing their baby’s first smile. The world has not changed, but somehow your place within it has.
See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
I wasn’t trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, “How does the birth of a child trigger a particular special cognitive function?”. My answer was that it doesn’t. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.
If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
“Perplexed’s tweezers” suggests that I shouldn’t put too much trust in explanations (SAMELs, in this case) that I don’t really understand.
Okay, but if your preferred explanatory framework is strictly worse per the minimum-message-length (MML) formalism (equivalent to the rationalist Occam’s razor), then that would be a reason that my explanation is preferred.
You claim that my explanation fails by this metric:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn’t penalize it under Occam’s Razor, because that must be assumed in both cases, so there’s no net penalty for 2b—implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
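A schematic way to write that bookkeeping, in my own notation rather than anything from the article: let L(·) stand for the description length a component adds, with the natural-selection background assumed (and paid for) by both theories.

```latex
% Schematic MML comparison; notation (L(.) = added description length) is mine
\begin{align*}
L(\text{2a}) &= L(\text{selection background}) + L(U_{\text{self}}) + L(U_{\text{child}})\\
L(\text{2b}) &= L(\text{selection background}) + L(U_{\text{self}}) + 0\\
&\qquad \text{(the ``only selection-surviving decision theories'' clause is implied by the shared background)}\\
\Rightarrow\quad L(\text{2b}) &\leq L(\text{2a}).
\end{align*}
```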
But to be honest, I’m losing track of the point being established by your objections (for which I apologize), so I’d appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.
[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it’s either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren’t involved at decision time in 2b, just as “inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
Without Occam, I have to fall back on my second objection, the one I facetiously named “Perplexed’s tweezers”. I simply don’t understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?
If you made this clear already and I failed to pick up on it, I apologize.
Hold on—that’s not what I said. I said that it was neutral on the issue of including “they can only use decision theories that could survive natural selection”. I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.
It doesn’t matter. They (inclusive fitness and Hamilton’s rule) have to be assumed (or implied by something that has to be assumed) anyway, because we’re dealing with people, so they’ll add the same complexity to both explanations.
As I’ve explained to you several times, looking at actions does not imply a unique utility function, so you can’t claim that you’ve measured it just by looking at their actions. The utility functions “I care about myself and my child” and “I care about myself” can produce the same actions, as I’ve demonstrated, because certain (biologically plausible) decision theories can output the action “care for child at expense of self”, even in the absence of a causal benefit to the self.
The decision theory you end up with is a class of DTs, the kind that counts acausal benefits (SAMELs) on a par with causal ones. The SAMELs need not be consciously recognized as such, but they do need to feel different in order to motivate the behavior.
However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn’t see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.
You more than made up for it with the Parfit’s robot idea, though :-)
We are clearly talking past each other, and it does not seem to me that it would be productive to continue.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
It wasn’t my idea. It was timtyler’s. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
Too many SAMELs and CaMELs for me. I didn’t even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn’t sound very interesting; I can’t be bothered. Retrospectively, I do now get the bit in the summary—if that is what it is all about. I could probably weigh in on how parental care works in mammals—but without absorbing all the associated context, I doubt I would be contributing positively.
Thanks for the robot credit. It doesn’t feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence—so substituting in a machine seems very natural.
Anyway, we don’t want you on too different a page—even if it does produce nice stories about the motivations of stranded hitch-hikers.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, as I just defined “responsive”.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
Nothing I’ve described requires doing anything differently from Pearl’s kind of counterfactual surgery. For example, see EY’s exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.
And that’s really the crux of it: The trick in TDT—and in explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent’s decision ranking and claim it arose from any of various value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:
they prefer the apple to the orange, and believe they have 100% chance of getting what they reach for (pure value-based decision)
they are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
or anything in between.
TDT, then, doesn’t need to posit additional values (like “honor”) -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
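A small numeric version of the apple/orange point above (every number here is made up purely for illustration): the same observed reach is consistent with a pure-value story, a pure-belief story, and mixtures, because only the resulting ordering is visible from outside.

```python
# Three different value/belief decompositions, one observed choice.

def expected_utility(utility, prob_of_success):
    return utility * prob_of_success

models = {
    "pure value-based":  {"apple": expected_utility(2.0, 1.0),
                          "orange": expected_utility(1.0, 1.0)},
    "pure belief-based": {"apple": expected_utility(1.0, 0.9),
                          "orange": expected_utility(1.0, 0.45)},
    "a mixture":         {"apple": expected_utility(1.5, 0.8),
                          "orange": expected_utility(1.2, 0.5)},
}

for name, eus in models.items():
    choice = max(eus, key=eus.get)
    print(f"{name}: reaches for the {choice}")  # all three say "apple"
```

Which is the same reason that observed child-favoring acts, by themselves, cannot distinguish a child term in the utility function from SAMEL-respecting choice-machinery.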
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like “love”) that is psychologically responsible for the agent’s decision. My point is that this “love”-driven action is identical to what would result from deciding based on SAMELs (and not from valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short, the agent feels the love, and the love motivates the behavior (psychologically); and, as a group, the set of feelings explainable through SAMELs feel different from other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding “revealed preference”, you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding “revealed preference”, I find that not only do we disagree as to what the phrase means, I suspect we also are both wrong. This “revealed preference” dispute is such a mess that I really don’t want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.
I like the tweezers, but would like a better name for it.
As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
I’m asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other’s writing. It was this last possibility I was probing with that last question.
Okay, but by the same token, there’s no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)
None of the things you’re pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.
That’s a test that favors the SAMEL explanation, I think.
So you’re agreeing with me in this one respect? (I don’t mean to sound confrontational, I just want to make sure you didn’t reverse something by accident.)
Right—here’s what I’ve got.
The pattern of “not wanting children, but then wanting to spend resources to care for the children” is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn’t mean that the adaptation works on SAMEL patterns—the ability of Parfit’s hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.
I’m still not following:
How does “not wanting children, but then wanting to spend resources to care for the children” involve SAMELs in a way that wanting to have children does not?
Yes, you can explain people’s pursuit of goals by the reasons they themselves give. The problem is that this isn’t the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without these—and I think I’ve shown you can—you’re left with a superior explanation.
The fact that it feels like “pursuing a legacy” on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside—like Omega has some inherent deservedness of receiving the payment. But in both cases, “If the survivor did not regard it as optimal to pay, the survivor would not be here”, and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay—they are still making a choice, even though selection has been applied to the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery’s having alerted them to the optimality of doing so—which feels like gratefulness.
My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
See above.
I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
This is the crux of the matter—desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn’t feel like such a recognition—it feels like an “otherwise-ungrounded inherent deservedness of others of being treated well” (or badly).
Okay, reviewing your point, I have to partially agree—general desire to act on SAMELs need not be (and probably isn’t) the same choice machinery that motivates specific child-bearing acts. The purpose of the situation was to show how you can account for behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation.
In both cases, we have to rely on “if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there”, but only in 2a must we elevate this caring to a terminal value for purposes of explanation.
It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link.
This is good, but
It does not feel like quickly gathering energy.
is still hiding some confusion (in me, anyway). Why say that it doesn’t feel like quickly gathering energy? What would feel like quickly gathering energy?
I’m now imagining a sucking-in-lines quale (warning: tvtropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I’m kinda jealous.
Getting a nutrient feed via IV doesn’t feel like sweetness, but does involve quickly getting energy.
If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as “feeling like gathering energy”. But that requires a whole different architecture.
It sounds like we agree.
Including about my claim that it provides a more parsimonious explanation of parents’ actions not to include concern for their children as a terminal value?
Yes—if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don’t think I’ve quite worked out your position on Parfitian hitchhiking, but I don’t see any difference between what you claim and what I claim regarding parenthood.
I spoke correctly—I didn’t express agreement on the broader issue because I don’t want to update too hastily. I’m still thinking.
You should put that in the article. (True, it’s a causal iteration rather than an acausal prediction. But it’ll still make the article clearer.)
Thanks for the suggestion, I’ve added it.