I don’t understand the point of these questions. You’re stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?
A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities and the birth of a child is merely a change in the state of the world.
Nonetheless, utility mapping functions can change as a result of information which doesn’t betoken a change in the state of the world, but merely a change in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observation of their baby’s first smile. The world has not changed, but somehow your place within it has.
See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
> How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
I wasn’t trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, “How does the birth of a child trigger a particular special cognitive function?”. My answer was that it doesn’t. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.
If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:
- “Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
- “Perplexed’s tweezers” suggests that I shouldn’t put too much trust in explanations (SAMELs, in this case) that I don’t really understand.
Okay, but if your preferred explanatory framework is strictly worse per the MML formalism (equivalent to rationalist Occam’s razor), then that would be a reason that my explanation is preferred.
You claim that my explanation fails by this metric:
> “Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn’t penalize it under Occam’s Razor, because that must be assumed in both cases, so there’s no net penalty for 2b—implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
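To make that accounting concrete, here is a toy sketch (the “length” measure and the assumption strings are made up by me purely for illustration; this is nothing like a real MML calculation). The assumptions shared by both explanations contribute identical length to each, so only the unshared parts can tip the comparison:

```python
# Toy message-length comparison (illustrative only; the numbers and the
# "length" measure are invented, not a real MML analysis).

SHARED = [
    "agents are products of natural selection",    # already implies the 2b clause
    "agents choose by maximizing expected utility over their model",
]
EXTRA_2A = ["the utility function has an additional child-welfare term"]
EXTRA_2B = []  # the decision-theory restriction follows from the shared assumptions

def toy_length(assumptions):
    # Crude stand-in for message length: characters needed to state the assumptions.
    return sum(len(a) for a in assumptions)

length_2a = toy_length(SHARED) + toy_length(EXTRA_2A)
length_2b = toy_length(SHARED) + toy_length(EXTRA_2B)

# The shared part cancels, so 2b comes out no longer than 2a.
assert length_2b < length_2a
```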
But to be honest, I’m losing track of the point being established by your objections (for which I apologize), so I’d appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.
[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it’s either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
> You claim that my explanation fails by this metric:
> “Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
> However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
> Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
> Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
Without Occam, I have to fall back on my second objection, the one I facetiously named “Perplexed’s tweezers”. I simply don’t understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?
If you made this clear already and I failed to pick up on it, I apologize.
> Ok, I accept your argument that Occam is neutral between you and me.
Hold on—that’s not what I said. I said that it was neutral on the issue of including “they can only use decision theories that could survive natural selection”. I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.
> SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
It doesn’t matter. They (inclusive fitness and Hamilton’s rule) have to be assumed (or implied by something that has to be assumed) anyway, because we’re dealing with people, so they’ll add the same complexity to both explanations.
> I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
As I’ve explained to you several times, looking at actions does not imply a unique utility function, so you can’t claim that you’ve measured it just by looking at their actions. The utility functions “I care about myself and my child” and “I care about myself” can produce the same actions, as I’ve demonstrated, because certain (biologically plausible) decision theories can output the action “care for child at expense of self”, even in the absence of a causal benefit to the self.
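Here is a minimal sketch of what I mean, with toy payoffs and a hypothetical “survival weight” standing in for the selection-filtered decision theory (none of the numbers are anyone’s actual model; the point is only that both utility functions yield the same observable choice):

```python
# Toy demonstration: two agents with different utility functions whose
# observable choices coincide, so the action alone cannot reveal which
# utility function is in play. All numbers are arbitrary.

ACTIONS = ["care_for_child", "neglect_child"]
SELF_PAYOFF = {"care_for_child": -1.0, "neglect_child": 0.0}   # causal cost to self
CHILD_PAYOFF = {"care_for_child": 5.0, "neglect_child": -5.0}

def choice_2a():
    # Theory 2a: utility weights self AND child; ordinary maximization.
    utility = lambda a: SELF_PAYOFF[a] + 0.5 * CHILD_PAYOFF[a]
    return max(ACTIONS, key=utility)

def choice_2b():
    # Theory 2b: purely selfish utility, but a decision theory filtered by
    # selection; "survival_weight" is a stand-in for the weight such a theory
    # puts on being the kind of agent that exists at all.
    survival_weight = {"care_for_child": 1.0, "neglect_child": 0.01}
    score = lambda a: survival_weight[a] * (SELF_PAYOFF[a] + 2.0)  # 2.0 = value of being alive at all
    return max(ACTIONS, key=score)

assert choice_2a() == choice_2b() == "care_for_child"
```

The difference between the two lives in the decision machinery, not in the revealed action.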
> I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn’t see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.
You more than made up for it with the Parfit’s robot idea, though :-)
We are clearly talking past each other, and it does not seem to me that it would be productive to continue.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
> I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
> It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
> The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
> … You more than made up for it with the Parfit’s robot idea, though.
It wasn’t my idea. It was timtyler’s. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
Too many SAMELs and CAMELs for me. I didn’t even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn’t sound very interesting; I can’t be bothered. Retrospectively, I do now get the bit in the summary—if that is what it is all about. I could probably weigh in on how parental care works in mammals—but without absorbing all the associated context, I doubt I would be contributing positively.
Thanks for the robot credit. It doesn’t feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence—so substituting in a machine seems very natural.
Anyway, we don’t want you on too different a page—even if it does produce nice stories about the motivations of stranded hitch-hikers.
> For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, as I just defined responsive.
> I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
Nothing I’ve described requires doing anything differently than Pearl’s kind of counterfactual surgery. For example, see EY’s exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.
And that’s really the crux of it: The trick in TDT—and in explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent’s decision ranking, and claim it was from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:
- they prefer the apple to the orange, and believe they have a 100% chance of getting what they reach for (a pure value-based decision);
- they are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (a pure belief-based decision);
- or anything in between.
TDT, then, doesn’t need to posit additional values (like “honor”) -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
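A sketch with made-up numbers, to show that the observed choice pins down only the ordering of expected utilities, not a unique split into values and beliefs:

```python
# Two different value/belief decompositions that yield the same revealed
# choice. All numbers are invented for illustration.

def expected_utility(p_success, utility):
    return p_success * utility

# (1) Pure value-based: prefers the apple, certain of getting whatever is reached for.
value_based = {
    "apple": expected_utility(1.0, 10.0),
    "orange": expected_utility(1.0, 5.0),
}

# (2) Pure belief-based: indifferent in value, but more confident of getting the apple.
belief_based = {
    "apple": expected_utility(0.9, 7.0),
    "orange": expected_utility(0.45, 7.0),
}

for decomposition in (value_based, belief_based):
    assert max(decomposition, key=decomposition.get) == "apple"  # same observable behavior
```

Only the products of probability and utility enter the comparison, so any rescaling that preserves their ordering tells the same behavioral story.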
> The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
> That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like “love”) that is psychologically responsible for the agent’s decision. My point is that this “love”-driven action is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short, the agent feels the love, the love motivates the behavior (psychologically); and, as a group, the set of feelings explainable through SAMELs feel different than other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding “revealed preference”, you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding “revealed preference”, I find that not only do we disagree as to what the phrase means, I suspect we are also both wrong. This “revealed preference” dispute is such a mess that I really don’t want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.

I like the tweezers, but would like a better name for it.
As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
I’m asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other’s writing. It was this last possibility I was probing with that last question.
> As Perplexed said, there is no requirement that the utility function change—and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children.
Okay, but by the same token, there’s no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.)
None of the things you’re pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.
That’s a test that favors the SAMEL explanation, I think.

So you’re agreeing with me in this one respect? (I don’t mean to sound confrontational, I just want to make sure you didn’t reverse something by accident.)

Right—here’s what I’ve got.
The pattern of “not wanting children, but then wanting to spend resources to care for the children” is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn’t mean that the adaptation works on SAMEL patterns—the ability of Parfit’s hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.
I’m still not following:

How does “not wanting children, but then wanting to spend resources to care for the children” involve SAMELs in a way that wanting to have children does not?
Yes, you can explain people’s pursuit of goals by the reasons they give. The problem is that this isn’t the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without these—and I think I’ve shown you can—you’re left with a superior explanation.
The fact that it feels like “pursuing a legacy” on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside—like Omega has some otherwise-ungrounded, inherent deservedness of receiving the payment. But in both cases, “If the survivor did not regard it as optimal to pay, the survivor would not be here”, and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay—they are still making a choice, even though selection has been applied to the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery’s having alerted them to the optimality of doing so—which feels like gratefulness.
My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
See above.
I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
> My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
This is the crux of the matter—desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn’t feel like such a recognition—it feels like others having an “otherwise-ungrounded, inherent deservedness” of being treated well (or badly).
> Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children—only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
> I am not invested in the word “precommitment”—we are describing the same behavior on the part of the hitchhiker.
Okay, reviewing your point, I have to partially agree—a general desire to act on SAMELs need not be (and probably isn’t) the same choice machinery that motivates specific child-bearing acts. The purpose of the scenario was to show how you can account for the behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation.
In both cases, we have to rely on “if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there”, but only in 2a must we elevate this caring to a terminal value for purposes of explanation.
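A toy version of that filter (my own illustration, not a model from the article): every agent has the same purely selfish utility function, only the decision policy varies, and policies that don’t output “care for child” simply never show up in later generations:

```python
# Toy Parfitian/selection filter over decision policies. No agent here has a
# child-term in its utility function; caring behavior survives anyway because
# non-caring policies leave no descendants. Parameters are arbitrary.

def reproduces(policy):
    # In this toy world, only offspring of caring agents reach adulthood.
    return policy == "always_care"

population = ["always_care"] * 500 + ["never_care"] * 500
for generation in range(3):
    # Each surviving lineage contributes two offspring with the same policy.
    population = [p for p in population for _ in range(2) if reproduces(p)]

assert set(population) == {"always_care"}
```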
> It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link.
This is good, but
> It does not feel like quickly gathering energy.
is still hiding some confusion (in me, anyway). Why say that it doesn’t feel like quickly gathering energy? What would feel like quickly gathering energy?
I’m now imagining a sucking-in-lines quale (warning: TV Tropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I’m kinda jealous.
> is still hiding some confusion (in me, anyway). Why say that it doesn’t feel like quickly gathering energy?
Getting a nutrient feed via IV doesn’t feel like sweetness, but does involve quickly getting energy.
> What would feel like quickly gathering energy?
If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as “feeling like gathering energy”. But that requires a whole different architecture.
It sounds like we agree.

Including about my claim that it provides a more parsimonious explanation of parents’ actions not to include concern for their children as a terminal value?
Yes—if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don’t think I’ve quite worked out your position on Parfitian hitchhiking, but I don’t see any difference between what you claim and what I claim regarding parenthood.

I spoke correctly—I didn’t express agreement on the broader issue because I don’t want to update too hastily. I’m still thinking.