See sibling reply to Robin. How are you showing an explanatory advantage in attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
How are you showing an explanatory advantage in attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
I wasn’t trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, “How does the birth of a child trigger a particular special cognitive function?”. My answer was that it doesn’t. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.
If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
“Perplexed’s tweezers” suggests that I shouldn’t put too much trust in explanations (SAMELs, in this case) that I don’t really understand.
Okay, but if your preferred explanatory framework is strictly worse per the MML (minimum message length) formalism (equivalent to rationalist Occam’s razor), then that would be a reason to prefer my explanation.
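(For concreteness, the criterion I have in mind is the standard two-part MML score; the formula below is my gloss, not a quote from the article.)

```latex
% Standard two-part MML criterion: prefer the hypothesis H that minimizes the
% length of a message that first states H, then encodes the data D given H.
\[
  H^{*} = \arg\min_{H} \big[ L(H) + L(D \mid H) \big],
  \qquad L(\cdot) \approx -\log_{2} P(\cdot)
\]
```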
You claim that my explanation fails by this metric:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this adds no net penalty for 2b under Occam’s razor, because that constraint must be assumed in both cases: implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).
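(One way to write that accounting down, on the reading that the natural-selection constraint S is a shared background assumption; the notation is mine, not the article’s:)

```latex
% The shared assumption S ("only decision theories that could survive natural
% selection") contributes the same length to both theories, so it cancels
% when the two message lengths are compared.
\begin{align*}
  L(\text{2a}) &= L(U_{\text{self+child}}) + L(S) + L(D \mid \text{2a}, S)\\
  L(\text{2b}) &= L(U_{\text{self}}) + L(S) + L(D \mid \text{2b}, S)\\
  L(\text{2b}) - L(\text{2a}) &= \big[L(U_{\text{self}}) - L(U_{\text{self+child}})\big]
      + \big[L(D \mid \text{2b}, S) - L(D \mid \text{2a}, S)\big]
\end{align*}
```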
But to be honest, I’m losing track of the point being established by your objections (for which I apologize), so I’d appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.
[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it’s either explaining more phenomena (i.e. psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
You claim that my explanation fails by this metric:
“Occam’s razor” suggests that I shouldn’t introduce entities (SAMELs, in this case) that I don’t really need.
However, the two theories we’re deciding between (2a and 2b) don’t explicitly involve SAMELs in either case. [1]
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
Without Occam, I have to fall back on my second objection, the one I facetiously named “Perplexed’s tweezers”. I simply don’t understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory?
If you made this clear already and I failed to pick up on it, I apologize.
Ok, I accept your argument that Occam is neutral between you and me.
Hold on—that’s not what I said. I said that it was neutral on the issue of including “they can only use decision theories that could survive natural selection”. I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.
SAMELs aren’t involved at decision time in 2b, just as “Inclusive fitness” and “Hamilton’s rule” aren’t involved at decision time in 2a.
It doesn’t matter. They (inclusive fitness and Hamilton’s rule) have to be assumed (or implied by something that has to be assumed) anyway, because we’re dealing with people, so they’ll add the same complexity to both explanations.
I will point out though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using “revealed preference”, whereas your purely selfish child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.
As I’ve explained to you several times, looking at actions does not imply a unique utility function, so you can’t claim that you’ve measured it just by looking at their actions. The utility functions “I care about myself and my child” and “I care about myself” can produce the same actions, as I’ve demonstrated, because certain (biologically plausible) decision theories can output the action “care for child at expense of self”, even in the absence of a causal benefit to the self.
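To make that concrete, here is a minimal sketch; the payoff numbers and the SAMEL weighting are toy values I’m inventing for illustration, not anything from the article:

```python
# Two different (utility function, decision rule) pairs that produce the same
# observable action, so watching the action alone cannot tell you which one
# is in use. All numbers are illustrative toy values.

ACTIONS = ["care_for_child", "neglect_child"]

SELF_PAYOFF = {"care_for_child": -1.0, "neglect_child": 0.0}   # causal payoff to the parent
CHILD_PAYOFF = {"care_for_child": 5.0, "neglect_child": -5.0}  # causal payoff to the child

def score_2a(action):
    """Theory 2a: utility weights both self and child; plain maximization."""
    return SELF_PAYOFF[action] + CHILD_PAYOFF[action]

def score_2b(action):
    """Theory 2b: utility weights only the self, but the decision rule also
    counts an acausal (SAMEL-style) benefit of being the kind of agent whose
    policy could survive natural selection (weight assumed for illustration)."""
    samel_benefit = {"care_for_child": 2.0, "neglect_child": 0.0}
    return SELF_PAYOFF[action] + samel_benefit[action]

for name, score in [("2a", score_2a), ("2b", score_2b)]:
    print("Theory", name, "chooses", max(ACTIONS, key=score))
# Both print "care_for_child": the same behavior follows from either utility
# function, so the action does not pick out a unique utility function.
```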
I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn’t see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda.
You more than made up for it with the Parfit’s robot idea, though :-)
We are clearly talking past each other, and it does not seem to me that it would be productive to continue.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
I simply don’t understand your theory well enough to criticize it. … what is the decision theory you end up with?
It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
… You more than made up for it with the Parfit’s robot idea, though.
It wasn’t my idea. It was timtyler’s. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
Too many SAMELs and CAMELs for me. I didn’t even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn’t sound very interesting; I can’t be bothered. Retrospectively, I do now get the bit in the summary—if that is what it is all about. I could probably weigh in on how parental care works in mammals—but without absorbing all the associated context, I doubt I would be contributing positively.
Thanks for the robot credit. It doesn’t feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence—so substituting in a machine seems very natural.
Anyway, we don’t want you on too different a page—even if it does produce nice stories about the motivations of stranded hitch-hikers.
For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, in the sense I’ve just defined.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals. Something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, “So, what kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”.
Nothing I’ve described requires doing anything differently from Pearl’s kind of counterfactual surgery. For example, see EY’s exposition of Timeless Decision Theory, which performs standard counterfactual surgery but differs in how it assigns probabilities to outcomes given a particular surgery, for purposes of computing expected utility.
And that’s really the crux of it: the trick in TDT—and in explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
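Here is a toy illustration of that crux, in a Parfit’s-hitchhiker-style setup (this is my own sketch, not EY’s actual formalism; the payoffs and probabilities are made-up numbers): the utilities are held fixed, and only the probability weighting differs between the two styles of agent.

```python
# Toy illustration: same terminal values, same counterfactual surgery over the
# actions, but different beliefs about how the driver's (predictor's) behavior
# correlates with my choice. All payoffs and probabilities are illustrative.

PAYOFF = {
    ("pay", "rescued"): 90,      # driven to town, then hand over the money
    ("pay", "stranded"): -100,   # left in the desert
    ("refuse", "rescued"): 100,  # driven to town, keep the money
    ("refuse", "stranded"): -100,
}

def expected_utility(action, p_rescued):
    p = p_rescued[action]
    return p * PAYOFF[(action, "rescued")] + (1 - p) * PAYOFF[(action, "stranded")]

# CDT-flavored weighting: my choice has no causal influence on whether the
# driver rescues me, so the probability of rescue is the same either way
# (a flat 0.5 assumed purely for illustration).
cdt_beliefs = {"pay": 0.5, "refuse": 0.5}

# TDT/SAMEL-flavored weighting: the driver predicts my decision procedure,
# so deciding to pay goes with being rescued.
tdt_beliefs = {"pay": 0.99, "refuse": 0.01}

for label, beliefs in [("CDT-style", cdt_beliefs), ("TDT-style", tdt_beliefs)]:
    best = max(["pay", "refuse"], key=lambda a: expected_utility(a, beliefs))
    print(label, "chooses", best)
# The utilities never change; only the probability weighting does, and only the
# TDT-style weighting assigns higher expected utility to paying.
```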
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent’s decision ranking, and claim it was from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:
they prefer the apple to the orange, and believe they have a 100% chance of getting whichever fruit they reach for (pure value-based decision)
they are indifferent between the apple and the orange, but believe a reach for the apple is more likely to succeed than a reach for the orange (pure belief-based decision)
or anything in between.
TDT, then, doesn’t need to posit additional values (like “honor”); it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
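To spell out the apple/orange point numerically (the utilities and success probabilities below are toy numbers of my own choosing):

```python
# The same observed choice ("reach for the apple") is consistent with a pure
# value-based explanation, a pure belief-based explanation, or any mixture.

def expected_utility(p_success, utility):
    return p_success * utility

# (a) Pure value-based: apple strictly preferred, reaching always succeeds.
value_based = {
    "apple":  expected_utility(1.0, utility=2.0),
    "orange": expected_utility(1.0, utility=1.0),
}

# (b) Pure belief-based: indifferent in value, but a reach for the apple is
#     believed more likely to succeed.
belief_based = {
    "apple":  expected_utility(0.9, utility=1.0),
    "orange": expected_utility(0.6, utility=1.0),
}

for name, eu in [("value-based", value_based), ("belief-based", belief_based)]:
    print(name, "->", max(eu, key=eu.get))
# Both print "apple": the observable reach does not distinguish the two stories.
```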
The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
I can answer that, but I should probably just explain the confusing distinctions: from the inside, it is the feeling (like “love”) that is psychologically responsible for the agent’s decision. My point is that the action this love produces is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short: the agent feels the love, and the love motivates the behavior (psychologically); and, as a group, the feelings explainable through SAMELs feel different from other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding “revealed preference”, you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding “revealed preference”, I find that not only do we disagree as to what the phrase means; I suspect we are also both wrong. This “revealed preference” dispute is such a mess that I really don’t want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.
I like the tweezers, but would like a better name for it.