For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to “revealed preference”. You have never acknowledged my response, but continue claiming that you have explained it to me.
You have certainly posted responses; I don’t recall you saying anything responsive, though, i.e. something that would establish that seeing someone’s actions suffices to identify a unique (enough) utility function, at least in this case—and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you’ve said something responsive, as I just defined responsive.
I have to interpret that as a policy of using some other kind of “surgery” for counterfactuals, something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become: “What kind of surgery do you advocate?” and “How do you know when to use this strange surgery rather than the one Pearl suggests?”
Nothing I’ve described requires doing counterfactual surgery any differently than Pearl does. For example, see EY’s exposition of Timeless Decision Theory (TDT), which does standard CF surgery but differs in how it calculates the probabilities of results given a particular surgery, for purposes of calculating expected utility.
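If it helps to see what I mean by the standard surgery, here is a minimal Python sketch. The toy model, the “predictor”, and every number in it are my own invented assumptions for illustration, not anything taken from Pearl’s or EY’s actual examples:

```python
# Toy structural model (all numbers invented): a latent `disposition` normally
# drives both the agent's action and a predictor's guess about that action.
P_DISPOSITION = {"cooperative": 0.5, "defecting": 0.5}

def predictor_guess(disposition):
    # Toy assumption: the predictor simply reads off the disposition.
    return "cooperate" if disposition == "cooperative" else "defect"

def utility(action, guess):
    # Purely illustrative payoffs.
    payoffs = {
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"): 0,
        ("defect", "cooperate"): 5,
        ("defect", "defect"): 1,
    }
    return payoffs[(action, guess)]

def expected_utility_with_surgery(action):
    """Pearl-style surgery: sever the arrows *into* the action node and set the
    action by fiat; everything upstream (the disposition, and hence the
    predictor's guess) keeps its original, unconditioned distribution."""
    return sum(p * utility(action, predictor_guess(d))
               for d, p in P_DISPOSITION.items())

for a in ("cooperate", "defect"):
    print(a, expected_utility_with_surgery(a))
```

A TDT-flavored agent would do the same surgery with the same utilities, but would plug in a different distribution over the predictor’s guess given the surgically-set action; on these numbers, that is the only place the two calculations can diverge.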
And that’s really the crux of it: The trick in TDT—and in explaining human behavior with SAMELs—is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions.
Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent’s decision ranking and claim it arose from various value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:
they prefer the apple to the orange, and believe they have a 100% chance of getting what they reach for (pure value-based decision)
they are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
or anything in between.
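To make the indeterminacy concrete, here is a minimal sketch with invented numbers, showing two very different value/belief pairs producing the same observable choice:

```python
def choice(utilities, p_success):
    """Pick the fruit with the higher expected utility, where p_success[f] is
    the agent's believed probability of actually getting fruit f by reaching
    for it (and failure is worth 0)."""
    return max(("apple", "orange"), key=lambda f: p_success[f] * utilities[f])

# 1. Pure value-based: prefers the apple, certain of success either way.
agent_1 = choice({"apple": 2.0, "orange": 1.0}, {"apple": 1.0, "orange": 1.0})

# 2. Pure belief-based: indifferent between the fruits, but thinks reaching
#    for the apple is more likely to succeed.
agent_2 = choice({"apple": 1.0, "orange": 1.0}, {"apple": 0.9, "orange": 0.6})

print(agent_1, agent_2)  # both agents reach for the apple
```

Watching either agent reach for the apple tells you nothing about which value/belief combination produced the reach.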
TDT, then, doesn’t need to posit additional values (like “honor”); it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavior.
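Here is the same point as a sketch, with invented payoffs and probabilities; the “honor_bonus” term is a hypothetical stand-in for positing an extra terminal value:

```python
# Illustrative payoffs for a toy promise-keeping situation.
U = {"good_outcome": 10.0, "bad_outcome": 0.0}

def expected_utility(p_good, honor_bonus=0.0):
    return p_good * U["good_outcome"] + (1 - p_good) * U["bad_outcome"] + honor_bonus

# Option 1: posit an extra terminal value ("honor") for keeping the promise.
keep_with_honor = expected_utility(p_good=0.3, honor_bonus=4.0)   # 7.0
break_no_honor  = expected_utility(p_good=0.5)                    # 5.0

# Option 2: keep the utility function unchanged, but assign a higher believed
# probability of the good outcome to promise-keeping (the TDT/SAMEL-style move).
keep_reweighted  = expected_utility(p_good=0.6)                   # 6.0
break_reweighted = expected_utility(p_good=0.5)                   # 5.0

print(keep_with_honor > break_no_honor)     # True: keeping wins via an added value
print(keep_reweighted > break_reweighted)   # True: keeping wins via a changed belief
```

Either bookkeeping makes the promise-keeping action come out on top, so the behavior alone doesn’t force us to add “honor” to the value list.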
The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.
That sentence may mean something to you, but I can’t even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.
I can answer that, but I should probably just explain the confusing distinctions: From the inside, it is the feeling (like “love”) that is psychologically responsible for the agent’s decision. My point is that this “love” action is identical to what would result from deciding based on SAMELs (and not valuing the loved one), even though it feels like love, not like identifying a SAMEL.
So, in short, the agent feels the love, the love motivates the behavior (psychologically); and, as a group, the feelings explainable through SAMELs feel different from other kinds of feelings.
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop.
Regarding “revealed preference,” you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding “revealed preference”, I find that not only do we disagree as to what the phrase means, but I suspect we are also both wrong. This “revealed preference” dispute is such a mess that I really don’t want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.