This post gives what could be called an “epistemic Hansonian explanation”. A normal (“instrumental”) Hansonian explanation treats humans as agents that possess hidden goals, whose actions follow closely from those goals, and explains their actual actions in terms of these hypothetical goals. People don’t respond to easily available information about quality of healthcare, but (hypothetically) do respond to information about how prestigious a hospital is. Which goal does this behavior optimize for? Affiliation with prestigious institutions, apparently. Therefore, humans don’t really care about health, they care about prestige instead. As Anna’s recent post discusses, the problem with this explanation is that human behavior doesn’t closely follow any coherent goals at all, so even if we posit that humans have goals, these goals can’t be found by asking “What goals does the behavior optimize?”
Similarly in this instance, when you ask humans a question, you get an answer. Answers to the question “How happy are you with your life these days?” are (hypothetically) best explained by respondents’ current mood. Which question are the responses good answers for? The question about the current mood. Therefore, the respondents don’t really answer the question about their average happiness, they answer the question about their current mood instead.
The problem with these explanations seems to be the same: we try to fit the behavior (actions and responses to questions both) to the idea of humans as agents, whose behavior closely optimizes the goals they really pursue, and whose answers closely answer the questions they really consider. But there seems to be no reality to the (coherent) goals and beliefs (or questions one actually considers) that fall out of a descriptive model of humans as agents, even if there are coherent goals and beliefs somewhere, too loosely connected to actions and anticipations to be apparent in them.
I am probably not qualified to make good guesses about this, but as an avid reader of O.B., I think Hanson would be among the first people to agree with you that humans aren’t subconsciously enacting coherent goals. The agent-with-hidden-goals model, like many situations where a Markov-model-like formalism is adopted, is just an expedient tool that might offer some correlation with what an agent will do in a given future situation. Affiliation with prestigious institutions, while probably not a coherent goal held over time by many people, does seem to correlate with certain actions (endorsing credentialed folks’ predictions, trusting confident-seeming doctors, approving municipal construction projects despite being told to explicitly account for the planning fallacy, etc.).
I guess what I’m suggesting is that you’re right that people don’t have these as coherent goals, but I don’t see any better predictive model and would still use the hidden-goal model until a better one comes up. IMO, the ‘better one’ will just be a deeper-level Markov model. Maybe we don’t have easily explicable hidden goals that lend themselves to English summaries, but presumably we do have cognitive principles that differ from noise and cause correlation with survival behaviors. Small-model approximations of this are of course bad and not the whole story, but they are better than anything else at the moment and oftentimes useful.
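As a concrete (and entirely toy) sketch of what I mean by treating a hidden goal as a latent state in a small predictive model: you never observe the “goal”, you just fit a latent variable that makes observed choices more predictable. All states, actions, and probabilities below are made up for illustration, not anything Hanson has written.

```python
# Toy sketch: a "hidden goal" as a latent state we never observe directly,
# but which makes observed choices more predictable. Hypothetical example only.

LATENT_GOALS = ["prestige", "quality"]

# P(observed action | latent goal) -- made-up emission probabilities.
ACTION_PROBS = {
    "prestige": {"pick_famous_hospital": 0.8, "read_outcome_data": 0.2},
    "quality":  {"pick_famous_hospital": 0.3, "read_outcome_data": 0.7},
}

def posterior_over_goals(observed_actions, prior=None):
    """Bayesian update of the belief over the latent 'goal' from observed actions."""
    belief = dict(prior) if prior else {g: 1.0 / len(LATENT_GOALS) for g in LATENT_GOALS}
    for action in observed_actions:
        for goal in belief:
            belief[goal] *= ACTION_PROBS[goal][action]
        total = sum(belief.values())
        belief = {g: p / total for g, p in belief.items()}
    return belief

def predict_next_action(belief):
    """Marginal probability of each observable action under the current belief."""
    actions = next(iter(ACTION_PROBS.values())).keys()
    return {a: sum(belief[g] * ACTION_PROBS[g][a] for g in belief) for a in actions}

history = ["pick_famous_hospital", "pick_famous_hospital"]
belief = posterior_over_goals(history)
print(belief)                       # belief shifts toward "prestige"
print(predict_next_action(belief))  # predicts behavior without claiming the goal is "real"
```

The point of the sketch is only that the latent “prestige” state earns its keep by improving prediction; nothing in it requires that the person coherently holds prestige as a goal.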
Yes, these “hidden goals” and “hidden questions” of descriptive idealization have predictive power; behavior clusters around them. The (potential) error is in implying that they hold a more fundamental role, that they exist as actual goals/questions/beliefs whose properties contradict those of their (more idealistic) loosely connected counterparts in a normative idealization. People behaving in a way that ignores more direct signals of healthcare quality and instead pursues healthcare providers’ prestige doesn’t easily contradict people normatively caring about quality of healthcare.
Sure, all it tells us is that the signal we evolved to extract from the environment when worrying about healthcare is related to credentials. That was probably a great way to actually solve healthcare problems in various periods of the past. If you really do care about healthcare, and the environment around you affords a low-cost signal in the form of credentials that correlates with better healthcare, then you’ll slowly adopt that policy or die; higher-cost signals might yield better healthcare, but at the expense of putting yourself at a disadvantage compared to competitors using the low-cost signal.
When I hear the term ‘hidden goal’ in these models, I generally substitute “goal that would have correctly yielded the desired outcome in less data-rich environments.” I agree it is misleading to tout some statement like, “Look how foolish people are because they care more about credentials than about the real data behind doctors’ successes or treatments’ survival rates.” But I also don’t think Hanson or Kahneman are saying anything like that. I think they are saying, “Look at how unfortunate our intrinsic, evolved signal-processing machinery is. What worked great when the best you could do was hope to live to the age of 30 as a hunter-gatherer turns out not to be that great at all when you state more explicit goals tied to the data. Gee, if we could be more aware of the residual hunter-gatherer mechanisms that produced these cognitive artifacts, maybe we could correct for them or take advantage of them in some useful way.” Perhaps “vestigial goals” is a better term for what Hanson calls “hidden goals.”
Hanson-type explanations: do they assume coherent goals? What if we disregard goals and focus on urges? So, if people respond to the prestige of the hospital rather than the health care it provides, we might then say that their urges pertain more to prestige than to health care. How does a Hansonian explanation require coherent goals?
No. Just hidden ones.
But why does Hanson need “goals” at all? Why not “hidden urges”?
I think Hanson’s term ‘hidden goals’ basically means ‘vestigial urges’ in the sense in which he uses it.
Well put.
But we can use one of these explanations (your hidden goal is to optimize status, etc.) to predict yet-unobserved behavior in other contexts.
In the case of “did I answer an easier / more accessible question than was really posed?”, you may just be inventing a new just-so story in every case. So, like all self-help/productivity tricks, I can use it hoping that it reminds me to act more deliberately when it matters more than it wastes my energy, but I can’t be sure it’s more than a placebo.