Diamonds are not fungible—and yet they have prices. Same difference here, I figure.
What’s the price of one red paperclip? Is it the same price as a house?
That seems to be of questionable relevance—since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.
That’s a big part of the problem right there: humans aren’t “single agents” in this sense.
Humans are single agents in a number of senses—and are individual enough for the idea of revealed preference to be useful.
From the page you linked (emphasis added):
In the real world, when it is observed that a consumer purchased an orange, it is impossible to say what good or set of goods or behavioral options were discarded in preference of purchasing an orange. In this sense, preference is not revealed at all in the sense of ordinal utility.
However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That’s what makes ordinal utilities a “spherical cow” abstraction.
(WARP’s inapplicability to real (non-spherical) humans, in one sentence: “I feel like having an apple today, instead of an orange.” QED: humans are not “economic agents” under WARP, since they don’t consistently choose A over B in environments where both A and B are available.)
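A minimal sketch of the claimed violation, with WARP stated for a single-valued choice function c over menus (the textbook statement is given for budget sets and allows indifference; this simplified form is an assumption for illustration):

    % WARP, sketched for a single-valued choice function c over finite menus:
    % if x is ever chosen while y is available, y is never chosen while x is available.
    \[
      \bigl(x, y \in A \;\wedge\; c(A) = x\bigr)
      \;\Longrightarrow\;
      \bigl(\forall B:\; x, y \in B \Rightarrow c(B) \neq y\bigr)
    \]
    % Apple/orange case: with A = B = \{\text{apple}, \text{orange}\},
    % choosing the orange on Monday and the apple on Tuesday cannot both come
    % from a single context-free c satisfying the condition above.

Whether that counts against WARP itself, or only against modelling the chooser as a single context-free function, is exactly what the next few comments dispute.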
The first sentence is true—but the second sentence doesn’t follow from it logically—or in any other way I can see.
It is true that there are some problems modelling humans as von Neumann–Morgenstern agents—but that’s no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.
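For reference, the von Neumann–Morgenstern sense of “utility” being appealed to here, in its standard form (a gloss, not a full statement of the theorem): if preferences over lotteries satisfy the vNM axioms, they are represented by the expectation of some function u, unique up to positive affine transformation.

    % vNM representation, for lotteries L = (p_1, x_1; ...; p_n, x_n):
    \[
      L \succeq L' \;\iff\; \sum_i p_i\, u(x_i) \;\ge\; \sum_j p'_j\, u(x'_j)
    \]
    % "Utility" here is any function whose expected value ranks gambles the way
    % the agent does; it is not a quantity the agent must compute internally.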
WARP can’t be used to predict a human’s behavior in even the most trivial real situations. That makes it a “spherical cow” because it’s a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy.
That sounds to me uncannily similar to, “it is true that there are some problems modeling celestial movement using crystal spheres—but that’s no reason to throw out the concept of celestial bodies moving in perfect circles.”
There is an obvious surface similarity—but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force—and the required analogy looks like a bad one to me.
How so? I’m pointing out that the only actual intelligent agents we know of don’t actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. “crystal spheres”.
Economic agency/utility models may have the Platonic purity of crystal spheres, but:
We know for a fact they’re not what actually happens in reality, and
They have to be tortured considerably to make them “predict” what happens in reality.
It seems to me like arguing that we can’t build a good computer model of a bridge—because inside the model is all bits, while inside the actual bridge is all spinning atoms.
Computers can model anything. That is because they are universal. It doesn’t matter that computers work differently inside from the thing they are modelling.
Just the same applies to partially-recursive utility functions—they are a universal modelling tool—and can model any computable agent.
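A minimal sketch of the universality construction being invoked, with illustrative names and a made-up example agent: wrap any computable policy in a utility function that scores the policy’s own choice 1 and everything else 0, and an argmax over that utility reproduces the agent exactly.

    # Illustrative sketch: any computable policy can be wrapped in a utility
    # function whose maximiser reproduces it choice for choice.
    from typing import Callable, Hashable, Sequence

    Action = Hashable
    History = Sequence[Hashable]

    def utility_from_policy(policy: Callable[[History], Action]):
        """Utility that is 1 for the action the policy would take, 0 otherwise."""
        def utility(history: History, action: Action) -> float:
            return 1.0 if action == policy(history) else 0.0
        return utility

    def act_by_maximising(utility, history: History, options: Sequence[Action]) -> Action:
        # Picking the option with the highest utility recovers the wrapped
        # policy, whatever that policy computes internally.
        return max(options, key=lambda a: utility(history, a))

    # A hypothetical context-sensitive "agent" of the kind discussed above:
    def fickle_chooser(history: History) -> Action:
        return "apple" if len(history) % 2 else "orange"

    u = utility_from_policy(fickle_chooser)
    assert act_by_maximising(u, (), ["apple", "orange"]) == "orange"
    assert act_by_maximising(u, ("monday",), ["apple", "orange"]) == "apple"

The construction always succeeds, which is the sense of “universal” being claimed; how much such a representation buys you is what the reply below takes issue with.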
Not at all. I’m saying that just as it takes more bits to describe a system of crystal spheres to predict planetary motion than it does to make the same predictions with a Newtonian solar system model, so too does it take more bits to predict a human’s behavior with a utility function, than it does to describe a human with interests and tolerances.
Indeed, your argument seems to be along the lines that since everything is made of atoms, we should model bridges using them. What were your words? Oh yes:
they are a universal modelling tool
Right. That very universality is exactly what makes them a poor model of human intelligence: they don’t concentrate probability space in the same way, and therefore don’t compress well.
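One standard way to cash out the “more bits” and “don’t compress well” language in the comment above is a two-part code comparison, in which a model M is scored by the bits needed to state it plus the bits needed to encode the observed behaviour D given it:

    \[
      \mathrm{score}(M) \;=\; L(M) \;+\; L(D \mid M)
    \]
    % The argument here is that a utility-function description of a person,
    % like the crystal spheres, only fits the data by inflating L(M), while a
    % model phrased in terms of interests and tolerances keeps the total shorter.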
Sure—but what you claimed was a “spherical cow” was “ordinal utilities”, which is a totally different concept.
It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don’t constitute evidence for the usefulness of ordinal utility.
That’s treating the “Weak Axiom of Revealed Preference” as the “weakest form” of revealed preference. However, that is not something that I consider to be correct.
The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense—namely that they have a single brain and a single body.
Single brain and body doesn’t mean anything when that brain is riddled with sometimes-conflicting goals… which is precisely what refutes WARP.
(See also Ainslie’s notion of “picoeconomics”, i.e. modeling individual humans as a collection of competing agents—which is closely related to the tolerance model I’ve been giving examples of in this thread.)
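A minimal sketch of the effect picoeconomics is built around, with made-up numbers: under a hyperbolic discount curve, value = amount / (1 + k * delay), a smaller-sooner reward overtakes a larger-later one as it draws near, so the “agent” choosing from a distance and the one choosing up close behave like different parties.

    def hyperbolic_value(amount: float, delay: float, k: float = 1.0) -> float:
        # Ainslie-style hyperbolic discounting: value falls off as 1/(1 + k*delay).
        return amount / (1.0 + k * delay)

    smaller_sooner = (10.0, 1.0)   # made-up: 10 units, 1 day away
    larger_later = (15.0, 4.0)     # made-up: 15 units, 4 days away

    def prefers_sooner(extra_wait: float) -> bool:
        ss = hyperbolic_value(smaller_sooner[0], smaller_sooner[1] + extra_wait)
        ll = hyperbolic_value(larger_later[0], larger_later[1] + extra_wait)
        return ss > ll

    # Judged from far off, waiting for the larger reward looks better...
    assert not prefers_sooner(extra_wait=10.0)
    # ...but the ranking flips once the smaller reward is imminent.
    assert prefers_sooner(extra_wait=0.0)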
That sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.
Ainslie’s précis of his book Breakdown of Will
Yvain’s Less Wrong post “Applied Picoeconomics”
Muchas gracias.
Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king—and those aims conflict. Such conflicts don’t stop utilities being assigned and moves from being made. You only have one body—and it is going to do something.
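A toy version of the point (nothing like Deep Blue’s actual evaluation function; the features and weights are invented): conflicting sub-goals get folded into one score per move, so the conflict never prevents a move from being chosen.

    from typing import Dict

    # Invented feature scores for two candidate moves.
    candidate_moves: Dict[str, Dict[str, float]] = {
        "push_pawn":  {"pawn_advancement": 0.9, "king_safety": 0.2},
        "guard_king": {"pawn_advancement": 0.1, "king_safety": 0.8},
    }

    weights = {"pawn_advancement": 1.0, "king_safety": 1.5}  # invented trade-off

    def evaluate(features: Dict[str, float]) -> float:
        # The competing aims are commensurated by the weights into a single number.
        return sum(weights[name] * value for name, value in features.items())

    best_move = max(candidate_moves, key=lambda m: evaluate(candidate_moves[m]))
    print(best_move)  # "guard_king" with these numbers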
Then why did you even bring this up in the first place?
Probably for the same reason you threadjacked to talk about PCT ;-)