Sure—but what you claimed was a “spherical cow” was “ordinal utilities”, which is a totally different concept.
It was you who brought revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don’t constitute evidence for the usefulness of ordinal utility.
That treats the “Weak Axiom of Revealed Preference” as the “weakest form” of revealed preference, which I don’t think is correct.
The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense—namely that they have a single brain and a single body.
Single brain and body doesn’t mean anything when that brain is riddled with sometimes-conflicting goals… which is precisely what refutes WARP.
(See also Ainslie’s notion of “picoeconomics”, i.e. modeling individual humans as a collection of competing agents—which is closely related to the tolerance model I’ve been giving examples of in this thread.)
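To make the WARP point concrete, here is a minimal sketch in the finite-menu form of the axiom: if x was chosen while y was on offer, y must never be chosen from a menu that also contains x. Sometimes-conflicting goals that dominate at different times generate exactly the choice data that fails this check. All names and choice data below are invented for illustration.

```python
from itertools import combinations

def warp_violations(observations):
    """Flag pairs of observed choices that violate the Weak Axiom of
    Revealed Preference: if x was chosen while y was available, then y
    must never be chosen from a menu that also contains x (for x != y)."""
    violations = []
    for (menu_a, choice_a), (menu_b, choice_b) in combinations(observations, 2):
        if (choice_a != choice_b
                and choice_b in menu_a
                and choice_a in menu_b):
            violations.append((choice_a, choice_b))
    return violations

# Invented data: the long-range goal wins at lunch, the short-range goal
# wins at midnight, over the very same menu -- the sometimes-conflicting
# goals described above.
observed = [
    ({"salad", "cake"}, "salad"),   # lunch
    ({"salad", "cake"}, "cake"),    # midnight
]
print(warp_violations(observed))    # [('salad', 'cake')] -> WARP fails
```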
That sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.
Ainslie’s précis of his book Breakdown of Will
Yvain’s Less Wrong post “Applied Picoeconomics”
Many thanks.
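For anyone who just wants the gist of the précis: Ainslie’s central device is hyperbolic discounting, under which the value curves of a smaller-sooner and a larger-later reward cross as the sooner one approaches, so the momentary preference reverses. A toy sketch of that crossover (all amounts, times, and the discount rate invented):

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Ainslie-style hyperbolic discount curve: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

# Invented rewards: 50 units available at t=10 (smaller-sooner), 100 units at
# t=14 (larger-later). As t=10 approaches, the curves cross and the momentary
# preference flips -- the "competing interests" picture of picoeconomics.
for now in range(0, 11):
    ss = hyperbolic_value(50, 10 - now)
    ll = hyperbolic_value(100, 14 - now)
    winner = "smaller-sooner" if ss > ll else "larger-later"
    print(f"t={now:2d}  SS={ss:5.1f}  LL={ll:5.1f}  prefers {winner}")
```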
Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king—and those aims conflict. Such conflicts don’t stop utilities from being assigned or moves from being made. You only have one body—and it is going to do something.
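A minimal sketch of that point (not Deep Blue’s actual evaluation function, which is far more elaborate): conflicting objectives can simply be weighted and summed into one score, and the argmax still yields exactly one move. All weights and feature values are invented.

```python
# Toy sketch only -- conflicting sub-goals ("promote the pawn" vs. "protect
# the king") are just terms combined into one score, so argmax still
# produces exactly one move.

def evaluate(position):
    """Fold competing objectives into a single scalar score."""
    return (1.0 * position["pawn_advancement"]     # pushes toward promotion
            + 2.0 * position["king_safety"]        # pushes toward defence
            - 0.5 * position["material_deficit"])  # penalises lost material

# Hypothetical positions reached by two candidate moves.
candidates = {
    "push pawn":   {"pawn_advancement": 6, "king_safety": 2, "material_deficit": 0},
    "defend king": {"pawn_advancement": 3, "king_safety": 5, "material_deficit": 0},
}

best = max(candidates, key=lambda move: evaluate(candidates[move]))
print(best)  # one body, one move: a single action comes out despite the conflict
```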
Then why did you even bring this up in the first place?
Probably for the same reason you threadjacked to talk about PCT ;-)