I introduced revealed preference to support the idea that humans act like a single agent in at least one important sense: they have a single brain and a single body.
Having a single brain and body doesn't mean much when that brain is riddled with sometimes-conflicting goals, and that conflict is precisely what refutes the Weak Axiom of Revealed Preference (WARP).
(See also Ainslie’s notion of “picoeconomics”, i.e. modeling individual humans as a collection of competing agents—which is closely related to the tolerance model I’ve been giving examples of in this thread.)
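To make that concrete, here is a minimal Python sketch (all numbers and names are illustrative, not taken from Ainslie) of the hyperbolic-discounting preference reversals picoeconomics is built on: the same person ranks the same two rewards differently depending only on when the choice is made, which is the kind of inconsistency a single WARP-respecting agent cannot exhibit.

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Perceived value of a reward `amount` arriving after `delay` steps
    (Ainslie-style hyperbolic discounting)."""
    return amount / (1.0 + k * delay)

def choose(small, small_delay, large, large_delay):
    """Pick whichever of two delayed rewards looks better right now."""
    if hyperbolic_value(small, small_delay) > hyperbolic_value(large, large_delay):
        return "small-sooner"
    return "large-later"

# Viewed from a distance, the larger-later reward wins:
print(choose(small=5, small_delay=10, large=10, large_delay=15))  # large-later

# As both rewards draw nearer (same pair, same gap), the choice reverses:
print(choose(small=5, small_delay=0, large=10, large_delay=5))    # small-sooner
```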
Competing sub-goals are fine. Deep Blue wanted to promote its pawn and also to protect its king, and those aims can conflict. Such conflicts don't stop utilities from being assigned or moves from being made. You only have one body, and it is going to do something.
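As a toy illustration of that point (nothing to do with Deep Blue's actual evaluation function; all moves, features, and weights here are made up), a weighted sum can collapse conflicting sub-goal scores into one utility, and the argmax still picks a single move:

```python
CANDIDATE_MOVES = {
    # move: (pawn-promotion progress, king-safety score) -- hypothetical numbers
    "push_pawn":  (0.9, 0.2),
    "guard_king": (0.1, 0.9),
    "compromise": (0.6, 0.6),
}

WEIGHTS = (0.5, 0.5)  # how much the single evaluation cares about each sub-goal

def evaluate(features, weights=WEIGHTS):
    """Collapse conflicting sub-goal scores into a single utility."""
    return sum(f * w for f, w in zip(features, weights))

# The sub-goals pull in different directions, but the argmax is well defined:
best = max(CANDIDATE_MOVES, key=lambda m: evaluate(CANDIDATE_MOVES[m]))
print(best)  # "compromise" -- one body, one move
```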
Picoeconomics sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.
Ainslie’s précis of his book Breakdown of Will
Yvain’s Less Wrong post “Applied Picoeconomics”
Many thanks.
Then why did you bring up the single brain and body in the first place?
Probably for the same reason you threadjacked to talk about PCT (Perceptual Control Theory) ;-)