Speaking for myself, any text mentioning “revealed preferences” gets flagged in my mind as having a high probability of being in “not even wrong” territory. Or, more precisely, in “motte and bailey” territory, because that’s where the whole concept resides.
I mean, the “motte” of revealed preferences is that when people talk all day about how they want X, but everything they actually do points towards Y, it is reasonable to assume they are probably just bullshitting.
And the “bailey” is taking what actually happened and saying “this is your true preference”, even in cases where the person who talked about X actually took some steps towards X but failed, because… well, usually because the person did something stupid or half-assed. Essentially, saying “what happened = the true preference” assumes too much rationality and computing power on the part of the person we are judging. (And it ignores the effects of luck. If I throw a coin and get two different results in two Everett branches, does that mean my “revealed preferences” were in a superposition before the coin landed? Or is it okay if each branch concludes that heads/tails was actually my revealed preference all along?)
I am not sure what the point of this article is. Is it “if you want X, take a look at whether, from the outside, you look like ‘a person who really wants X’, and perhaps adjust your actions accordingly”? That’s my best guess, but the previous part about AgentyBot got the whole text flagged in my mind.
Quoted from above, “It’s important to note that revealed preferences are different to preferences, they are in fact distinctly different. They are their own subset. Revealed preferences are just another description that informs the map of, “me as a person”. In many ways, a revealed preference is much much more real than a simple preference that does not actually come about.”
Revealed preference is a very real economic theory, developed by Paul Samuelson:
https://en.wikipedia.org/wiki/Paul_Samuelson
He originally developed it to replace utility theory, because utility has no basis in reality, just correlation (similar to the way that math is not real, but correlates with reality so well that we usually just accept it as real). Specifically, he cared about consumer behaviour, and based on some simple premises about how a consumer’s choices are fixed, he theorised that you could use a revealed preference (a person’s actions lead them to make choices that they believe will increase their utilons) in place of a utility (a person has preferences that lead to greater utilons).
I don’t think he actually succeeded in replacing utility with revealed preference theory (see: how common each theory is today), but I think he certainly made big discoveries and found something very concrete compared to utility. Something we can genuinely use as a valuable concept.
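To illustrate how concrete the theory is: Samuelson’s Weak Axiom of Revealed Preference (WARP) can be checked mechanically against observed choice data. Below is a minimal sketch (not from the comments above; the bundles and prices are made-up numbers) of detecting WARP violations, where two distinct bundles are each “revealed preferred” to the other:

```python
# Hedged sketch of checking the Weak Axiom of Revealed Preference (WARP)
# against synthetic consumer choice data. All numbers are illustrative.

def affordable(prices, bundle, budget):
    """A bundle is affordable if its cost at the given prices fits the budget."""
    return sum(p * q for p, q in zip(prices, bundle)) <= budget

def warp_violations(observations):
    """Each observation is (prices, chosen_bundle).

    The chosen bundle x is 'revealed preferred' to y when y was affordable
    at the moment x was chosen. WARP is violated if two distinct bundles
    are each revealed preferred to the other.
    """
    violations = []
    for i, (p_i, x_i) in enumerate(observations):
        cost_i = sum(p * q for p, q in zip(p_i, x_i))
        for j, (p_j, x_j) in enumerate(observations):
            if i >= j or x_i == x_j:
                continue
            cost_j = sum(p * q for p, q in zip(p_j, x_j))
            # Each bundle was affordable when the other was chosen: violation.
            if affordable(p_i, x_j, cost_i) and affordable(p_j, x_i, cost_j):
                violations.append((i, j))
    return violations

# Consistent consumer: the unchosen bundle was never affordable.
consistent = [
    ((1.0, 2.0), (4, 1)),   # spent 6; bundle (1, 3) would cost 7
    ((2.0, 1.0), (1, 3)),   # spent 5; bundle (4, 1) would cost 9
]
print(warp_violations(consistent))    # -> []

# Inconsistent consumer: each bundle was affordable when the other was chosen.
inconsistent = [
    ((1.0, 1.0), (3, 1)),   # spent 4; bundle (1, 3) also costs 4
    ((1.0, 2.0), (1, 3)),   # spent 7; bundle (3, 1) costs only 5
]
print(warp_violations(inconsistent))  # -> [(0, 1)]
```

The point of the construction is that “preference” here is defined entirely by observed choices and budget sets, with no appeal to an unobservable utility function.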
Further information:
https://www.researchgate.net/publication/228247551_Paul_Samuelson_and_Revealed_Preference_Theory