Somewhat ironic that LW is badly in need of a better captcha.
MrFailSauce
I read him; he is just incorrect. “People hate losses more than they like gains” is not explained by DMU. They dislike losses to an extent far greater than DMU predicts, and, more importantly, this dislike is largely scale-invariant.
If you go read papers like the original Kahneman & Tversky study, you’ll see that their data set is just a bunch of statements that are predicted to be equally preferable under DMU (because marginal utility doesn’t change much for small changes in wealth). What changes the preference is simply whether K&T phrase the question in terms of a loss or a gain.
So...unsurprisingly, Kahneman is accurately describing the theory that won him the Nobel prize.
The result you got is pretty close to the FFT of f(t) = t, which is roughly the waveform you get from sorting noise.
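A quick sketch of what I mean (my own toy check, not your exact setup): sort Gaussian noise, compare its magnitude spectrum to that of a linear ramp, and the two line up almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, purely for reproducibility
n = 1024
sorted_noise = np.sort(rng.standard_normal(n))
ramp = np.linspace(-1.0, 1.0, n)  # f(t) = t over the same window

# Compare magnitude spectra, skipping the DC bin (the two means differ).
spec_noise = np.abs(np.fft.rfft(sorted_noise))[1:]
spec_ramp = np.abs(np.fft.rfft(ramp))[1:]

# Both spectra are dominated by the same ~1/k falloff, so the
# correlation between them comes out very close to 1.
corr = np.corrcoef(spec_noise, spec_ramp)[0, 1]
```

The point is that sorting destroys everything about the noise except its monotone, roughly linear shape, and the FFT only sees that shape.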
Almost surely, every finite-length sequence appears somewhere in an infinite random sequence. So, in the same way that all the works of Shakespeare exist inside an infinite random sequence, so too does a complete representation of any finite universe.
I suppose one could argue by the anthropic principle that we happen to exist in a well-ordered finite subsequence of an infinite random sequence. But, like multiverse theories, it lacks the explanatory power and verifiability of simpler theories.
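A finite toy version of the claim (my own sketch; the pattern and lengths are arbitrary): any short fixed pattern turns up essentially always in a long enough random string.

```python
import random

random.seed(0)  # fixed seed, purely for reproducibility
bits = ''.join(random.choice('01') for _ in range(100_000))

pattern = '01101001'  # an arbitrary 8-bit stand-in for "the works of Shakespeare"
idx = bits.find(pattern)
# An 8-bit pattern is expected to occur ~390 times in 100,000 random bits,
# so idx >= 0 with overwhelming probability; longer patterns just need
# proportionally (exponentially) longer strings.
```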
Maybe I’m being dense, and missing the mystery, but I think this reference might be helpful.
I mean...he quotes Kahneman while claiming the guy doesn’t know the implications of his own theory.
Losses hurt more than gains even at scales where DMU predicts that they shouldn’t (because your DMU curve is approximately flat for small losses and gains). Loss aversion is the psychological result that explains this effect.
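A quick worked example of that flatness (my own numbers: a log-utility curve and a hypothetical wealth of $100,000, neither taken from the post):

```python
import math

W = 100_000   # hypothetical total wealth
d = 100       # a small stake
u = math.log  # a standard concave (diminishing-marginal-utility) curve

utility_of_gain = u(W + d) - u(W)
utility_of_loss = u(W) - u(W - d)
dmu_ratio = utility_of_loss / utility_of_gain
# dmu_ratio comes out ~1.001: DMU predicts near-perfect symmetry at this
# scale, while Kahneman & Tversky's subjects weighted losses roughly 2x gains.
```

Any concave utility function gives the same answer here, because over a $200 window the curve is essentially a straight line.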
This is the author’s conclusion: “So, please, don’t go around claiming that behavioral economists are incorporating some brilliant newfound insight that people hate losses more than they like gains. We’ve known about this in price theory since Alfred Marshall’s 1890 Principles of Economics.”
Sorry, nope. Alfred Marshall’s Principles would have made the wrong prediction.
That makes a lot of sense to me. Aversion to small losses makes a ton of sense as a blanket rule when the gamble is:

- lose: don’t eat today
- win: eat double today
- don’t play: eat today
Our ancestors probably faced this gamble since long before humans were even humans. Under those stable conditions, a heuristic accounting for scale would have been needlessly expensive.
In short, the author is wrong. Diminishing marginal utility only really applies when the stakes are on the order of the agent’s total wealth, whereas the loss-aversion asymmetry holds even for relatively small sums.
I think this is an interesting concept and want to see where you go with it. But just to play devil’s advocate, there are some pretty strong counterexamples for micromanagement. For example, many imperative languages can be ridiculously inefficient. And try solving an NP-complete problem with a genetic algorithm and you’ll just get stuck in a local optimum.
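The local-optimum failure mode is easy to demonstrate with a greedy hill-climber, the simplest cousin of a GA that has lost its population diversity (my own toy landscape, not anything from the post):

```python
import math

def fitness(x):
    # Two Gaussian bumps: a local peak near x=1 (height ~1) and a
    # global peak near x=4 (height ~3). Purely illustrative.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    # Greedy local search: always take the best neighboring point,
    # stop when no neighbor improves.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break
        x = best
    return x

x_local = hill_climb(0.0)   # starts in the basin of the local peak, stalls near x=1
x_global = hill_climb(3.0)  # starts in the basin of the global peak, reaches x~4
```

From the left-hand basin the search never sees the taller peak; only something non-greedy (restarts, mutation, crossover with a diverse population) can get it out.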
Simplicity and emergence are often surprisingly effective but they’re just tools in a large toolbox.