If people inevitably sometimes make mistakes when interpreting theories, and theory-driven mistakes are more likely to be catastrophic than the mistakes people make when acting according to “atheoretical” learning from experience and imitation, then unusually theory-driven people are more likely to make catastrophic mistakes. In the absence of a way to prevent people from sometimes making mistakes when interpreting theories, this seems like a pretty strong argument in favor of atheoretical learning from experience and imitation!
This is particularly pertinent if, in a lot of cases where more sober theorists tend to say, “Well, the true theory wouldn’t have recommended that,” the sober theorists believe it because they expect true theories not to wildly contradict the wisdom of atheoretical learning from experience and imitation, rather than because they’ve personally pinpointed the error in the interpretation.
(“But I don’t need to know the answer. I just recite to myself, over and over, until I can choose sleep: It all adds up to normality.”)
And that’s even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
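For concreteness, here is a minimal sketch of that gamble under linear versus sublinear utility. The normalization is my own assumption: current wealth is 1, “losing it all” means wealth 0, and square-root utility is just one illustrative sublinear choice.

```python
# A toy comparison of the 89/11 gamble under two utility functions.
# Normalization (my assumption): current wealth is 1; "losing it all"
# means wealth 0; winning means wealth 10.
from math import sqrt

p_win, p_ruin, multiple = 0.11, 0.89, 10.0

def expected_utility(u):
    """Expected utility of accepting the gamble."""
    return p_win * u(multiple) + p_ruin * u(0.0)

# Linear utility: EU = 0.11 * 10 = 1.1 > u(1) = 1, so accepting
# really is the utility-maximizing choice.
print(expected_utility(lambda w: w))   # 1.1 -> accept

# Sublinear (risk-averse) utility, sqrt as one example:
# EU = 0.11 * sqrt(10) ~= 0.35 < u(1) = 1, so the gamble is refused.
print(expected_utility(sqrt))          # ~0.348 -> reject
```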
I think the causality runs the other way, though: people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation show us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.
But of course no human financier has a utility function, let alone one that can be expressed only in terms of money, let alone one that’s linear in money. So in this hypothetical, yes, there is an error.
(SBF said his utility was linear in money. I think he probably wasn’t confused enough to think that was literally true, but I do think he was confused about the math.)
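One candidate for the confusion, offered as my own illustration rather than a reconstruction of what SBF actually computed: linear utility endorses taking a gamble like this every time it is offered, and a repeated taker goes bankrupt in almost every possible world even as their expected wealth explodes.

```python
# Repeated 89/11 gambles: expected wealth is (0.11 * 10)^n = 1.1^n,
# which grows without bound, while the probability of still being
# solvent after n rounds is 0.11^n, which collapses toward zero.
p_win, multiple = 0.11, 10.0

for n in (1, 5, 10, 20):
    p_solvent = p_win ** n
    e_wealth = (p_win * multiple) ** n
    print(f"after {n:2d} rounds: P(solvent) = {p_solvent:.2e}, "
          f"E[wealth] = {e_wealth:.2f}")
```

Nearly all of the expected value ends up concentrated in a vanishing sliver of worlds, which is the standard motivation for sublinear (for example, logarithmic) utility in repeated betting.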
This is related to a very important point: without more assumptions, there is no way to distinguish, via outcomes alone, between irrationality in pursuit of your values and rationality in pursuit of very different or strange values.
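To make that concrete, here is a toy sketch with two hypothetical agents of my own construction: one rationally maximizes linear utility; the other has ordinary risk-averse (square-root) utility but irrationally overweights the winning branch. An observer who only sees choices gets identical data from both.

```python
# Two hypothetical agents (my construction, not from the original
# discussion). Agent A rationally maximizes linear utility. Agent B
# has sublinear (sqrt) utility but irrationally overweights the
# winning branch by a factor of 3. Both accept the gamble, so their
# choices alone are identical evidence.
from math import sqrt

p_win, multiple = 0.11, 10.0

def accepts(u, win_overweight=1.0):
    # win_overweight > 1 models a bias toward the favorable outcome.
    return win_overweight * p_win * u(multiple) > u(1.0)

print(accepts(lambda w: w))               # True: rational, "strange" linear values
print(accepts(sqrt, win_overweight=3.0))  # True: ordinary values, miscalculated
```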
(Also, I dislike the implication that it all adds up to normality: unless something else is meant, the claim is trivial, since you can’t define normality without a context.)