Rational Groups Kick Ass
Reply to: Extreme Rationality: It’s Not That Great
Belaboring of: Rational Me Or We?
Related to: A Sense That More Is Possible
The success of Yvain’s post threw me off completely. My experience has been the opposite of what he describes: x-rationality, which I’ve been working on since the mid-to-late nineties, has been centrally important to successes I’ve had in business and family life. Yet the LessWrong community, which I greatly respect, broadly endorsed Yvain’s argument that:

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it.
So that left me pondering what’s different in my experience. I’ve been working on these things longer than most, and am more skilled than many, but that seemed unlikely to be the key.
The difference, I now think, is that I’ve been lucky enough to spend huge amounts of time in deeply rationalist organizations and groups—the companies I’ve worked at, my marriage, my circle of friends.
And rational groups kick ass.
An individual can unpack free will or figure out that the Copenhagen interpretation is nonsense. But I agree with Yvain that in a lonely rationalist’s individual life, the extra oomph of x-rationality may well be drowned in the noise of all the other factors of success and failure.
But groups! Groups magnify the importance of rational thinking tremendously:
Whereas a rational individual is still limited by her individual intelligence, creativity, and charisma, a rational group can promote the single best idea, leader, or method out of hundreds or thousands or millions.
Groups have powerful feedback loops; small dysfunctions can grow into disaster by repeated reflection, and small positives can cascade into massive success.
In a particularly powerful feedback process, groups can select for and promote exceptional members.
Groups can establish rules/norms/patterns that 1) directly improve members and 2) counteract members’ weaknesses.
Groups often operate in spaces where small differences are crucial. Companies with slightly better risk management are currently preparing to dominate the financial space. Countries with slightly more rational systems have generated the 0.5% of extra annual growth that leads, over centuries, to dramatically improved ways of life. Even in family life, a bit more rationality can easily be the difference between gradual divergence and gradual convergence.
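As a rough illustration of that compounding claim (the 0.5% figure is from the point above; the time horizons and the snippet are my own, purely illustrative):

```python
# Illustrative sketch: how a 0.5% annual growth advantage compounds over time.
# The 0.5% edge comes from the point above; the horizons are chosen for illustration.
edge = 0.005  # extra annual growth rate

for years in (50, 100, 200):
    ratio = (1 + edge) ** years
    print(f"After {years} years: {ratio:.2f}x the output of an otherwise identical country")
```

Over two centuries that modest edge compounds to roughly 2.7 times the output of an otherwise identical country, which is the kind of dramatically improved way of life the point gestures at.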
And we’re not even talking about the extra power of x-rationality. Imagine a couple that truly understood Aumann, a company that grokked the Planning Fallacy, a polity that consistently tried Pulling the Rope Sideways.
When it comes to groups—sized from two to a billion—Yvain couldn’t be more wrong.
Update: Orthonormal points out that I don’t provide many concrete examples; I only link to three above. I’ll try to put more here as I think of them:
In Better, Atul Gawande describes how some groups of doctors have dramatically improved by becoming more group-rational, including Overcoming Bias standbys like keeping score and having sensitive discussions privately (and thus more openly).
Google seems like an extremely rational place for a public company. Two strong signals are that it is heavily data-driven and has used prediction markets. To be painfully clear, I’m not claiming that Google’s success is due to its use of prediction markets, merely that these data points help demonstrate Google’s overall rationality.
As AlanCrowe points out in the comments, Warren Buffett and Charlie Munger have a rationalist approach.