Looking back, I have quite different thoughts on this essay (and the comments) than I did when it was published. Or at least much more legible explanations; the seeds of these thoughts have been around for a while.
On The Essay
The basketballism analogy remains excellent. Yet searching the comments, I’m surprised that nobody ever mentioned the Fosbury Flop or the Three-Year Swim Club. In sports, from time to time somebody comes along with some crazy new technique and shatters all the records.
Comparing rationality practice to sports practice, rationality has not yet had its Fosbury Flop.
I think it’s coming. I’d give ~60% chance that rationality will have had its first Fosbury Flop in another five years, and ~40% chance that the first Fosbury Flop of rationality is specifically a refined and better-understood version of gears-level modelling. It’s the sort of thing that people already sometimes approximate by intuition or accident, but has the potential to yield much larger returns once the technique is explicitly identified and intentionally developed.
Once that sort of technique is refined, the returns to studying technique become much larger.
On The Comments—What Does Rationalist Self-Improvement Look Like?
Scott’s prototypical picture of rationalist self-improvement “starts looking a lot like therapy”. A concrete image:
My archetypal idea of “rationalist self-help” is sitting around at a CFAR workshop trying very hard to examine your mental blocks.
… and I find it striking that people mostly didn’t argue with that picture, so much as argue that it’s actually pretty helpful to just avoid a lot of socially-respectable stupid mistakes.
I very strongly doubt that the Fosbury Flop of rationality is going to look like therapy. It’s going to look like engineering. There will very likely be math.
Today’s “rationalist self-help” does look a lot like therapy, but it’s not the thing that will yield impressive returns from studying technique.
On The Comments—What Benefits Should Rationalist Self-Improvement Yield?
This is one question where I didn’t have a clear answer a year ago, but now I feel ready to put a stake in the ground. Money cannot buy expertise. Wherever expertise is profitable, there will be charlatans, and the charlatans win the social status game at least as often as not. 2020 provided ample evidence of this, including in places where actually making the map match the territory matters.
The only way to reliably avoid the charlatans is to learn enough oneself, model enough of the world oneself, acquire enough expertise oneself, to distinguish the truth from the bullshit.
That is a key real-world skill—arguably the key real-world skill—at which rationalists should outperform the rest of the world, especially as a community. Rationalist self-improvement should make us better at modelling the world, better at finding the truth in a pile of bullshit. And 2020 gave us a great problem on which to show off that skill. We did well compared to the rest of the world, and there’s a whole lot of room left to grow.
It has now been 4 years since this post, and 3 years since your prediction. Two thoughts:
1. Prediction markets have really taken off in the past few years, and this has been a substantial upgrade in keeping abreast of what’s going on in the world.
2. The Fosbury Flop of rationality might have already happened: Korzybski’s consciousness of abstraction. It’s just not used, because of the dozens to hundreds of hours it takes to learn.
I think it’s coming. I’d give ~60% chance that rationality will have had its first Fosbury Flop in another five years, and ~40% chance that the first Fosbury Flop of rationality is specifically a refined and better-understood version of gears-level modelling.
The Fosbury Flop succeeded because the high jump has very clear win conditions, and the technique beat everyone else’s.
I think it’s much harder to have a Fosbury Flop for rationality, because even if a technique is better, that isn’t immediately obvious, and so it’s far less memetic.
Missing some other necessary conditions here, but I think your point is correct.
This is a big part of why having a rationalist community matters. Presumably people had jumping competitions in antiquity, and probably at some point someone tried something Fosbury-like (and managed to not break their spine in the process). But it wasn’t until we had a big international sporting community that the conditions were set for it to spread.
Now we have a community of people who are on the lookout for better learning/modelling/problem-solving techniques, and who have some decent (though far from perfect) epistemic tools in place to distinguish such things from self-help bullshit. Memetically, a Fosbury Flop of rationality probably won’t be as immediately obvious a success as the original Fosbury Flop, since we don’t have a rationality Olympics (and if we did, it would be Goodharted). On the other hand, we have the internet, we have much faster diffusion of information, and we have a community of people who actively experiment with this sort of stuff, so it’s not obvious whether a successful new technique would spread more or less quickly on net.
The Fosbury Flop is a good analogy. Where I think it falls short is that rationality is a much more complex thing than jumping. You would need more than just the invention and application of a technique by one person for a paradigm shift. It would at least also require distilling the technique well, learning how to teach it well, and changing the rationality canon in light of it.
I think a paradigm shift would happen when a new rationality canon is created and adopted that outperforms the current Sequences (very likely also containing new techniques), and I think that’s doable (for a start, see the flaws in the Sequences that Eliezer himself described in the preface).
This isn’t low-hanging fruit, as it would require a lot of effort from skilled and knowledgeable people, but I would say it’s at least visible fruit, so to speak.
If the central claim of rationality is that there are a small number of generic techniques that can make you better at a wide range of things, then the basketball analogy is misleading, because basketball is a specific skill. The central claim of rationality was that there is such a small number of generic techniques, i.e. remove biases and use Bayes. Bayes (Bayes! Bayes!) was considered the Fosbury Flop for everything. But that seems not to have worked, and to have been quietly dropped. All the defences of rationalism in this article implicitly use a toolbox approach, although law thinking is explicitly recommended.
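For concreteness about what “use Bayes” means as a generic technique here, a minimal sketch of a single Bayesian update follows; the hypothesis, prior, and likelihoods are illustrative numbers of my own choosing, not anything from the discussion above:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' theorem for a binary hypothesis H."""
    # P(E) by the law of total probability
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers: 30% prior that a claim is true; the observed evidence
# is 4x more likely if the claim is true than if it is false.
posterior = bayes_update(prior=0.30, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"posterior = {posterior:.3f}")  # posterior = 0.632
```

The point of contention above is not whether this arithmetic works (it does), but whether repeatedly applying it constitutes a generic life-improving technique.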
The obvious candidate for the rationalist Fosbury Flop is the development of good forecasting environments/software/culture/theory, etc.
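On that forecasting theme: “good forecasting theory” in practice usually means proper scoring rules. A minimal sketch of one such rule, the Brier score, with made-up forecasts and outcomes purely for illustration:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: three forecasts and what actually happened.
forecasts = [0.9, 0.2, 0.7]
outcomes = [1, 0, 0]
print(f"Brier score = {brier_score(forecasts, outcomes):.3f}")  # 0.180
```

Scoring rules like this are what let a forecasting community reward accuracy rather than confidence, which is part of why forecasting infrastructure is a plausible candidate here.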