> Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies
I don’t think that’s true. I’d independently intuited my way into something like this post, and I suspect that many people successfully doing high-impact cognitive work likewise stumble their way into something like this technique. Perhaps not consciously, nor at the full scale this post describes, but well enough that explicitly adopting it will yield only marginal further improvements.
Which is the case for a lot of LW-style rationality techniques, I think. Most people who can use them and would receive benefits from using them would’ve developed them on their own eventually. Consuming LW content just speeds this process up.
So this sort of thing is useful at the individual level, but in most cases, you ain’t “beating the market” with this — you just do well. And a hypothetical wide-scale adoption would lead to a modest elevation of the “sanity waterline”, but not any sort of cognitive revolution (second-order effects aside).