I agree with a bunch of these concerns. FWIW, it wouldn’t surprise me if the current rationalist community still behaviorally undervalues “specialized jargon”. (Or, rather than jargon, concept handles a la https://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/.) I don’t have a strong view on whether rationalists undervalue or overvalue this kind of thing, but it seems worth commenting on since it’s being discussed a lot here.
When I look at why people ended up ‘working smarter’ or changed course in a good way, the story often involves a new lens they started applying to something. I think one of the biggest problems the rationalist community faces is a lack of dakka and a lack of lead bullets. But I want to caution against treating abstraction and execution as too sharp a dichotomy, such that we have to choose between “novel LW posts are useful and high-status” and “conscientiousness and follow-through are useful and high-status” and see-saw between the two.
The important thing is cutting the enemy, and I think the kinds of problems that rationalists are in an especially good position to solve require individuals to exhibit large amounts of execution and follow-through while (on a timescale of years) doing a large number of big and small course-corrections to improve their productivity or change their strategy.
It might be that we’re doing too much reflection and too much coming up with lenses. It might also be that we’re not doing enough grunt work and not doing enough reflection and lenscrafting. Physical tasks don’t care whether we’re already doing an abnormal amount of one or the other; the universe just hands us problems of a certain difficulty, and if we fall short on any of the requirements then we fail.
It might also be that this varies by individual, such that it’s best to just make sure people are aware of these different concerns so they can check which holds true in their own circumstance.