Excited to see what comes out of this. I do want to raise attention to this failure mode covered in the Sequences, however. I’d love for those who do the program to try to bind their results to reality in some way, ideally producing a concrete result showing how they’re substantively stronger afterwards, and whether this replicates with other participants who did the training.
This failure mode is definitely real. OTOH, demands for immediate, legible results can kill off many valuable options. There are major improvements in my life that are measurable (e.g. ability to take moral stands when people are yelling at me, ability to think on my feet while anxious) but can’t be attributed to any one action[1]. If you took away everything that couldn’t objectively justify itself in a few months, I’d be much worse off, even though probably a good chunk of what was cut was valueless.
Broadly agree the failure mode is important; also, I’m fairly confident basically all the listed mentors understand this problem of rationality education / “how to improve yourself” schools / etc., and I’d hope they can help participants avoid it.
I would subtly push back against optimizing for something like being measurably stronger on a timescale like 2 months. In my experience actually functional things in this space typically work by increasing the growth rate of [something hard to measure], so instead of e.g. 15% p.a. you get 80% p.a.
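To make the arithmetic behind this concrete, here is a minimal sketch (the 15% vs. 80% p.a. figures are the comment's own illustrative numbers; the function name and compounding assumption are mine) showing why a two-month window can barely distinguish the two growth rates, while a multi-year window separates them dramatically:

```python
# Sketch under the comment's illustrative assumptions: a hard-to-measure
# quantity compounds at a fixed annual rate, and we ask how much it has
# grown after a given span of time.

def growth_after(annual_rate: float, years: float) -> float:
    """Multiplier on the hard-to-measure quantity after `years` of compounding."""
    return (1 + annual_rate) ** years

two_months = 2 / 12
for rate in (0.15, 0.80):
    print(f"{rate:.0%} p.a.: x{growth_after(rate, two_months):.3f} "
          f"after 2 months, x{growth_after(rate, 5):.1f} after 5 years")
```

After two months the multipliers are roughly 1.02 vs. 1.10 (a gap easily lost in measurement noise), while after five years they are roughly 2x vs. 19x, which is exactly why optimizing for legibility on a two-month timescale can miss the interventions that matter most.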
How, then, would someone know this is a useful thing based on other signals? It’s totally valid to suggest using something else, but is there one? If not, you’re going to have a selection effect against people for whom that matters.
[1] Or can be attributed to specific actions, but only partially. The sum of improvements with traceable causes is far less than the total improvement.