If “refining the art of human rationality” is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials.
I agree, and I’m in favor of this sort of thing. I try to do this sort of thing among my friends. Sometimes it works, at least a little bit.
On the other hand, if we’re trying to save Earth from being turned into paperclips, we ought to focus our efforts on people who’re smart enough to be able to meaningfully contribute to AI risk reduction.
On the other other hand, there are people here who could help with sanity-waterline-raising materials but who can’t help with rationality training as a way to avert AI x-risk.
On the other other other hand, some people who might be able to help with AI risk might instead get drawn into the possibly-less-important sanity-waterline-raising projects, and this would be a bad thing.