The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focussing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
By all means, let’s copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.