A useful question is how much rationality training is optimal. The brain can make intuitive guesses very quickly, and these guesses are fairly accurate most of the time, while meta-cognitive rationality checks slow decision-making and don’t guarantee correctness. A Rationalist wants to make optimal decisions, and this often means going as meta as possible and striving to consider all known information: cognitive biases, general science, etc.
It is currently impossible to derive morality from logic and science (i.e. to derive "should" from "is"), a point recognized at least since Hume and arguably reiterated by Wittgenstein. So I can't make any general statements about what anyone should do.
Assuming you want to be right, happy, and powerful, I recommend learning domain knowledge that helps you build or do useful things (e.g. engineering or another technical field for a career; identifying what works and practicing the relevant techniques for routine tasks such as car-buying), and studying only enough psychology/logic/meta-cognition to usually avoid costly errors. The amount of valuable knowledge and skill I accumulate tracks closely with my success.
Practically, in engineering work, guarding against overconfidence is a huge part of the job. It's easy to get excited about a new and untested idea, but engineers typically learn humility after a few embarrassing and expensive failures. Experienced engineers are careful not to deliver an expensive or complicated product to customers until it has gone through extensive review and testing, and even then there is a budget or insurance for unexpected problems. And that is for products built with established methods and practices, which can be rigorously tested under known operating conditions. Meta-cognition is inherently harder to test (for one thing, destructive testing is unwise). LW rationality content generally describes well-validated theories, but prescribing actions based on those theories requires subjective value judgments.
tl;dr: Rationality helps but data/experience is what’s critical for making effective decisions. If you haven’t validated your theory with experiments, it’s probably wrong.
A Rationalist wants to make optimal decisions, and this often means going as meta as possible and striving to consider all known information: cognitive biases, general science, etc. [...] Rationality helps but data/experience is what’s critical for making effective decisions. If you haven’t validated your theory with experiments, it’s probably wrong.
I don't think that's a fair criticism. This community values doing Fermi estimates and checking whether the estimates land near the correct number. We have PredictionBook for calibrating our prediction abilities against the real world.
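The Fermi-estimate practice mentioned above can be sketched in a few lines. The classic example is estimating the number of piano tuners in Chicago; every input below is an assumed round number for illustration, not a sourced figure, and the calibration step is comparing the result to the real answer afterward:

```python
# Fermi estimate: piano tuners in Chicago.
# All inputs are assumed round numbers, chosen for illustration.
population = 3_000_000              # assumed city population
people_per_household = 2            # assumed average household size
piano_ownership_rate = 1 / 20       # assumed fraction of households with a piano
tunings_per_piano_per_year = 1      # assumed
tunings_per_tuner_per_year = 1000   # ~4 tunings/day * 250 workdays, assumed

households = population / people_per_household
pianos = households * piano_ownership_rate
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # -> 75
```

The point is not the specific number but the habit: decompose the question into factors you can guess, multiply them out, and then check whether the answer is within an order of magnitude of reality. Repeating that loop is what calibrates intuition against the world.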