tangent: System 1 seems to control how “profound”, and thus how likely-to-apply-in-the-future, any given concept feels. Venkatesh Rao has written a piece on this I can’t find right now, but the gist was that we glom onto concepts that allow more efficient mental organization. For example, discovering that two phenomena we thought were separate are actually sub-cases of some more basic phenomenon. An important point is that we do this speciously, since our pattern recognition is overactive (false alarms are worth it when you’re checking for leopards). This predicts wide-ranging failures such as religion, policy wonkery, conspiracy theories, etc.
Anyway, the point is that my process for finding, evaluating, and adding such concepts to my permanent repository of cognitive tools is not well defined, and this bothers me. I’ve tried explicitly adding concepts to my permanent toolbox without being sure they would be helpful, for example when I used Anki (spaced repetition software) to help remember biases and fallacies. I found it hard to stick with this even though it did in fact seem to help me notice more often when I was making specific errors. So I guess what I’m basically asking is: why aren’t we spending a lot more time improving the checklist of rationality habits, especially via empiricism?
For example, discovering that two phenomena we thought were separate are actually sub-cases of some more basic phenomenon.
I like to do this a lot in mathematics, but fortunately mathematical language is both rich and rigorous enough that I can avoid false alarms in that context (category theory in particular is full of examples of phenomena that look separate but can rigorously be shown to be subcases of a more basic phenomenon).
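To make that concrete with a standard example (stated loosely, from memory): the categorical product of objects $A$ and $B$ is an object $P$ with projections $\pi_1 : P \to A$ and $\pi_2 : P \to B$ such that any $X$ with maps $f : X \to A$ and $g : X \to B$ factors through $P$ by a unique map $\langle f, g \rangle : X \to P$. Specializing the ambient category recovers constructions that are usually introduced separately: in $\mathbf{Set}$ this is the cartesian product, in $\mathbf{Grp}$ the direct product of groups, and in $\mathbf{Top}$ the product topology. Three apparently distinct “phenomena” turn out to be sub-cases of one universal property.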
I think of mathematics as being a conspiracy theorist’s fantasy land: it works nearly the way a conspiracy theorist thinks reality works.
So I guess what I’m basically asking is: why aren’t we spending a lot more time improving the checklist of rationality habits, especially via empiricism?
Well, that’s something like what CFAR is trying to do.