This was covered in the post; not sure what point you’re making.

The point I am making is that your claim—that rationality is sometimes the right tool for the job, and sometimes not—doesn’t follow from your argument, any more than it follows from the observation that, say, the skill of cooking pasta is not, strictly speaking, “rationality”. Figuring out how to cook pasta, or figuring out how to figure out how to cook pasta, or figuring out whether to cook pasta, or figuring out how to determine whether you should cook pasta and how much time you should spend on figuring out how to cook pasta… these things might recognizably be “rationality”, but the skill of cooking pasta is just a skill.
But what do we conclude from that? That—contrary to previous beliefs—sometimes we shouldn’t apply rationality? That the definition of “rationality” should somehow exclude cooking pasta, placing that skill outside of the domain of “rationality”? I mean, this—
Rationality is the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions. Sometimes this is the appropriate tool for the job, and sometimes it’s not.
—does it include cooking pasta or not? (Suppose you say “no”, are we to understand that heretofore the consensus answer among “rationalists” was instead “yes”?) This seems like a silly thing to be asking, but that’s because there’s a motte-and-bailey here: the motte is “cooking pasta is not really ‘rationality’, per se, it’s just a skill”, while the bailey is “we shouldn’t apply rationality to the domain of pasta-cooking, or cooking of any sort, or [some even broader category of activity]”.
To put it another way, if “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions” is sometimes the wrong tool for the job, but figuring out whether it’s the right tool for the job (or the question one meta level up from that, or from that, etc.) is “rationality”, then evidently “rationality” is not, in fact, “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions”, but rather something much broader and more fundamental than that. The definition is self-defeating. This is unsurprising: indeed, the idea that “rationality” absolutely should not be defined this narrowly is one of the most important ideas in the Sequences!
Why should this be? Naively, this is covered under “make better decisions”, because the method used to solve a problem is surely itself a decision. More broadly, it feels like we definitely want rationality to have the property that we can determine the limits of the art using the art, and also that we can expand the limits of the art using the art. Math has this property, and we don’t conclude that this capacity is something other than math, something more fundamental; not even in light of the incompleteness theorems.
For the cooking pasta example: it feels like we should be able to rationally estimate the time it would take to grok cooking pasta, compare it to the time it would take to just follow a good recipe, and conclude that just following the recipe is the better decision. More specifically, we should be able to decide, on a case-by-case basis, whether investing in improving our beliefs about pasta cooking beats going with our current beliefs and using a recipe.
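As a toy illustration of this case-by-case comparison (every number and the simple total-time model here are invented for the example, not taken from anywhere):

```python
# Toy sketch: should we invest upfront in learning to cook pasta well,
# or just follow a good recipe each time? All numbers are made up.

def total_time(upfront_learning_h, time_per_meal_h, meals):
    """Total hours spent over `meals` occasions of cooking pasta."""
    return upfront_learning_h + time_per_meal_h * meals

def better_choice(meals):
    # Option A: grok pasta cooking (large upfront cost, faster meals after).
    grok = total_time(upfront_learning_h=20.0, time_per_meal_h=0.4, meals=meals)
    # Option B: follow a recipe every time (no upfront cost, slower meals).
    recipe = total_time(upfront_learning_h=0.0, time_per_meal_h=0.6, meals=meals)
    return "grok" if grok < recipe else "recipe"

print(better_choice(meals=10))   # few meals: following the recipe wins
print(better_choice(meals=200))  # many meals: the upfront learning pays off
```

The point is only that the meta-level question (“is it worth improving my pasta-cooking beliefs?”) is itself an ordinary decision, answerable with the same machinery.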
I agree with your second paragraph, but I’m not sure how it’s meant to be contradicting what I wrote.

It is only contradictory insofar as I wrote it using the “beliefs and decisions” phraseology from Raemon’s definition, which isn’t much. What I am really interested in is hearing more about your intuitions about why applying the definition to meta-level questions points away from the definition’s usefulness.
Note that I am not really interested in Raemon’s specific definition per se, so if this is a broader intuition and you’d prefer to use other examples to illustrate that would be just fine.
I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes “applying cognitive algorithms to make good decisions”). But it’s not necessarily the case that studying that skill will pay off.
To me, this paragraph covered the point you’re making, and further took a stance on the efficacy of such an approach.
Note: I added that after Said wrote this comment, partly as a response to it.
The place where I feel like I addressed this (though somewhat less clearly) in the original version was:
There is rationality involved in sifting through the various practices people suggest to you, and figuring out which ones work best. But, the specific skill of “sifting out the good from the bad” isn’t always the best approach. It might take years to become good at it, and it’s not obvious that those years of getting good at it will pay off.