To put it another way: if “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions” is sometimes the wrong tool for the job, but figuring out whether it’s the right tool for the job (or the question one meta level up from that, or from that, etc.) is “rationality”, then evidently “rationality” is not, in fact, “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions”, but rather something much broader and more fundamental than that.
Why should this be? Naively, this is covered under “make better decisions”, because the method used to solve a problem is surely itself a decision. More broadly, it feels like we definitely want rationality to have the property that we can determine the limits of the art using the art, and also that we can expand the limits of the art using the art. Math has this property, and we don’t consider that ability to be not-math but something more fundamental; not even in light of the incompleteness theorems.
For the cooking-pasta example: it feels like we should be able to rationally consider the time it would take to grok cooking pasta, compare it to the time it would take to just follow a good recipe, and conclude that just following the recipe is the better decision. More specifically, we should be able to decide, on a case-by-case basis, whether investing in improving our beliefs about pasta cooking is better or worse than going with our current beliefs and using a recipe.
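That case-by-case comparison can be sketched as a simple expected-cost calculation. This is a minimal illustration, not anything from the original discussion; the function names and all of the numbers (hours to grok, time per meal, number of meals) are hypothetical placeholders.

```python
# Sketch of the meta-decision: compare the total expected time cost of
# "grokking" pasta cooking (large upfront investment, cheap per use)
# against just following a good recipe (no investment, pricier per use).
# All numbers below are made up for illustration.

def expected_cost(upfront_hours: float, per_use_hours: float, uses: int) -> float:
    """Total time spent: one-time investment plus per-use cost."""
    return upfront_hours + per_use_hours * uses

def should_grok(uses: int) -> bool:
    """True if building a real model of pasta cooking beats recipe-following."""
    grok = expected_cost(upfront_hours=10.0, per_use_hours=0.25, uses=uses)
    recipe = expected_cost(upfront_hours=0.0, per_use_hours=0.5, uses=uses)
    return grok < recipe

print(should_grok(uses=5))    # → False: for a few meals, follow the recipe
print(should_grok(uses=100))  # → True: over many meals, grokking pays off
```

The point is only that the decision flips with the inputs: the same procedure recommends the recipe in one case and grokking in another, which is exactly the case-by-case property described above.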
I do not follow this section:
Why should this be? Naively, this is covered under “make better decisions”, because the method used to solve a problem is surely itself a decision. More broadly, it feels like we definitely want rationality to have the property that we can determine the limits of the art using the art, and also that we can expand the limits of the art using the art. Math has this property, and we don’t consider that ability to be not-math but something more fundamental; not even in light of the incompleteness theorems.
For the cooking-pasta example: it feels like we should be able to rationally consider the time it would take to grok cooking pasta, compare it to the time it would take to just follow a good recipe, and conclude that just following the recipe is the better decision. More specifically, we should be able to decide, on a case-by-case basis, whether investing in improving our beliefs about pasta cooking is better or worse than going with our current beliefs and using a recipe.
I agree with your second paragraph, but I’m not sure how it’s meant to be contradicting what I wrote.
It is only contradictory insofar as I wrote it using the “beliefs and decisions” phraseology from Raemon’s definition, which isn’t much. What I am really interested in is hearing more about your intuitions behind why applying the definition to meta-level questions points away from the usefulness of the definition.
Note that I am not really interested in Raemon’s specific definition per se, so if this is a broader intuition and you’d prefer to use other examples to illustrate it, that would be just fine.