Some off the cuff thoughts:
Can you imagine an intelligent agent that is not rational? And vice versa, can you imagine a rational agent that is not intelligent?
AIXI is “rational” (I believe it’s characterized as vNM-rational in the literature). Is “instrumental rationality” a superset of this definition?
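For reference, a sketch of the sense in which AIXI is “rational”: in Hutter’s formulation (roughly stated here, with horizon $m$, universal machine $U$, and program length $\ell(q)$) it picks actions that maximize expected cumulative reward under a Solomonoff-style mixture over environments, which is why the expected-utility / vNM framing gets applied to it:

$$a_k \;=\; \arg\max_{a_k}\sum_{o_k r_k}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\;\bigl[r_k+\cdots+r_m\bigr]\!\!\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m}\!\!2^{-\ell(q)}$$

So “rational” here means something like coherent expected-reward maximization relative to its belief distribution, which seems narrower than how “instrumental rationality” is usually used.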
In the case of human rationality and human intelligence, part of the distinction seems to be a question of scale. E.g. IQ tests seem to measure low-level pattern matching, while “rationality” in the sense of Stanovich refers to more of a larger-scale, self-reflective corrective process. (I’d conjecture that there are also a lot of low-level self-reflective corrective processes occurring during an IQ test.)