I’m aware of some of his work, but haven’t looked into it deeply. I added a section to the OP (quoted below) to clarify what question I’m most interested in. Do you think Stanovich’s work helps much in answering it?
To clarify, I’m asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs. intelligence may give us a clue to the answer, but it isn’t the main thing I’m interested in here.
I think Stanovich gives a persuasive model which, while complex, seems to explain the various results in a more sophisticated way than a bare System I vs. System II split. I’m not sure to what extent you could argue that his breakdown is universal, but I think it’s worth asking why his picture, in which modeling-based intelligence is sometimes invoked to solve problems, with specific pieces of knowledge enabling the solution of particularly tricky ones (to simplify his scheme), would not be universal. It may be that to get an efficient agent, you do need a lot of hardwired, frugal processing (System I) which occasionally punts to more detailed simulation/modeling cognition (System II), which in turn is augmented by general heuristics, theories, and pieces of knowledge. (I seem to recall reading a paper from the Schmidhuber lab where one of their universal agents ran much better when they encoded some relevant knowledge into its program, but I can’t re-find it. It might be their Optimal Ordered Problem Solver.)
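To make the kind of architecture I have in mind more concrete, here’s a minimal sketch in Python. It’s purely my own illustration, not Stanovich’s actual model or the Schmidhuber-lab agent I’m half-remembering; all of the names, the heuristic table, and the toy world model are made up for the example.

```python
import random

# Toy sketch of a dual-process agent: a cheap, hardwired "System 1" handles
# familiar observations, and punts to a slower, knowledge-augmented "System 2"
# simulation step when its confidence is low. Everything here is hypothetical.

# Hypothetical hardwired heuristics: observation -> (action, confidence).
HEURISTICS = {
    "obstacle_ahead": ("turn_left", 0.9),
    "reward_visible": ("approach", 0.8),
}

# Hypothetical stored knowledge: a bias over actions, standing in for the
# "general heuristics/theories/pieces of knowledge" that augment System 2.
KNOWLEDGE_BONUS = {"wait": -0.5}  # e.g. "idling is rarely the best plan"


def system1(observation):
    """Frugal lookup: return (action, confidence), or (None, 0.0) if no rule fires."""
    return HEURISTICS.get(observation, (None, 0.0))


def system2(observation, simulate, candidate_actions, horizon=3):
    """Slower, model-based step: roll out each candidate action with a world
    model (`simulate`) and pick the best, nudged by stored knowledge."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        score = simulate(observation, action, horizon) + KNOWLEDGE_BONUS.get(action, 0.0)
        if score > best_score:
            best_action, best_score = action, score
    return best_action


def act(observation, simulate, candidate_actions, threshold=0.5):
    """Dual-process policy: use System 1 when it is confident, else punt to System 2."""
    action, confidence = system1(observation)
    if confidence >= threshold:
        return action
    return system2(observation, simulate, candidate_actions)


if __name__ == "__main__":
    # Stand-in world model that just scores rollouts randomly.
    toy_simulate = lambda obs, action, horizon: random.random()
    print(act("obstacle_ahead", toy_simulate, ["turn_left", "turn_right", "wait"]))
    print(act("novel_situation", toy_simulate, ["turn_left", "turn_right", "wait"]))
```

The point of the sketch is just the control flow: most of the time the frugal path answers directly, and the expensive simulation machinery only runs when the cheap heuristics don’t cover the situation.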