I think Stanovich gives a persuasive model which, while complex, seems to explain the various results in a more sophisticated way than just System 1 vs. System 2. I’m not sure to what extent you could argue that his breakdown is universal, but I think it’s useful to ask why his model (to simplify his scheme: intelligence as modeling, invoked sometimes to solve problems, with specific pieces of knowledge allowing solution of particularly tricky problems) would not be universal. It may be that to get an efficient agent, you do need a lot of hardwired frugal processing (System 1) which occasionally punts to more detailed simulation/modeling cognition (System 2), which is in turn augmented by general heuristics/theories/pieces of knowledge. (I seem to recall reading a paper from the Schmidhuber lab where they found that one of their universal agents ran much better when they encoded some relevant knowledge into its program, but I can’t find it again; it might be their Optimal Ordered Problem Solver.)