Good question!
At least three possible explanations come to mind:
Abstract reasoning is, by its very nature, applicable to any domain and to entirely novel concepts. That limits the number of domain-specific sanity checks that can be “built in”.
Reasoning may not have evolved for problem-solving in the first place but for communication, which would limit the extent to which it has been selected for its problem-solving capability. (This would also help explain why it doesn’t drive our behavior all that strongly.)
Although human reasoning obviously doesn’t run on predicate logic or any other formal logic, some aspects of it seem similar. We know that formal logics are brittle in the sense that, in order to reach the correct conclusions, you need to get every single premise and inference step right. People can easily carry out reasoning that is formally entirely correct but still inapplicable to the real world, because they didn’t remember to incorporate some relevant factor. (Our working memory limits and our tendency to simplify things into nice stories that fit our memory easily probably both make this worse and are what make it possible for us to use abstract reasoning in the first place.) Connectionist-style reasoning seems better at modeling things without needing to explicitly specify every single factor that influences them, which is (IIRC) a big part of why connectionists have criticized logic-based AI systems as hopelessly fragile and incapable of reliable real-world performance.
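To make the brittleness point concrete, here is a minimal sketch in Python of a naive forward-chaining rule engine (the facts, rules, and the penguin example are hypothetical illustrations, not taken from any real system). The derivation is formally valid, yet the conclusion is false, because the modeler never encoded the one premise that actually matters:

```python
# Minimal sketch of naive forward chaining over (predicate, argument) facts.
# Hypothetical example: nothing here comes from a real reasoning system.

facts = {("bird", "tweety"), ("penguin", "tweety")}

# Single rule: anything that is a bird flies. The modeler forgot the
# exception for penguins, and the system has no way to notice that.
rules = [(("bird", "?x"), ("flies", "?x"))]

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise_pred, _), (conclusion_pred, _) in rules:
            for fact_pred, fact_arg in list(derived):
                new_fact = (conclusion_pred, fact_arg)
                if fact_pred == premise_pred and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

# Formally correct given the premises, factually wrong about the world:
print(("flies", "tweety") in forward_chain(facts, rules))  # True
```

The point of the sketch is that nothing inside the engine can flag the error: every inference step is sound, so the only way to get a correct conclusion is to get every premise right up front, which is exactly the fragility the connectionist critique points at.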