If you’ve read *Thinking, Fast and Slow*, you’ll be familiar with the concepts of a “system 1” that does fast, unconscious processing, and a “system 2” that does slow, methodical processing. The important insight here is that these systems don’t sit side by side. System 2 is built *on top* of system 1. Conscious thought is an emergent property of all the low-level unconscious thinking that’s going on.
(Every part of your conscious thinking process originally comes from your unconscious mind. Think about how you explicitly work through a problem step-by-step. How do you determine what step comes next? How do you perform one of those steps? The details are all performed unconsciously, and only a “summary” is brought to conscious awareness.)
If you had to use your system 2 to catch a ball, you couldn’t do it. Explicitly calculating the trajectory, and the muscle movements needed to position your hand in the right location, would take hours, if not years. The reason we can do it with our system 1 is that system 1 is a vastly more powerful system, finely tuned through evolution to be good at problems like that.
(Just watch people try to program a robot to catch a ball or assemble a puzzle.)
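To get a sense of what the explicit route involves, here’s a toy sketch of just the trajectory part, assuming an idealized drag-free ball (my illustration; a real catch adds drag, spin, depth perception, and dozens of joint angles on top of this):

```python
def landing_point(x0, vx, vy, g=9.81):
    """Where a drag-free projectile returns to its launch height.

    Even this idealized version requires explicit kinematics;
    system 1 solves the real, much messier problem in a fraction
    of a second without any of it reaching conscious awareness.
    """
    t = 2 * vy / g          # time of flight: vy*t - 0.5*g*t^2 = 0
    return x0 + vx * t, t

x_land, t = landing_point(x0=0.0, vx=8.0, vy=6.0)
print(f"Ball returns to launch height {x_land:.2f} m away after {t:.2f} s")
```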
System 1 evolved first, and it’s found in the minds of all animals. System 2 comes into existence once system 1 gets complicated enough to support a second layer of processing on top of it. (Think about someone building a computer in Minecraft. Their physical computer is simulating a universe, and then a computer is implemented inside the physics of that universe. The computer inside Minecraft is vastly slower and more limited than the actual computer it’s built on top of.)
There’s a conception of fields of thought as being “hard” or “soft”, as in the hard sciences/soft sciences and hard skills/soft skills dichotomies. The hard skills and sciences are usually thought of as more difficult, and for humans this is generally true. Soft skills are the sorts of things we evolved to be good at, so they feel natural and effortless. Hard skills are those that didn’t matter much in our ancestral environment, so we have no natural affinity for them.
But in a fundamental sense, hard skills are vastly simpler and easier than soft skills. Hard skills are those that can be formalized. Knowing how to perform long division is challenging for humans, but it’s trivial to program into a computer. Knowing how to hold a polite conversation with a coworker? Trivial for most humans, but almost impossible for an algorithmically programmed computer. (Only recent advances in neural networks that learn human-like heuristics have gotten us there.)
(This is just describing Kolmogorov complexity: the length of the shortest computer program that can do what you want. The shortest program that can perform long division is much shorter than the shortest program that can competently navigate human social interaction.)
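To make that asymmetry concrete, here is a complete schoolbook long-division routine; a dozen lines suffice (a sketch, ignoring edge cases like a zero divisor). Nothing remotely this short handles a polite conversation:

```python
def long_division(dividend, divisor):
    """Schoolbook long division: bring down one digit at a time."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down next digit
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor
    return int("".join(quotient_digits)), remainder

print(long_division(7325, 6))  # (1220, 5), since 1220 * 6 + 5 == 7325
```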
This is why experts in “hard skills” tend to be good at explaining them to others[1], while experts in “soft skills” tend to be bad at explaining their craft. Hard skills experts come to their expertise via conscious reasoning; they understand the subject matter on a step-by-step level, and can break it down for others.
Soft skills experts, on the other hand, tend to function through intuition. When someone is asked to explain why they’re so charismatic, they’ll often stumble and say things that boil down to “just say nice things instead of rude things”. They don’t actually *understand* why they behave the way they do; they just behave in the way that feels right to them, and it turns out that their unconscious mind is good at what it does.
This is the explanation behind Moravec’s paradox: the observation that computers tend to be good at the sorts of things humans are bad at, and vice versa.
Computers formally implement an algorithm for a task. This makes them only capable of performing tasks that are simple enough for humans to design an algorithm for.[2]
Life evolved to do things that were necessary for survival. These things require a massive amount of low-level processing, which can be optimized specifically to do those things and nothing else. You are in some sense doing “calculus” any time you catch a ball mid-flight, but the mental processes doing that calculus have been optimized specifically for catching thrown objects, and cannot be retasked to do other types of calculus.
The end result is that computers are good at things with low Kolmogorov complexity, while humans are good at things that are useful for survival on the surface of a planet. There’s no particular reason to expect these things to be the same.
Some questions that I don’t know the answer to:
Why are modern neural networks rapidly getting better at social skills (e.g. holding a conversation) and intellectual skills (e.g. programming, answering test questions), while making so little progress on physically embodied tasks such as controlling a robot or a self-driving car?
How does this model account for “intuitive geniuses”, who can give fast and precise answers to large arithmetic problems, but arrive at them by intuition rather than explicit reasoning? (I remember an article or blog post that mentioned one of them would only answer questions with integer square roots, and when given one with an irrational answer, would say “the numbers don’t feel right”, or something like that. I couldn’t find it again, though.)
I think I agree with most of what you are saying here. Definitely with the part where we intuitively solve complex nonlinear differential equations using machinery tailored to specific classes of problems, machinery that does not necessarily generalize well outside its domain and does not rise to the level of conscious thought. The centipede dilemma seems topical here.
As for one of your examples, autonomous driving, my guess is that computers are better at it than most human drivers in 99% of all cases, but the complexity of the remaining 1% is what kills it.
Ah, found the story. Wasn’t quite as I remembered. (Search for “wrong number”.)
https://arthurjensen.net/wp-content/uploads/2014/06/Speed-of-Information-Processing-in-a-Calculating-Prodigy-Shakuntala-Devi-1990-by-Arthur-Robert-Jensen.pdf
Easier to train, less sensitive to errors: neural nets do produce ‘bad’ or ‘uncanny’ outputs plenty of times, but their errors don’t harm or kill people, or cause significant damage (which a malfunctioning robot or self-driving car might).
It’s not that surprising that human intuitive reasoning could be flexible enough to build a ‘mental calculator’ for some specific types of arithmetic operations (humans can learn all kinds of complicated intuitive skills! That implies some amount of flexibility). It’s still somewhat surprising: I would expect human reasoning to have trouble representing numbers with sufficient precision. I guess the calculation would have to be done digit by digit? I doubt neurons would be able to tell the difference between 2636743 and 2636744 if the value were stored as a single quantity.
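For what it’s worth, digit-by-digit is exactly how the classic pencil-and-paper square-root algorithm works, which makes that guess at least mechanically plausible. A sketch in Python (my illustration of the schoolbook method, not a claim about how any prodigy actually computes):

```python
def isqrt_digit_by_digit(n):
    """Integer square root, one digit at a time (schoolbook method)."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits     # pad so the digits group into pairs
    root, remainder = 0, 0
    for i in range(0, len(digits), 2):
        remainder = remainder * 100 + int(digits[i:i + 2])
        # Largest digit d such that (20*root + d) * d <= remainder
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root, remainder        # remainder == 0 iff n is a perfect square

print(isqrt_digit_by_digit(152399025))  # (12345, 0): a perfect square
print(isqrt_digit_by_digit(152399026))  # (12345, 1): "doesn't feel right"
```

Notably, each step only needs small local quantities (a two-digit chunk and a running remainder), so no single neuron-level representation ever has to hold the full number precisely.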