Thomas Griffiths’ paper Understanding Human Intelligence through Human Limitations argues that the aspects we associate with human intelligence – rapid learning from small data, the ability to break down problems into parts, and the capacity for cumulative cultural evolution – arose from three fundamental limitations all humans share: limited time, limited computation, and limited communication. (The constraints imposed by these characteristics cascade: limited time magnifies the effect of limited computation, and limited communication makes it harder to draw upon more computation.) In particular, limited computation leads to problem decomposition, hence modular solutions; relieving the computation constraint enables solutions that can be objectively better along some axis while also being incomprehensible to humans:
A key attribute of human intelligence is being able to break problems into parts that can individually be solved more easily, or that make it possible to reuse partial solutions discovered through previous experience. These methods for making computational problems more tractable are such a ubiquitous part of human intelligence that they seem to be an obligatory component of intelligence more generally. One example of this is forming subgoals. The early artificial intelligence literature, inspired by human problem-solving, put a significant emphasis on reducing tasks to a series of subgoals.
However, forming subgoals is not a necessary part of intelligence; it’s a consequence of having limited computation. With a sufficiently large amount of computation, there is no need to have subgoals: the problem can be solved by simply planning all the way to the final goal.
Go experts have commented that new AI systems sometimes produce play that seems alien, precisely because it was hard to identify goals that motivated particular actions [13]. This makes perfect sense, since the actions taken by these systems are justified by the fact that they are most likely to yield a small expected advantage many steps in the future rather than because they satisfy some specific subgoal.
Another example where human intelligence looks very different from machine intelligence is in solving the Rubik’s cube. Thanks to some careful analysis and a significant amount of computation, the Rubik’s cube is a solved problem: the shortest path from any configuration to an unscrambled cube has been identified, taking no more than 20 moves [45]. However, the solution doesn’t have a huge amount of underlying structure – those shortest paths are stored in a gigantic lookup table. Contrast this with the solutions used by human solvers. A variety of methods for solving the cube exist, but those used by the fastest human solvers require around 50 moves. These solutions require memorizing a few dozen to a few hundred “algorithms” that specify transformations to be used at particular points in the process. These methods also have intermediate subgoals, such as first solving an entire side.
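To make that contrast concrete, here is a minimal, hypothetical sketch (not from the paper – the toy puzzle, move set, and function names are all my own invention): a four-token cyclic puzzle solved two ways, once by precomputing an exhaustive lookup table of optimal move sequences, and once by a human-style solver that fixes one token at a time as an explicit subgoal.

```python
from collections import deque

# A toy four-token puzzle (invented for illustration): the only legal moves
# are rotating all tokens left by one place or swapping the first two.
GOAL = ("A", "B", "C", "D")

def rotate(state):
    return state[1:] + state[:1]

def swap(state):
    return (state[1], state[0]) + state[2:]

MOVES = {"rotate": rotate, "swap": swap}

def build_lookup_table(goal=GOAL):
    """The 'gigantic lookup table' approach in miniature: breadth-first search
    outwards from the solved state using inverse moves, storing a shortest
    solving sequence for every reachable position."""
    inverses = {"rotate": lambda s: s[-1:] + s[:-1],  # rotate right undoes rotate left
                "swap": swap}                          # swap is its own inverse
    table = {goal: []}
    frontier = deque([goal])
    while frontier:
        state = frontier.popleft()
        for name, undo in inverses.items():
            prev = undo(state)
            if prev not in table:
                # Applying `name` to `prev` yields `state`, so an optimal
                # solution for `prev` is that move followed by `state`'s.
                table[prev] = [name] + table[state]
                frontier.append(prev)
    return table

def solve_with_subgoals(start, goal=GOAL):
    """Human-style solver: fix the tokens one at a time, left to right.
    Each subgoal is handled by a small search that only cares about the
    tokens placed so far -- modular and easy to narrate, but generally
    longer than the optimal answer stored in the lookup table."""
    plan, state = [], start
    for k in range(1, len(goal) + 1):
        frontier, seen = deque([(state, [])]), {state}
        while frontier:
            current, path = frontier.popleft()
            if current[:k] == goal[:k]:      # subgoal k reached
                state, plan = current, plan + path
                break
            for name, move in MOVES.items():
                nxt = move(current)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return plan

if __name__ == "__main__":
    table = build_lookup_table()
    scramble = ("D", "C", "B", "A")
    print("optimal :", table[scramble])
    print("subgoals:", solve_with_subgoals(scramble))
```

The subgoal solver’s output is easy to narrate (“first place A, then B, and so on”) but is generally longer than optimal; the table’s answer is shorter, yet it is nothing more than whatever exhaustive search happened to find, with no story attached.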
This is why I don’t buy the argument that “in the limit, superior strategies will tend to be beautiful and elegant”, at least for strategies generated by AIs far less limited than humans are w.r.t. time, compute, and communication. I don’t think they’ll necessarily look “dumb”, just not decomposable into human-working-memory-sized parts, and hence weird, incomprehensible, and informationally overwhelming from our perspective.
Since the topic of chess was brought up: I think the right intuition pump is endgame tablebases, not moves played by AlphaZero. A quote from Wikipedia about a KRNKNN mate-in-262 discovered by an endgame tablebase:
Playing over these moves is an eerie experience. They are not human; a grandmaster does not understand them any better than someone who has learned chess yesterday. The knights jump, the kings orbit, the sun goes down, and every move is the truth. It’s like being revealed the Meaning of Life, but it’s in Estonian.
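For a feel of what a tablebase actually is, here is a toy, hypothetical analogue in Python using Nim instead of chess (everything here is my own invention for illustration; real tablebases are built by retrograde analysis sweeping backwards from terminal positions over astronomically many positions, whereas this sketch just memoizes a top-down recursion over a tiny state space). Every position gets labeled win or loss for the side to move, plus its distance to the end of the game under optimal play, which is all the “strategy” a tablebase encodes.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def probe(position):
    """Outcome for the side to move in normal-play Nim: ('win'|'loss', plies).
    Wins report the fastest forced finish, losses the slowest possible one,
    mirroring the distance-to-mate numbers in chess tablebases."""
    if sum(position) == 0:
        return ("loss", 0)  # no moves left: the previous player took the last object and won
    outcomes = []
    for i, heap in enumerate(position):
        for take in range(1, heap + 1):
            child = tuple(sorted(position[:i] + (heap - take,) + position[i + 1:]))
            outcomes.append(probe(child))
    losing_replies = [plies for result, plies in outcomes if result == "loss"]
    if losing_replies:                    # some move leaves the opponent lost
        return ("win", 1 + min(losing_replies))
    return ("loss", 1 + max(plies for _, plies in outcomes))

if __name__ == "__main__":
    # The whole "tablebase" for heaps of size 0-3: every position, its outcome
    # for the side to move, and its distance to the end of the game.
    positions = {tuple(sorted(p)) for p in product(range(4), repeat=3)}
    for pos in sorted(positions):
        result, plies = probe(pos)
        print(pos, result, "in", plies, "plies")
```

There is no compressed, human-sized explanation behind the KRNKNN line above; it is just a long chain of lookups like these, each one “the truth” and none of them a reason.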
(Speedruns are another relevant intuition pump.)