The Futility of Intelligence
The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence or thinking machines—usually defined as the study of systems whose high-level behaviors arise from “thinking” or the interaction of many low-level elements. (R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying “It’s not a stone!” Does that feel like an explanation? No? Then neither should saying “It’s a thinking machine!”
It’s the noun “intelligence” that I protest, rather than the phrase “evoke a dynamic state sequence from a machine by computing an algorithm”. There’s nothing wrong with saying “X computes algorithm Y”, where Y is some specific, detailed flowchart that represents an algorithm or process. “Thinking about” is another legitimate phrase that means exactly the same thing: The machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
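To make “a specific, detailed flowchart” concrete, here is a minimal sketch of quicksort (in Python, purely as an illustration; nothing in the argument depends on the language). Saying “the machine is thinking about how to order the list” cashes out to exactly this kind of procedure, no more and no less:

```python
def quicksort(items):
    """Return a sorted copy of items.

    This is the entire content of "the machine is thinking about how to put
    the elements of a list in a certain order": pick a pivot, partition the
    remaining elements into smaller-or-equal and larger, and recurse on each
    part.
    """
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)


print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```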
Now suppose I were to say that a problem is explained by “thinking”, or that the order of the elements in a list is the result of a “thinking machine”, and claim that as my explanation.
The phrase “evoke a dynamic state sequence from a machine by computing an algorithm” is acceptable, just like “thinking about” or “is caused by” are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way “intelligence” is commonly used. “Intelligence” is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, “an artificial general intelligence would have a genuine intelligence advantage” as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its “advantage” is “intelligence”? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “intelligence” confess their ignorance of the internals, and take pride in it; they contrast the science of “artificial general intelligence” to other sciences merely mundane.
And even after the answer of “How? Intelligence!” is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation “intelligence” from any sentence in which it appears, and see if the sentence says anything different:
Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
After: The AI is going to take over the world by inventing nanotechnology.
Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
After: A friendly AI is going to extrapolate the coherent volition of humanity.
Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using the [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory, that it converges rather than diverges, and that our wishes cohere rather than interfere.
Another fun exercise is to replace “intelligence” with “magic”, the explanation that people had to use before the idea of an intelligence explosion was invented:
Before: The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
After: The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
Before: Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
After: Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.
Does not each statement convey exactly the same amount of knowledge about the phenomenon’s behavior? Does not each hypothesis fit exactly the same set of outcomes?
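The substitution really is that mechanical. Here is a throwaway sketch (Python again, purely my own illustration) that applies it to the “Before” sentences above; aside from a little grammatical smoothing, the output matches the “After” lines, and no predictive content is lost:

```python
# Toy version of the exercise above: swap "intelligence" for "magic" and see
# whether the claim constrains our expectations any differently afterwards.
# (The sentences are the "Before" examples from this post; the word swap is
# the only edit performed, so grammatical smoothing is left to the reader.)

before = [
    "The AI is going to use its superior intelligence to quickly evolve vastly "
    "superhuman capabilities and reach singleton status within a matter of weeks.",
    "Superhuman intelligence is able to use the internet to gain physical "
    "manipulators and expand its computational capabilities.",
]

for sentence in before:
    print("Before:", sentence)
    print("After: ", sentence.replace("intelligence", "magic"))
    print()
```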
“Intelligence” has become very popular, just as saying “magic” used to be very popular. “Intelligence” has the same deep appeal to human psychology, for the same reason. “Intelligence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they’ve taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of “science” but still the same species psychology.