Surface Analogies and Deep Causes
Followup to: Artificial Addition, The Outside View’s Domain
Where did I acquire, in my childhood, the deep conviction that reasoning from surface similarity couldn’t be trusted?
I don’t know; I really don’t. Maybe it was from S. I. Hayakawa’s Language in Thought and Action, or even Van Vogt’s similarly inspired Null-A novels. From there, perhaps, I began to mistrust reasoning that revolves around using the same word to label different things and then concluding that they must be similar. Could that be the beginning of my great distrust of surface similarities? Maybe. Or maybe I tried to reverse stupidity of the sort found in Plato; that is where the young Eliezer got many of his principles.
And where did I get the other half of the principle, the drive to dig beneath the surface and find deep causal models? The notion of asking, not “What other thing does it resemble?”, but rather “How does it work inside?” I don’t know; I don’t remember reading that anywhere.
But this principle was surely one of the deepest foundations of the 15-year-old Eliezer, long before the modern me. “Simulation over similarity” I called the principle, in just those words. Years before I first heard the phrase “heuristics and biases”, let alone the notion of inside views and outside views.
The “Law of Similarity” is, I believe, the official name for the magical principle that similar things are connected; that you can make it rain by pouring water on the ground.
Like most forms of magic, you can ban the Law of Similarity in its most blatant form, but people will find ways to invoke it anyway; magic is too much fun for people to give it up just because it is rationally prohibited.
In the case of Artificial Intelligence, for example, reasoning by analogy is one of the chief generators of defective AI designs:
“My AI uses a highly parallel neural network, just like the human brain!”
First, the data elements you call “neurons” are nothing like biological neurons. They resemble them the way that a ball bearing resembles a foot.
Second, earthworms have neurons too, you know; not everything with neurons in it is human-smart.
But most importantly, you can’t build something that “resembles” the human brain in one surface facet and expect everything else to come out similar. This is science by voodoo doll. You might as well build your computer in the form of a little person and hope for it to rise up and walk, as build it in the form of a neural network and expect it to think. Not unless the neural network is fully as similar to human brains as individual human brains are to each other.
So that is one example of a failed modern attempt to exploit a magical Law of Similarity and Contagion that does not, in fact, hold in our physical universe. But magic has been very popular since ancient times, and every time you ban it, it just comes back under a different name.
When you build a computer chip, it does not perform addition because the little beads of solder resemble the beads on an abacus, as if that resemblance meant the chip should perform addition just like an abacus does.
The computer chip does not perform addition because the transistors are “logical” and arithmetic is “logical” too, so that if they are both “logical” they ought to do the same sort of thing.
The computer chip performs addition because the maker understood addition well enough to prove that the transistors, if they work as specified at the level of individual elements, will carry out adding operations. You can prove this without talking about abacuses. The computer chip would work just as well even if no abacus had ever existed. The computer chip has its own power and its own strength; it does not draw upon the abacus by a similarity-link.
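To make that concrete, here is a minimal sketch in Python (purely illustrative, not a description of any real chip): a four-bit ripple-carry adder assembled from nothing but simulated logic gates, with an exhaustive check that it agrees with ordinary integer addition. The gate functions and the four-bit width are illustrative choices; the point is that the adder’s correctness follows from how the gates are wired together, with no abacus anywhere in the argument.

```python
# Purely illustrative: addition built from simulated logic gates alone.
# The adder's correctness follows from how the gates are wired together;
# nothing below refers to an abacus.

def xor(a, b):          # exclusive-or gate
    return a ^ b

def and_gate(a, b):     # AND gate
    return a & b

def or_gate(a, b):      # OR gate
    return a | b

def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    partial = xor(a, b)
    sum_bit = xor(partial, carry_in)
    carry_out = or_gate(and_gate(a, b), and_gate(partial, carry_in))
    return sum_bit, carry_out

def add_4bit(x, y):
    """Add two 4-bit numbers by rippling the carry through four full adders."""
    carry, result = 0, 0
    for i in range(4):
        sum_bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= sum_bit << i
    return result

# Exhaustive check against ordinary integer addition (modulo 16):
assert all(add_4bit(x, y) == (x + y) % 16
           for x in range(16) for y in range(16))
print(add_4bit(2, 2))   # -> 4
```

Nothing in that check appeals to what the circuit resembles; it appeals only to what the gates do.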
Now can you tell me, without talking about how your neural network is “just like the human brain”, how your neural algorithm is going to output “intelligence”? Indeed, if you pretend I’ve never seen or heard of a human brain or anything like it, can you explain to me what you mean by “intelligence”? This is not a challenge to be leveled at random bystanders, but no one would succeed in designing Artificial Intelligence unless they could answer it.
I can explain a computer chip to someone who’s never seen an abacus or heard of an abacus and who doesn’t even have the concept of an abacus, and if I could not do this, I could not design an artifact that performed addition. I probably couldn’t even make my own abacus, because I wouldn’t understand which aspects of the beads were important.
I expect to return later to this point as it pertains to Artificial Intelligence particularly.
Reasoning by analogy is just as popular today as it was in Greek times, and for the same reason. You’ve got no idea how something works, but you want to argue that it’s going to work a particular way. For example, you want to argue that your cute little sub-earthworm neural network is going to exhibit “intelligence”. Or you want to argue that your soul will survive the death of your body. So you find something else to which it bears one single surface resemblance, such as the human mind or a sleep cycle, and argue that since they resemble each other they should have the same behavior. Or better yet, just call them by the same name, like “neural” or “the generation of opposites”.
But there is just no law which says that if X has property A and Y has property A then X and Y must share any other property. “I built my network, and it’s massively parallel and interconnected and complicated, just like the human brain from which intelligence emerges! Behold, now intelligence shall emerge from this neural network as well!” And nothing happens. Why should it?
You come up with your argument from surface resemblances, and Nature comes back and says “So what?” There just isn’t a law that says it should work.
If you design a system of transistors to do addition, and it says 2 + 2 = 5, you can go back and debug it; you can find the place where you made an identifiable mistake.
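As a hypothetical illustration of that debuggability, take the same ripple-carry sketch from above and wire in one mistake, say an initial carry-in stuck at 1 instead of 0. The adder promptly announces that 2 + 2 = 5, and checking it against the specification points straight at the faulty wire:

```python
# Hypothetical illustration: the same ripple-carry design with one
# identifiable wiring mistake (the initial carry-in stuck at 1).

def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | ((a ^ b) & carry_in)
    return sum_bit, carry_out

def buggy_add_4bit(x, y):
    carry = 1   # BUG: should be 0 -- this single wire is the mistake
    result = 0
    for i in range(4):
        sum_bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= sum_bit << i
    return result

print(buggy_add_4bit(2, 2))   # -> 5

# Every input pair fails, and every failure is off by exactly one (mod 16),
# which localizes the fault to the carry-in rather than to any particular gate.
failures = [(x, y) for x in range(16) for y in range(16)
            if buggy_add_4bit(x, y) != (x + y) % 16]
print(len(failures))          # -> 256
```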
But suppose you build a neural network that is massively parallel and interconnected and complicated, and it fails to be intelligent. You can’t even identify afterward what went wrong, because the wrong step was in thinking that the clever argument from similarity had any power over Reality to begin with.
In place of this reliance on surface analogies, I have had this notion and principle, from so long ago that I can hardly remember how or why I first came to hold it: that the key to understanding is to ask why things happen, and to be able to walk through the process of their insides.
Whether hidden or in the open, this principle is at work throughout my writing. For example, take my notion of what it looks like to “explain” “free will” by digging down into the causal cognitive sources of human judgments of freedom-ness and determination-ness. Contrast this with any standard analysis that lists out surface judgments of freedom-ness and determination-ness without asking what cognitive algorithm generates these perceptions.
Of course, some things that resemble each other in some ways resemble each other in other ways as well. But in the modern world, at least, by the time we can rely on this resemblance, we generally have some idea of what is going on inside, and why the resemblance holds.
The distrust of surface analogies, and the drive to find deeper, causal models, has been with me for my whole remembered span, and has been tremendously helpful to both the young me and the modern one. The drive toward causality makes me keep asking “Why?” and looking toward the insides of things; and the distrust of surface analogies helps me avoid standard dead ends. Together they have driven my whole life.
As for Inside View vs. Outside View, I think that the lesson of history is just that reasoning from surface resemblances starts to come apart at the seams when you try to stretch it over gaps larger than Christmas shopping (predicting this year’s shopping from last year’s), which is to say, over gaps larger than different draws from the same causal-structural generator. And reasoning by surface resemblance fails with especial reliability in cases where the choice of reference class is underconstrained and there is the slightest motive to pick one class rather than another.