The Power of Intelligence
In our skulls we carry around three pounds of slimy, wet, grayish tissue, corrugated like crumpled toilet paper.
You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.
Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws—sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, toxic venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche—for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.
Then came the Day of the Squishy Things.
They had no armor. They had no claws. They had no venoms.
If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.
In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers—too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.
And as for the Squishy Things manipulating DNA—that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.
Even if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.
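For readers who want to check the figure, here is a quick back-of-the-envelope calculation (my own illustrative sketch, not part of the original essay), using the approximate Planck time of about 5.4 × 10⁻⁴⁴ seconds:

```python
# Sanity check on the Planck-interval figure above.
PLANCK_TIME = 5.39e-44          # seconds (approximate Planck time)
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

intervals_per_year = SECONDS_PER_YEAR / PLANCK_TIME
print(f"{intervals_per_year:.1e}")
```

This comes out to roughly 6 × 10⁵⁰, which matches "six hundred trillion trillion trillion trillion" (600 × 10¹² × 10¹² × 10¹² × 10¹²).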
Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.
I have observed that someone’s flinch-reaction to “intelligence”—the thought that crosses their mind in the first half-second after they hear the word “intelligence”—often determines their flinch-reaction to the notion of an intelligence explosion. Often they look up the keyword “intelligence” and retrieve the concept booksmarts—a mental image of the Grand Master chess player who can’t get a date, or a college professor who can’t survive outside academia.
“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first Homo sapiens had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists—for us, not mice or wasps—because we go on believing in it.
I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in Rain Man. It is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that gray wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible—the power sometimes called creativity.
People—venture capitalists in particular—sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be commercialized. This is what we call a framing problem.
Or maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about traveling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish all these things at once seems downright impossible—even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.
And so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity—well, it just sounds wrong, that’s all.
And well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even know what our real problems are.
But meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.
Well, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there—real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe—and it’s a tiny little bit harder to figure out how to build a generator.
The first publication of this post is here.
Great post. But, squishy as we are, there are two physical activities in which we have dominated the animal kingdom for a long time: throwing stones and long-distance running. A silverback gorilla can tear you apart limb from limb, but have you seen one attempt to throw something? Pitiful! The most they can achieve is flinging. And while intelligence is what started us on the path to developing the strongest long-range attack, the key to endurance hunting is our water-cooling system (i.e. sweating). A trained human can literally run at a horse until it drops from heat stroke.
We ran at things and threw rocks at them for a long time. And when we caught things, we broke their bones and skin with more rocks. After a while, the muscles for our jaws atrophied. Which, combined with the caloric surplus from our successful hunts, allowed our skulls and brains to grow. And then—then it was big brains time!
“Born to Run” is further reading on the topic—it also describes proper running technique (forefoot striking instead of heel striking; think rope jumping and stepping quietly). Although the author does not seem to know math very well, and his style might be too obnoxious for some.
TL;DR—intelligence is cool, but sweating should receive some recognition as the other superpower we humans enjoy. Or, as Toph Beifong would put it: “You are a genius. A sweaty, stinky genius.”
This is an excellent point. I’d like to deflate it a little bit though, since your supporting comments for the evolution of sweating mechanisms are part of a general principle.
For every mental strength we confidently point to, there will be an excellent physical strength we could also point to as a proximate cause, and vice versa. Discussions like this sound like evolutionary “missing link” arguments about the fossil record, where any two provided examples imply some intermediate step that’s roughly as deserving of attention.
Pointing out that brains are a profound development in evolutionary history has more to do with helpfulness for deriving new insights and consolidating the lessons of history than it does with measuring some global score of evolutionary value. Maybe sweating scores higher in evolutionary value than the unification of brains, but I predict that developing the AI equivalent of sweating will be significantly easier than developing the AI equivalent of brains. If you believe otherwise though, then calling attention to sweating is worth more words.
I agree with you. I just find the particulars oddly inspiring—even if we are not the fastest land hunters, we are genetically the most persistent. This is a lesson from biology that bears thinking about.
Also, we could point to our physical strengths, but people usually don’t. We collectively have this body image of ourselves as being “squishy”, big brains compensating for weak, frail bodies. I like disabusing that notion.
An interesting choice since horses are one of the few other animals on the planet that sweat and, therefore, are one of the hardest to run down.
Interestingly, the ability to sweat also coincides with the ability to run oneself to death. Other creatures use panting as their primary cooling mechanism, and, as a result, when they become too warm, they cease to be able to take in sufficient oxygen to maintain their exertion and have to stop. Non-sweaters will drop from exhaustion, but it’s rarely fatal.
Horses use their extreme running ability to get away from predators. Humans use it to be predators. When we finally teamed up we became nearly unstoppable. :D
I laughed multiple times while reading this one. I was severely underestimating the general concept of intelligence. Almost felt like someone intentionally targeted my past self’s misconceptions lol.
A real conversation gave me one data point that people use the word “wisdom” for the concept of intelligence.