This is a point I’ve been thinking about a lot recently—that the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and therefore we should expect to find that we are roughly as stupid as it’s possible to be and still have some of us smart enough to transform the world. You refer to it here in a way that suggests this is a well-understood point—is this point discussed more explicitly elsewhere?
It occurs to me that this is one reason we suffer from the “parochial intelligence scale” Eliezer complains about—that the difference in effect between being just barely at the point of having general intelligence and being slightly better than that is a lot, even if the difference in absolute capacity is slight.
I wonder how easy it would be to incorporate this point into my spiel for newcomers about why you should worry about AGI—what inferential distances am I missing?
We who are the first intelligences ever to exist … our tiny little brains at the uttermost dawn of mind … as awkward as the first replicator (2:01 in).
I watched the end of this video and liked it quite a lot. Pretty good job, Eliezer. And thanks for the link.
And wow, the Q&A at the end of the talk has some tragically confused questions. And I’m sure these are people who consider themselves intelligent. Very amusing, and maddening.
Selection pressure might be even weaker a lot of the time than a 3% fitness advantage having a 6% chance of becoming universal in the gene pool, or at least it’s more complicated—a lot of changes don’t offer a stable advantage over long periods.
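For reference, the 3%/6% figures look like the standard population-genetics result (Haldane’s approximation) for the fixation probability of a single new beneficial mutation; a quick sketch of the textbook math, not anything claimed in the thread itself:

\[
P_{\text{fix}} \approx 2s \quad \text{(new mutant, small } s\text{, large population)}, \qquad
\text{more exactly } P_{\text{fix}} = \frac{1 - e^{-2s}}{1 - e^{-4Ns}} .
\]

With \(s = 0.03\) this gives \(P_{\text{fix}} \approx 0.06\), i.e. about a 6% chance of eventually becoming universal in the gene pool.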
I think natural selection and human intelligence at this point can’t really be compared for strength. Each is doing things that the other can’t—afaik, we don’t know how to deliberately create organisms which can outcompete their wild conspecifics. (Or is it just that there’s no reason to try and/or we have too much sense to do the experiments?)
And we certainly don’t know how to deliberately design a creature which could thrive in the wild, though some animals which have been selectively bred for human purposes do well as ferals.
This point may be a nitpick since it doesn’t address how far human intelligence can go.
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?
To be fair, the races of Middle-Earth weren’t created by evolution, so the criticism isn’t fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn’t awaken before the elves. It’s not unreasonable to assume that as he did so, he also made them admire elven beauty.
Why do humans think dolphins are beautiful?
Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?
Well, it’s always possible that Gimli was a zoophile.
Yeah, I mean have you seen Dwarven women?
I’m a human and can easily imagine being attracted to Galadriel :) I can’t speak for dwarves.
Well, elves were intelligently designed to specifically be attractive to humans...
Most who think Moravec and Kurzweil got this about right think that supercomputer hardware could run something similar to a human brain today—if you had the dollars, were prepared for it to run a bit slow, and had the right software.
“Another example of attribution error: Why would Gimli think that Galadriel is beautiful?”
A waist:hip:thigh ratio between 0.6 & 0.8 & a highly symmetric face.
But she doesn’t even have a beard!
but he did have a preoccupation with her hair...
If I’m not mistaken, all those races were created, so they could reasonably have very similar standards of beauty, and the elves might have been created to match that.
[From Wikipedia:](http://en.wikipedia.org/wiki/Dwarf_%28Middle-earth%29)

In The Lord of the Rings Tolkien writes that they breed slowly, for no more than a third of them are female, and not all marry; also, female Dwarves look and sound (and dress, if journeying — which is rare) so alike to Dwarf-males that other folk cannot distinguish them, and thus others wrongly believe Dwarves grow out of stone. Tolkien names only one female, Dís. In The War of the Jewels Tolkien says both males and females have beards.
On the other hand, I suppose it’s possible that if humans find Elves that much more beautiful than humans, maybe Dwarves would be affected the same way, though it seems less likely for them.
Also, perhaps dwarves don’t have their beauty-sense linked to their mating selection. They appreciate elves as beautiful but something else as sexy.
Yeah, as JamesAndrix alludes to (warning: extreme geekery), the Dwarves were created by Aulë (one of the Valar (Gods)) because he was impatient for the Firstborn Children of Ilúvatar (i.e., the Elves) to awaken. So you might call the Dwarves Aulë’s attempt at creating the Elves; at least, he knew what the Elves would look like (from the Great Song), so it’s pretty plausible that he impressed in the Dwarves an aesthetic sense which would rank Elves very highly.
Yes, this is definitively correct. Also, it’s a world with magic rings and dragons, people.
There are different kinds of plausibility. There’s plausibility for fiction, and there’s plausibility for culture. Both pull in the same direction for LOTR to have Absolute Beauty, which by some odd coincidence, is a good match for what most of its readers think is beautiful.
What might break your suspension of disbelief? The usual BEM behavior would probably mean that the Watcher at the Gate preferentially grabbing Galadriel if she were available would seem entirely reasonable, but what about Treebeard? Shelob?
Particularly when referring to the movie versions, you could consider this simply a storytelling device, similar to all the characters speaking English even in movies set in non-English speaking countries (or planets). It’s not that the Absolute Beauty of Middle-Earth is necessarily a good match for our beauty standards, it’s that it makes it easier for us to relate to the characters and experience what they’re feeling.
You write “Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?”
I don’t know what argument Eliezer would’ve been using to reach that conclusion, but it’s the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale the estimated cost of the brain in proportion to the ratio of volume of nervous tissue.
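As a rough illustration, here is a minimal sketch of that kind of volume-scaling estimate, using figures in the ballpark of the ones Moravec published (a retina equivalent to roughly 10^9 instructions per second, a whole brain roughly 75,000 times the retina’s mass); the specific numbers, and the circa-2009 price/performance guesses, are illustrative assumptions rather than anything stated in this thread:

```python
# Moravec-style volume-scaling Fermi estimate (all figures are rough, assumed values).

retina_ops_per_sec = 1e9          # assumed: retinal edge/motion detection ~ 10^9 instructions/sec
brain_to_retina_ratio = 75_000    # assumed: whole brain ~ 75,000x the retina by mass/volume

brain_ops_per_sec = retina_ops_per_sec * brain_to_retina_ratio
print(f"Brain-equivalent compute: ~{brain_ops_per_sec:.1e} ops/sec")   # ~7.5e13

# Rough hardware cost, assuming ~1e8 ops/sec per dollar for general-purpose
# hardware circa 2009, and ~10x better price/performance for specialized boards.
general_purpose_cost = brain_ops_per_sec / 1e8
specialized_cost = general_purpose_cost / 10
print(f"General-purpose hardware: ~${general_purpose_cost:,.0f}")      # ~$750,000
print(f"Specialized hardware:     ~${specialized_cost:,.0f}")          # ~$75,000
```

For what it’s worth, that lands in the same ballpark as the “less than a million dollars per AI” figure given a few paragraphs below.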
See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I’m pretty sure that the analysis behind that slide is in at least one of Moravec’s books (where the slide, or something similar to it, appears as an illustration), but I don’t know offhand which book.
The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn’t be true, but there’s also no evidence for it that I know of) or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don’t know of any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.
Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)
I find this line of argument pretty convincing, so I think it’s a pretty good bet that given the software, current technology could build human-comparable AI hardware in quantity 100 for less than a million dollars per AI; and that if the figure isn’t yet as low as one hundred thousand dollars per AI, it will be that low very soon.
Thanks. I’m not sure how much complexity is added by the dendrites making new connections.
The dwarves were intelligently designed by some god or other. That a dwarf can find an elf more beautiful than dwarves could be an unfortunate design flaw.
(Elves were also intelligently designed, but their creator was perhaps more intelligent.)
Edit: The creator-god of dwarves probably imbued them with some of his own sense of beauty.
With all respect to Eliezer, I think that nowadays the gravely anachronistic term “village idiot” shouldn’t be used anymore. I’ve wanted to say that almost every time I see the intelligence scale graphic in his talks.
Why do you think the term “village idiot” is “gravely anachronistic”? It’s part of an idiom. “Idiot” was briefly used as a quasi-scientific label for a certain range of IQs, and that usage is certainly anachronistic, but “idiot” had meaning before that, and continues to. The same is true for “village idiot”.
You’re right, wnoise, “village idiot” is part of an idiom, but one I don’t like at all, and I don’t think I’m unusual in this regard.
I should have put my objection as “‘Village idiot’ is gravely anachronistic unless you want to be insensitive by subsuming a plethora of medical conditions and social determinants under a dated, derogatory term for mentally disabled people.”
This may sound like nit-picking but obviously said intelligence graph is an important item in SIAI’s symbolic tool kit and therefore every detail should be right. When I see the graph, I’m always thinking: Please, “for the love of cute kittens”, change the “village idiot”!
For what it’s worth, I don’t find anything wrong with the term “village idiot”.
However, from previous discussions here, I think I might be on the low side of the community for my preference for “lengths to which Eliezer and the SIAI should go to accommodate the sensibilities of idiots”—there are more important things to do, and a never-ending supply of idiots.
Still, maybe it should be changed. Just because it doesn’t offend me doesn’t mean it won’t offend somebody reasonable.
In conversation with friends I tend to use George W Bush as the other endpoint—a dig at those hated Greens but it’s uncontentious here in the UK, and if it helps keep people listening (which it seems to) it’s worth it.
This seems a bad example to use given the context. If you are trying to convince people that greater than human intelligence will give AIs an insurmountable advantage over even the smartest humans then drawing attention to a supposed idiot who became the most powerful man in the world for 8 years raises the question of whether you either don’t know what intelligence is or vastly overestimate its ability to grant real world power.
For the avoidance of doubt, it seems very unlikely in practice that Bush doesn’t have above-average intelligence.
Wikipedia gives him an estimated IQ of 125, which may be a wee bit off for the low end of the IQ distribution. Still, if that’s the example that requires the least explanation in practice, why not.
Maybe Forrest Gump would work as well?
My most recent use of this example got the response George W Bush Was Not Stupid.
OK, but if you buy the idea that environment has a substantial impact on intelligence, which I do, then it seems that the average modern human would have passed the finish line by a somewhat substantial amount.
Really there is no finish line for general intelligence—intelligence is a continuous parameter. Chimpanzees and other apes do experience cultural evolution, even though they’re substantially stupider than us.
“I’m just about as stupid as a mind can get while still being able to grasp x. Therefore it’s likely that I don’t fully understand its ramifications.”
You are equivocating “cultural evolution”. If you fix the genetic composition of other currently existing apes, they will never build an open-ended technological civilization.
Technological progress makes the average person smarter through environmental improvements, and technological progress is dependent on a very small number of people in society. Let’s say the human race had gotten lucky very early on in its history and had a streak of accidental geniuses who were totally unrepresentative of the population as a whole. If those geniuses improved the race’s technology substantially, that would improve the environment, cause everyone to become smarter due to environmental factors, and bootstrap the race out of their genetic deficits.
I don’t see how this note is relevant to either your original argument, or my comment on it.
It’s basically a new argument. Would you prefer it if I explicitly demarcated that in the future? I briefly started writing out some sort of concession or disclaimer but it seemed like noise.
The problem here is that it’s not clear what that comment is an argument for, and so the first thing to assume is that it’s supposed to be an argument about the discussion it was made in reply to. It’s still unclear to me what you argued in that last comment (and why).
Trying to argue against a magical level of average societal genetic intelligence necessary for technological takeoff.
You can’t get geniuses who are “totally unrepresentative” in the relevant sense, since we are still the same species, with the same mind design.
So: you are arguing that the point where intelligent design “takes off” is a bit fuzzy—due to contingent factors—chance? That sounds reasonable.
There is also a case to be made that the supposed “point” is tricky to pin down. It was obviously around or before the agricultural revolution 10,000 years ago—but a case can be made for tracing it back further, to the origin of spoken language, gestural language, or perhaps to other memetic landmarks.
It seems to me that once our ancestors’ tools got good enough that their reproductive fitness was qualitatively affected by their toolmaking/toolusing capabilities (defining “tools” broadly enough to include things like weapons, fire, and clothing), they were on a steep slippery slope to the present day, so that it would take a dinosaur-killer level of contingent event to get them off it. (Language and such helps a lot too, but as they say, language and a gun will get you more than language alone. :-) Starting to slide down that slope is one kind of turning point, but it might be hard to define that “point” with a standard deviation smaller than one hundred thousand years.
The takeoff to modern science and the industrial revolution is another turning point. Among other things related to this thread, it seems to me that this takeoff is when the heuristic of not thinking about grand strategy at all seriously and instead just doing what everyone has “always” done loses some of its value, because things start changing fast enough that most people’s strategies can be expected to be seriously out of date. That turning point seems to me to have been driven by arrival at some combination of sufficient individual human capabilities, sufficient population density, and sufficient communications techniques (esp. paper and printing) which serve as force multipliers for population density. Again it’s hard to define precisely, both in terms of exact date of reaching sufficiency and in terms of quite how much is sufficient; the Chinese ca. 1200 AD and the societies around the Mediterranean ca. 1 AD seem like they had enough that you wouldn’t’ve needed enormous differences in contingent factors to’ve given the takeoff to them instead of to the Atlantic trading community ca. 1700.
Only if the “improved environment” meant stronger selection pressure for intelligence. That’s not clear at all.
Most basically, because humans are only just on the cusp of general intelligence.

This point of view drastically oversimplifies intelligence.
We are not ‘just on the cusp’ of general intelligence—if there was such a cusp, it was hundreds of thousands of years ago. We are far, far into an exponential expansion of general intelligence, but it has little to do with genetics.
Elephants and whales have larger brains than even our brainiest Einsteins—with more neurons and interconnects, yet the typical human is vastly more intelligent than any animal.
And likewise, if Einstein had been a feral child raised by wolves, he would have been mentally retarded in terms of human intelligence.
Neanderthals had larger brains than us—so evolution actually tried that direction, but it ultimately was largely a dead end. We are probably near some asymptotic limit of brain size. In three very separate lineages—elephant, whale and hominid—brains reached a limit around 200 billion neurons or so and then petered out. In the hominid case it actually receded from the Neanderthal peak with homo sapiens having around 100 billion neurons.
Genetics can surely limit maximum obtainable intelligence, but it’s principally a memetic phenomenon.
Yes, because brain size does not equal neuron count; there are scaling laws at play, and not in the whales’/elephants’ favor. On neurons, whales and elephants are much inferior to humans. Since it’s neurons which compute, and not brain volume, the biological aspect is just fine; we would not expect a smaller number of neurons spread over a larger area (so, slower) to be smarter...
See https://pdf.yt/d/aF9jcFwWGn6c6I7O / https://www.dropbox.com/s/f9uc6eai9eaazko/1954-tower.pdf , http://changizi.com/diameter.pdf , http://onlinelibrary.wiley.com/doi/10.1002/ar.20404/full , http://www.pnas.org/content/early/2012/06/19/1201895109.full.pdf , https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Whole_nervous_system
Cite for the 200b and 100b neuron claims? My understanding too was that H. sapiens is now thought to have more like 86b neurons & the 100b figure was a myth ( http://revistapesquisa.fapesp.br/en/2012/02/23/n%C3%BAmeros-em-revis%C3%A3o-3/ ), which indicates the imprecision even for creatures which are still around and easy to study...
Yes—when I said ‘large’, I was talking about size in neurons, not physical size. Physical size, within bounds, is mostly irrelevant (although it does affect latency, of course).
No—they really do have more neurons, ~257 billion in the elephant’s case. 1 (2014)
According to Google, an elephant brain is about 5 kg vs a human’s 1.4 kg. So we have 51 billion neurons per kg for the elephant vs roughly 60 to 75 billion per kg for the human. This is, by the way, a smaller difference than I would have expected.
The elephant’s brain has a larger cerebellum than ours but a smaller cortex: about 5 billion cortical neurons vs our 15 billion or so. Interestingly, the elephant cortex is also sparser while its cerebellum is denser, perhaps suggesting that we should look at more parameters, such as synapse density, as well (because of course there are many tradeoffs in neural micro-circuits).
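To make the per-kilogram arithmetic explicit (total neuron counts and brain masses as quoted in this exchange, plus the published cortical counts that roughly match the “5 billion vs 15 billion or so” above), a small sketch:

```python
# Neuron-density arithmetic using the figures quoted in this thread.
figures = {
    "elephant": {"neurons": 257e9, "brain_kg": 5.0, "cortical": 5.6e9},
    "human":    {"neurons": 86e9,  "brain_kg": 1.4, "cortical": 16e9},
}

for name, b in figures.items():
    per_kg = b["neurons"] / b["brain_kg"]
    print(f"{name}: ~{per_kg / 1e9:.0f} billion neurons/kg, "
          f"~{b['cortical'] / 1e9:.1f} billion cortical neurons")
# elephant: ~51 billion neurons/kg, ~5.6 billion cortical neurons
# human:    ~61 billion neurons/kg, ~16.0 billion cortical neurons
```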
Anyway, the human cortex’s 3x neuron count is one theory for our greater intelligence. But this by itself is insufficient:
- the elephant interacts with the world mainly through its trunk, which is cerebellum-controlled
- humans/primates use up a large chunk of their cortex for vision, the elephant much less so
- humans rely far more on their cortex for motor control, such that humans completely lacking a cerebellum are largely functional
Now, is having a larger cortex better for general intelligence than a larger cerebellum? Most likely: it appears to be a better hardware platform for unsupervised learning.
But again, the key to intelligence is software—we are smart because of our ability to accumulate mental programs, exchange them, and pass them on to later generations. Our brain is unique mainly in that it was the first general platform for language, not because our brains are larger or have some special secret circuit sauce (which wouldn’t make sense anyway—humans are recent and breed slowly; the key low-level circuit developments were already made many millions of years back in faster-breeding ancestor lineages).
See above for elephant neuron counts. For humans I was probably just using Wikipedia or this page, which was based on older research.
Elephants and whales have larger brains than even our brainiest Einsteins—with more neurons and interconnects [emphasis added]
Wait, what?
I think jacob_cannell is correct in that whales and elephants have larger brains, but that he’s extrapolating incorrectly when he implies through the conjunction that larger brain size == more neurons and more interconnects; so I’m agreeing with the first part, but pointing out why the second does not logically follow and providing cites that density decreases with brain size & known neuron counts are lower than humans.
I don’t always take the time to cite refs, but I should have been clearer that I was talking about elephant and whale brains as being larger in neuron counts.
“We are probably near some asymptotic limit of brain size. In three very separate lineages—elephant, whale and hominid—brains reached a limit around 200 billion neurons or so and then petered out.”
Ever since early tool use and proto-language, scaling up the brain was advantageous for our hominid ancestors, and it in some sense even overscaled, such that we have birthing issues.
For big animals like elephants and whales especially, the costs for larger brains are very low. So the key question is then why aren’t their brains bigger? Trillions of neurons would have almost no extra cost for a 100 ton monster like a blue whale, which is already the size of a hippo at birth.
But instead a blue whale has just on the order of 10^11 neurons, just like us or elephants, even though its brain only amounts to a minuscule 0.007% of its mass. The reasonable explanation: there is no advantage to further scaling—perhaps latency? Or, more likely, there are limits to what you can do with one set of largely serial IO interfaces. These are quick theories—I’m not claiming to know why—just that it’s interesting.
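As a sanity check on the 0.007% figure, assuming the usual ballpark values of roughly a 7 kg brain and a 100-tonne body for a blue whale (neither number is given in the thread):

\[
\frac{7\ \text{kg}}{1 \times 10^{5}\ \text{kg}} = 7 \times 10^{-5} \approx 0.007\% .
\]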