I would argue that a significant factor in the divide between humans and other animals is that we (now) accumulate knowledge over many generations/people. Humans 100,000 years ago might already have been somewhat notable compared to chimpanzees, but we only hit the threshold of really accumulating knowledge/strategies/technology from millions of individuals very recently.
If I had been raised by chimpanzees (assuming that would even work), my ability to influence the world would have been a lot lower, even though I would still be human.
Given that LLMs are already massively beyond human ability at absorbing knowledge (I could not read even a fraction of the texts they are now trained on), we have good reason to think that future AI will be beyond human abilities too.
Beyond this, we have good reasons to think that AIs will be able to evade bottlenecks which humans cannot (compare Life 3.0 by Max Tegmark). AIs will not have to suffer from ageing and an intrinsic expiration date on all of their accumulated expertise, will in principle be able to read and change their own source code, and will have the scalability/flexibility advantages of software (one can just provide more GPUs to run the AI faster or with more copies).
Transmission, preservation and accumulation of knowledge (all enabled by language) are at the top of my list of guesses for the most important qualitative change from chimps to humans, too.
It certainly seems very likely that AIs can be much better at this than us, but it’s not obvious to me how big a difference that makes, compared with the difference between doing it at all (like us) and not (like chimps).
(I exaggerate slightly: chimps do do it a bit, because they teach one another things. But it does feel like more of a yes-versus-no difference than AIs versus us. Though that may be because I’m failing to see how transformative AIs’ possible superiority in this area will be.)
A lot of the ways in which we hope/fear AIs may be radically better than us seem dependent on having AIs that are designed in a principled way rather than being huge bags of matrices doing no one knows quite what. That does seem like a thing that will happen eventually, but I suspect it won’t happen until after there’s something significantly smarter than us designing them. For the nearest things to AI we know how to make at the moment, for instance, even we can’t in any useful sense read their source code. All our attempts to build AIs that work in comprehensible ways seem to produce results that lag far far behind the huge incomprehensible mudballs. (Maybe some sort of hybrid, with comprehensible GOFAI bits and incomprehensible mudballs working together, will turn out to be the best we can do. In that case, the system could at least read the source code for the comprehensible bits.)
I (not an expert) suspect there are a few key ways in which the human/AI difference is likely to be pretty large.
Generation time. Training an expert human from scratch takes 20-30 years and no one is in a position to curate more than a small fraction of the training data and environment. AI could copy itself in potentially seconds to minutes. It could train up a new model for a specific purpose much faster, too, if it can’t just paste new capabilities into itself directly. And there should be no equivalent of capabilities that can’t be taught, only learned (the kinds of things humans need years-long apprenticeships to even have a chance to acquire imperfectly).
Even with “huge bags of matrices doing no one knows quite what” there are still things we can learn, with the tools we have today, from e.g. analyzing weights. We can also (compared to a human mind) test more precisely how such AIs “would” act in different circumstances just by giving a prompt, even if we can’t reliably predict behavior in advance (see the sketch after this list). And even assuming this doesn’t hold for future architectures and higher capability levels, an AI shouldn’t have a human’s problems predicting its own future behavior.
Breadth. Being effectively a high-level expert in every field of knowledge at once should allow a massively higher ability to extract and propagate insight from new data. There should be no equivalent of it taking decades to generations for results to filter from researchers to other fields of academia to public policy to common knowledge among the public. With enough compute, there should be no equivalent of a result languishing for years before anyone recognizes its relevance to a new problem.
Precise introspective and retrospective data. This is more about sensors and robotics than AI in some ways, but it would be much easier for me to learn new physical skills if I could see exactly what I was doing each time I attempted something, measure exactly what the result was, intuitively do statistics on the data, and control future behavior just as precisely. I think this applies to any other kind of data feedback, too. If I had access to a precise playback of my entire life (or even just the text of all my conversations and a summary of my actions) at all times, I’d be better at a lot of things.
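As a very minimal sketch of the prompt-probing point above (my own illustration, not anything from the comments): unlike a human mind, the same frozen model can be queried under many counterfactual circumstances and the outputs compared directly. The model name (gpt2) and the prompts are just illustrative placeholders.

```python
from transformers import pipeline, set_seed

set_seed(0)  # fix sampling so repeated probes are comparable
generator = pipeline("text-generation", model="gpt2")  # placeholder model

# Two counterfactual "circumstances" posed to the same frozen model.
scenarios = [
    "A stranger asks you for help carrying their groceries. You say:",
    "A stranger asks you for help carrying their groceries, but you are in a hurry. You say:",
]

for prompt in scenarios:
    samples = generator(prompt, max_new_tokens=30, do_sample=True, num_return_sequences=3)
    print(prompt)
    for s in samples:
        # each sample includes the prompt; strip it to show only the continuation
        print("  ->", s["generated_text"][len(prompt):].strip())
```

Nothing like this is possible with a human: you cannot rewind someone to exactly the same state and rerun the situation under a changed premise.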
I agree a lot. I had the constant urge to add some disclaimer “this is mostly about possible AI and not about likely AI” while writing.
It certainly seems very likely that AIs can be much better at this than us, but it’s not obvious to me how big a difference that makes, compared with the difference between doing it at all (like us) and not (like chimps).
The only obvious improvements I can think of are things where I feel that humans are below their own potential[1]. This does seem large to me, but by itself not quite enough to create a difference as large as chimps-to-humans.
But I do think that adding possibilities like compute speed could be enough to make this a sufficiently large gap: if I imagine waking up for only one day every month while everyone else goes on with their lives in the meantime, I would probably be completely overwhelmed by all the developments after only a few subjective ‘days’. Of course, this is only a meaningful analogy if the whole world is filled with the fast AIs, but that seems very possible to me.
A lot of the ways in which we hope/fear AIs may be radically better than us seem dependent on having AIs that are designed in a principled way rather than being huge bags of matrices doing no one knows quite what. [...]
Yeah.
I still think that the other considerations have significance: they give AIs structural advantages compared to biological life/humans. Even if there never comes to be a process which can improve the architecture of AIs through thorough understanding, mostly random exploration still creates an evolutionary pressure towards more capable AIs. And this should be enough to take advantage of the other properties (even if a lot more slowly).
My sense: in the right situation or mindset, people can be way more impressive than we are most of the time. I sometimes feel like a lot of this gap is caused by hard-wired mechanisms for conserving calories. That no longer makes much sense for most people in rich countries, but it is part of being human.
If people are reading this thread and want to read this argument in more detail: the (excellent) book ‘The Secret of Our Success’ by Joseph Henrich (Slate Star Codex review/summary here: https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/) makes this argument in a very compelling way. There is a lot of support for the idea that the crucial ‘rubicon’ separating chimps from people is cultural transmission, which enables the gradual evolution of strategies over periods longer than an individual lifetime, rather than any ‘raw’ problem-solving intelligence. In fact, according to Henrich there are many ways in which humans are actually worse than chimps on some measures of raw intelligence: chimps have better working memory and faster reactions for complex tasks in some cases, and they are better than people at finding Nash equilibria which require randomising your strategy.

But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of ‘cultural technology’, which in turn allowed a runway of cultural-genetic co-evolution (e.g. food processing technology → smaller stomachs and bigger brains → even more culture → bigger brains even more of an advantage, etc.). It’s hard to appreciate how much this kind of thing helps you think; for instance, most people can learn maths, but few would have invented Arabic numerals by themselves. Similarly, having a large brain by itself is actually not super useful without the cultural superstructure: most people alive today would quickly die if dropped into the ancestral environment without the support of modern culture, unless they could learn from hunter-gatherers (see Henrich for many examples of this happening to European explorers!). For instance, I like to think I’m a pretty smart guy, but I have no idea how to make e.g. bronze or stone tools, and it’s not obvious that my physics degree would help me figure it out! Henrich also makes the case for the importance of this with some slightly chilling examples of cultures that lost their ability to make complex technology (e.g. boats) when they fell below a critical population size and became isolated.
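To make the population-size point concrete, here is a toy sketch (my own illustration with made-up numbers, only loosely inspired by the kind of model Henrich discusses, not his actual model): each generation, N learners imperfectly copy the most skilled individual. Copying usually loses a little skill but occasionally improves on it, so with many learners the occasional lucky improvement keeps getting locked in, while with few learners skill drifts downward.

```python
import random

def simulate(n_learners, generations=200, copy_loss=1.0, noise_sd=1.5, seed=0):
    """Toy cumulative-culture model: track the skill of the best individual."""
    rng = random.Random(seed)
    best_skill = 0.0
    for _ in range(generations):
        # each learner copies the current best individual, imperfectly
        copies = [best_skill - copy_loss + rng.gauss(0, noise_sd)
                  for _ in range(n_learners)]
        best_skill = max(copies)  # the next generation copies the best copy
    return best_skill

for n in (2, 10, 100, 1000):
    print(f"population {n:4d}: best skill after 200 generations = {simulate(n):7.1f}")
```

Below some population size the occasional lucky copy can no longer offset the average copying loss and the ‘technology level’ decays, which is the qualitative shape of the examples Henrich describes.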
It’s interesting to consider the implications for AI; I’m not very sure about this. On the one hand, LLMs clearly have a superhuman ability to memorise facts, but I’m not sure this means they can learn new tasks or information particularly easily. On the other, it seems likely that LLMs are taking pretty heavy advantage of the ‘culture overhang’ of the internet! I don’t know if it really makes sense to think of their abilities here as strongly superhuman: if you magically had the compute and code to train GPT-n in 1950, it’s not obvious you could have got it to do very much without the internet for it to absorb.
Haven’t read that book, added to the top of my list, thanks for the reference!
But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of ‘cultural technology’, which in turn allowed a runway of cultural-genetic co-evolution (e.g. food processing technology → smaller stomachs and bigger brains → even more culture → bigger brains even more of an advantage, etc.)
One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow play an important role in one (or a handful) of groups of sapiens happening to develop some strong (or “viral”) positive-feedback cultural learning mechanisms that eventually dramatically outpaced other creatures? We know that other species can learn by demonstration, and pass down information from generation to generation, and we know that humans have big brains, but were some combination of timing / luck / climate / resources perhaps also a major factor?
If Homo sapiens originated around 200,000 years ago, as is generally believed, but only developed agriculture around 12,000 years ago, the earliest known city around 9,000 years ago, and a modern-style writing system maybe 5,000 years ago, are we sure that those humans who lived through 90%+ of human “pre-history” without agriculture, large groups, and writing systems would look substantially more intelligent to us than chimpanzees? If our ancestral primates had never branched off from chimps and bonobos, are we sure the earth wouldn’t now (or at some point in the past or future) be populated with chimpanzee city-equivalents and something that looks at least remotely like our definition of technology?
It’s hard to appreciate how much this kind of thing helps you think
Strongly agree. It seems possible that a time-travelling scientist could go back to some point in time, conduct rigorous experiments, and find that sapiens were less “intelligent” than some other species at that point. It’s easy to forget how recently human society looked a lot closer to animal society than to modern human society. I’ve seen tests that estimate the human-equivalent IQ of adult chimpanzees at maybe 20-25, but we can’t know how a prehistoric adult human would perform on the same tests. Like, if humans are so inherently smart and curious, why did it take us over 100,000 years to figure out how plants work? If someone developed an AI today that took 100,000 years to figure out how plants work, they’d be laughed at if they suggested it had “human-level” intelligence.
One pretty fundamental problem with the current debate about human-level AI is that many people, without even realising it, conflate concepts like “as intelligent as a [modern educated] human” with “as intelligent as humanity” or “as intelligent as a randomly selected Homo sapiens from history”.
Well maybe you should read the book! I think that there are a few concrete points you can disagree on.
One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow play an important role in one (or a handful) of groups of sapiens happening to develop some strong (or “viral”) positive-feedback cultural learning mechanisms that eventually dramatically outpaced other creatures?
I’m not an expert, but I’m not so sure that this is right; I think that anatomically modern humans already had significantly better abilities to learn and transmit culture than other animals, because anatomically modern humans generally need to extensively prepare their food (cooking, grinding etc.) in a culturally transmitted way. So by the time we get to sapiens we are already pretty strongly on this trajectory.
I think there’s an element of luck: other animals do have cultural transmission (for example elephants and killer whales) but maybe aren’t anatomically suited to discover fire and agriculture. Some quirks of group size likely also play a role. It’s definitely a feedback loop though; once you are an animal with culture, then there is increased selection pressure to be better at culture, which creates more culture etc.
If Homo sapiens originated around 200,000 years ago, as is generally believed, but only developed agriculture around 12,000 years ago, the earliest known city around 9,000 years ago, and a modern-style writing system maybe 5,000 years ago, are we sure that those humans who lived through 90%+ of human “pre-history” without agriculture, large groups, and writing systems would look substantially more intelligent to us than chimpanzees?
I’m gonna go with absolutely yes; see my comment above about anatomically modern humans and food prep. I think you are severely underestimating the sophistication of hunter-gatherer technology and culture!
The degree to which ‘objective’ measures of intelligence like IQ are culturally specific is an interesting question.