The quotes above are not the complete conversation. In the section of the discussion about AGI, Blake says:
Blake: Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don’t have a mathematical proof, but again, I suspect Yann’s intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it’s not going to cover everything you could possibly hope to do with AI or want to do with AI.
Blake: At some point, you’re going to have to decide where your system is actually going to place its bets as it were. And that can be as general as say a human being. So we could, of course, obviously humans are a proof of concept that way. We know that an intelligence with a level of generality equivalent to humans is possible and maybe it’s even possible to have an intelligence that is even more general than humans to some extent. I wouldn’t discount it as a possibility, but I don’t think you’re ever going to have something that can truly do anything you want, whether it be protein folding, predictions, managing traffic, manufacturing new materials, and also having a conversation with you about your grand’s latest visit that can’t be… There is going to be no system that does all of that for you.
I don’t think he’s making the mistake you’re pointing to. Looks like he’s willing to allow for AI with at least as much generality as humans.
And he doesn’t seem too committed to one definition of generality. Instead he talks about different types/levels of generality.
In that case it’s still an extremely poor argument.
He’s successfully pointed out that something nobody ever cared about can’t exist (due to the no free lunch theorem). We know this argument doesn’t apply to humans, since humans are better than apes at all the things he discussed, and polymaths are better than the average human at them.
So he’s basically got no evidence at all for his assertion, and the no free lunch theorem is completely irrelevant.
The goal of the podcast is to explore why people believe certain things while they discuss their inside views about AI. In this particular case, the guest gives roughly three reasons for his views:
1. the no free lunch theorem, which shows that you cannot have a model that outperforms all other learning algorithms across all tasks (a rough formal statement is sketched just after this list);
2. the results from the Gato paper, where models specialized in one domain are better (in that domain) than a generalist agent (the transfer learning, if any, did not lead to improved performance);
3. society as a whole being similar to some “general intelligence”, with humans being the individual constituents who have a more specialized intelligence.
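For reference, here is a rough statement of the no free lunch result being invoked (this is my paraphrase of Wolpert’s supervised-learning formulation, not something stated on the podcast): uniformly averaged over all possible target functions, any two learning algorithms have the same expected off-training-set performance.

\sum_{f} P(c \mid f, m, a_1) \;=\; \sum_{f} P(c \mid f, m, a_2)

where f ranges over all possible target functions, m is the number of training examples, c is the off-training-set error, and a_1, a_2 are any two learning algorithms. The equality only holds when you average over all tasks, which is exactly the caveat Blake raises about “all possible tasks” versus the tasks we actually care about.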
If I were to steelman his point about humans being specialized, I think he basically means that society ended up with many specialized agents, and that the same will probably happen as AIs automate our economy: AIs specialized in one domain will be better than general ones at specific tasks.
He is also saying that, with respect to general agents, we have evidence from humans, the impossibility result from the no free lunch theorem, and basically no evidence for anything in between. For current models, there is evidence of positive transfer on NLP tasks, but less evidence across a broad set of tasks like those in Gato.
The best version of the “different levels of generality” argument I can think of (though I don’t buy it) goes something like: “The reason humans are able to do impressive things like building smartphones is that they are many specialized agents who teach other humans what they have learned before they die. No human alive today could build the latest iPhone from scratch, yet as a society we build it. It is not clear that a single ML model that is never turned off would trivially be capable of learning to do virtually everything needed to build smartphones, spaceships, and whatever else (perhaps not yet discovered) turns out to be necessary to expand through space. And even if that is possible, what will most likely happen (and sooner) is a society full of many specialized agents (cf. CAIS).”
society ended up with many specialized agents, and that the same will probably happen as AIs automate our economy: AIs specialized in one domain will be better than general ones at specific tasks.
Humans specialize because their brains are limited.
If an AI with a certain computing capacity has to choose whether to be an expert at X or an expert at Y, an AI with twice as much capacity could choose to be an expert at both X and Y.
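A toy sketch of that point (purely illustrative; the architecture, sizes, and task flag are my own invention, not anything from the discussion): on bounded inputs, a network with roughly twice the hidden width plus a task flag can represent the two specialists side by side, by using the flag to switch off the half of the hidden layer that belongs to the other task. So extra capacity never forces the X-or-Y trade-off; whether training actually finds such a solution is a separate, empirical question.

```python
import torch
import torch.nn as nn

D, H = 16, 256  # toy input size and specialist hidden width

# Two specialists, one per task, each of hidden width H.
expert_x = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, 1))
expert_y = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, 1))

class DoubledCapacity(nn.Module):
    """A single model of width ~2H that also sees a task flag (0 for X, 1 for Y).

    With a large flag-scaled bias offset, each task's half of the hidden layer
    can be driven to zero when the flag selects the other task; the one extra
    hidden unit can absorb the difference between the specialists' output
    biases, so on bounded inputs this model can emulate both specialists.
    """
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(D + 1, 2 * H + 1)
        self.out = nn.Linear(2 * H + 1, 1)

    def forward(self, x, task_flag):
        # x: (batch, D); task_flag: (batch, 1) with values in {0.0, 1.0}
        h = torch.relu(self.hidden(torch.cat([x, task_flag], dim=-1)))
        return self.out(h)

# The doubled model is roughly the size of the two specialists combined,
# i.e. about twice the size of one specialist.
n_two_specialists = sum(p.numel() for m in (expert_x, expert_y) for p in m.parameters())
n_doubled = sum(p.numel() for p in DoubledCapacity().parameters())
print(n_two_specialists, n_doubled)
```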
From this perspective, maybe a human-level AI is not a risk, but a humankind-level AI could be.
Thanks for bringing up the rest of the conversation. It is indeed unfortunate that I cut certain quotes out of their full context. For completeness’ sake, here is the full excerpt without interruptions, including my prompts. Emphasis mine.
Michaël: Got you. And I think Yann LeCun’s point is that there is no such thing as AGI because it’s impossible to build something truly general across all domains.
Blake: That’s right. So that is indeed one of the sources of my concerns as well. I would say I have two concerns with the terminology AGI, but let’s start with Yann’s, which he’s articulated a few times. And as I said, I agree with him on it. We know from the no free lunch theorem that you cannot have a learning algorithm that outperforms all other learning algorithms across all tasks. It’s just an impossibility. So necessarily, any learning algorithm is going to have certain things that it’s good at and certain things that it’s bad at. Or alternatively, if it’s truly a Jack of all trades, it’s going to be just mediocre at everything. Right? So with that reality in place, you can say concretely that if you take AGI to mean literally good at anything, it’s just an impossibility, it cannot exist. And that’s been mathematically proven.
Blake: Now, all that being said, the proof for the no free lunch theorem, refers to all possible tasks. And that’s a very different thing from the set of tasks that we might actually care about. Right?
Michaël: Right.
Blake: Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don’t have a mathematical proof, but again, I suspect Yann’s intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it’s not going to cover everything you could possibly hope to do with AI or want to do with AI.
Blake: At some point, you’re going to have to decide where your system is actually going to place its bets as it were. And that can be as general as say a human being. So we could, of course, obviously humans are a proof of concept that way. We know that an intelligence with a level of generality equivalent to humans is possible and maybe it’s even possible to have an intelligence that is even more general than humans to some extent. I wouldn’t discount it as a possibility, but I don’t think you’re ever going to have something that can truly do anything you want, whether it be protein folding, predictions, managing traffic, manufacturing new materials, and also having a conversation with you about your grand’s latest visit that can’t be… There is going to be no system that does all of that for you.
Michaël: So we will have systems that do those separately, but not at the same time?
Blake: Yeah, exactly. I think that we will have AI systems that are good at different domains. So, we might have AI systems that are good for scientific discovery, AI systems that are good for motor control and robotics, AI systems that are good for general conversation and being assistants for people, all these sorts of things, but not a single system that does it all for you.
Michaël: Why do you think that?
Blake: Well, I think that just because of the practical realities that one finds when one trains these networks. So, what has happened with, for example, scaling laws? And I said this to Ethan the other day on Twitter. What’s happened with scaling laws is that we’ve seen really impressive ability to transfer to related tasks. So if you train a large language model, it can transfer to a whole bunch of language-related stuff, very impressively. And there’s been some funny work that shows that it can even transfer to some out-of-domain stuff a bit, but there hasn’t been any convincing demonstration that it transfers to anything you want. And in fact, I think that the recent paper… The Gato paper from DeepMind actually shows, if you look at their data, that they’re still getting better transfer effects if you train in domain than if you train across all possible tasks.