In that case it’s still an extremely poor argument.
He’s successfully pointed out that something nobody ever cared about can’t exist (due to the no free lunch theorem). We know this argument doesn’t apply to humans, since humans are better at all the things he discussed than apes, and polymaths are better at all the things he discussed than your average human.
So he’s basically got no evidence at all for his assertion, and the no free lunch theorem is completely irrelevant.
The goal of the podcast is to discuss why people believe certain things as they lay out their inside views about AI. In this particular case, the guest gives roughly three reasons for his views:
the no free lunch theorem, which shows why you cannot have a model that outperforms all other learning algorithms across all tasks (a rough statement is given after this list).
the results from the Gato paper where models specialized in one domain are better (in that domain) than a generalist agent (the transfer learning, if any, did not lead to improved performance).
society as a whole being similar to some “general intelligence”, with humans being the individual constituents who have a more specialized intelligence.
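For reference, a rough sketch of the theorem he is leaning on, in Wolpert and Macready's notation (my paraphrase, not something stated in the podcast): for any two search algorithms $a_1$ and $a_2$ and any number of evaluations $m$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2),$$

where the sum runs over all possible objective functions $f$ and $d_m^y$ is the sequence of cost values observed after $m$ evaluations. Averaged uniformly over every possible problem, no algorithm outperforms any other. Note that the average includes completely unstructured functions, which is why the theorem by itself says little about performance on any structured family of real-world tasks.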
If I were to steelman his point about humans being specialized, I think he basically meant that society ended up with many specialized agents, and that the same will probably happen as AIs automate our economy, since AIs specialized in one domain will be better than general ones at specific tasks.
He is also saying that, with respect to general agents, we have evidence from humans, the impossibility result from the no free lunch theorem, and basically no evidence for anything in between. For current models, there is evidence of positive transfer across NLP tasks, but less evidence for a broad set of tasks like those in Gato.
The best version of the “different levels of generality” argument I can think of (though I don’t buy it) goes something like: “The reason humans are able to do impressive things like building smartphones is that they are many specialized agents who teach other humans what they have done before they die. No human alive today could build the latest iPhone from scratch, yet as a society we build it. It is not clear that a single ML model that is never turned off would be trivially capable of learning to do virtually everything needed to build a smartphone, spaceships, and whatever else humans may not yet have discovered is necessary to expand through space, and even if that is possible, what will most likely happen (and sooner) is a society full of many specialized agents (cf. CAIS).”
society ended up with many specialized agents, and the same will probably happen as AIs automate our economy, since AIs specialized in one domain will be better than general ones at specific tasks.
Humans specialize because their brains are limited.
If an AI with a certain computing capacity has to choose whether to be an expert at X or an expert at Y, an AI with twice as much capacity could choose to be an expert at both X and Y.
From this perspective, maybe a human-level AI is not a risk, but a humankind-level AI could be.