In linguistics there is an argument called the poverty of the stimulus. The claim is that children must figure out the rules of language using only a limited number of unlabeled examples. This is taken as evidence that the brain has some kind of hard-wired grammar framework that serves as a canvas for further learning while growing up.
Is it possible that tools like EfficientZero help find the fundamental limits on how much training data you need to figure out a set of rules? If an artificial neural network ever manages to reconstruct the rules of English using only the stimulus the average child is exposed to, that would be a strong counter-argument against the poverty of stimulus.
I used to think that current AI methods just aren’t nearly as sample/data-efficient as humans. For example, GPT-3 had to read 300B tokens of text whereas humans encounter 2-3 OOMs less, various game-playing AIs had to play hundreds of years’ worth of games to get gud, etc.
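(For concreteness, a rough back-of-the-envelope version of that gap; the human-side figure is an assumed order-of-magnitude number, not a measured one.)

```python
# Rough back-of-the-envelope: GPT-3's training data vs. a human's lifetime
# linguistic input. The human estimate is an assumption, not a measured figure.
import math

gpt3_tokens = 300e9   # GPT-3 was trained on roughly 300B tokens
human_tokens = 1e9    # assumed: on the order of 1B words heard/read by adulthood

ratio = gpt3_tokens / human_tokens
print(f"GPT-3 saw ~{ratio:.0f}x more text, i.e. ~{math.log10(ratio):.1f} OOMs")
# prints: GPT-3 saw ~300x more text, i.e. ~2.5 OOMs
```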
Plus, various people with 20-40 year AI timelines seem to think it’s plausible, in fact probable, that unless we get radically new and better architectures, this will continue for decades, meaning that we’ll get AGI only when we can actually train AIs on medium or long-horizon tasks for a ridiculously large amount of data/episodes.
So EfficientZero came as a surprise to me, though it wouldn’t have surprised me if I had been paying more attention to that part of the literature.
What gives?
Inspired by this comment:
The ‘poverty of stimulus’ argument proves too much, and is just a rehash of the problem of induction, IMO. Everything that humans learn is ill-posed/underdetermined/vulnerable to skeptical arguments and problems like Duhem-Quine or the grue paradox. There’s nothing special about language. And so—it all adds up to normality—since we solve those other inferential problems, why shouldn’t we solve language equally easily and for the same reasons? If we are not surprised that lasso can fit a good linear model by having an informative prior about coefficients being sparse/simple, we shouldn’t be surprised if human children can learn a language without seeing an infinity of every possible instance of a language or if a deep neural net can do similar things.
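(As a concrete illustration of the lasso analogy: with a sparsity prior, far fewer samples than features can suffice to recover a linear model. A minimal sketch with scikit-learn on made-up toy data:)

```python
# Toy illustration of the lasso point: with a sparsity prior, n << p samples
# can still recover the model. All numbers here are arbitrary toy choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 1000                      # far fewer samples than features
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3, -2, 1.5, 4, -1]  # only 5 features actually matter
y = X @ true_coef + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients found:", np.sum(model.coef_ != 0))
print("recovered leading coefficients:", model.coef_[:5].round(2))
```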
Right. So, what do you think about the AI-timelines-related claim then? Will we need medium or long-horizon training for a number of episodes within an OOM or three of parameter count to get something x-risky?
ETA: To put it more provocatively: if EfficientZero can beat humans at Atari using less game experience, starting from a completely blank slate whereas humans have decades of pre-training, then shouldn’t a human-brain-sized EfficientZero beat humans at any intellectual task, given decades of experience at those tasks + decades of pre-training similar to human pre-training?
I have no good argument that a human-sized EfficientZero would somehow need to be much slower than humans.
Arguing otherwise sounds suspiciously like moving the goalposts after an AI effect: “look how stupid DL agents are, they need tons of data to few-shot stuff like challenging text tasks or image classifications, and they need OOMs more data on even something as simple as ALE games! So inefficient! So un-human-like! This should deeply concern any naive DL enthusiast, that the archs are so bad & inefficient.” [later] “Oh no. Well… ‘the curves cross’, you know, this merely shows that DL agents can get good performance on uninteresting tasks, but human brains will surely continue showing their tremendous sample-efficiency in any real problem domain, no matter how you scale your little toys.”
As I’ve said before, I continue to ask myself what it is that the human brain does with all the resources it uses, particularly with the estimates that put it at like 7 OOMs more than models like GPT-3 or other wackily high FLOPS-equivalence. It does not seem like those models do ‘0.0000001% of human performance’, in some sense.
Can EfficientZero beat Montezuma’s Revenge?
Not out of the box, but it’s also not designed at all for doing exploration. Exploration in MuZero is an obvious but largely (ahem) unexplored topic. Such is research: only a few people in the world can do research with MuZero on meaningful problems like ALE, and not everything will happen at once. I think the model-based nature of MuZero means that a lot of past approaches (like training an ensemble of MuZeros and targeting parts of the game tree where the models disagree most on their predictions) ought to port into it pretty easily. We’ll see if that’s enough to match Go-Explore.
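(For illustration, here is a minimal sketch of that disagreement-based exploration idea, detached from any real MuZero codebase; the ensemble objects and `predict_next_latent` are placeholders I’m assuming, not actual MuZero APIs.)

```python
# Sketch of ensemble-disagreement exploration: prefer states/actions where an
# ensemble of learned dynamics models disagree most about the next latent state.
# `models` stands in for an ensemble of trained MuZero-style dynamics networks;
# `predict_next_latent` is a hypothetical method, not MuZero's real interface.
import numpy as np

def disagreement_bonus(models, state, action):
    """Exploration bonus = variance of the ensemble's next-latent predictions."""
    predictions = np.stack([m.predict_next_latent(state, action) for m in models])
    return predictions.var(axis=0).mean()   # high variance -> unfamiliar region

def pick_exploratory_action(models, state, candidate_actions, beta=1.0):
    """Pick the candidate action the ensemble is most uncertain about."""
    bonuses = [beta * disagreement_bonus(models, state, a) for a in candidate_actions]
    return candidate_actions[int(np.argmax(bonuses))]
```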