Like, if nothing else the network could allocate 10% of itself to each domain; that still leaves on the order of 100M parameters per domain, which is more than enough to show good performance in these domains (robotics models often use far fewer parameters, iirc).
Are you suggesting that this isn’t really a “general” agent any more than the combination of several separate models trained independently would be? And that this is just several different agents that happened to be trained in a network that’s big enough to contain all of them?
I don’t really know what you mean by a “general” agent. Here are some properties that I would guess it has (caveating again that I haven’t read the paper in detail), which may or may not be related to what you mean by “generality”:
Given an input, it can tell which task it is supposed to do, and then do the relevant task.
Some of the tasks do benefit from the training done on other tasks (“positive transfer”), presumably because some of the basic building blocks of the needed programs are the same (“look at the token that was one place prior” is probably helpful for many tasks; see the first sketch after this list).
It has some neurons that are used in multiple different tasks (presumably).
It cannot learn new tasks particularly quickly (“few-shot learning”), except inasmuch as that could already be done with language models.
It does not do any “learning with frozen weights” (i.e. the sort of thing where you prompt a language model to define a new word, and then it can use that word later on, without any gradient descent), except inasmuch as the specialized models would also do that learning; see the second sketch after this list.
It can be modeled as an expected utility maximizer about as well (or as poorly) as the specialized models can.
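To make the “shared building block” point concrete, here is a toy sketch (my own illustration, not anything from the paper) of a “previous-token” attention head: an attention pattern that just copies the representation of the token one position back. A single circuit like this would be useful across many of the tasks in the training mix, which is the kind of thing I’d expect to drive positive transfer.

```python
import numpy as np

def previous_token_head(values: np.ndarray) -> np.ndarray:
    """Attend each position to the position immediately before it.

    values: (seq_len, d_model) array of value vectors.
    Returns an array of the same shape where output[i] = values[i - 1],
    and output[0] is all zeros (position 0 has no predecessor).
    """
    seq_len = values.shape[0]
    # Attention pattern: a shifted identity matrix. Row i puts all of
    # its attention weight on column i - 1.
    pattern = np.zeros((seq_len, seq_len))
    pattern[1:, :-1] = np.eye(seq_len - 1)
    return pattern @ values

# Toy usage: a 4-token sequence with 2-dimensional embeddings.
tokens = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [2.0, 2.0],
                   [3.0, -1.0]])
print(previous_token_head(tokens))  # row i of the output equals row i-1 of the input
```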
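And to clarify what I mean by “learning with frozen weights”, here is the sort of thing I have in mind, sketched with the Hugging Face transformers pipeline (the model name, the made-up word, and the prompt are all placeholders for the illustration, not anything the paper tested):

```python
# No parameters are updated anywhere in this snippet; any "learning" of the
# new word happens entirely in-context, from the prompt.
from transformers import pipeline

# Placeholder model; a model as small as gpt2 may not actually pull this off.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "A 'florp' is a small tool for opening stubborn jar lids.\n"
    "Q: When would you reach for a florp?\n"
    "A:"
)

print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```

Whether the model answers sensibly depends on scale; the point is just that no gradient descent is involved, which is the property I’m contrasting with.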