I don’t really know what you mean by a “general” agent. Here are some properties that I would guess it has (caveating again that I haven’t read the paper in detail), which may or may not be related to what you mean by “generality”:
Given an input, it can tell which task it is supposed to do, and then do that task.
Some of the tasks do benefit from the training done on other tasks (“positive transfer”), presumably because some of the basic building blocks of the needed programs are the same (“look at the token that was one place prior” is probably helpful for many tasks).
It has some neurons that are used in multiple different tasks (presumably).
It cannot learn new tasks particularly quickly (“few-shot learning”), except inasmuch as that could already be done with language models.
It does not do any “learning with frozen weights” (i.e. the sort of thing where you prompt a language model to define a new word, and then it can use that word later on, without any gradient descent — see the sketch after this list), except inasmuch as the specialized models would also do that learning.
It can be modeled as an expected utility maximizer about as well as the specialized models could be.
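To make the “learning with frozen weights” point concrete, here is a minimal sketch of that kind of in-context learning. The made-up word, the prompt, and the model choice are all illustrative assumptions, not anything from the paper; the point is only that the new word is taught entirely through the prompt, with no gradient updates.

```python
# Minimal sketch of "learning with frozen weights" (in-context learning).
# The word "blicket" and the prompt are hypothetical; any causal LM works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "A 'blicket' is a small tool used to open stubborn jars.\n"
    "Q: If a jar lid is stuck, what might you reach for?\n"
    "A:"
)

# The weights stay frozen; any "learning" of the new word happens purely
# within the context window, via the prompt above.
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```

Whether the output is any good depends on the model, of course; the sketch is just meant to show what “using a newly defined word without gradient descent” looks like in practice.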