My main takeaway from Gato: If we can build specialized AI agents for 100s/1000s of tasks, it’s now pretty straightforward to make a general agent that can do it all in a single model. Just tokenize the data from all the tasks and feed it into one transformer.
And vice-versa: transfer Gato to the new task, finetune, and sparsify/distill (e.g. turn the Transformer into an RNN, or train with Transformer-XL instead of only using it at runtime) when a task becomes common enough to justify the amortized expense.
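To make the "tokenize everything and feed it to one transformer" idea concrete, here is a minimal PyTorch sketch. The vocabulary split, bin count, and model sizes are made up for illustration, not Gato's actual configuration; the point is just that text and discretized continuous data (e.g. robot actions) end up in one shared token space trained with next-token prediction.

```python
import torch
import torch.nn as nn

# Assumed (illustrative) sizes: a text vocab plus 1024 bins for continuous values.
TEXT_VOCAB, N_BINS = 32_000, 1024
VOCAB = TEXT_VOCAB + N_BINS
D_MODEL, N_HEAD, N_LAYER = 512, 8, 8

def tokenize_continuous(x: torch.Tensor) -> torch.Tensor:
    """Uniformly bin values in [-1, 1] into tokens placed after the text vocab."""
    bins = ((x.clamp(-1, 1) + 1) / 2 * (N_BINS - 1)).long()
    return bins + TEXT_VOCAB

class GeneralistLM(nn.Module):
    """One autoregressive transformer over the shared multi-task token stream."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            D_MODEL, N_HEAD, dim_feedforward=4 * D_MODEL, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=N_LAYER)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        T = tokens.size(1)
        # Causal mask so each position only attends to earlier tokens.
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.backbone(self.embed(tokens), mask=causal)
        return self.head(h)  # next-token logits over the shared vocab

# One batch mixing a "text" task and a "control" task: both are just token sequences.
text_tokens = torch.randint(0, TEXT_VOCAB, (1, 16))
control_tokens = tokenize_continuous(torch.rand(1, 16) * 2 - 1)
batch = torch.cat([text_tokens, control_tokens], dim=0)

logits = GeneralistLM()(batch)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB), batch[:, 1:].reshape(-1)
)
```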
The fact that adding new tasks doesn’t diminish performance on previous tasks is highly non-trivial!
It may be that there is a lot of room in the embedding space to store them. The wild thing is that nothing (apart from a few hardware iterations) stops us from increasing the embedding space if it’s really needed.