I don’t see why it should update my beliefs a non-negligible amount? I expected techniques like this to work for a wide variety of specific tasks given enough effort (indeed, stacking together 5 different techniques into a specialist agent is what a lot of academic work in robotics looks like). I also think that the way people can compose text-davinci-002 or other LMs with themselves into more generalist agents should basically screen off this evidence, even if you weren’t expecting to see it.
I didn’t say it should update your beliefs (edit: I did literally say this lol but it’s not what I meant!) I said it should update the beliefs of people who have a specific prevailing attitude.
If you have that belief, I imagine this paper should update you more towards AI capabilities.
I do believe David Chapman’s tweet though! I don’t think you can just hotwire together a bunch of modules that are superhuman only in narrow domains, and get a powerful generalist agent, without doing a lot of work in the middle.
(That being said, I don’t count gluing a Python interpreter and a retrieval mechanism onto a fine-tuned GPT-3 or whatever as falling into this category; there the work is done by GPT-3 (a generalist agent), and the other parts primarily augment its capabilities.)
I think what we’re seeing here is that LLMs can act as glue to put these modules together in surprising ways and make them more general. You see that here and with SayCan. And I do think that Chapman’s point becomes less tenable with them in the picture.
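Concretely, this “glue” pattern looks something like the toy sketch below (purely illustrative, not any particular paper’s system): a generalist LM routes a task to narrow specialist modules and then composes their outputs. `call_llm` is a hypothetical stand-in for whatever model API you’d use.

```python
# Toy sketch of the "LLM as glue" pattern: a generalist LM routes a task to
# narrow specialist modules (a Python interpreter and a keyword retriever)
# and composes their outputs. `call_llm` is a hypothetical stand-in for a
# real model API; nothing here is taken from the paper under discussion.
import contextlib
import io


def call_llm(prompt: str) -> str:
    """Hypothetical LM call; swap in a real API client."""
    raise NotImplementedError


def run_python(code: str) -> str:
    """Narrow module 1: execute Python and capture what it prints."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()


def retrieve(query: str, documents: list[str]) -> str:
    """Narrow module 2: toy retrieval by keyword overlap."""
    return max(documents, key=lambda d: sum(w in d.lower() for w in query.lower().split()))


def glue_agent(task: str, documents: list[str]) -> str:
    """The generalist LM picks a specialist module, then composes the result."""
    choice = call_llm(
        f"Task: {task}\nAnswer with exactly one word, PYTHON or RETRIEVE, "
        "naming the better tool for this task."
    )
    if choice.strip().upper() == "PYTHON":
        code = call_llm(f"Write Python code that prints the answer to: {task}")
        observation = run_python(code)
    else:
        observation = retrieve(task, documents)
    return call_llm(f"Task: {task}\nTool output: {observation}\nFinal answer:")
```

The point of the sketch is just that the generalist LM sits in the control loop while the narrow modules stay narrow; that is the sense in which the glue makes the whole more general.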
So… LLMs are AGIs?
LLMs can act as glue that makes AIs more G?
Yes, essentially.