I don’t think that Cicero is a general agent made by gluing together superhuman narrow agents! It’s not clear that any of its components are superhuman in a meaningful sense.
I also don’t think that “you can’t just copy paste together a bunch of systems that are superhuman...” is a fair summary of David Chapman’s tweet! I think his tweet is specifically pointing out that naming your components suggestive names and drawing arrows between them does not do the hard work of building your generalist agent (which is far more involved).
(Btw, your link is broken, here’s the tweet.)
I don’t think that Cicero is a general agent made by gluing together superhuman narrow agents! It’s not clear that any of its components are superhuman in a meaningful sense.
I don’t either! I think it should update your beliefs that that’s possible though.
I don’t see why it should update my beliefs a non-negligible amount? I expected techniques like this to work for a wide variety of specific tasks given enough effort (indeed, stacking together 5 different techniques into a specialist agent is what a lot of academic work in robotics looks like). I also think that the way people can compose text-davinci-002 or other LMs with themselves into more generalist agents basically should screen off this evidence, even if you weren’t expecting to see it.
I didn’t say it should update your beliefs (edit: I did literally say this lol but it’s not what I meant!) I said it should update the beliefs of people who have a specific prevailing attitude.
If you have that belief, I imagine this paper should update you more towards AI capabilities.
I do believe David Chapman’s tweet though! I don’t think you can just hotwire together a bunch of modules that are superhuman only in narrow domains, and get a powerful generalist agent, without doing a lot of work in the middle.
(That being said, I don’t count gluing together a Python interpreter and a retrieval mechanism to a fine-tuned GPT-3 or whatever to fall in this category; here the work is done by GPT-3 (a generalist agent) and the other parts are primarily augmenting its capabilities.)
I think what we’re seeing here is that LLMs can act as glue to put together these modules in surprising ways, and make them more general. You see that here and with SayCan. And I do think that Chapman’s point becomes less tenable with them in the picture.
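To make the "LLM as glue" idea concrete, here is a minimal sketch of the pattern being discussed: a central language model routes subtasks to narrow tools, and the generalist behavior comes from the router rather than from any single tool. The `llm()` function below is a hypothetical stand-in for a real model call, and the two tools are toy examples, not anything from the Cicero or SayCan papers.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a language model acting as a router:
    it decides which narrow tool should handle the task."""
    if any(ch.isdigit() for ch in prompt):
        return "CALC"
    return "RETRIEVE"

def calculator(expr: str) -> str:
    # Narrow tool 1: arithmetic only (builtins disabled for safety).
    return str(eval(expr, {"__builtins__": {}}))

def retrieve(query: str, corpus: dict) -> str:
    # Narrow tool 2: exact-match lookup in a tiny "knowledge base".
    return corpus.get(query, "not found")

def glue_agent(task: str, corpus: dict) -> str:
    # The "glue": dispatch the task to whichever specialist the LLM picks.
    tool = llm(task)
    if tool == "CALC":
        return calculator(task)
    return retrieve(task, corpus)

corpus = {"capital of France": "Paris"}
print(glue_agent("2 + 3", corpus))              # -> 5
print(glue_agent("capital of France", corpus))  # -> Paris
```

Neither tool is generalist on its own, and the router's "intelligence" here is a trivial heuristic; the point of the sketch is only the shape of the composition, where the hard work Chapman gestures at lives inside the router.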
So… LLMs are AGIs?
LLMs can act as glue that makes AIs more G?
Yes, essentially.