This Twitter thread is an interesting recent example of composing LLMs into a more agent-like system. I’m not sure how well it actually works, but the graph in the first tweet demonstrates a very practical and concrete application of the concepts discussed in section 6.
Remark: the tools and effort needed to compose LLMs into these kinds of systems are far smaller than those needed to train the underlying LLM(s) they are composed of.
Training GPT-4 took OpenAI hundreds of engineers and millions of dollars of compute. LangChain is maintained by a very small team. And a single developer can write a Python script that glues together chains of OpenAI API calls into a graph. Most of the effort went into training the LLM, but most of the agency (and most of the useful work) comes from the relatively tiny bit of glue code that puts it all together at the end.
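To make that concrete, here is a minimal sketch of that kind of glue code, assuming the official `openai` Python client (v1 interface). The plan → execute → review pipeline and its prompts are hypothetical, chosen only to illustrate how little code it takes to wire LLM calls into a small graph; it is not a reproduction of the system in the thread.

```python
# A minimal sketch of "glue code" around an LLM API, assuming the official
# `openai` Python client (v1 interface). The three-step pipeline and its
# prompts are illustrative, not any particular published system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4") -> str:
    """One node of the graph: a single chat-completion call."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def run_pipeline(task: str) -> str:
    """Three LLM calls wired together by ordinary Python control flow."""
    plan = ask(f"Break this task into numbered steps:\n{task}")
    draft = ask(f"Carry out these steps and report the result:\n{plan}")
    review = ask(
        f"Task: {task}\nDraft answer: {draft}\n"
        "Point out any mistakes, then give a corrected final answer."
    )
    return review

if __name__ == "__main__":
    print(run_pipeline("Summarize the trade-offs of caching API responses."))
```

The entire "agent" here is a dozen lines of ordinary control flow; everything that makes those calls useful was paid for during the model's training.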