[Question] Is there some reason LLMs haven’t seen broader use?
When GPT-3 first came out, I expected that people would use it as a sort of “common-sense reasoning module”: if you want to process or generate information in some way, you give GPT-3 a relevant prompt and then apply it repeatedly to a bunch of different inputs to generate corresponding outputs. I expected people would end up constructing a whole bunch of such modules and wiring them together into big, advanced reasoning machines. However, this doesn’t seem to have panned out; you don’t see much discussion about building LLM-based apps.
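To make the picture concrete, here is a minimal sketch of the kind of thing I imagined. The module names and prompts are just placeholders, and it assumes the legacy GPT-3-era OpenAI completions endpoint; the point is only the pattern of prompt-wrapped modules wired together, not any particular API.

```python
# Sketch of the "common-sense reasoning module" pattern: each module is a
# prompt template plus a model call, and modules are composed by feeding
# one module's output into the next one's input.
# Assumes the legacy (pre-1.0) OpenAI Python library and a GPT-3-era model.
import openai

def make_module(prompt_template: str):
    """Wrap a prompt template into a reusable text -> text function."""
    def module(text: str) -> str:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt_template.format(input=text),
            max_tokens=128,
            temperature=0,
        )
        return response.choices[0].text.strip()
    return module

# Two illustrative "modules", each doing one narrow processing step.
extract_claims = make_module(
    "List the factual claims made in the following text, one per line:\n\n{input}\n\nClaims:"
)
check_plausibility = make_module(
    "For each claim below, say whether it is plausible and why:\n\n{input}\n\nAssessment:"
)

# Wiring modules together: the output of one becomes the input of the next.
def analyze(text: str) -> str:
    return check_plausibility(extract_claims(text))
```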
Why not? I assume that something must go wrong along the way, but what exactly? Figuring that out seems like it could teach us a lot about LLMs.