Sounds like GOFAI, which never worked out. I’ve seen a few suggestions from people from the GOFAI days to combine these new-fangled LLMs that actually work with GOFAI and get the best of both, but I expect “the best of both” to come down to just LLMs.
How does this sound like that?
The idea that we can decide what we want the AI to do, and design it to do that. To build a Hammer rather than an Anything machine that can be a hammer. Or build a General Problem Solver by working out how such a thing would work and programming that. “General problem solvers” in the GOFAI days tended to develop into programming languages specialised towards this or that type of reasoning, leaving the real work still to be done by the programmer. Prolog is the classic example.
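For concreteness, here is a toy sketch, in Python rather than actual Prolog, of that division of labour: the “general” part is a few lines of generic unification and backtracking, and everything that makes the query answerable, the facts and rules, is written by hand. All the predicates and data below are invented for illustration.

```python
# Toy illustration of the GOFAI division of labour. The "general problem
# solver" part is a few lines of generic unification and backtracking;
# the knowledge (facts and rules) is written entirely by the programmer.
# All names and data here are invented for illustration.

# A term is a variable (a string starting with "?") or a tuple of atoms/terms,
# e.g. ("parent", "tom", "bob").

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    # Follow variable bindings until we reach a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    # Structural unification: return an extended substitution, or None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term, suffix):
    # Give a rule fresh variable names each time it is used.
    if is_var(term):
        return term + suffix
    if isinstance(term, tuple):
        return tuple(rename(t, suffix) for t in term)
    return term

def solve(goals, rules, subst, depth=0):
    # The entire "general" machinery: depth-first backtracking over the rules.
    if not goals:
        yield subst
        return
    first, rest = goals[0], goals[1:]
    for i, (head, body) in enumerate(rules):
        suffix = f"_{depth}_{i}"
        h, b = rename(head, suffix), [rename(g, suffix) for g in body]
        s = unify(first, h, subst)
        if s is not None:
            yield from solve(b + rest, rules, s, depth + 1)

# The programmer-supplied knowledge: this is where the real work happens.
rules = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
    (("ancestor", "?x", "?y"), [("parent", "?x", "?z"), ("ancestor", "?z", "?y")]),
]

# Query: ancestor(tom, ?who)
for s in solve([("ancestor", "tom", "?who")], rules, {}):
    print(walk("?who", s))  # bob, then ann
```

The solver is domain-agnostic; everything it “knows” about parents and ancestors was typed in by the programmer, which is the point.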
The LLM approach has been to say that training to predict data streams is a universal fount of intelligence.
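Mechanically, “training to predict data streams” just means minimising the cross-entropy of next-token predictions. A deliberately trivial sketch, with a bigram counting model standing in for a transformer and a made-up string standing in for a training corpus:

```python
import math
from collections import defaultdict

# Minimal illustration of "training to predict a data stream": fit a
# character-level bigram model by counting, then score it by the average
# cross-entropy of its next-character predictions. The objective
# (next-token cross-entropy) is the same one LLMs minimise, just with a
# vastly larger model and corpus. The text below is a made-up stand-in.
stream = "the cat sat on the mat. the dog sat on the log. " * 20

# "Training": estimate P(next char | current char) from transition counts.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(stream, stream[1:]):
    counts[prev][nxt] += 1

vocab = sorted(set(stream))

def prob(prev, nxt, alpha=1.0):
    # Add-alpha smoothing so unseen transitions keep nonzero probability.
    total = sum(counts[prev].values())
    return (counts[prev][nxt] + alpha) / (total + alpha * len(vocab))

# "Evaluation": average negative log-likelihood, in nats per character.
nll = sum(-math.log(prob(p, n)) for p, n in zip(stream, stream[1:]))
print(f"cross-entropy: {nll / (len(stream) - 1):.3f} nats/char")
```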
Perhaps there is scope for specialised LLMs trained on specialised data sets, but I don’t know whether anyone is doing that. The more limited the resulting tool, the more limited its market, so the incentives are against it, at least until the first disaster unleashed by Claude-n.