The idea that we can decide what we want the AI to do, and design it to do that. To build a Hammer rather than an Anything machine that can be a hammer. Or build a General Problem Solver by working out how such a thing would work and programming that. “General problem solvers” in the GOFAI days tended to develop into programming languages specialised towards this or that type of reasoning, leaving the real work still to be done by the programmer. Prolog is the classic example.
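The point about the real work falling to the programmer can be made concrete with a toy sketch (this is illustrative Python, not actual Prolog, and the rule format is invented for the example): the inference engine is fully general, but it proves nothing until the programmer hand-writes the rules that encode the problem.

```python
# A toy backward-chaining "general problem solver" in the Prolog spirit
# (hypothetical sketch, propositional only -- no variables or unification).
# The engine below is generic; all the actual knowledge is the RULES dict,
# which the programmer must supply. That is where the real work lives.

RULES = {
    # head: list of alternative bodies; each body is a list of subgoals
    "mortal(socrates)": [["man(socrates)"]],
    "man(socrates)": [[]],  # a fact: its body is empty
}

def prove(goal):
    """Try to prove `goal` by backward chaining over RULES."""
    for body in RULES.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("mortal(socrates)"))  # True
print(prove("mortal(zeus)"))      # False: no rules were written about zeus
```

The engine never gets smarter; only the rule base does, one hand-written clause at a time.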
The LLM approach has been to say: training to predict data streams is a universal fount of intelligence.
Perhaps there is scope for specialised LLMs trained on specialised data sets, but I don’t know whether anyone is doing that. The more limited the resulting tool, the more limited its market, so the incentives are against it, at least until the first disaster unleashed by Claude-n.