I don’t think there are fundamental barriers. Sensory and motor networks, including types of senses and actions that people don’t have, are well along. And the HuggingGPT work shows that they’re surprisingly easy to integrate with LLMs. That, plus error-checking, is how humans successfully act in the real world.
I don’t think the existence of sensors is the problem. Self-driving cars, a key example, have problems regardless of how good their sensors are. I see the key hurdle as ad-hoc action in the world. Overall, all of our knowledge about neural networks, including LLMs, is a combination of heuristic observations and mathematical and other intuitions. So I’m not certain this hurdle won’t be overcome, but I’d still like to set out the reasons it could be fundamental.
What LLMs seem to do really well is pull together pieces of information and make deductions about them. What they seem to do less well is reconcile an “outline” of a situation with the particular details involved (something I’ve found ChatGPT reliably does badly is reconciling further detail you supply once it has summarized a novel). A human, or even an animal, is very good at interacting with complex, changing, multilayered situations that it only partially understands, and especially at staying within the various safe zones that avoid different dangers. Driving a car is an example of this: you face a bunch of intersecting constraints that can come from a very wide range of things that can happen (but usually don’t). Slowing (or not) when you see a child’s ball roll into the road is an archetypal example.
I mean, most efforts to use deep learning in robotics have foundered on the problem that generating enough training data to teach the system to act in the world is extremely difficult. That implies the only way these systems can be taught to deal with a complex situation is by roughly complete modeling of it, and in real-world action situations that simply may not be possible (contrast with video games or board games, where a summary of the rules is given and any uncertainty consists of “known unknowns”).
...having an external code loop that calls multiple networks to check markers of accuracy and effectiveness is scary and promising.
Maybe, but methods like this have been tried without neural nets for a while and haven’t by themselves demonstrated effectiveness. Of course, if some code could produce AGI, then naturally LLMs plus some code could produce AGI, so the question is how much needs to be added.
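For what it’s worth, the external-loop idea above can be sketched in a few lines: an outer program asks one network for a candidate answer and a second network to score it, retrying until the score clears a threshold. Everything here is hypothetical — the `propose` and `verify` functions are stubs standing in for real model calls, and the scoring scheme is invented purely for illustration.

```python
def propose(task, attempt):
    # Stub for a generator network: returns a candidate answer.
    # In practice this would be an LLM API call.
    return f"answer-{attempt} for {task}"

def verify(candidate):
    # Stub for a verifier network: returns an accuracy score in [0, 1].
    # Invented scheme for illustration: later attempts score higher.
    attempt = int(candidate.split("-")[1].split()[0])
    return min(1.0, 0.4 + 0.3 * attempt)

def solve(task, threshold=0.9, max_attempts=5):
    # The outer code loop: generate, check, retry until good enough.
    for attempt in range(max_attempts):
        candidate = propose(task, attempt)
        if verify(candidate) >= threshold:
            return candidate
    return None  # no candidate passed the check

result = solve("summarize the novel")
# With these stubs, attempts score 0.4, 0.7, 1.0, so the third passes.
```

The open question in the thread is exactly how much real work this kind of wrapper can do once the stubs are replaced by actual networks.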