My contention is that they are all shallow. A system trained on near-infinite training sets can look indistinguishable from one that does deep reasoning while in fact just pattern-matching.
I agree.
This is a big part of what my post is about.
We have AI that is obviously dumb, in the sense of failing on trivial tasks and having mathematically provable strict bounds.
That type of AI is eating progressively larger chunks of things we used to call “intelligence.”
The things we used to call intelligence are, apparently, easy.
We should expect (and have good reason to believe) that more of what we currently call intelligence will turn out to be easy, and it may very well be consumed by dumb architectures.
Less dumb architectures are being worked on, and do not require paradigm shifts.
Uh oh.
This is a statement mostly about the problem, not the problem solver. The problem we thought was hard just isn’t.
And if we say we’ll fix it by adding ‘actual reasoning’, well… good luck! AI spent two decades trying to build symbolic reasoning systems; getting that to work is incredibly hard.
Going to be deliberately light on details here again, sorry. When I say ‘actual reasoning,’ I mean AI trained such that the capabilities provided by reasoning are learned as a more direct byproduct, rather than as a highly indirect feature arising from their advantages in blind token prediction. (Though a sufficiently large dumb system might manage to capture way too much anyway.)
I’m not suggesting we need a new SHRDLU. There are paths fully contained within the current deep learning paradigm. There is empirical support for this.
That’s a very well-argued point. I have precisely the opposite intuition, of course, but I can’t deny the strength of your argument. I tend to be less interested in tasks that are well-bounded than in those that are open-ended and uncertain. I agree that much of what we call intelligent might be much simpler. But then I think common-sense reasoning is much harder. I think maybe I’ll try to draw up my own list of tasks for AGI :)
Is this research into ‘actual reasoning’ that you’re deliberately being light on details about something that is out in public (e.g. on arXiv), or is it something you’ve witnessed privately and anticipate will become public in the near future?
Most of it is the latter, but to be clear, I do not have inside information about what any large organization is doing privately, nor have I seen an “oh no we’re doomed” proof of concept. Just some very obvious “yup that’ll work” stuff. I expect adjacent things to be published at some point soonishly just because the ideas are so simple and easily found/implemented independently. Someone might have already and I’m just not aware of it. I just don’t want to be the one to oops and push on the wrong side of the capability-safety balance.
Here is a paper from January 2022 on arXiv that details the sort of generalization-hop we’re seeing models doing.