Yeah, it was pretty underspecified; I was just gesturing at the idea.
Even more informally: just look at GPT-4. Imagine you’re seeing it with fresh eyes, setting aside all the fancy technical arguments. Does it not seem like it’s almost there? Whatever the AI industry is doing, it sure feels like it’s moving in the right direction, and quickly. And yes, it’s possible that common sense is deceptive here; but it usually isn’t.
Or, to make a technical argument: The deep-learning paradigm is a pretty broad-purpose trick. Stochastic gradient descent isn’t just some idiosyncratic method of training neural networks; it’s a way to automatically generate software that meets certain desiderata. And it’s compute-efficient enough to generate software approaching human brains in complexity. Thus, I don’t expect that we’ll need to move beyond it to get to AGI — general intelligence is reachable by doing SGD over some architecture.
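(As a minimal sketch of the “SGD as automated software generation” framing above: the loop below searches over a program’s parameters until it meets a desideratum, here low squared error on a toy target. The linear model, learning rate, and data are illustrative assumptions, not anything from this discussion.)

```python
# Toy illustration: SGD as a generic procedure that "writes" a program
# (the parameters w, b) to satisfy a desideratum (low squared error).
import random

# Desideratum: the generated program should map x -> 2x + 1.
data = [(x, 2 * x + 1) for x in [random.uniform(-1, 1) for _ in range(100)]]

w, b = 0.0, 0.0   # the "program" is just these two parameters
lr = 0.1          # step size, chosen arbitrarily for this toy example

for epoch in range(200):
    for x, y in data:
        pred = w * x + b      # run the current program
        err = pred - y        # measure how far it is from the desideratum
        # Gradient of squared error (err**2) w.r.t. w and b, then a small step
        w -= lr * 2 * err * x
        b -= lr * 2 * err

print(f"learned program: y = {w:.2f}*x + {b:.2f}")   # ≈ y = 2.00*x + 1.00
```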
I expect we’ll need advances on the order of “fully-connected NN → transformers”, not “GOFAI → DL”.
I would say it does seem almost there, but it also seems to me to already have some fluid intelligence, and that might be why it seems close. If it doesn’t actually have fluid intelligence, then my intuition that it’s close may not be very reliable.