For example, I think the current ML community is very unlikely to ever produce AGI (<10%)
I’d be interested to hear why you think this.
BTW, I talked to one person with experience in GOFAI and got the impression it’s essentially a grab bag of problem-specific approaches. Curious what “other parts of AI” you’re optimistic about.
I think ML methods are insufficient for producing AGI, and that getting to AGI will require one or more paradigm shifts before we have a set of tools that look like they can produce AGI. From what I can tell, the ML community is not working on this, and instead prefers incremental enhancements to existing algorithms.
Basically, what I view as needed to make AGI work might be summarized as dynamic feedback networks with memory that support online learning. What we mostly see out of ML these days are feedforward networks, trained offline, that are static in execution and often manage to work without memory (though some do have it). My impression is that existing ML algorithms are unstable under the conditions I describe; a rough sketch of the contrast I have in mind is below. I expect something like neural networks will be part of making it to AGI, so some current ML research will matter, but mostly we should think of current ML research as being about near-term, narrow applications rather than as on the road to AGI.
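To make that contrast concrete, here is a minimal sketch in PyTorch. Everything here (the models, the random data, the hyperparameters) is an illustrative assumption of mine, not any real system:

```python
import torch
import torch.nn as nn

# Typical current ML: a static feedforward network, trained offline on a
# fixed dataset, then frozen for deployment.
feedforward = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(feedforward.parameters(), lr=1e-2)
X, y = torch.randn(100, 8), torch.randn(100, 1)   # fixed training set
for _ in range(10):                               # offline epochs
    opt.zero_grad()
    nn.functional.mse_loss(feedforward(X), y).backward()
    opt.step()
# Weights are now frozen; execution at deployment time is static.

# What I'm gesturing at: a network with feedback (recurrence) and memory
# (persistent hidden state) that keeps updating online, one step at a time.
rnn = nn.LSTMCell(8, 16)
readout = nn.Linear(16, 1)
opt2 = torch.optim.SGD(
    list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
h, c = torch.zeros(1, 16), torch.zeros(1, 16)     # persistent memory
for t in range(100):                              # open-ended stream, no epochs
    x_t, y_t = torch.randn(1, 8), torch.randn(1, 1)
    h, c = rnn(x_t, (h, c))
    loss = nn.functional.mse_loss(readout(h), y_t)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
    # Naive truncation of the gradient through time; exactly the kind of
    # heuristic that makes stability in this regime an open problem.
    h, c = h.detach(), c.detach()
```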
That’s at least my opinion based on my understanding of how consciousness works, my belief that “general” requires consciousness, and my understanding of the current state of ML and what it does and does not do that could support consciousness.
As someone with 5+ years of experience in the field, I think your impression of current ML is not very accurate. It’s true that we haven’t *solved* the problem of “online learning” (what you probably mean is something more like “continual learning” or “lifelong learning”), but a fair number of people are working on those problems (with a fairly incremental approach, granted). You can find several recent workshops on those topics, and work going back to at least the 90s.
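For a concrete picture of why continual learning is treated as an open problem rather than an ignored one, here’s a toy sketch of catastrophic forgetting, the core failure mode that literature tackles (the two tasks and the network are made-up assumptions for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def fit(X, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(X), y).backward()
        opt.step()

# Task A: y = x0 + x1.  Task B: y = x0 - x1.
XA, XB = torch.randn(256, 2), torch.randn(256, 2)
yA = (XA[:, 0] + XA[:, 1]).unsqueeze(1)
yB = (XB[:, 0] - XB[:, 1]).unsqueeze(1)

fit(XA, yA)
err_A_before = nn.functional.mse_loss(net(XA), yA).item()
fit(XB, yB)   # now train sequentially on task B, with no access to task A data
err_A_after = nn.functional.mse_loss(net(XA), yA).item()
print(f"task A error: {err_A_before:.4f} -> {err_A_after:.4f}")
# Task A error typically blows up: learning B overwrites A. Replay buffers,
# regularization schemes like EWC (Kirkpatrick et al. 2017), and similar
# methods are the "incremental" fixes I mentioned above.
```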
It’s also true that long-term planning, credit assignment, memory preservation, and other forms of “stability” appear to be a central challenge to making this stuff work. On the other hand, we don’t know that humans are stable in the limit, just that they are for ~100 years, so there may very well be no non-heuristic solution to these problems.