One complicating factor is how much you believe ML contributes to existential threats. For example, I think the current ML community is very unlikely to ever produce AGI (<10%) and that AGI will be the result of breakthroughs from researchers in other parts of AI, so it seems not very important to me what current ML researchers think of long-term safety concerns. Other analyses of the situation would lead to different conclusions, though, so this seems like an upstream question that must be addressed, or at least contingently decided upon, before evaluating how much it would make sense to pursue this line of inquiry.
For example, I think the current ML community is very unlikely to ever produce AGI (<10%)
I’d be interested to hear why you think this.
BTW, I talked to one person with experience in GOFAI and got the impression it’s essentially a grab bag of problem-specific approaches. Curious what “other parts of AI” you’re optimistic about.
I think ML methods are insufficient for producing AGI, and getting to AGI will require one or more paradigm shifts before we have a set of tools that look like they can produce AGI. From what I can tell the ML community is not working on this, and instead prefers incremental enhancements to existing algorithms.
Basically, what I view as needed to make AGI work might be summarized as designing dynamic feedback networks with memory that support online learning. What we mostly see out of ML these days are feedforward networks with offline learning that are static in execution and often manage to work without memory, though some do have it. My impression is that existing ML algorithms are unstable under those conditions (feedback, persistent memory, online updates). I expect something like neural networks will be part of making it to AGI, so some current ML research will matter, but mostly we should think of current ML research as being about near-term, narrow applications rather than on the road to AGI.
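To make the contrast I have in mind concrete, here’s a minimal sketch (my own toy illustration in PyTorch, with made-up dimensions and random data, not a claim about how an AGI-relevant system would actually be built): a static feedforward network trained offline on a fixed dataset, versus a recurrent network that carries internal state and keeps updating its weights as each new observation arrives.

```python
import torch
import torch.nn as nn

# Offline, static, feedforward: fit once on a fixed dataset, then freeze.
ff = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
X, y = torch.randn(1000, 8), torch.randn(1000, 1)   # toy dataset
opt = torch.optim.SGD(ff.parameters(), lr=1e-2)
for _ in range(100):                                 # offline training epochs
    opt.zero_grad()
    nn.functional.mse_loss(ff(X), y).backward()
    opt.step()
# At deployment the network is a fixed function: no state, no further learning.

# Online, stateful, recurrent: carry hidden state across time and keep
# updating weights as each observation streams in.
rnn = nn.GRUCell(8, 32)
head = nn.Linear(32, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)
h = torch.zeros(1, 32)                               # persistent memory
for t in range(1000):                                # stream of observations
    x_t, y_t = torch.randn(1, 8), torch.randn(1, 1)
    h = rnn(x_t, h)                                  # state feeds back into the network
    loss = nn.functional.mse_loss(head(h), y_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()                                   # truncate gradients through time
```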
That’s at least my opinion, based on my understanding of how consciousness works, my belief that “general” intelligence requires consciousness, and my understanding of the current state of ML and what it does and does not do that could support consciousness.
As someone with 5+ years of experience in the field, I think your impression of current ML is not very accurate. It’s true that we haven’t *solved* the problem of “online learning” (what you probably mean is something more like “continual learning” or “lifelong learning”), but a fair number of people are working on those problems (with a fairly incremental approach, granted). You can find several recent workshops on those topics, and work going back to at least the 90s.
It’s also true that long-term planning, credit assignment, memory preservation, and other forms of “stability” appear to be a central challenge to making this stuff work. On the other hand, we don’t know that humans are stable in the limit, just for ~100 years, so there may very well be no non-heuristic solution to these problems.
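As a toy illustration of the memory-preservation / “stability” point (again my own sketch, assuming PyTorch; none of the names below come from this discussion): naively continuing SGD on a second task tends to overwrite what a network learned on the first, which is the catastrophic-forgetting problem that much of the continual-learning literature is about.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

def make_task(weights):
    """Toy regression task: predict a fixed linear function of the input."""
    X = torch.randn(512, 4)
    return X, X @ weights

task_a = make_task(torch.tensor([1.0, -2.0, 0.5, 3.0]).unsqueeze(1))
task_b = make_task(torch.tensor([-3.0, 0.0, 2.0, -1.0]).unsqueeze(1))

def train(task, steps=500):
    X, y = task
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(X), y).backward()
        opt.step()

def eval_loss(task):
    X, y = task
    with torch.no_grad():
        return nn.functional.mse_loss(net(X), y).item()

train(task_a)
print("task A loss after learning A:", eval_loss(task_a))  # low
train(task_b)                                               # naive sequential training on task B
print("task A loss after learning B:", eval_loss(task_a))   # typically much higher: forgetting
```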
I don’t follow. You seem to be responding to a statement like “I think the ML community’s perceptions are important, because the ML community’s attitude seems of critical importance for getting good Xrisk reduction policies in place”, which I see as having little bearing on the question the post raises of “how should we assess info-hazard type risks of AI research we conduct?”
Based on my reading of the post, it seemed to me that you were concerned primarily with info-hazard risks in ML research, not AI research in general; maybe it’s the way you framed it that led me to take it to be contingent on ML mattering.
I meant it to be about all AI research. I don’t usually make too much effort to distinguish ML and AI, TBH.