I would agree that the specific applications that machine learning generally pursues are useless for general AI, but the general theory it develops and uses (e.g. probabilistic networks, support vector machines, various clustering techniques, etc.) seems like something that AGI would eventually be built on. Of course, the narrow applications get more funding than the general theory, but that’s how it always is. My knowledge/experience of ML is probably even less than yours, though.
I have this (OpenCog-influenced) mental image of a superintelligent AGI equipped with a huge arsenal of various reasoning and analysis techniques, and when it encounters a novel problem it doesn’t know how to solve, it’ll just throw everything it has at it (prioritizing techniques that have worked on similar problems before) until it starts making progress. (For an “artistic” depiction of the same, see “AI thought process visualization, part II” here.) The hard part of such an AGI would then mostly be in finding a good data format that could efficiently represent the outputs of all those different thought mechanisms, and in balancing the interactions of the various modules. (I have no idea how realistic this vision is, and even less of an idea about how to make such an AGI Friendly.)
I think it’s largely true: the narrow AI “arsenal” currently being developed often produces results that seem transferable between fields. For example, there is a recent paper that applies the same novel strategy to both image understanding and natural language sentence parsing, with success in both. Although you often need lots of tinkering to get state-of-the-art results, producing the same quality using just a general method without any tuned parameters seems to make a good paper.
And while the problem of how to build an AGI is not directly solved by these, we certainly get closer to it using them. (You still need a module to recognize/imagine/process visual data, unless the solution is something really abstract like AIXI...)