That’s a terrible argument. AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space. Real life is far more combinatorial still, and an AGI requires much more expensive meta-level repeated cognition as well. You don’t just solve one problem; you also look at all past solved problems and think about how you could have solved them better. That’s quadratic blowup.
TL;DR: speed of narrow AI != speed of general AI.
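A minimal sketch of the blowup being claimed here (purely illustrative; the unit costs are made up): if solving problem n also means re-examining the n-1 problems solved before it, total work grows as n(n+1)/2, i.e. quadratically in the number of problems solved.

```python
# Illustrative only: cumulative cost when each newly solved problem
# triggers a review of all previously solved problems.
def total_meta_cost(n_problems, solve_cost=1, review_cost=1):
    total = 0
    for n in range(1, n_problems + 1):
        total += solve_cost             # solve problem n
        total += review_cost * (n - 1)  # re-examine all earlier solutions
    return total

for n in (10, 100, 1000):
    print(n, total_meta_cost(n))  # 55, 5050, 500500: ~n^2/2 growth
```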
But what if a general AI could generate specialized narrow AIs? That is something the human brain cannot do, but an AGI could. Thus speed of general AI = speed of narrow AI + time to specialize.
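A toy sketch of the decomposition being proposed (all names hypothetical, not any real system): a general agent pays a one-time specialization cost per task family, after which each instance runs at the narrow specialist’s speed.

```python
import time

class GeneralAgent:
    def __init__(self):
        self.specialists = {}  # cache of narrow AIs, keyed by task family

    def specialize(self, task_type):
        # Stand-in for training a narrow model for this task family.
        time.sleep(0.01)  # "time to specialize"
        return lambda x: f"solved {task_type}: {x}"

    def solve(self, task_type, instance):
        if task_type not in self.specialists:
            self.specialists[task_type] = self.specialize(task_type)
        return self.specialists[task_type](instance)  # narrow-AI speed

agent = GeneralAgent()
print(agent.solve("go", "position_1"))  # pays the specialization cost once
print(agent.solve("go", "position_2"))  # subsequent calls run at narrow speed
```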
How is that different from a general AI solving the problems by itself?
It isn’t. At least not in my model of what an AI is. But Mark_Friedenbach seems to operate under a model where this is less clear, or where the consequences of an AI being able to create these kinds of specialized sub-agents are not sufficiently taken into account.
Sure, but that wasn’t my point. I was addressing key questions of training data size, sample efficiency, and learning speed. At least for Go, vision, and related domains, the sample efficiency of DL-based systems appears to be approaching that of humans. The net learning efficiency of the brain is far beyond current DL systems in terms of learning per joule, but the gap in terms of learning per dollar is smaller, and closing quickly. Machine DL systems also easily and typically run 10x or more faster than the brain, and thus learn/train 10x faster.
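A back-of-envelope version of the per-joule vs. per-dollar framing. Every number below is an assumption chosen for illustration (brain wattage, cluster wattage, the ~10x speed advantage, electricity price), not a measurement:

```python
# All figures are assumptions for illustration, not measurements.
BRAIN_WATTS   = 20       # commonly cited rough figure for the human brain
CLUSTER_WATTS = 100_000  # assumed draw of a distributed DL training setup
BRAIN_HOURS   = 100.0    # assumed wall-clock time for some fixed learning task
CLUSTER_HOURS = 10.0     # same task, assuming the ~10x speed advantage

brain_kwh   = BRAIN_WATTS / 1000 * BRAIN_HOURS      # 2 kWh
cluster_kwh = CLUSTER_WATTS / 1000 * CLUSTER_HOURS  # 1000 kWh
print(f"per-joule gap: {cluster_kwh / brain_kwh:.0f}x")  # ~500x: brain far ahead

ELECTRICITY_PER_KWH = 0.10  # assumed $/kWh
print(f"cluster electricity: ${cluster_kwh * ELECTRICITY_PER_KWH:.2f}")  # ~$100
```

Under these assumptions the brain wins per joule by orders of magnitude, yet the machine’s electricity bill for the task is modest, which is why the per-dollar gap can close even while the per-joule gap remains large.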
Although I disagree that fooming will be slow, from what I’ve learned studying AlphaGo I would say that its approach is not easy to generalize.
AlphaGo draws its power partly from the step where an ‘intuitive’ neural net is created, using millions of self-play games from a net that was already trained by supervised learning. But the training can be accurate because the end positions and the winning player are clearly defined once the game is over. This allows a precise calculation of the outcome function that the intuitive neural net is trying to learn.
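A minimal sketch of why this works, using a made-up toy game rather than AlphaGo’s actual pipeline: because the final outcome is unambiguous, every position visited during a self-play game can be labeled with the known result, yielding exact regression targets for a value (‘intuition’) network.

```python
import random

def self_play_episode(policy):
    # Toy game (hypothetical): a random walk of 10 moves; "win" if we end positive.
    positions, state = [], 0
    for _ in range(10):
        state += policy()
        positions.append(state)
    outcome = 1 if state > 0 else -1  # winner is unambiguous once the game ends
    # Every position in the game inherits the exact final outcome as its label.
    return [(pos, outcome) for pos in positions]

policy = lambda: random.choice([-1, 1])
dataset = []
for _ in range(1000):  # "millions" of games in the real system
    dataset.extend(self_play_episode(policy))

# Each (position, outcome) pair is a precise supervised target for value
# training; no noisy or open-ended reward definition is needed.
print(len(dataset), dataset[0])
```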
Unsupervised learners interacting with an environment that has open ontologies will have a much harder time coming up with this kind of intuition-building step.