The AI section is actually very short, and doesn’t say much about potential AI paths to superintelligence. E.g. one thing I might have mentioned is the “one learning algorithm” hypothesis about the neocortex, and the relevance of deep learning methods. Or the arcade learning environment as a nice test for increasingly general intelligence algorithms. Or whatever.
One reason to avoid such topics is that it is more difficult to make forecasts based on current experiments (which I suppose is a reason to be concerned if things keep on going this way, since by the same token it may be hard to see the end until we are there).
The question I find most interesting about deep learning, and about local search approaches of this flavor more broadly, is the plausibility that they could go all the way with relatively modest theoretical insight. The general consensus seems to be "probably not," though I think people will agree that this is essentially what has happened over the last few decades in several parts of machine learning (deep learning in 2014 doesn't have many theoretical insights beyond deep learning in 1999), and it appears to be essentially what happened in computer game-playing.
This is closely related to my other comment about evolution, though it’s a bit less frightening as a prospect.
A lot of activities in the real world could be redefined as games with a score.
-If deep learning systems were interfaced and integrated with robots with better object recognition and manipulation systems than we have today, what tasks would they be capable of doing?
-Can deep learning methods be applied to design? One important application would be architecture and interior design.
In order for deep learning to generate a plan for real-world action, is it correct that an accurate simulation of the real-world environment is required first?
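The "games with a score" framing above can be made concrete with a toy sketch: any task whose progress can be measured numerically admits an Atari-style step/score interface. Everything here (the `ShelfGame` class, the tidying task) is a hypothetical illustration, not a real library or the commenter's proposal:

```python
# Toy sketch: recasting a real-world task (tidying a shelf of books)
# as a game with a numeric score, in the spirit of the arcade
# learning environment's step/score interface. Hypothetical example.

class ShelfGame:
    """Environment: books should end up in sorted order."""

    def __init__(self, books):
        self.books = list(books)

    def step(self, i, j):
        """Action: swap two shelf positions; return the resulting score."""
        self.books[i], self.books[j] = self.books[j], self.books[i]
        return self.score()

    def score(self):
        """Score: number of adjacent pairs already in order."""
        return sum(a <= b for a, b in zip(self.books, self.books[1:]))

game = ShelfGame([3, 1, 2])
print(game.score())     # 1: only the pair (1, 2) is in order
print(game.step(1, 2))  # swap -> [3, 2, 1], score drops to 0
print(game.step(0, 2))  # swap -> [1, 2, 3], score reaches 2
```

Once a task is wrapped this way, a generic score-maximizing learner can in principle be pointed at it without task-specific theory, which is exactly what makes the framing both appealing and unsettling.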
I remember reading Jeff Hawkins' On Intelligence 10 or 12 years ago, and found his version of the "one learning algorithm" extremely intriguing. I remember thinking at the time how elegant it was, and the multiple fronts on which it conferred explanatory power. I see why Kurzweil and others like it too.
Ever since reading Jeff's book (and hearing some of his talks later), I find myself sometimes musing about his memory-prediction model as I go through my day, noting the patterns in my expectations and my interpretations of the day's events. Introspectively, it resonates so well with the observed degrees of fit, priming, and pruning to a subtree of possibility space as the day unfolds that it becomes a kind of automatic thinking.
In other words, the idea was so intuitively compelling when I heard it that it has “snuck-in” and actually become part of my “folk psychology”, along with concepts like cognitive dissonance, the “subconscious”, and other ideas that just automatically float around in the internal chatter (even if not all of them are equally well verified concepts.)
I think Jeff’s idea has a lot to be said for it. (I’m calling it Jeff’s, but I think I’ve heard it said, since then, that someone else independently, earlier, may have had a similar idea. Maybe that is why you didn’t mention it as Jeff’s yourself, but by its conceptual description.) It’s one of the more interesting ideas we have to work with, in any case.
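The memory-prediction idea can be caricatured in a few lines: a system that stores observed transitions and uses that memory to prime an expectation about what comes next. This is a deliberately crude first-order sketch for illustration only, not Hawkins' hierarchical model, and all names in it are made up:

```python
# Toy caricature of the memory-prediction idea: remember which
# observation tends to follow which, then use that memory to prime
# an expectation about the next input. First-order sketch only;
# Hawkins' actual proposal is hierarchical and far richer.
from collections import defaultdict, Counter

class MemoryPredictor:
    def __init__(self):
        self.memory = defaultdict(Counter)  # event -> counts of successors
        self.prev = None

    def observe(self, event):
        """Record the transition from the previous event to this one."""
        if self.prev is not None:
            self.memory[self.prev][event] += 1
        self.prev = event

    def expect(self):
        """Predict the most frequently seen successor of the last event."""
        successors = self.memory.get(self.prev)
        if not successors:
            return None
        return successors.most_common(1)[0][0]

m = MemoryPredictor()
for e in ["wake", "coffee", "email", "wake", "coffee", "email", "wake"]:
    m.observe(e)
print(m.expect())  # "coffee": the primed expectation after "wake"
```

Even this trivial version captures the introspective flavor described above: after enough repetition, each event automatically primes a narrowed set of expected successors.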