A statement we can derive from the simple fact that the mere existence of general intelligence (apes) does not automatically result in catastrophe.
I wonder how long it’ll take before people catch on to the notion that artificial “dumbness” is in many ways a more interesting field than artificial “intelligence”? (As in, how much could an AGI no smarter than a dog, but hooked into expert systems similar to Watson, do?)
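To make that concrete, here is a minimal sketch of what a deliberately “dumb” general layer wired to narrow experts might look like. All names here are invented for illustration, and nothing is modeled on Watson’s actual architecture: the general part only decides which specialist gets a task, and the specialists do all the real work.

```python
# Hypothetical sketch: a deliberately "dumb" general controller that only
# routes tasks to narrow expert systems. All names are invented for
# illustration; this is not modeled on Watson's actual architecture.

class ExpertSystem:
    """A narrow system that is competent at exactly one kind of task."""
    def __init__(self, domain, solve):
        self.domain = domain
        self.solve = solve

class DumbController:
    """General but shallow: its only 'intelligence' is picking an expert."""
    def __init__(self, experts):
        self.experts = {e.domain: e for e in experts}

    def handle(self, domain, task):
        expert = self.experts.get(domain)
        if expert is None:
            return "no expert available"  # no general reasoning to fall back on
        return expert.solve(task)

controller = DumbController([
    ExpertSystem("arithmetic", lambda expr: eval(expr, {"__builtins__": {}})),
    ExpertSystem("greeting", lambda name: "Hello, " + name + "!"),
])

print(controller.handle("arithmetic", "2 + 2 * 3"))  # -> 8
print(controller.handle("greeting", "Watson"))       # -> Hello, Watson!
```

The routing layer can stay very simple, dog-smart at best, as long as each expert is genuinely good at its own narrow domain; all the capability lives in the specialists.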
It was pretty well accepted at MIT’s Media Lab back when my orbit took me around there periodically, a decade or so ago, that there was a huge amount of low-hanging fruit in this area… not necessarily of academic interest, but damned useful (and commercial).
That’s interesting, since my impression, if anything, is the exact opposite: there seem to be a lot of people trying to apply Bayesian learning systems and expert learning systems to all sorts of practical problems. I wonder whether this is a new development or whether I simply don’t have a good view of the field.
For what it’s worth, I consider Bayesian learning systems and expert learning systems to be “narrow” AI—hence the example I gave of Watson.
I think Ben Goertzel’s Novamente is the closest extant project to a ‘general’ AI of any form that I’ve heard of.
I can see that for expert systems, but Bayesian learning systems seem to be a distinct category. The primary limits seem to be scalability, not architecture.
Bayesian learning systems are essentially another form of trainable neural network. That makes them very good within a narrow range of categories, but also insufficient for achieving general intelligence.
I do not see that scaling Bayesian learning networks would ever achieve general intelligence. No matter how big the hammer, it’ll never be a wrench. That being said, I do believe that some form of pattern recognition and ‘selective forgetting’ is important to cognition, and as such a Bayesian learning architecture is a good tool toward that end.
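As a toy illustration of what ‘selective forgetting’ could mean in a Bayesian setting (my own sketch, not any specific system’s design), here is a Beta-Bernoulli learner whose evidence counts decay on every update, so stale observations gradually lose their grip on the posterior.

```python
# Minimal sketch of Bayesian updating with "selective forgetting":
# a Beta-Bernoulli model whose evidence pseudo-counts decay each step,
# so old observations fade. Purely illustrative, not any real system.

class ForgetfulBetaBernoulli:
    def __init__(self, decay=0.95):
        self.alpha = 1.0    # prior pseudo-count of successes
        self.beta = 1.0     # prior pseudo-count of failures
        self.decay = decay  # fraction of accumulated evidence kept per step

    def update(self, success):
        # Forget a little of the past (everything beyond the prior)
        # before absorbing the new observation.
        self.alpha = 1.0 + (self.alpha - 1.0) * self.decay
        self.beta = 1.0 + (self.beta - 1.0) * self.decay
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def estimate(self):
        # Posterior mean of the success probability.
        return self.alpha / (self.alpha + self.beta)

model = ForgetfulBetaBernoulli()
for obs in [True] * 50 + [False] * 50:   # the world changes halfway through
    model.update(obs)
print(round(model.estimate(), 2))        # ~0.11: tracks the recent regime
```

Run on a stream whose statistics change halfway through, the decayed model follows the recent regime instead of averaging over its entire history, which is “forgetting the right things” in miniature.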
not necessarily of academic interest, but damned useful (and commercial).
Actually, I’m curious why that isn’t seen as an area of significant academic interest: designing artificial systems to be efficient at discarding extraneous data. I recall that one of the major differences between Deep Blue and Deep Fritz in the Kasparov chess matches was precisely that Fritz was designed around not probing every last possible set of playable moves; that is, Deep Fritz was “learning to forget the right things” (see the toy sketch after this comment).
It seems to me that understanding this mechanism, and how it behaves in humans, could have huge potential for opening up our understanding of general intelligence and cognition. And that’s a very academic concern.
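Purely as an illustration of what “not probing every last move” buys, here is a toy comparison of exhaustive minimax against alpha-beta pruning. This is the standard textbook technique, not a claim about how Deep Fritz actually works, and the tree and all numbers are made up. Both searches return the same value, but the pruned one skips branches that provably cannot change the answer.

```python
# Toy comparison: exhaustive minimax vs. alpha-beta pruning on a tiny
# hand-made game tree. Alpha-beta (a standard textbook technique, not
# Deep Fritz's actual engine) skips branches that cannot affect the result.

import math

def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: a position score
        return node, 1
    best = -math.inf if maximizing else math.inf
    visited = 0
    for child in node:
        value, n = minimax(child, not maximizing)
        visited += n
        best = max(best, value) if maximizing else min(best, value)
    return best, visited

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):        # leaf: a position score
        return node, 1
    visited = 0
    for child in node:
        value, n = alphabeta(child, not maximizing, alpha, beta)
        visited += n
        if maximizing:
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if beta <= alpha:                     # remaining moves cannot matter
            break
    return (alpha if maximizing else beta), visited

tree = [[3, 5], [6, [9, 8]], [1, 2]]          # nested lists = game tree
print(minimax(tree, True))    # (6, 7): best value 6, all 7 leaves visited
print(alphabeta(tree, True))  # (6, 5): same value, only 5 leaves visited
```

On this tiny tree the savings are small, but the same cutoff rule is what lets a selective searcher throw away the overwhelming majority of a real game tree without changing its choice of move.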