I think I find Brooks’ ideas interesting because they seem to mirror the way natural intelligences came about.
Biological evolution seems to amount to nothing more than local systems adapting to survive in an environment and then aggregating into more complex systems. We know that this strategy has produced intelligence at least once in the history of the universe, so it seems to me a productive example to follow in attempting to create artificial intelligence as well.
Now, I don’t know what the state of the art in the emergent-AI school of thought is at the moment, but isn’t it possible that the challenge isn’t solving each of the little problems that feedback loops can help overcome, but rather enfolding the lessons learned by these simple systems into more complex aggregate systems?
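To make that picture a little more concrete, here is a minimal toy sketch in the spirit of Brooks’ subsumption architecture. To be clear, this is my own illustration, not his design; all the behaviour names and sensor fields are invented for the example. Each layer is a tiny reactive feedback rule, and the “aggregate system” is nothing more than higher layers suppressing lower ones when they have something to say.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    # Hypothetical sensor readings for the sketch.
    obstacle_ahead: bool
    at_goal: bool

def wander(s: Sensors) -> Optional[str]:
    # Lowest layer: a default behaviour that always produces an action.
    return "move_forward"

def avoid(s: Sensors) -> Optional[str]:
    # Middle layer: its local feedback loop only speaks up when it detects trouble.
    return "turn_left" if s.obstacle_ahead else None

def seek_goal(s: Sensors) -> Optional[str]:
    # Higher layer: stops the system once the goal is reached.
    return "stop" if s.at_goal else None

def act(s: Sensors) -> str:
    # The "aggregate system": higher layers subsume (override) lower ones when
    # active, so more complex behaviour emerges from stacking simple loops
    # rather than from a central planner.
    for layer in (seek_goal, avoid, wander):
        action = layer(s)
        if action is not None:
            return action
    return "idle"  # unreachable here, since wander always fires

if __name__ == "__main__":
    print(act(Sensors(obstacle_ahead=True, at_goal=False)))   # turn_left
    print(act(Sensors(obstacle_ahead=False, at_goal=True)))   # stop
    print(act(Sensors(obstacle_ahead=False, at_goal=False)))  # move_forward
```

The interesting question, as I see it, is not whether each little layer works, but how the lessons those layers embody get folded into ever larger aggregates without a designer hand-wiring every interaction.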
That being said, you may be right: it may be easier (at this point) to program AI systems to narrow their search field with information about probability distributions and so forth. But couldn’t that strategy be fundamentally limited in the same way that expert systems are limited? That is, the system is only as “smart” as its knowledge base (or probability distributions) allows it to become, and it fails as “general” AI.
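A toy illustration of that worry (again, my own example, not drawn from any particular system): if the search is narrowed by a hand-built prior, then hypotheses the prior assigns zero weight are simply never recovered, no matter how strongly the evidence favours them.

```python
# Narrowing search with a fixed, hand-built prior: the system can only be as
# "smart" as the distribution it was given. A hypothesis with zero prior weight
# stays at zero posterior weight regardless of the evidence.
prior = {"cat": 0.6, "dog": 0.4, "axolotl": 0.0}          # hand-built knowledge
likelihood = {"cat": 0.01, "dog": 0.05, "axolotl": 0.9}    # what the data actually favour

posterior = {h: prior[h] * likelihood[h] for h in prior}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(max(posterior, key=posterior.get))  # prints "dog"; "axolotl" is unreachable
```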
Do not copy the Blind Idiot God, for it lives much longer than you, and is a blind idiot.