Connectionism may be the best we’ve got. But it is not very good.
Take the recent example of improving performance on a task by reading a manual. If we tried to implement something similar in a connectionist/reinforcement model we would run into problems. We need positive and negative reinforcement to change the connection strengths, but we wouldn’t get any while reading a book, so how do we assimilate the non-inductive information stored there? It is possible with feedback loops, which can be used to store information quickly in a connectionist system, but I haven’t seen any systems use them, or learn them, at the sort of scale that would be needed for the Civilization problem.
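To make the feedback-loop point concrete, here is a minimal, purely illustrative sketch (toy sizes, random weights, nothing actually learned) of what “storing information quickly in feedback loops” could look like: the manual is held in recurrent activations rather than in reinforced connection strengths.

```python
import numpy as np

# Hypothetical sketch: a recurrent "feedback loop" whose hidden state persists
# after reading, so information from text is stored quickly in activations
# rather than in slowly changing weights. All shapes and weights are made up.

rng = np.random.default_rng(0)
vocab, hidden = 50, 16
W_in = rng.normal(scale=0.1, size=(hidden, vocab))    # input -> hidden
W_rec = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden (the feedback loop)

def read(tokens, h=None):
    """Feed one-hot tokens through the loop; the returned state 'remembers' them."""
    h = np.zeros(hidden) if h is None else h
    for t in tokens:
        x = np.zeros(vocab)
        x[t] = 1.0
        h = np.tanh(W_in @ x + W_rec @ h)  # state updates with no reward signal at all
    return h

manual_state = read([3, 17, 42])  # "read the manual" once
# manual_state can now condition later action selection without any weight change.
```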
There are also more complex processes which seem out of its reach, such as learning a language using a language, e.g. “En français, le mot pour ‘cat’ est ‘chat’” (“In French, the word for ‘cat’ is ‘chat’”).
Neural nets don’t need feedback. They can benefit from unsupervised learning too. In this case you would have one net learn a model of the manual’s text and another learn a model of playing the game, then connect them.
When words appear in the game, they will activate neurons in the text net. The game-playing net might find that these activations correlate with successful actions in the game and make use of them.
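As a rough, hypothetical sketch of that wiring, with a toy vocabulary, made-up sizes, and random weights standing in for anything that would actually be learned:

```python
import numpy as np

# Not a working agent: an unsupervised "text net" embeds words from the manual,
# and a separate "game net" consumes both the game state and the text-net
# activations for whatever manual words currently appear on screen.

rng = np.random.default_rng(1)
vocab = {"phalanx": 0, "settler": 1, "river": 2}   # assumed game terms
emb_dim, state_dim, n_actions = 8, 5, 3

text_embeddings = rng.normal(size=(len(vocab), emb_dim))    # stand-in for a model learned from the manual
W_game = rng.normal(size=(n_actions, state_dim + emb_dim))  # stand-in for a game net trained by RL

def act(state_features, words_on_screen):
    # Words in the game activate the corresponding text-net units...
    text_act = np.mean([text_embeddings[vocab[w]] for w in words_on_screen], axis=0)
    # ...and the game net can learn that these activations correlate with good actions.
    scores = W_game @ np.concatenate([state_features, text_act])
    return int(np.argmax(scores))

print(act(np.ones(state_dim), ["phalanx", "river"]))
```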
The idea of “virtual machines” mentioned in [Your Brain Is (Almost) Perfect](http://www.amazon.com/Your-Brain-Almost-Perfect-Decisions/dp/0452288843) tempts me to think along these lines: reading a manual will trigger the neurons involved in running the task, and the reinforcement will be applied to those “virtual” runs.
How reading a manual triggers this virtual run can be answered the same way as how hearing “get me a glass of water” triggers the neurons for doing so, with a “thank you” afterwards reinforcing them. In the same way, reading “to turn on the TV, click the red button on the remote” might trigger the neurons for turning on the TV and reinforce the behavior in accordance with the manual.
I know this is quite a wild guess, but perhaps someone can elaborate on it more precisely.
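One very loose reading of that guess, sketched below: the instruction selects a task, the agent runs imagined rollouts of it against its own model, and the reinforcement update is applied to those virtual runs. Everything here (the toy “world model”, the mapping from the sentence to an action, the update rule) is assumed purely for illustration.

```python
import numpy as np

# Loose sketch of the "virtual run" idea: reinforcement is applied to imagined
# rollouts triggered by an instruction, not to real experience.

rng = np.random.default_rng(2)
n_actions = 4
preferences = np.zeros(n_actions)  # action preferences for the "turn on the TV" task

def imagined_reward(action, instructed_action):
    # The toy world model predicts success when the imagined action matches the manual.
    return 1.0 if action == instructed_action else 0.0

instructed = 2  # "click the red button" mapped to action 2 (assumed)
for _ in range(200):  # virtual runs triggered by reading the instruction
    probs = np.exp(preferences) / np.exp(preferences).sum()
    a = rng.choice(n_actions, p=probs)
    r = imagined_reward(a, instructed)
    preferences[a] += 0.1 * (r - probs[a])  # simple policy-gradient-style update

print(preferences.argmax())  # the instructed action wins without any real-world reward
```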