There is a lot that could be said about how that complex union takes place, but here is one very important takeaway: it can always be made to happen in such a way that there will not, in the future, be any Gotcha cases (those where you thought you had completely merged the two concepts, but then suddenly find a peculiar situation where you got it disastrously wrong). The reason you won't get any Gotcha cases is that the concepts are defined by large numbers of weak constraints, and no strong constraints; in such systems, the effect of smaller and smaller numbers of constraints can be guaranteed to converge to zero.
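To make that convergence claim concrete, here is a minimal sketch (my own toy illustration, not the actual design being discussed): score a concept as the mean of N weak, bounded constraint checks, then corrupt a handful of them outright. Because every vote is bounded and averaged, any k corrupted constraints can shift the score by at most k/N, which goes to zero as N grows.

```python
import random

def concept_score(constraints, features):
    """Mean of many weak, bounded constraint checks.

    Each constraint votes in [0, 1]; no single vote can dominate
    because the score averages over all of them.
    """
    return sum(c(features) for c in constraints) / len(constraints)

# Hypothetical setup: N weak constraints, each a noisy threshold test.
random.seed(0)
N = 10_000
weights = [random.uniform(-1, 1) for _ in range(N)]
constraints = [(lambda f, w=w: 1.0 if w * f > 0 else 0.0) for w in weights]

features = 0.7
baseline = concept_score(constraints, features)

# Worst-case "Gotcha": k constraints turn out to be completely wrong,
# so flip their votes entirely.
k = 5
corrupted = list(constraints)
for i in range(k):
    old = corrupted[i]
    corrupted[i] = lambda f, old=old: 1.0 - old(f)

shift = abs(concept_score(corrupted, features) - baseline)
print(f"score shift from {k} corrupted constraints: {shift:.6f} (bound: {k / N})")
```

The point of the sketch is just the bound: a fixed handful of constraints contributes at most k/N to the outcome, so with no strong constraints in the pool there is no single place where a hidden disaster can lurk.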
That is an interesting aspect of one particular way to deal with the problem that I have not heard about before, and I'd like to see a reference so I can read up on it.
I first started trying to explain, informally, how these types of systems could work back in 2005. The reception was so negative that it led to a nasty flame war.
I have continued to work on these systems, but there is a problem with publishing too much detail about them. The very same mechanisms that make the motivation engine a safer type of beast (as described above) also make the main AGI mechanisms extremely powerful. That creates a dilemma: talk about the safety issues, and almost inevitably I have to talk about the powerful design. So, I have given some details in my published papers, but the design is largely under wraps, being developed as an AGI project, outside the glare of publicity.
I am still trying to find ways to write a publishable paper about this class of systems, and when/if I do I will let everyone know about it. In the meantime, much of the core technology is already described in some of the references that you will find in my papers (including the one above). The McClelland and Rumelhart reference, in particular, covers the fundamental ideas behind connectionist systems. There is also a good paper by Hofstadter, "The Architecture of Jumbo," which illustrates another simple system that operates with multiple weak constraints. Finally, I would recommend that you check out Geoff Hinton's early work.
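If you want something runnable before digging into those references, here is a generic relaxation sketch in the same spirit (my own toy, not code from any of those papers): units push and pull on each other through many weak, symmetric connections, and the network settles into whichever global state best satisfies the whole pool of soft constraints at once.

```python
import random

# Toy weak-constraint network in the spirit of Hopfield-style relaxation
# (an illustrative sketch, not code from McClelland & Rumelhart or Jumbo).
random.seed(1)
n = 16

# Many weak, symmetric pairwise constraints: small positive weights say
# "these two units like to agree", small negative ones say "disagree".
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        W[i][j] = W[j][i] = random.uniform(-0.1, 0.1)

state = [random.choice([-1, 1]) for _ in range(n)]

def energy(s):
    """Lower energy = more of the weak constraints satisfied overall."""
    return -sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

# Asynchronous relaxation: each unit repeatedly aligns itself with the
# net pull of all its weak connections; no single connection dominates.
for sweep in range(20):
    for i in random.sample(range(n), n):
        net_input = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if net_input >= 0 else -1

print("settled state:", state)
print("final energy :", round(energy(state), 4))
```

The property the thread keeps coming back to is visible even at this scale: the settled interpretation is chosen by the entire pool of weak constraints acting together, so deleting or perturbing any few connections barely moves the minimum the network falls into.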
In all your neural-net reading, it is important to stay above the mathematical details and focus on the ideas, because the math is a distraction from the more important message.
I first read McClelland and Rumelhart ~20 years ago, and it has a prominent place on my bookshelf. I haven't been able to work actively in AI, but I have followed the field. I put some hope in integrated connectionist-symbolic systems and was lately rewarded with deep neural networks. I think that every advanced system will need some non-symbolic approach to integrate reality; I don't know whether it will be NNs or some other statistical means. And the really tricky part will be to figure out how to pre-wire it such that it 'does what it should'. I think a lot will be learned from how the same is realized in the human brain.