Right off the bat: you absolutely can create an AGI that is a pure ANN. In fact, the most successful early AGI precursor we have, DeepMind's Atari agent, is a pure ANN. Your claim that ANNs/deep learning are not the endpoint of AGI research is quickly becoming a minority position.
The DeepMind agent has no memory, one of the problems that I noted in the first place with naive ANN systems. The DeepMind team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN. It isn't even neuromorphic.
Improving its performance is going to involve giving it more structure and more specialized components, and not just throwing more neurons and training time at it.
For goodness' sake: Geoffrey Hinton, the father of deep learning, believes that the future of machine vision lies in explicitly integrating the idea of three-dimensional coordinates and geometry into the structure of the network itself, and in moving away from more naive, general-purpose convnets. Source: https://github.com/WalnutiQ/WalnutiQ/issues/157
Your position is not as mainstream as you like to present it.
The real test here would be to take a brain and give it an entirely new sense
Done and done. Next!
If you’d read the full sentence that I wrote, you’d appreciate that remapping existing senses doesn’t actually address my disagreement. I want a new sense, to make absolutely sure that the subjects aren’t just re-using hard coding from a different system. Snarky, but not a useful contribution to the conversation.
This is nonsense—language processing develops in general purpose cortical modules, there is no specific language circuitry.
This is far from the mainstream linguistic perspective. Go argue with Noam Chomsky; he’s smarter than I am. Incidentally, you didn’t answer the question about birds and cats. Why can’t cats learn to do complex language tasks? Surely they also implement the universal learning algorithm just as parrots do.
What about Watson?
Not an AGI.
AGIs literally don’t exist, so that’s hardly a useful argument. Watson is the most powerful thing in its (fairly broad) class, and it’s not a neural network.
Finally, I don't have the background to refute your argument about the efficiency of the brain (although I know clever people who do, and who disagree with you).
The correct thing to do here is update. Instead you are searching for ways in which you can ignore the evidence.
No, it really isn’t. I don’t update based on forum posts on topics I don’t understand, because I have no way to distinguish experts from crackpots.
The DeepMind team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN.
Yes, it is a pure ANN, according to my use of the term (arguing over definitions is a waste of time). ANNs are fully general circuit models, which can obviously re-implement any module from any computer: memory, a database, whatever. The defining characteristics of an ANN are a simulated network circuit structure built from analog/real-valued nodes, and some universal learning algorithm over the weights, such as SGD.
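To pin down what "real-valued nodes plus a universal learning algorithm over the weights" cashes out to, here is a minimal sketch (a toy example of my own, not code from any system discussed here): a single sigmoid node whose weights are trained by SGD to compute AND.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One analog-valued node with two weights and a bias: the smallest
# possible "network circuit".
random.seed(0)
w = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
b = 0.0

# SGD on the logistic loss, learning the AND function. The same generic
# loop works for any differentiable circuit, which is the "universal
# learning algorithm" part of the definition.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
lr = 0.5
for _ in range(5000):
    x, target = random.choice(data)
    y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    grad = y - target  # dLoss/d(pre-activation) for logistic loss
    w[0] -= lr * grad * x[0]
    w[1] -= lr * grad * x[1]
    b -= lr * grad

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

The point of the sketch is that nothing in the training loop knows it is learning AND; the task lives entirely in the data, and the same loop scales (in principle) to circuits that implement memory or lookup.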
Your position is not as mainstream as you like to present it.
You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes, I'm familiar with Hinton's 'capsule' proposals. And yes, I agree there is still substantial room for improvement in ANN microarchitecture, especially for learning invariances, and especially unsupervised.
This is far from the mainstream linguistic perspective.
For any theory of anything the brain does—if it isn’t grounded in computational neuroscience data, it is probably wrong—mainstream or not.
No, it really isn’t. I don’t update based on forum posts on topics I don’t understand, because I have no way to distinguish experts from crackpots.
You don’t update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow? Then you just showed up here, thankfully finding other people who just happened to have arrived at all the same ideas?
Yes, it is a pure ANN, according to my use of the term (arguing over definitions is a waste of time). ANNs are fully general circuit models, which can obviously re-implement any module from any computer: memory, a database, whatever. The defining characteristics of an ANN are a simulated network circuit structure built from analog/real-valued nodes, and some universal learning algorithm over the weights, such as SGD.
You could say that any machine learning system is an ANN, under a sufficiently vague definition. That’s not particularly useful in a discussion, however.
Yes, it is a pure ANN, according to my use of the term (arguing over definitions is a waste of time). ANNs are fully general circuit models, which can obviously re-implement any module from any computer: memory, a database, whatever. The defining characteristics of an ANN are a simulated network circuit structure built from analog/real-valued nodes, and some universal learning algorithm over the weights, such as SGD.
I think you misunderstood me. The current DeepMind AI that they've shown the public is a pure ANN. However, it has serious limitations, because it's not easy to implement long-term memory in a naive ANN. So they're working on a successor called the "Neural Turing Machine", which marries an ANN to a database-style retrieval system: a specialized module.
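For readers unfamiliar with how an ANN gets "married" to a memory module, here is a rough sketch of the content-based addressing idea from the Neural Turing Machine line of work (the function and variable names are illustrative, not DeepMind's actual code): a controller emits a real-valued key, and the read result is a softmax-weighted blend of memory rows by cosine similarity, so the lookup stays differentiable end to end.

```python
import math

def cosine(a, b):
    # Cosine similarity between two real-valued vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-8
    nb = math.sqrt(sum(x * x for x in b)) or 1e-8
    return dot / (na * nb)

def content_read(memory, key, sharpness=10.0):
    # Similarity of the key to every memory row...
    sims = [sharpness * cosine(row, key) for row in memory]
    # ...softmax-normalized into read weights (still differentiable)...
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    # ...and the read vector is the weighted blend of all rows.
    return [sum(wt * row[i] for wt, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
r = content_read(memory, key=[0.9, 0.1, 0.0])
print([round(v, 2) for v in r])  # → [1.0, 0.0, 0.0]
```

Because the read is a soft blend rather than a hard index, gradients flow through the lookup, which is what lets the memory be trained jointly with the network; whether that makes the combined system "a pure ANN" is exactly the definitional dispute above.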
You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes, I'm familiar with Hinton's 'capsule' proposals. And yes, I agree there is still substantial room for improvement in ANN microarchitecture, especially for learning invariances, and especially unsupervised.
The thing is, many of those improvements depend on the task at hand. It's really, really hard for an off-the-shelf convnet to learn the rules of three-dimensional geometry, so we have to build them into the network. Our own visual processing shows signs of having the same structure embedded in it.
The same structure would not, for example, benefit an NLP system, so we'd give it a different specialized structure, tuned to the hierarchical nature of language. The future, past a certain point, isn't making 'neural networks' better. It's making 'machine vision' networks better, or 'natural language' networks better. To make a long story short, specialized modules are an obvious place to go when you run into a problem too complex to teach a naive convnet efficiently. That holds both for human engineers over the next 5-10 years, and for evolution over the last couple of billion.
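A concrete example of structure being built in rather than learned is the convolution itself: weight sharing hard-codes translation invariance into the architecture, in the same spirit as the proposal to hard-code 3D geometry. A toy 1D sketch (illustrative code of my own, not any library's API):

```python
def conv1d(signal, kernel):
    # The SAME kernel weights are applied at every position: translation
    # invariance is built into the architecture, not learned from data.
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple edge-detecting kernel finds the step wherever it occurs.
kernel = [-1.0, 1.0]
print(conv1d([0, 0, 1, 1, 0], kernel))  # → [0.0, 1.0, 0.0, -1.0]
print(conv1d([0, 0, 0, 1, 1], kernel))  # → [0.0, 0.0, 1.0, 0.0]
```

A fully connected net would need separate weights for the edge at each position; the shared kernel detects it anywhere for free. Capsule-style proposals extend this move: bake in more of the geometry so the data budget is spent on what actually varies.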
You don’t update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow?
I have a CS and machine learning background, and am well-read on the subject outside LW. My math is extremely spotty, and my physics is non-existent. I update on things I read that I understand, or things from people I believe to be reputable. I don’t know you well enough to judge whether you usually say things that make sense, and I don’t have the physics to understand the argument you made or judge its validity. Therefore, I’m not inclined to update much on your conclusion.
EDIT: Oh, and you still haven’t responded to the cat thing. Which, seriously, seems like a pretty big hole in the universal learner hypothesis.
I update on things I read that I understand, or things from people I believe to be reputable.
So you are claiming that either you already understood AI/AGI completely when you arrived at LW, or you updated on LW/MIRI writings because they are 'reputable', even though their positions are disavowed or even ridiculed by many machine learning experts.
EDIT: Oh, and you still haven’t responded to the cat thing. Which, seriously, seems like a pretty big hole in the universal learner hypothesis.
I replied here, and as expected, it looks like you were factually mistaken in the assertion you offered against the ULH. Better yet, the outcome of your cat-vs-bird observation was correctly predicted by the ULH, so that's yet more evidence in its favor.