I was encouraged to see that Paul Christiano collaborated on the Google paper, and that it seemed to recognize many of the failure modes of AGI. I think it’s likely that, given more and stronger evidence of rapid self-improvement, the full AI safety arguments will make sense to most of those researchers, at least. My personal prediction (a guess, really) is that the next big public showcase of AI will be code/mathematics generation of superhuman quality (e.g. deep neural nets out-coding or out-solving the best coder or mathematician in the world on arbitrary problem statements), which will make it fairly obvious that rapid recursive self-improvement is highly likely. The obvious next step is to tell it to self-optimize, but hopefully the initial results will scare the researchers.
As far as I know, neural networks (deep or shallow) are hopelessly unsuited to programming and mathematics.
Hopelessly? I wouldn’t say hopelessly. We just saw a nice new use of NNs in premise selection for theorem-proving, which expands the number of proofs that can now be proven automatically:
“DeepMath—Deep Sequence Models for Premise Selection”, Alemi et al 2016
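To make that setup concrete, here is a minimal sketch of what a neural premise selector looks like; the architecture and every name in it are illustrative assumptions on my part, not the actual DeepMath model:

```python
# Illustrative premise-selection sketch (made-up architecture, not DeepMath):
# a learned scorer ranks library facts by relevance to a conjecture, and only
# the top-ranked premises are handed to a conventional symbolic prover.
import torch
import torch.nn as nn

class PremiseScorer(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)

    def encode(self, tokens):                # tokens: (batch, seq_len) ids
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)                  # final hidden state, (batch, dim)

    def forward(self, conjecture, premise):
        pair = torch.cat([self.encode(conjecture), self.encode(premise)], -1)
        return self.score(pair).squeeze(-1)  # one relevance logit per pair

def select_premises(model, conjecture, premises, k=3):
    """Rank candidates; only the top k are fed to the symbolic prover."""
    with torch.no_grad():
        logits = model(conjecture.expand(premises.size(0), -1), premises)
    return torch.topk(logits, k).indices.tolist()

# Hypothetical usage with dummy token ids:
model = PremiseScorer()
conjecture = torch.randint(0, 1000, (1, 12))   # one tokenized conjecture
premises = torch.randint(0, 1000, (40, 12))    # forty candidate premises
print(select_premises(model, conjecture, premises))
```

Note the division of labor: the network ranks facts, and a conventional symbolic prover consumes the top-ranked ones.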
And if you search ‘neural programming’ or ‘neural interpretation’, there are a number of interesting papers on getting NNs to write or implement programs. Nothing yet that could be considered practically useful, but some results are interesting and show promise (a toy sketch follows these citations), like:
“Learning to Execute”, Zaremba & Sutskever 2014
“Programming with a Differentiable Forth Interpreter”, Riedel et al 2016
“Latent Predictor Networks for Code Generation”, Ling et al 2016
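To give a flavor of what a “Learning to Execute”-style experiment involves, here is a toy sketch under strong simplifying assumptions (character-level LSTM, trivial one-expression ‘programs’; the paper’s actual setup is considerably richer):

```python
# Toy "learning to execute": a character-level seq2seq LSTM learns to map
# tiny arithmetic 'programs' to their printed output. Purely illustrative.
import random
import torch
import torch.nn as nn

CHARS = "0123456789+()=\n"
stoi = {c: i for i, c in enumerate(CHARS)}

def sample_program():
    a, b = random.randint(0, 99), random.randint(0, 99)
    return f"({a}+{b})", str(a + b)

def encode(s):
    return torch.tensor([stoi[c] for c in s]).unsqueeze(0)  # (1, len)

class Seq2Seq(nn.Module):
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True)
        self.dec = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt_in):
        _, state = self.enc(self.embed(src))        # read the program text
        h, _ = self.dec(self.embed(tgt_in), state)  # write the output string
        return self.out(h)

model = Seq2Seq(len(CHARS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    src, ans = sample_program()
    tgt_in, tgt_out = encode("=" + ans), encode(ans + "\n")  # teacher forcing
    loss = loss_fn(model(encode(src), tgt_in).squeeze(0), tgt_out.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```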
(Blanket statements about what NNs can or can’t do these days are ill-advised unless you spend a lot of time reading Arxiv.)
Yes, but the NN itself is not doing the theorem proving. A theorem prover, which is not a neural network, actually does the work. The NN is providing advice to it. Any machine learning system could provide heuristics to a theorem prover.
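To illustrate that division of labor, here is a toy given-clause loop where the selection heuristic is an arbitrary pluggable function; this is a deliberately simplified sketch of my own, not any real prover’s code:

```python
# Toy given-clause loop. Clauses are frozensets of integer literals
# (negative int = negated atom). Deliberately simplified, not a real prover.
def resolve(c1, c2):
    """Binary resolution: cancel one complementary literal pair."""
    return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

def given_clause_loop(clauses, heuristic, max_steps=10000):
    processed, unprocessed = [], list(clauses)
    for _ in range(max_steps):
        if not unprocessed:
            return "saturated: no proof"
        # The one 'advised' decision point: which clause to work on next.
        given = max(unprocessed, key=heuristic)
        unprocessed.remove(given)
        for other in processed:
            for new in resolve(given, other):
                if not new:                  # empty clause = contradiction
                    return "proof found"
                unprocessed.append(new)
        processed.append(given)
    return "gave up"

# p, p -> q, and the negated goal ~q; a shortest-clause-first heuristic.
axioms = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
print(given_clause_loop(axioms, heuristic=lambda c: -len(c)))  # proof found
```

Swap in a trained model’s scoring function for `heuristic` and you have the NN-advised arrangement; keep `-len(c)` and you have a classical hand-written heuristic.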
I would stand by my original statement: if you actually read through these papers, they demonstrate just how much acrobatics has to be done to join the world of neural networks to anything discrete, like programming or proofs.
That’s not to say that those acrobatics won’t eventually yield some useful hybrid approach, but I think people like the poster above will just see lots of hype about neural networks and deep learning and assume it’s some kind of magic that easily tackles any problem type. In fact, programming and mathematics are uniquely unsuited to neural networks, and that fact is central to the literature you have linked to. That’s why people are working on Neural Turing Machines, etc.
I don’t think this is a meaningful distinction. The system requires the NN for top performance. Would you say that AlphaGo doesn’t show NNs can play Go because ‘really, it’s the tree search which is doing the Go playing; all the CNN is doing is providing advice to it’?
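For concreteness, here is a minimal sketch of that AlphaGo-style arrangement, with the game and the network stubbed out as placeholders: the tree search owns the decision loop, but the learned priors steer every selection it makes.

```python
# Illustrative sketch of NN-guided tree search (AlphaGo-style PUCT); the game
# and the policy network are stubs, not the real system.
import math
import random

def legal_moves(state):
    return list(range(9))  # stub: pretend 9 moves are always legal

def policy_net(state):
    """Stub for a trained policy network: prior probability per legal move."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}   # uniform stand-in

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}

def select_move(node, c_puct=1.5):
    """PUCT rule: the search picks moves, but the NN prior steers it."""
    total = sum(ch.visits for ch in node.children.values()) + 1
    def ucb(move, ch):
        q = ch.value_sum / ch.visits if ch.visits else 0.0
        return q + c_puct * ch.prior * math.sqrt(total) / (1 + ch.visits)
    return max(node.children.items(), key=lambda mc: ucb(*mc))[0]

def search(root_state, n_simulations=100):
    root = Node(prior=1.0)
    for move, p in policy_net(root_state).items():
        root.children[move] = Node(prior=p)
    for _ in range(n_simulations):
        child = root.children[select_move(root)]
        child.visits += 1
        child.value_sum += random.uniform(-1, 1)  # stub rollout/value estimate
    # The final decision is the search's, informed throughout by NN priors.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

print(search(root_state=None))
```

The point of the sketch: the symbolic component runs the loop, but the performance depends on the quality of the learned priors it consults at every step.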
Explain again how working NN systems delivering results, sometimes better than previous approaches, despite minimal research effort so far, show that ‘neural networks (deep or shallow) are hopelessly unsuited to programming and mathematics.’
This must be some new and novel definition of ‘hopeless’ that I was hitherto unfamiliar with, where it now means not ‘nearly impossible’ but ‘possible and already done sometimes’. As a descriptivist, I must of course move with the times and try to understand new uses of old words, so I don’t object to this use, but I do want to be sure I am understanding you correctly.
Approaches that make some use of neural networks, or incorporate them in some way, are indeed making progress. What I want to make clear is that you can’t just take some code, throw deep learning at it, and abracadabra, you have a superhuman AI programmer.