I was under the impression that things like “deliberative thinking” and “awareness” haven’t been simulated by ML thus far, so I think that’s the difference between us, though I don’t hold that view strongly; there are lots of ML advances I may just not have heard of.
An example of what I would mean by thinking: https://arxiv.org/pdf/1705.03633.pdf
Thanks for the paper!
At first I was very surprised that they got such good performance at answering questions about visual scenes (e.g., “What shape is the red thing?” “The red thing is a cube.”).
Then I noticed that they gave ground-truth examples not just for the answers to the questions but to the programs used to compute those answers. This does not sound like the machine “learned to reason” so much as it “learned to do pattern-recognition on examples of reasoning.” When humans learn, they are “trained” on examples of other people’s behavior and words, but they don’t get any access to the raw procedures being executed in other people’s brains. This AI did get “raw downloads of thinking processes,” which I’d consider “cheating” compared to what humans do. (It doesn’t make it any less of an achievement by the paper authors, of course; you have to do easier things before you can do harder things.)
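To make the distinction concrete, here is a minimal sketch (not the paper’s actual code; the module names, shapes, and fixed program length are all invented) contrasting program supervision with answer-only supervision:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_q, vocab_prog, n_answers = 100, 40, 28
q_len, prog_len = 10, 5

# Hypothetical stand-in for a "program generator": question tokens -> program tokens.
program_generator = nn.Sequential(
    nn.Embedding(vocab_q, 64),
    nn.Flatten(),
    nn.Linear(64 * q_len, prog_len * vocab_prog),
)

# Hypothetical stand-in for an "execution engine": (image features, predicted program) -> answer logits.
execution_engine = nn.Linear(512 + prog_len * vocab_prog, n_answers)

question    = torch.randint(0, vocab_q, (1, q_len))
image_feats = torch.randn(1, 512)
gt_program  = torch.randint(0, vocab_prog, (1, prog_len))  # the "raw download of the thinking process"
gt_answer   = torch.tensor([3])

prog_logits = program_generator(question).view(1, prog_len, vocab_prog)

# (a) Program supervision: the generator is told, token by token, exactly
#     which reasoning steps it should have produced.
loss_program = F.cross_entropy(prog_logits.view(-1, vocab_prog), gt_program.view(-1))

# (b) Answer-only supervision: the only signal is whether the final answer
#     comes out right; the intermediate "reasoning" is never shown.
answer_logits = execution_engine(torch.cat([image_feats, prog_logits.view(1, -1)], dim=1))
loss_answer = F.cross_entropy(answer_logits, gt_answer)

(loss_program + loss_answer).backward()  # in practice you would use one signal, or weight them
```

In the paper’s strongly-supervised setting the program generator gets signal (a) directly; a human learner, on the analogy above, only ever gets something like signal (b).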
That seems like weaseling out of the evidence to me. This is just another instance of neural networks learning to do geometric computation that produces hard-edged answers, as AlphaGo does; that they’re being used to generate programs doesn’t seem especially relevant to that. I certainly agree that it’s not obvious exactly how to get them to learn the space of programs efficiently, but it would be surprising for that to be different in kind from previous neural network work. In terms of what kind of learning problem the internal behavior presents, this doesn’t seem that different to me from attention models.