Evolving brains took a long time. Learning to think, once we had a monkey’s brain, was comparatively fast. If we focus on “do this algorithm’s conscious thoughts at all resemble human conscious thoughts?”, I expect AI progress will look like many decades of “not at all” followed by a brief blur as machines overtake humans.
I agree with you that there is a real cluster of distinctions here. I also agree with the takeaway:
We should be thinking of “being conscious by choice” more as a sort of weird Bene Gesserit witchcraft than as either the default state or as an irrelevant aberration. It is neither the whole of cognition, nor is it unimportant — it is a special power, and we don’t know how it works.
But I suspect there isn’t any concise explanation of “how it works”; thinking is a big complicated machine that works for a long list of messy reasons, with the underlying meta-reason that evolution tried a bunch of stuff and kept what worked.
It also looks increasingly likely that we won’t understand how thinking works until after we’ve built machines that think, just as we can now build machines-that-perceive much better than we understand how perception works.
It seems casually obvious that raccoons and crows engage in deliberate thought. Possibly hunting spiders and octopods do. It’s also obvious at this point that we have more processing power available than hunting spiders do.
Everything biology does has an explanation as concise as the algorithmic complexity of the genome, which isn’t obviously intractable to understand at all of the relevant levels of description.
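To put a rough number on “as concise as the genome” (back-of-the-envelope, using the standard public figure of roughly 3.1 billion base pairs, not anything specific to this discussion):

```python
# Rough upper bound on the genome's raw description length: ~3.1e9 base pairs,
# four possible bases, so 2 bits per base. The true algorithmic complexity is
# lower still, since genomes compress well.
base_pairs = 3.1e9
bits_per_base = 2
megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{megabytes:.0f} MB")  # ~775 MB: an upper bound on the "design spec" biology gets to use
```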
I could believe that. My guess is that the things brains do which are “interpretable” or “explicit” are a very small subset.
OTOH, I’m interested in the possibilities that might arise from building machines that have things similar to “attention” or “memory” or “self-other distinctions”, things which mammalian brains generally have but most machine learners don’t. I think there’s qualitatively new stuff that we’re only beginning to touch.
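For concreteness, here is a minimal sketch, in plain numpy, of the kind of “attention over an external memory” primitive I mean. This is a generic toy of my own, not any particular published architecture, and all the names in it are made up:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory_keys, memory_values):
    """Scaled dot-product read from an external memory.

    query:         (d,)   the model's current state
    memory_keys:   (n, d) one key per stored slot
    memory_values: (n, d) the content retrieved when a slot is attended to
    """
    scores = memory_keys @ query / np.sqrt(query.shape[0])  # similarity of query to each slot
    weights = softmax(scores)                                # soft, differentiable "focus"
    return weights @ memory_values                           # weighted mixture of slot contents

# Toy usage: 4 memory slots, 8-dimensional state.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
state = rng.normal(size=8)
read_vector = attend(state, keys, values)  # what the model "recalls" on this step
```

The interesting part isn’t the few lines of linear algebra; it’s that the read is differentiable, so what to attend to can itself be learned, and that the memory persists across steps in a way a plain feedforward net’s activations don’t.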
I still find myself confused that you don’t seem to think the more complicated architectures provide a plausible guide to what we should expect these things to look like in the brain. I have come to the conclusion that all the parts of intelligence, thinking and blindsight among them, have been tried by ML people at this point, just not at large enough scale or integrated well enough to produce something like us. It continues to seem strange to me that your objection is “they haven’t actually tried the right thing” while you are also optimistic about attention/memory/etc. being the right sort of thing to produce it. Do you think that thinking doesn’t have an obvious construction from available parts? What are the cruxes, the diff of our beliefs here?
I really wish you would give your argument for the claim that we (even plausibly) have all the pieces, Lahwran. I would also love to see an abridged transcript of a discussion wherein the two of you reached a double-crux. My best guess is that Lahwran is thinking of ‘only integrating existing systems’ as a triviality which the market can be expected to automate, rather than what it actually is: a higher-level instance of the design problem.
That said, the idea that thinking has been tried seems so insane to me that I may be failing to steelman it accurately.
I was under the impression that things like “deliberative thinking” and “awareness” haven’t been simulated by ML thus far, so I think that’s the diff between us. That impression isn’t strongly held, though; there are lots of ML advances I may just not have heard of.
An example of what I would mean by thinking: https://arxiv.org/pdf/1705.03633.pdf
Thanks for the paper!
At first I was very surprised that they got such good performance at answering questions about visual scenes (e.g. “what shape is the red thing?” “the red thing is a cube.”)
Then I noticed that they gave ground-truth examples not just for the answers to the questions but to the programs used to compute those answers. This does not sound like the machine “learned to reason” so much as it “learned to do pattern-recognition on examples of reasoning.” When humans learn, they are “trained” on examples of other people’s behavior and words, but they don’t get any access to the raw procedures being executed in other people’s brains. This AI did get “raw downloads of thinking processes,” which I’d consider “cheating” compared to what humans do. (It doesn’t make it any less of an achievement by the paper authors, of course; you have to do easier things before you can do harder things.)
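To make the distinction concrete, here is a toy sketch of the two supervision regimes. None of this is the paper’s actual code; the names and numbers are made up. The point is just that training against ground-truth programs gives a dense, per-step loss on the reasoning trace itself, whereas a system that only ever sees the final answer gets one sparse signal per example.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["scene", "filter_red", "filter_cube", "query_shape", "count"]
gold_program = [0, 1, 3]   # e.g. scene -> filter_red -> query_shape
gold_answer = 2            # e.g. index of "cube" in some answer vocabulary

# Toy stand-in for a question-to-program model: one row of logits per output position.
program_logits = rng.normal(size=(len(gold_program), len(VOCAB)))
# Toy stand-in for the execution engine's answer distribution.
answer_logits = rng.normal(size=5)

def cross_entropy(logits, target):
    logits = logits - logits.max()
    logprobs = logits - np.log(np.exp(logits).sum())
    return -logprobs[target]

# Program supervision: one dense loss term per step of the reasoning trace.
program_loss = sum(cross_entropy(program_logits[t], tok)
                   for t, tok in enumerate(gold_program))

# Answer-only supervision: a single sparse signal for the whole chain of reasoning.
answer_loss = cross_entropy(answer_logits, gold_answer)

print(f"supervision signals per example: program-supervised={len(gold_program)}, answer-only=1")
```

(Learning discrete programs from answers alone would also need something like REINFORCE to get a training signal through the sampling step, which is part of why the dense program supervision is such a large leg up.)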
That seems like weaseling out of the evidence to me. This is just another instance of neural networks learning to do geometric computation that produces hard-edged answers, the way AlphaGo does; that they’re being used to generate programs doesn’t seem especially relevant to that. I certainly agree that it’s not obvious exactly how to get them to learn the space of programs efficiently, but I’d be surprised if that turned out to be different in kind from previous neural-network work. In terms of the kind of problem that learning the internal behavior presents, this doesn’t seem that different to me from attention models.