A bold claim, since no one understands “the algorithms used by the brain”. People have been trying to “understand how intelligence works” for decades with no appreciable progress; all of the algorithms that look “intelligent” (Deep Blue, Watson, industrial-strength machine learning) require massive computing power.
It’s not that bold a claim. It’s essentially the same claim as saying that simulating a brain at the level of quantum electrodynamics requires much more processing power than simulating it at the level of neurons. Or, if you will, that simulating a CPU at the level of silicon takes more compute than simulating it at a functional level, which in turn takes more than running the same algorithm natively.
You don’t think Deep Blue and Watson constitute appreciable progress?
In understanding how intelligence works? No.
Deep Blue just brute-forces the game tree (more or less). Obviously, this is not at all how humans play chess. Deep Blue’s evaluation of a specific position is more “intelligent”, but it’s just hard-coded by the programmers. Deep Blue didn’t think of it.
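To make the “brute-force the game tree, with a hand-coded evaluation” point concrete, here is a minimal depth-limited minimax sketch. The state interface (legal_moves, apply, is_over, material_balance) is hypothetical, and this is plain minimax rather than Deep Blue’s actual search, which added alpha-beta pruning and dedicated hardware:

```python
# Minimal sketch of game-tree search with a hand-coded evaluation.
# The state object and its methods are hypothetical placeholders.

def evaluate(state):
    """Hand-coded heuristic: the 'intelligent'-looking part lives here,
    written by the programmers rather than discovered by the program."""
    return state.material_balance()  # hypothetical helper

def minimax(state, depth, maximizing):
    if depth == 0 or state.is_over():
        return evaluate(state)
    values = [minimax(state.apply(m), depth - 1, not maximizing)
              for m in state.legal_moves()]
    return max(values) if maximizing else min(values)

def best_move(state, depth=4):
    return max(state.legal_moves(),
               key=lambda m: minimax(state.apply(m), depth - 1, maximizing=False))
```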
Watson can “read”, which is pretty cool. But:
1) It doesn’t read very well. It can’t even parse English. It just looks for concepts near each other, and it turns out that the vast quantities of data make up for how terrible it is at reading (see the toy sketch after this list).
2) We don’t really understand how Watson works. The output of a machine-learning algorithm is basically a black box. (“How does Watson think when it answers a question?”)
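As a toy illustration of the “concepts near each other” heuristic described in point 1 (and only that; this is not IBM’s actual pipeline, just a sketch of the kind of proximity scoring being gestured at), here is a scorer that ranks candidate answers by how often they occur within a small window of the question’s keywords in a plain-text corpus:

```python
# Toy co-occurrence scorer: rank candidate answers by how often they
# appear near the question's keywords. Purely illustrative; the corpus,
# candidates, and window size are all made-up knobs.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def score_candidates(question, candidates, corpus, window=10):
    keywords = set(tokenize(question))
    scores = Counter()
    for doc in corpus:
        tokens = tokenize(doc)
        keyword_hits = {i for i, tok in enumerate(tokens) if tok in keywords}
        for cand in candidates:
            cand_tokens = set(tokenize(cand))
            for i, tok in enumerate(tokens):
                if tok in cand_tokens and any(abs(i - k) <= window for k in keyword_hits):
                    scores[cand] += 1
    return scores.most_common()
```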
There are impressive results which look like intelligence, which are improving incrementally over time. There is no progress towards an efficient “intelligence algorithm”, or “understanding how intelligence works”.
I can’t remember right offhand, but there’s some AI researcher (maybe Marvin Minsky?) who pointed out that people use the word “intelligence” to describe whatever humans can do for which the underlying algorithms are not understood. So as we discover more and more algorithms for doing intelligent stuff, the goalposts for what constitutes “intelligence” keep getting moved. I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved. Why was this intellectual surprised? Because he didn’t realize that there were discoverable, implementable algorithms that could be used to accomplish the task of playing chess. And in the same way, there exist algorithms for doing all the other thinking that people do (including inventing algorithms)… we just haven’t discovered and refined them the way we’ve discovered and refined chess-playing algorithms.
(Maybe you’re one of those Cartesian dualists who thinks humans have souls that don’t exist in physical reality and that’s how they do their thinking? Or you hold some other variation of the “brains are magic” position? Speaking of magic, that’s how ancient people thought about lightning and other phenomena that are well-understood today… given that human brains are probably the most complicated natural thing we know about, it’s not surprising that they’d be one of the last natural things for us to understand.)
Hm, that doesn’t sound like an accurate description of all machine learning techniques. Would you consider the output of a regression a black box? I don’t think I would. What’s your machine learning background like, by the way?
Anyway, even if it’s a black box, I’d say it constitutes appreciable progress. It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don’t know exactly how it works.
My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence (e.g. see this interview series). Are you an expert in AI? If not, you are talking with an awful lot of certainty for a layman.
Hofstadter, in Gödel, Escher, Bach?
Not at all. Brains are complicated, not magic. But complicated is bad enough.
In the sense that we don’t understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It’s the difference between being able to make predictions and understanding what’s going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds better light on what’s happening).
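To pin down what “the output of a regression” means here: an ordinary least-squares fit produces explicit coefficients you can read off directly, even if, as argued above, the fit doesn’t explain why those particular values are the ones that work. A minimal numpy sketch with made-up data:

```python
# Fit y ~ x by ordinary least squares and inspect the coefficients.
# The data below is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # "true" slope 3, intercept 2

A = np.column_stack([x, np.ones_like(x)])         # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(slope, intercept)   # the model's entire output: two numbers you can look at
```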
One semester graduate course a few years ago.
The goal is to understand intelligence. We know that chess programs aren’t intelligent; the state space is just luckily small enough to brute force. Watson might be “intelligent”, but we don’t know. We need programs that are intelligent and that we understand.
I agree. My point is that there isn’t likely to be a simple “intelligence algorithm”. All the people like Hofstadter who’ve looked for one have been floundering for decades, and all the progress has been made by forgetting about “intelligence” and carving out smaller areas.
So would you consider this blog post in accordance with your position?
I could believe that coding an AGI is an extremely laborious task with no shortcuts that could be accomplished only through an inordinately large number of years of work by an inordinately large team of inordinately bright people. I argued earlier (without protest from you) that most humans can’t make technological advances, so maybe there exists some advance A such that it’s too hard for any human who will ever live to make, and AGI ends up requiring advance A? This is another way of saying that although AGI is possible in theory, in practice it ends up being too hard. (Or to make a more probable but still very relevant claim, it might be sufficiently difficult that some other civilization-breaking technological advance ends up deciding the fate of the human race. That way AGI just has to be harder than the easiest civilization-breaking thing.)
Here’s a blog post with some AI progress estimates: http://www.overcomingbias.com/2012/08/ai-progress-estimate.html
What? That runs contrary to, like, the last third of the book. Where in the book would one find this claim?
Previous discussion: http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/95ow
I see. He got so focused on the power of strange loops that he forgot that you can do a whole lot without them.
I don’t have a copy handy. I distinctly remember this claim, though. This purports to be a quote from near the end of the book.
Question 4: “Will there be chess programs that can beat anyone?” “No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players.” (http://www.psychologytoday.com/blog/the-decision-tree/201111/how-much-progress-has-artificial-intelligence-made)
A black box is actually a necessary condition for a true AI, I suspect. Understanding a system is inherently more complex than that system’s own thought patterns; otherwise the system couldn’t generate those thoughts in the first place. We can understand the neurons or transistors, but not how they turn senses into thoughts.
Understanding a system’s algorithm doesn’t mean that executing it doesn’t end up way more complicated than you can grasp.
It depends what level of understanding you’re referring to. I mean, in a sense we understand the human brain extremely well. We know when, why, and how neurons fire, but that level of understanding is completely worthless when it comes time to predict how someone is going to behave. That level of understanding we’ll certainly have for AIs. I just don’t consider that sufficient to really say that we understand the AI.
We don’t have that degree of understanding of the human brain, no. Sure, we know physics, but we don’t know the initial conditions, even.
There are several layers of abstraction one could cram between our knowledge and conscious thoughts.
No, what I’m referring to is an algorithm that you completely grok, but whose execution is just too big. A bit like how you could completely specify the solution to the Towers of Hanoi puzzle with 64 disks, but actually doing it is simply beyond your powers.
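The Hanoi example can be made fully explicit: the algorithm fits in a few lines and is completely understandable, yet for 64 disks it specifies 2^64 - 1 (about 1.8 × 10^19) moves, which no one could ever carry out. A minimal sketch:

```python
# Complete solution to Towers of Hanoi: trivial to grok,
# astronomically long to execute for 64 disks.

def hanoi(n, source, target, spare, emit=print):
    """Emit the full sequence of moves for n disks."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, emit)
    emit(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source, emit)

def moves_required(n):
    return 2 ** n - 1   # the recursion above makes exactly this many moves

hanoi(3, "A", "C", "B")      # 7 moves, easy to follow by hand
print(moves_required(64))    # 18446744073709551615 moves: fully specified, utterly infeasible
```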
It’s theoretically possible that an AI could result from that, but it seems vanishingly unlikely to me. I don’t think an AI is going to come from someone hacking together an intelligence in their basement—if it were simple enough for a single human to grok, 50 years of AI research probably would have come up with it already. Simple algorithms can produce complex results, yes, but they very rarely solve complex problems.
We have hardly saturated the likely parts of the space of human-comprehensible algorithms, even with our search power turned way up.
No, but the complete lack of results does constitute reasonably strong evidence, even if it’s not proof. Given that my prior on that is very low (seriously, why would we believe that it’s at all likely that an algorithm so simple a human can understand it could produce an AGI?), my posterior probability is so low as to be utterly negligible.
Humans can understand some pretty complicated things. I’m not saying that the algorithm ought to fit on a napkin. I’m saying that with years of study one can understand every element of the algorithm, with the remaining black boxes being inessential pieces that can be understood by contract (e.g. transistor design, list sorting, floating-point number specifications).
Do you think a human can understand the algorithms used by the human brain to the same level that you’re assuming they can understand a silicon brain?
Quite likely not, since we’re evolved. Humans have taken a distressingly large amount of time to understand FPGA-evolved addition gates.
Evolution is another one of those impersonal forces I’d consider a superhuman intelligence without much prodding. Again, myopic as hell, but it does good work—such good work, in fact, that considering it superhuman was essentially universal until the modern era.
On that note, I’d put very high odds on the first AGI being designed by an evolutionary algorithm of some sort—I simply don’t think humans can design one directly, we need to conscript Azathoth to do another job like his last one.
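For concreteness, here is the skeleton of the kind of evolutionary algorithm being referred to: random mutation plus selection against a fitness function. The genome and fitness function below are toys (maximize the number of 1-bits in a bitstring); nothing here is specific to AGI, it just shows the shape of the loop:

```python
# Minimal evolutionary loop: mutate, evaluate, select, repeat.
# A serious attempt would evolve something far richer than a bitstring
# (circuits, programs, network weights), but the loop looks the same.
import random

GENOME_LEN = 64
POP_SIZE = 50
MUTATION_RATE = 1 / GENOME_LEN

def fitness(genome):
    return sum(genome)                      # toy objective: count of 1-bits

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(POP_SIZE)]
        # keep the fittest POP_SIZE individuals from parents + offspring
        population = sorted(population + offspring, key=fitness, reverse=True)[:POP_SIZE]
    return max(population, key=fitness)

print(fitness(evolve()), "out of", GENOME_LEN)
```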
Is it necessary that we understand how intelligence works for us to know how to build it? This may almost be a philosophical question. A guy who builds race car engines almost certainly knows nothing about the periodic table of elements or the quantum effects behind electron orbitals that explain some of the mechanical properties of the metals used in the engines. Very likely he does not know much thermodynamics, and does not appreciate the interplay between energy and entropy required to make a heat engine produce mechanical power. He possibly knows very little of the chemistry behind the design of the lubricants, or the chemistry involved in storing energy in hydrocarbons and releasing it by oxidizing them.
But I’d sure rather drive a car with an engine he designed in it than a car with an engine designed by a room full of chemists and physicists.
My point being, we may well develop a set of black boxes that can be linked together to produce AI systems for various tasks. Quite a lot will be known by the builders of these AIs about how to put these together and what to expect in certain configurations. But they may not know much about how the eagle-eye-vision core works or how the alpha-chimp-emotional core works, just how they go together and a sense of what to expect as they get hooked up.
Maybe we never have much sense of what goes on inside some of those black boxes. Just as it is hard to picture what the universe looked like before the big bang or at the center of a black hole. Maybe not.
This is definitely an empirical question. I hope it will be settled “relatively soon” in the affirmative by brain emulation.
An empirical question? Most people I know define understanding something as being able to build it. It’s not a bad definition; it limits you to a subset of maps that have demonstrated utility for building things.
I don’t think it is an empirical question; empirically, I think it is a tautology.