Stephen Hawking, Martin Rees, Max Tegmark, Nick Bostrom, Michio Kaku, David Chalmers and Robin Hanson are all smart people who broadly agree that >human AI in the next 50-100 years is reasonably likely (they’d all give p > 10% to that with the possible exception of Rees). On the con side, who do we have? To my knowledge, no one of similarly high academic rank has come out with a negative prediction.
Edit: See Carl’s comment below. Arguing from majoritarianism against a significant chance of AI this century is becoming less tenable, as a substantial set of experts comes down on the “yes” side.
It is notable that I can’t think of any very reputable nos. The only ones that come to mind are Jaron Lanier and Glenn Zorpette.
10% is a low bar: it would require a dubiously high level of confidence to rule out AI over a 90-year time frame (longer than the time since Turing and von Neumann and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would put the average estimate of the group you mention at over one third by 2100. Chalmers says AI is more likely than not by 2100, I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks but have not given unambiguous estimates).
Here’s Ben Goertzel’s survey. I think that Dan Dennett’s median estimate is over a century out, although I suspect he would agree at the 10%-by-2100 level. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of the time to human-level AI from the 21st century to the mid-to-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.
None of those people are AI theorists, so it isn’t clear that their opinions should get that much weight given that it is outside their area of expertise (incidentally, I’d be curious what citation you have for the Hawking claim). From the computer scientists I’ve talked to, the impression I get is that they see AI as such a failure that most of them just aren’t bothering to do much research in it beyond narrow-purpose machine learning or expert systems. There’s also an issue of sampling bias: the people who think a technology is going to work are generally louder about it than the people who think it won’t. For example, a lot of physicists are very skeptical of tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the ones who think they will be practical.
Note also that nothing in Yoreth’s post actually relied on or argued that there won’t be moderately smart AI, so it doesn’t go against what he’s said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson, do believe that some form of intelligence-explosion-like event will occur). Indeed, Yoreth’s second argument applies at roughly any level of intelligence. So overall, I don’t think the point about those individuals does much to address the argument.
I disagree with this, basically because AI is a pre-paradigm science. Having been at a big CS/AI dept, I know that the amount of accumulated wisdom about AI is virtually nonexistent compared to that for physics.
What does an average AI prof know that a physics graduate who can code doesn’t know? I’m struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.
The only examples of genuine scientific insight in AI I have seen are in the works of Pearl, Hutter, Drew McDermott and, more recently, Josh Tenenbaum.
That’s a very good point. The AI theorist presumably knows more about the avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn’t likely to have much more general knowledge, although that history does give the AI researcher a better understanding of how many different approaches to AI have failed miserably. But that’s just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth clearly have knowledge bases closer to that of the AI prof than to the physics grad student; Hanson certainly has looked a lot at various failed attempts at AI. I think I’ll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.
Upvoted for honest debating!
So people with no experience programming robots, but who know the equations governing them, would just be able to come up with code comparable to the AI profs’ on the spot? What do they teach in AI courses, if not the kind of thing that would make you better at this?
How to code, and rookie Bayesian stats/ML, plus some other applied stuff, like statistical Natural Language Processing (this being an application of the ML/stats stuff, but there are some domain tricks and tweaks you need).
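To give a concrete flavour of the “rookie Bayesian stats/ML” I mean, here is a minimal sketch (my own toy example, with made-up data) of the kind of exercise an intro AI/NLP course assigns: a naive Bayes text classifier with add-one smoothing, in Python.

    # Toy naive Bayes text classifier with add-one (Laplace) smoothing.
    # Illustrative only: tiny hand-made corpus, no real NLP preprocessing.
    from collections import Counter, defaultdict
    import math

    train = [("buy cheap pills now", "spam"),
             ("meeting agenda for monday", "ham"),
             ("cheap cheap offer buy now", "spam"),
             ("monday lunch with the team", "ham")]

    class_counts = Counter(label for _, label in train)
    word_counts = defaultdict(Counter)              # word_counts[label][word]
    vocab = set(w for text, _ in train for w in text.split())
    for text, label in train:
        word_counts[label].update(text.split())

    def log_posterior(text, label):
        # log P(label) + sum over words of log P(word | label)
        total = sum(word_counts[label].values())
        lp = math.log(class_counts[label] / len(train))
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp

    def classify(text):
        return max(class_counts, key=lambda label: log_posterior(text, label))

    print(classify("cheap pills offer"))            # -> "spam" on this toy data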
The point is that there would only be experience, not theory, separating someone who knew Bayesian stats, coding and how to do science from an AI “specialist”. Yes, there are little shortcuts and details that a PhD in AI would know, but really there’s no massive intellectual gulf there.
I am gratified to find that someone else shares this opinion.
A better way to phrase the question might be: what can an average AI prof do that a physics graduate who can code can’t?
Each prof will, of course, have a niche app that they do well (in fact sometimes there is too much pressure to have a “trick” you can do to justify funding), but the key question is: are they more like a software engineer masquerading as a scientist than a real scientist? Do they have a paradigm and theory that enables thousands of engineers to move into completely new design-spaces?
I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science; it is just statistics correctly applied (a small worked example below).
I have seen some instances of people trying to push forward the frontier, such as the work of Hutter, but it is very rare.
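To make the “statistics correctly applied” point concrete, a standard textbook example (my choice of illustration, not anything specific to the researchers named above) is that ordinary least-squares regression is nothing but maximum-likelihood estimation under a Gaussian noise model. Assume y_i = w^T x_i + eps_i with eps_i ~ N(0, sigma^2); then

    \log p(y \mid X, w) = -\frac{1}{2\sigma^2} \sum_i (y_i - w^\top x_i)^2 + \text{const},

so maximizing the likelihood in w is exactly minimizing the squared error, i.e. least squares; putting a Gaussian prior on w and taking the MAP estimate gives ridge regression in the same way. Much of the ML toolkit unpacks like this.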
Statistics vs machine learning: FIGHT!
Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a “Hutter enthusiast”, but I eventually concluded that his entire work is:
“Here are a few general algorithms that are really good, but take way too long to be of any use whatsoever.”
Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?
I think that the way of looking at the problem that he introduced is the key, i.e. thinking of the agent and environment as programs. The algorithms (AIXI, etc) are just intuition pumps.
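A rough sketch of what that framing looks like in code (a deliberately trivial toy of my own, not AIXI itself, which is incomputable): the environment is just a program mapping actions to observations and rewards, the agent is a program mapping observations to actions, and questions about intelligence become questions about this loop.

    # Toy agent/environment-as-programs loop in the Hutter-style framing.
    # Not AIXI (which is incomputable); just the interaction protocol.

    def environment(action, state):
        """Environment program: reward the agent for guessing a hidden bit."""
        hidden = state["hidden"]
        reward = 1.0 if action == hidden else 0.0
        observation = hidden                    # fully observable, for simplicity
        return observation, reward, state

    def agent(observation, memory):
        """Agent program: repeat the last observation as the next action."""
        action = observation if observation is not None else 0
        return action, memory

    state, memory, obs = {"hidden": 1}, {}, None
    total_reward = 0.0
    for t in range(10):
        act, memory = agent(obs, memory)
        obs, reward, state = environment(act, state)
        total_reward += reward
    print(total_reward)                          # 9.0: the agent locks on after one step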
Surely everyone has been doing that from the beginning.
This seems like a fairly reasonable description of the work’s impact:
“Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area.”
http://www.vetta.org/2010/03/agi-10-and-fhi/
But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter’s intelligence definition, which would distinguish it from a mere mass frenzy?
No time for a long explanation from me, but “universal intelligence” seems important partly because it shows how simple an intelligent agent can be if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.
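For reference, the Legg-Hutter “universal intelligence” measure being discussed is, roughly,

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi ,

where \pi is the agent, E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the agent’s expected (discounted) reward in \mu. The 2^{-K(\mu)} weighting is where the compression intuition comes from: simpler, more compressible environments carry more weight, so an agent scores well by exploiting compressible structure.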
Machine learning, more math/probability theory/belief networks background?
A good physics or math grad who has done Bayesian stats is at no disadvantage on the machine learning stuff, but what do you mean by “belief networks background”?
Do you mean “deep belief networks”?
There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental stuff in this range has applications to physics, and some of it was known in physics before being reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it’s more than the standard curriculum requires. On the other hand, it’s much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular adds intuitions that a physics grad won’t necessarily possess.
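As one small, concrete example of the sort of thing I mean (my own toy illustration): exact inference by enumeration in a two-node belief network, the kind of exercise a machine-learning course drills but a standard physics curriculum usually doesn’t.

    # Two-node belief network: Rain -> WetGrass.
    # Compute P(Rain | WetGrass = True) by enumeration (toy numbers).
    P_rain = {True: 0.2, False: 0.8}
    P_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=True | Rain)

    def posterior_rain_given_wet():
        # Joint terms P(Rain = r) * P(WetGrass = True | Rain = r), then normalise.
        joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
        evidence = sum(joint.values())           # P(WetGrass = True)
        return {r: joint[r] / evidence for r in joint}

    print(posterior_rain_given_wet())
    # {True: 0.692..., False: 0.307...}  i.e. 0.18 / (0.18 + 0.08)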
Re: “What does an average AI prof know that a physics graduate who can code doesn’t know? I’m struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.”
A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.
The AI prof is more likely to know which things don’t work, and how difficult it is to find things that do. Which is useful knowledge when predicting the speed of AI development, no?
Which things?
Trying to model the world as crisp logical statements a la block worlds for example.
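Something like this, schematically (a made-up minimal sketch, not any particular planner’s syntax): the world is a set of crisp, all-or-nothing facts, with no room for uncertainty or partial belief.

    # Blocks-world style "crisp" world model: facts are either asserted or absent.
    world = {("on", "A", "B"), ("on", "B", "Table"), ("clear", "A")}

    def can_pick_up(block, facts):
        # A hard, brittle precondition: legal only if the block is clear.
        return ("clear", block) in facts

    print(can_pick_up("A", world))   # True
    print(can_pick_up("B", world))   # False: A is sitting on B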
That being in the “things that don’t work” category?
Yup… which things were you asking for? Examples of things that do work? You don’t actually need to find them to know that they are hard to find!
I think Hofstadter could fairly be described as an AI theorist.
So could Robin Hanson.
Dan Dennett and Douglas Hofstadter don’t think machine intelligence is coming anytime soon. Those folk actually know something about machine intelligence, too!