“Just 50 years?” Shane Legg’s explanation of why his mode is at 2025:
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
If 15 years is more accurate—then things are a bit different.
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I’d recur to CarlShulman’s remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
I agree. There’s still an issue of a lack of concrete research directions at present, but if 15 years is accurate, then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate for the arrival of AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: a 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a 0.5% chance of FOOM before 2020.)
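To put rough numbers on the fat-tails point, here is a minimal sketch. It assumes, purely for illustration, a lognormal distribution over years-until-AGI with a median of 40 years (roughly 2050); the distribution shape and every parameter below are assumptions, not anyone’s actual forecast.

```python
# Toy illustration (not a real forecast): how much probability a median-2050
# timeline places on "AGI within 15 years" depends heavily on how fat the
# distribution is. A lognormal over "years from now until AGI" is assumed
# purely for illustration.
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_within(years: float, median_years: float, sigma: float) -> float:
    """P(arrival <= years) for a lognormal with the given median and log-scale sigma."""
    return normal_cdf((math.log(years) - math.log(median_years)) / sigma)

for sigma in (0.5, 1.0, 1.5):
    p = prob_within(15, median_years=40, sigma=sigma)
    print(f"sigma={sigma}: P(AGI within 15 years) ~ {p:.0%}")
# A narrow spread (sigma=0.5) gives only a few percent; wider spreads
# (sigma=1.0 or 1.5) give roughly 16-26%, which is the kind of contrast
# (15% vs 0.5%) described above.
```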
For what it’s worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn’t automatically good, so we can at least rule out that his predictions are tainted by the “Yay, technology is good, AGI is close!” thinking that casts doubt on the impartiality of most AGI researchers’ and futurists’ predictions. He’s familiar with the field and indeed wrote the book Machine Super Intelligence. I’m more persuaded by Legg’s arguments than most people at SIAI are, though, and although this isn’t a claim that is easily backed by evidence, the people at SIAI are really freakin’ good thinkers and are not to be disagreed with lightly.
I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
I do think it’s sufficiently likely that the people in academia have erred that it’s worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something as striking as a large probability of a superhuman AI within 15 years.
Incidentally, I do think that decisive, paradigm-changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).
A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.
(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it; you overestimate the extent to which scientists ever look outside of their tiny fields of specialization; you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering, much less accepting, an idea as seemingly kooky as ‘human-level AI by 2035’; and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don’t know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that’s on my table top. :D ) Basically, I think you’re giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)
I look forward to the hypothetical post.
As for your “not-so-fair response”: I seriously doubt that you know enough about academia to have any confidence in this view. I think that first-hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
I definitely don’t have the necessary first-hand experience: I was reporting second-hand the impressions of a few people whom I respect but whose insights I’ve yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.
Thanks! I really appreciate it. A big reason for the large number of comments I’ve been barfing up lately is a desire to improve my writing ability so that I’ll be able to make more and better posts in the future.
How do you support this? Have you done a poll of mainstream scientists (or better yet, the ‘best’ ones)? I haven’t seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis, if I remember correctly, so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I’d actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct: AGI will arrive around when Moore’s law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don’t invite biologists, philosophers, or mathematicians to those meetings. Their roadmap, and Moore’s law in general, is the most relevant guide for predicting AGI.
I base my own internal estimate on my own knowledge of the relevant fields—partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical objection.
If you are a materialist, then intelligence is just another algorithm: something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
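As a back-of-envelope illustration of the “Moore’s law makes it so” arithmetic mentioned above: the brain-equivalent figure below sits at the upper end of commonly cited Moravec/Kurzweil-style estimates, and the starting compute and doubling time are pure assumptions chosen only to show the shape of the argument.

```python
# Back-of-envelope sketch of the hardware-driven timing view.
# Every number is an assumption for illustration, not a measurement;
# brain-equivalent compute estimates in particular are heavily disputed.
import math

BRAIN_OPS_PER_SEC = 1e16   # upper end of cited estimates (Moravec ~1e14, Kurzweil ~1e16)
START_YEAR = 2010
START_OPS_PER_SEC = 1e13   # assumed compute available to one project today
DOUBLING_YEARS = 1.5       # assumed Moore's-law doubling time

doublings_needed = math.log2(BRAIN_OPS_PER_SEC / START_OPS_PER_SEC)
arrival_year = START_YEAR + doublings_needed * DOUBLING_YEARS
print(f"doublings needed: {doublings_needed:.1f}")              # ~10
print(f"brain-equivalent hardware around: {arrival_year:.0f}")  # mid-2020s under these assumptions
# Note this says nothing about the software side, which is the point
# pressed in the replies below.
```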
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
Can you give a reference?
This is interesting. I presume, then, that they believe the software aspect of the problem is easy. Why do they believe this?
I have sufficiently little subject-matter knowledge that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about rather than attempting a detailed analysis myself.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind that it is also something of a popular view. Kurzweil’s latest tome was probably not much news for most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post, and his counterview seems to boil down to generalized pessimism, which I don’t find especially illuminating. However, he does raise a good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress on subproblems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now, though perhaps not as much serious computational neuroscience and AGI research as we might like; still, it is proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog: almost like a silent conspiracy against serious academic AGI. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I do not think we can infer their views on this matter from their behavior. Given the general awareness of the meme, I suspect a good portion of academics have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE Singularity issue, just Google something like “IEEE Singularity special issue”. I’m having slow internet at the moment.
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
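For concreteness, the brute-force idea above has the shape of an ordinary evolutionary search loop. The sketch below is a toy version: the “genome” and the fitness function are trivial stand-ins (matching a random bit string) rather than brain-ish networks scored on IQ puzzles, and the real computational cost would be astronomically larger.

```python
# Skeletal evolutionary-search loop of the kind described above.
# The genome and fitness function are placeholders chosen so the script runs;
# they are not meant to resemble cortical models or a VR puzzle world.
import random

GENOME_LEN = 64
POP_SIZE = 200
GENERATIONS = 100
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # stand-in "puzzle solution"

def fitness(genome):
    # Stand-in fitness: number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]   # keep the fittest quarter
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)), "of", GENOME_LEN)
```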
That would have been a pretty naive reply—since we know from public key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
Technically true—I should have said “tractable” or “these types of” rather than “any”. That of course is what computational complexity is all about.
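A quick numeric version of that complexity point: exhaustively searching a 128-bit keyspace, even at an assumed (and generous) 10^18 guesses per second, takes on the order of 10^13 years.

```python
# Why "enough hardware" does not make every problem easy: exhaustive search
# over a 128-bit keyspace at an assumed 1e18 guesses per second.
SECONDS_PER_YEAR = 3.15e7
keyspace = 2 ** 128
guesses_per_second = 1e18   # generous assumption, purely for illustration
years = keyspace / guesses_per_second / SECONDS_PER_YEAR
print(f"~{years:.1e} years")   # on the order of 1e13 years
```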
IMO, the biggest reason we have for thinking that the software will be fairly tractable is that we have an existing working model which we could always just copy, if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the required engineering capability for that level of scanning technology seems pretty far into the future at the moment.
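To give a crude sense of the scale involved in copying without understanding: using commonly cited orders of magnitude for neuron and synapse counts (the per-synapse storage figure is a pure assumption), the connectome alone runs to hundreds of terabytes, and the raw imaging data needed to extract it would be far larger still.

```python
# Crude scale estimate for "copy without understanding" (whole brain emulation).
# Rough, commonly cited orders of magnitude, used only to convey scale.
NEURONS = 8.6e10            # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e3   # order-of-magnitude average (assumption)
BYTES_PER_SYNAPSE = 4       # assumed storage per connection entry

synapses = NEURONS * SYNAPSES_PER_NEURON
connectome_bytes = synapses * BYTES_PER_SYNAPSE
print(f"synapses: ~{synapses:.0e}")                            # on the order of 1e14
print(f"connectome alone: ~{connectome_bytes / 1e12:.0f} TB")  # hundreds of terabytes
# The imaging volume required to trace those connections would be vastly
# larger, which is part of why the scanning technology looks far off.
```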
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds—since machine intelligence is a complex and difficult field—and so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out when they are feeling particularly far-sighted. Expecting major behavioral changes from something 15 years down the line seems a bit unreasonable.
Of course, neither Kurzweil nor Moravec thinks any such thing: both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they expect the required software to be developed.
The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (about 7:00 in). However, he obviously has something to sell, so maybe we should not pay too much attention to his opinion, due to the signalling effects associated with confidence.
Optimist or pessimist?
In his own words: Increased Intelligence, Improved Life.