If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)? I haven’t seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50⁄50. It’s also worth noting that the IEEE editor was, if I remember correctly, against the Singularity hypothesis, so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, and so on as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I’d actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze its fundamental support in terms of a predictive technological roadmap, not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don’t invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore’s law in general, are the most relevant guides for predicting AGI.
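To make the roadmap-style reasoning concrete, here is a minimal back-of-the-envelope sketch. All three numbers in it are assumptions chosen for illustration: the starting figure for affordable compute, the doubling time, and the brain-equivalent target (published estimates range from roughly 10^14 ops/sec in Moravec’s work to roughly 10^16 in Kurzweil’s).

```python
# Back-of-the-envelope Moore's-law extrapolation (illustrative assumptions only).
# Question: if cost-effective hardware doubles every D years, when does it reach
# a rough "brain equivalent" level of raw processing power?
import math

ops_per_sec_today = 1e13   # assumed ops/sec available to a well-funded project today
brain_equivalent = 1e16    # assumed brain estimate (Moravec ~1e14, Kurzweil ~1e16)
doubling_years = 1.5       # assumed doubling time for cost-effective compute

doublings_needed = math.log2(brain_equivalent / ops_per_sec_today)
years_to_parity = doublings_needed * doubling_years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years to raw hardware parity: {years_to_parity:.0f}")
# Note: this dates only the *hardware*; it says nothing about when the software
# problem gets solved, which is exactly what is disputed in this thread.
```

Under those particular assumptions the crossover lands roughly 15 years out, which is why the hardware-driven view tends to produce timelines in the range discussed in this thread; note that it dates only the hardware, not the software, which is the real point of dispute below.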
I base my own internal estimate on my knowledge of the relevant fields, partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical objection.
If you are a materialist then intelligence is just another algorithm—something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)?
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I haven’t seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50⁄50. It’s also worth noting that the IEEE editor was, if I remember correctly, against the Singularity hypothesis, so there may be some bias there.
Can you give a reference?
I’d actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze its fundamental support in terms of a predictive technological roadmap, not a general poll of scientists.
I have sufficiently little subject-matter knowledge that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about, rather than attempting a detailed analysis myself.
Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil’s latest tome was probably not news to most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post, and his counterview seems to boil down to generalized pessimism, which I don’t find especially illuminating. However, he does raise a good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN (The Singularity Is Near) summarizing progress on subproblems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now, and perhaps not as much serious computational neuroscience and AGI research as we might like, but it is still proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog—almost like a silent conspiracy against serious academic AGI. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I do not think we can infer their views on this matter from their behavior. Given the general awareness of the meme, I suspect a good portion of academics have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE Singularity issue, just Google something like “IEEE Singularity special issue”. I’m on a slow internet connection at the moment.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
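As a very rough sketch of the shape of that proposal, here is a toy evolutionary search loop. Everything in it is a stand-in: the “network” is just a flat parameter vector and the “IQ puzzle” fitness is matching a hidden target, whereas the actual idea above would plug in cortical-circuit models evaluated inside a VR task world.

```python
import random

# Toy stand-in for a "brain-ish network": a flat parameter vector.
# Toy stand-in for "IQ puzzle" fitness: closeness to a hidden target vector.
TARGET = [random.uniform(-1, 1) for _ in range(32)]

def fitness(genome):
    # Higher is better: negative squared error against the hidden target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1, scale=0.2):
    # Randomly perturb a fraction of the parameters.
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-1, 1) for _ in range(32)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]          # keep the top 20%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

The only point of the sketch is the loop itself: evaluate against a task battery, select, mutate, repeat. All of the expense and all of the open questions live inside the fitness function and the networks being evolved, which is why the approach is so hardware-hungry.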
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
That would have been a pretty naive reply—since we know from public-key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.

Technically true—I should have said “tractable” or “these types of” rather than “any”. That, of course, is what computational complexity is all about.

IMO, the biggest reason we have for thinking that the software will be fairly tractable is that we have an existing working model which we could always just copy—if the worst came to the worst.

Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the engineering capability required for that level of scanning technology seems pretty far into the future at the moment.
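To put the public-key point in concrete terms, here is a toy demonstration of the asymmetry, assuming nothing beyond the Python standard library: building a hard instance (multiplying two primes) is effectively free, while undoing it by the naive method (trial-division factoring) takes orders of magnitude longer, and the gap grows rapidly with the size of the primes. This only illustrates the direction of the asymmetry; it is nothing like real cryptography.

```python
import time

# Two moderately sized known primes (the 100,000th and 1,000,000th primes).
p, q = 1_299_709, 15_485_863

start = time.perf_counter()
n = p * q                      # "creating the problem": a single multiplication
build_time = time.perf_counter() - start

def trial_factor(n):
    """Naive factoring by trial division: the 'stupendous hardware' direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

start = time.perf_counter()
factors = trial_factor(n)
solve_time = time.perf_counter() - start

print(f"build: {build_time:.2e}s  solve: {solve_time:.2e}s  factors: {factors}")
```

With real key sizes (hundreds of digits) the naive direction becomes astronomically expensive, which is the sense in which hard problems are cheap to manufacture.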
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds. Machine intelligence is a complex and difficult field, so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out when they are feeling particularly far-sighted. Expecting major behavioral changes in response to something 15 years down the line seems a bit unreasonable.
I’d actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
Of course, neither Kurzweil nor Moravec thinks any such thing—both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they expect the required software to be developed.