How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)?
I have not done a poll of mainstream scientists. Aside from Shane Legg, the only mainstream scientist I know of who has written on this subject is Scott Aaronson, in his article “The Singularity Is Far”.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to it: they are not behaving as I would expect them to if they believed that AGI is 15 years off.
I haven’t seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It’s also worth noting that the IEEE editor was, if I remember correctly, against the Singularity hypothesis, so there may be some bias there.
Can you give a reference?
I’d actually guess that at this point in time a significant chunk of the intelligentsia of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct: AGI will arrive around when Moore’s law makes it so.
This is interesting. I presume, then, that they believe the software aspect of the problem is easy. Why do they believe this?
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze its fundamental support against a predictive technological roadmap, not a general poll of scientists.
I have sufficiently little subject-matter knowledge that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about, rather than attempting a detailed analysis myself.
Aside from Shane Legg, the only mainstream scientist I know of who has written on this subject is Scott Aaronson, in his article “The Singularity Is Far”.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view: Kurzweil’s latest tome was probably not much news to most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post, and his counterview seems to boil down to generalized pessimism, which I don’t find especially illuminating. However, he does raise a good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN (The Singularity Is Near) summarizing progress on the subproblems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now; perhaps not as much serious computational neuroscience and AGI research as we might like, but it is still proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog: almost like a silent conspiracy against serious academic AGI research. Nonetheless, there appears to be no stigma against the precursor fields, which is where one needs to start anyway.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to it: they are not behaving as I would expect them to if they believed that AGI is 15 years off.
I do not think we can infer their views on this matter from their behavior. Given the general awareness of the meme, I suspect a good portion of academics have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE Singularity issue, just google something like “IEEE Singularity special issue”; I’m on a slow connection at the moment.
This is interesting. I presume, then, that they believe the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways; for much of the missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical-circuit models, throw them into a massive VR game-world simulation that poses increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
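To give a concrete, if cartoonish, sense of what that evolutionary-search loop looks like, here is a minimal Python sketch. The “network” is reduced to a bit string and the “IQ puzzle” to a hidden target pattern; all the names and parameters here are illustrative assumptions, not a claim about what a real setup would require.

```python
# Toy evolutionary search: evolve bit strings toward a hidden "puzzle"
# pattern. Purely illustrative; a real attempt would evolve network
# parameters against a curriculum of tasks, not a fixed target.
import random

GENOME_LEN = 64       # stand-in for a network's parameters
POP_SIZE = 200
GENERATIONS = 300
MUTATION_RATE = 0.01

random.seed(0)
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # the "puzzle"

def fitness(genome):
    # Score: how much of the puzzle this candidate gets right.
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    def pick():
        # Tournament selection: the fittest of five random candidates.
        return max(random.sample(population, 5), key=fitness)
    population = [mutate(crossover(pick(), pick())) for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == GENOME_LEN:
        print(f"puzzle fully solved at generation {gen}")
        break
else:
    print(f"best score after {GENERATIONS} generations: "
          f"{fitness(best)}/{GENOME_LEN}")
```

The brute-force proposal above differs from this toy mostly in scale: swap the bit strings for cortical-circuit models and the fixed target for increasingly hard puzzles, then spend vastly more compute.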
This is interesting. I presume, then, that they believe the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
That would have been a pretty naive reply, since we know from public-key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
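A toy illustration of that asymmetry (deliberately not real cryptography, with tiny numbers): constructing the problem costs one multiplication, while the naive brute-force attack below takes time that grows exponentially with the size of the primes.

```python
# Making a problem is cheap; breaking it by brute force is not.
# Tiny primes on purpose; real RSA moduli are ~2048 bits.
import time

def is_prime(k):
    if k < 2 or k % 2 == 0:
        return k == 2
    d = 3
    while d * d <= k:
        if k % d == 0:
            return False
        d += 2
    return True

def next_prime(k):
    while not is_prime(k):
        k += 1
    return k

def factor(n):
    # Naive trial division: the "enough hardware" approach.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

for bits in (16, 20, 24):
    p = next_prime(2 ** bits + 1)
    q = next_prime(2 ** bits + 101)
    n = p * q                      # building the puzzle: one multiplication
    start = time.perf_counter()
    factor(n)                      # breaking it: exponentially more work
    print(f"~{2 * bits}-bit modulus factored in "
          f"{time.perf_counter() - start:.4f}s")
```

Each extra bit in the primes roughly doubles the trial-division work, which is exactly the kind of hardware-swamping asymmetry public-key systems rely on.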
IMO, the biggest reason we have for thinking that the software will be fairly tractable is that we have an existing working model which we could always just copy, if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole-brain scanning and emulation) is possible in theory, but the engineering capability required for that level of scanning technology seems pretty far in the future at the moment.
A poll of mainstream scientists sounds like a poor way to get an estimate of the arrival date of “human-level” machine minds: machine intelligence is a complex and difficult field, so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out when they are feeling particularly far-sighted. Expecting major behavioral changes in response to something 15 years down the line seems a bit unreasonable.
That would have been a pretty naive reply, since we know from public-key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
Technically true; I should have said “tractable” or “these types of” rather than “any”. That, of course, is what computational complexity is all about.