What facts or observations do you find provide the most compelling evidence that intelligent machines are at least ten years off?
It hasn’t worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on—those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.
I don’t think any of these people are stupid or crazy (which is why I don’t mention Mentifex in the same breath as them), and I wouldn’t try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don’t believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).
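For readers who don’t recognize the reference: AIXI is Hutter’s definition of an idealized reinforcement-learning agent that weights every computable environment by its algorithmic complexity and plans by expectimax. Roughly, in Hutter’s standard notation (this is a sketch of the published definition, not anything specific to the discussion here), the action chosen at cycle k is

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_k + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

where U is a universal monotone Turing machine, q ranges over environment programs, the a_i, o_i, and r_i are actions, observations, and rewards, m is the horizon, and ℓ(q) is the length of q in bits. The 2^{-ℓ(q)} weighting is the algorithmic-information-theory part; the sum over all programs consistent with the history is why AIXI is uncomputable and serves only as a theoretical ideal.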
Indeed, how do you know that the NSA doesn’t have such a machine chained up in its basement right now?
My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.
Besides, the current state of the world is not suggestive of the presence of AIs in it.
ETA: But this is becoming a digression from the purpose of the thread.
Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success in the next ten years.
However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest—or that there will be at some time over the next ten years. For example, Voss’s estimate (from a year ago) was “8 years”—see: http://www.vimeo.com/3461663
We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.
Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden—on national security grounds. After all, if China’s agent found out for sure that America had an agent too, who knows what might happen?
I would guess that the NSA is more interested in quantum computing than in AI.
They are the National Security Agency. Which of those areas presents the biggest potential threat to national security? With a machine intelligence, you could build all the quantum computers you would ever need.
The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work.
This is my sense as well. I also think there is a substantial limit on what we’re likely to learn about the brain, since obvious ethical constraints keep us from studying brain function at large scale, with neuron-level resolution, in real time. Does anyone know of any technologies on the horizon that could change this in the next ten years?
http://lesswrong.com/lw/vx/failure_by_analogy/
From a quote in that post:
“One of [the Middle Ages’] characteristics was that ‘reasoning by analogy’ was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together.”
There’s no reason to spread such myths about medieval history.
The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. Given that reference class of times and places, it would have been a miracle if any significant progress had happened during the Early Middle Ages.
The High and Late Middle Ages, on the other hand, saw plenty of technological and intellectual progress.
I’m much more surprised that the dense, urbanized, and highly literate Roman Empire was so stagnant.
China also springs to mind. I listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time my explanation was authoritarianism.
All that is fine.
But 1) I’m not sure anyone has a good grasp of what properties we’re trying to duplicate. I’m sure some people think they do, and it’s possible someone has stumbled onto the answer, but I’m not sure there is enough evidence to justify any claims of this sort. How exactly would someone figure out what general intelligence is without ever seeing it in action? From the interior experience of being intelligent? From socialization with other intelligences? By analogy to computers?
2) Let’s say we do have, or can come up with, a clear conception of what the AGI project is trying to accomplish without better neuroscience. It still isn’t obvious to me that the way to create intelligence will be easy to derive without more neuroscience. Sure, just from a conception of what flight is, it is possible to come up with solutions to the problem of heavier-than-air flight. But for the most part humans are not that smart. Despite the ridiculous attempts at flight with flapping wings, I suspect having birds to study—weigh, measure, and see in action—sped up the process significantly. The same goes for creating intelligence.
(Prediction: .9 probability you have considered both these objections and rejected them for good reason. And .6 you’ve published something that rebuts at least one of the above. :-)