Eliezer, good question. Now that I think of it, I realize that my AI article may have been a bit of a bad example to use here—after all, it’s not predicting AI within 50 years as such, but just making the case that the probability of it happening within 50 years is nontrivial. I’m not sure what the “get the deposit back” condition on such a prediction would be...
...but I digress. To answer your question: IBM was estimating that they’d finish building their full-scale simulation of the human brain in 10-15 years. A simulation where parts of a brain could be selectively turned on or off at will, or fed arbitrary sense input, would seem very useful in the study of intelligence. Other projections I’ve seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so. That would seem to allow direct uploading of minds, which again would help considerably in the study of the underlying principles of intelligence. I tacked 30 years onto that 20-year estimate to be conservative—I don’t know how long it will take before people learn to really milk those simulations for everything they’re worth, but modern brain imaging techniques were developed about 15 years ago and are slowly starting to produce some pretty impressive results. 30 years seemed like an okay guess, assuming that the two technologies were comparable and that the development of technology would continue to accelerate. (Then there’s nanotech providing enough computing power to run immense evolutionary simulations and other brute-force approaches to AI, but I don’t really know enough about that to estimate its impact.)
So basically the 50 years was “other people’s projections estimate really promising stuff within 20 years, so to be conservative I’ll tack on as much extra time as possible without losing the point of the article entirely”. ‘Within 50 years or so’ still seemed to put AI within the lifetimes of enough people (or their children) that it might convince them to give the issue some thought.