John did ask about timescales, and my answer was that I had no principled way of knowing the answer to that question and was reluctant to just make one up.
...
As for guessing the timescales, that actually seems to me much harder than guessing the qualitative answer to the question “Will an intelligence explosion occur?”
There is more there; it is best to start here and read all the way down to the bottom of that thread. I think that discussion captures some of the best arguments in favor of friendly AI in the most concise form you can currently find.
How much do members’ predictions of when the singularity will happen differ within the Singularity Institute?
Eliezer Yudkowsky wrote:
...