10% at 2030. 50% at 2050. 90% at 2082 (the year I turn 100).
The probability that the Singularity Institute fails in the bad way? Hmm. I’d say 40%.
Hours, 5%. Days, 30%. Less than 5 years, 75%. If it can’t do it in the time it takes your average person to make it through high school, then I don’t think it will be able to do it at all. Either that, or in some sense it isn’t really trying.
Much more. I don’t think we have too many chefs in the kitchen at this point.
I seriously don’t know. It seems like a very open question, like asking whether a bear is more dangerous than a tiger. Are we talking worst case? Then no, I think they both end the same for humans. Are we talking likely case? Then I don’t know enough about nanotech or AI to say.
Realistically? I suppose if, in the future, a consumer-grade computer had the computational power of our current best supercomputer, and there were some equivalent of the X-Prize for developing a human-level AI, I would expect someone to win the prize within 5 years.