Perhaps you mean this, but the probabilities involved in the argument are different. (Human-level AI will be built within 100 years; the AI will be able to undergo recursive self-improvement in intelligence; this intelligence explosion will unpredictably transform our world.)
The original article; the LW link post.
Thanks, this is what I was looking for.