The following quoted texts are from this post by Scott Alexander:
Alan Turing:
Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious tolerance since the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control.
[EDIT: a similar text, attributed to Alan Turing, appears here (starting from the last paragraph) and is continued here.]
I. J. Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
[EDIT: I didn’t manage to verify it yet, but it seems that the last quote is from a 58-page paper by I. J. Good, titled Speculations Concerning the First Ultraintelligent Machine; here is an archived version of the broken link in Scott’s post.]