There was discussion about this post on /r/ControlProblem; I agree with these two comments:
If I understood the article correctly, it seems to me that the author is missing the point a bit.
He argues that the explosion has to slow down, but the point is not about superintelligence becoming limitless in a mathematical sense; it's about how far it can actually get before it starts hitting its limits.
Of course, it makes sense that, as the author writes, a rapid increase in intelligence would eventually have to slow down as it approaches hardware and data acquisition limits that keep making its improvement process harder and harder. But that seems almost irrelevant if the actual limits turn out to be high enough for the system to evolve far enough.
Bostrom’s argument is not that the intelligence explosion, once started, would have to continue indefinitely for it to be dangerous.
Who cares if the intelligence explosion of an AI entity will have to grind to a halt before quite reaching the predictive power of an absolute omniscient god.
If it has just enough hardware and data available during the initial phase of the explosion to figure out how to break out of its sandbox and connect to more hardware and data over the net, then it might have enough resources to keep up the momentum and sustain its increasingly rapid improvement long enough to become dangerous. The effects of its recalcitrance increasing sometime further down the road would then not matter much to us.
and
I had the same impression.
He presents an argument about improving the various expressions in Bayes' theorem, and arrives at the conclusion that the agent would need to improve its hardware or interact with the outside world for a potentially dangerous intelligence explosion to occur. My impression was that everyone had already taken that conclusion for granted.
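(For readers who want the reference spelled out: the "expressions" in question are the terms of Bayes' theorem, shown below with their usual names. This gloss is mine, not the commenter's.)

$$ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} $$

Here $P(H)$ is the prior probability of a hypothesis, $P(E \mid H)$ the likelihood of the evidence under that hypothesis, $P(E)$ the marginal probability of the evidence, and $P(H \mid E)$ the resulting posterior; the argument the comment refers to concerns how much an agent could improve its handling of each of these terms.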
Also, I wrote a paper some time back that essentially presented the opposite argument; here's the abstract, in case you're interested in checking it out:
Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more capable could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, I will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how an AI could improve on humans in two major aspects of thought and expertise, namely mental simulation and pattern recognition. I find that although there are very real limits to prediction, it seems like an AI could still substantially improve on human intelligence, possibly even mastering domains which are currently too hard for humans. In practice, the limits of prediction do not seem to pose much of a meaningful upper bound on an AI’s capabilities, nor do we have any nontrivial lower bounds on how much time it might take to achieve a superhuman level of capability. Takeover scenarios with timescales on the order of mere days or weeks seem to remain within the range of plausibility.