More explicitly, many problems relevant to recursive self-improvement (circuit design and memory management, for example) involve graph coloring and traveling salesman variants, which are NP-hard or NP-complete. In that context, it could well be that designing new hardware and software will quickly hit diminishing marginal returns. If P, NP, coNP, PSPACE, and EXP are all distinct in a strong sense, then this sort of result is plausible.
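To make the connection concrete, here is a small illustrative sketch of my own (not something from the discussion): register allocation, a standard memory-management problem, is naturally modeled as graph coloring, with variables as nodes, simultaneous liveness as edges, and registers as colors. Optimal coloring is NP-hard, so compilers settle for heuristics like the greedy pass below. The function name and toy interference graph are hypothetical.

```python
def greedy_coloring(interference):
    """Greedy graph coloring. interference maps each variable to the set of
    variables it conflicts with (i.e., is live at the same time as)."""
    color = {}
    # Color higher-degree nodes first -- a common heuristic, not an optimal method.
    for var in sorted(interference, key=lambda v: len(interference[v]), reverse=True):
        used = {color[n] for n in interference[var] if n in color}
        c = 0
        while c in used:
            c += 1
        color[var] = c  # c stands in for the register assigned to var
    return color

# Example: three mutually conflicting variables need three registers.
print(greedy_coloring({"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}))
```

The heuristic can use more colors than an optimal solution would; getting the true minimum in general is exactly the NP-hard part.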
We have already seen quite a bit of software and hardware improvement. We already know that it goes pretty fast.
It would seem to me that, at this point, a lot more attention should be paid to computational complexity and what it has to say about the plausibility of quick recursive self-improvement.
Maybe. Speed limits for technological evolution seem far off to me. The paucity of results in this area so far may mean that bounding progress rates is not an easy problem.
For many practical problems we actually have pretty decent limits. For example, the best algorithms for finding gcds in the integers are provably very close to best possible. Of course, the actual upper bounds for many problems may be far off or they may be near. That’s why this is an argument that we need to do more research into this question, not that it is a slam dunk against runaway self-improvement.
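As a concrete illustration of the gcd example, here is a minimal sketch of the classical Euclidean algorithm; the step-count comment summarizes Lamé's well-known bound, which I take to be the sort of result the "provably close to best possible" claim above refers to.

```python
def gcd(a, b):
    """Classical Euclidean algorithm. By Lamé's theorem the number of division
    steps is O(log min(a, b)) -- an example of a problem whose practical limits
    are quite well understood."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

print(gcd(1071, 462))  # (21, 3): gcd 21, reached in 3 division steps
```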
In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic—and that the rate of progress ramps up with the number of scientists, which does not have hard limits.
Algorithm limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.
Matt Mahoney has looked at that area—though his results so far do not seem terribly interesting to me.
I think one math problem is much more important to progress than all the other ones: inductive inference.
We can see a long history of progress in solving that problem—and I think we can see that the problem extends far above the human level.
One possible issue is whether progress will slow down as we head towards human capabilities. It seems possible (though not very likely) that we are making progress simply by coding our own inductive inference skills into the machines.
In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic—and that the rate of progress ramps up with the number of scientists, which does not have hard limits.
It might ramp up as the number of scientists increases, but there are clear diminishing marginal returns. There are more scientists today at some major research universities than there were at any point in the 19th century. Yet we don’t have people constantly coming up with ideas as big as, say, evolution or Maxwell’s equations. The low-hanging fruit gets picked quickly.
Algorithm limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.
Matt Mahoney has looked at that area—though his results so far do not seem terribly interesting to me.
I agree that Mahoney’s work isn’t so far very impressive. The models used are simplistic and weak.
I think one math problem is much more important to progress than all the other ones: inductive inference.
Many forms of induction are NP-hard and some versions are NP-complete, so these sorts of limits are clearly relevant. Other forms are closely related, for example when one models induction in terms of recognizing pseudorandom number generators. But it seems incorrect to me to identify this as the only issue, or even as necessarily the most important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.
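To illustrate the factoring point, here is a toy sketch of my own (not from the thread): with small numbers anyone can factor an RSA-style modulus and recover the private key, but no known classical algorithm does this efficiently at the key sizes actually deployed, which is why a genuinely fast factoring method would be such a lever for anything with even minimal network access. The specific numbers are the standard textbook toy example, not a real key.

```python
def factor(n):
    """Brute-force trial division -- fine for toy numbers, hopeless for the
    ~2048-bit moduli used in practice."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

n, e = 3233, 17               # tiny textbook RSA public key (n = 61 * 53)
p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (Python 3.8+), recoverable once n is factored
ciphertext = pow(65, e, n)    # encrypt the message 65
print(pow(ciphertext, d, n))  # decrypting with the recovered d prints 65
```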
You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.
Since the very beginning I have wondered why nobody has written down what speaks against that possibility. That is one of the reasons why I even bothered to start arguing against it myself—the trigger was the deletion of a certain post, which made me realize that there is a lot more to this (socially and psychologically) than to the average research project—even though I knew very well that I have neither the necessary background nor the patience to do so in a precise and elaborate manner.
Do people think that skeptical inquiry into, and counterarguments against, an intelligence explosion are not valuable?
You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.
I don’t know about that. The primary issue I’ve talked about as limiting an intelligence explosion is computational complexity. That’s a necessarily technical area. Moreover, almost all the major boundaries are conjectural. If P=NP in a practical way, then an intelligence explosion may be quite easy. There’s also a major danger that in thinking/arguing that this is relevant, I may be engaging in motivated cognition, since there’s an obvious bias toward thinking that things close to one’s own field are somehow relevant.
It might ramp up as the number of scientists increases, but there are clear diminishing marginal returns.
Perhaps eventually—but much depends on how you measure it. In dollar terms, scientists are doing fairly well—there are a lot of them and they command reasonable salaries. They may not be Newtons or Einsteins, but society still seems prepared to pay them in considerable numbers at the moment. I figure that means there is still important stuff that needs discovering.
[re: inductive inference] it seems incorrect to me to identify this as the only issue, or even as necessarily the most important one. If, for example, one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.
As Eray Özkural once said: “Every algorithm encodes a bit of intelligence.” However, some algorithms do so more than others. A powerful inductive inference engine could be used to solve factoring problems—but also a huge number of other problems.