If UGC is true, then one should doubt that recursive self-improvement will happen in general
To sidetrack a bit, as I’ve argued in a comment, if it turns out that many important problems are practically unsolvable on realistic timescales, any superintelligence would be unlikely to get a strategic advantage. The support for this idea is much more concrete than the speculations in the OP of this thread; for example, there are many problems in economics that we don’t know how to solve efficiently despite having invested a lot of effort.
This is interesting, can you expand on this? I feel there clearly are some arguments in complexity theory against AI as an existential risk, and that these arguments would deserve more attention.
Sure.
There’s a basic argument that if P != NP then we should expect recursive self-improvement to be hard because many of the problems that would be involved in self-improvement (e.g. memory management, circuit design, protein folding) are NP-hard or NP-complete.
In this form, the argument suffers from two problems:
First, it isn’t clear what the constants in question are. There could be an exponential-time algorithm for your favorite NP-complete problem that is nevertheless so efficient on instances of any size that matters that the hardness never bites in practice. It isn’t completely clear how serious this problem is. On the one hand, as of right now, “has a polynomial-time algorithm if and only if it has a practical algorithm” looks like a decent heuristic, even though it has exceptions. On the other hand, even in the last few years we’ve seen important examples where careful improvements to algorithms drastically improve practical running speed; one major example is linear programming.
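To make the point about constants concrete, here is a toy comparison with made-up cost functions (illustrative assumptions only, not drawn from any real algorithm): an “exponential” running time with a tiny constant in the exponent beats a “polynomial” one with a large exponent on every instance size into the hundreds of thousands.

```python
# Toy cost models (illustrative assumptions, not real algorithms):
# an exponential 2**(n/1000) versus a polynomial n**10.
for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    exp_cost = 2 ** (n / 1000)  # "exponential", but tiny constant in the exponent
    poly_cost = n ** 10         # "polynomial", but large exponent
    print(n, "exponential is cheaper" if exp_cost < poly_cost else "polynomial is cheaper")
```

The crossover here happens only above n = 10^5, so on any instance size that shows up in practice the “exponential” algorithm is the one you would actually run.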
Second, most of the NP-complete problems we want to solve in practice are optimization problems. So one might hope that even if a problem is NP-hard, getting close to an optimal solution will still be doable efficiently. Maybe, for most of these problems, even as the instances get large we can efficiently find approximations that are very close to the best possible solutions.
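To fix terminology (a standard definition, not from the original thread): for a minimization problem, an algorithm A is a c-approximation when its output is within a factor c of optimal on every instance; this is the sense in which “a factor of 2” is used below.

```latex
% c-approximation for a minimization problem (c >= 1): for every
% instance x,
A(x) \;\le\; c \cdot \mathrm{OPT}(x)
```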
UGC addresses both of these points: the first in a weak way, and the second in a strong way. Prior to the Unique Games Conjecture, there was a major theorem called the PCP theorem. This is one of the few unambiguously deep results in computational complexity, and it has important consequences saying that in many cases approximation is tough. If UGC is true, approximation becomes tougher still. Consider for example the problem of minimum vertex cover (which shows up in a bunch of practical contexts and is NP-complete). Now, there is an algorithm which will approximate minimum vertex cover within a factor of 2, the algorithm is simple, and it isn’t a bad idea to try to work it out. PCP implies that no polynomial-time algorithm can do better than about 1.36 times the best possible, but that leaves a gap between 2 and 1.36. It turns out that UGC implies that a factor of 2 really is the best possible unless P=NP (more formally, that approximating minimum vertex cover within any constant factor smaller than 2 is NP-hard). This is discussed in some detail (with spoilers for the above recommended exercise) here. This is one of many problems where UGC implies that some fairly straightforward algorithm really is the best possible unless P=NP. More precisely, if UGC is true, then in a wide variety of circumstances one cannot beat the approximations one gets from semidefinite programming.
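For readers who want the spoiler, here is a minimal sketch of one standard 2-approximation (the matching-based one; an illustration, not necessarily the exact algorithm the linked discussion has in mind): greedily pick edges with both endpoints still uncovered and take both endpoints of each.

```python
# Matching-based 2-approximation for minimum vertex cover.
# The picked edges are pairwise disjoint (a matching), and every vertex
# cover must contain at least one endpoint of each picked edge, so the
# returned cover has size at most 2 * OPT.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # this edge is still uncovered
            cover.update((u, v))               # take both endpoints
    return cover

# Example: the path 0-1-2-3 has optimal cover {1, 2}; the sketch returns
# all four vertices, exactly a factor of 2 off.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```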
So, if one believes UGC, then optimization is tough. This primarily addresses issue 2. How does it address issue 1? Well, as a rough heuristic, if all these approximation problems are NP-hard even for very weak approximation factors, and P != NP, then one should expect that tightening one’s demand for closeness of approximation even a tiny amount will cause the constants (if not the asymptotics as well) to shoot up.
This isn’t the entire story: for example, it ignores the ways quantum computing may impact things. Right now, the general consensus is that it is very unlikely that a quantum computer will be able to solve NP-complete problems in polynomial time. Formally, we believe that NP is not contained in BQP. Since BQP contains P, this is a much stronger claim than P != NP. On the other hand, if one believes UGC, and one also believes that NP is not contained in BQP, then one should also believe that many approximate optimization problems cannot be done substantially better on a quantum computer than on a classical one. Some people have also raised the possibility that quantum computers, even if they don’t improve the asymptotics for some problems, may improve the constants: there’s no intrinsic, theoretical, or strong empirical reason to believe this (with the exception of Grover’s algorithm), but it seems like enough of a concern that I consider the idea of letting any nascent AI have access to a functioning quantum computer to be a Really Bad Idea (TM).
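The “much stronger claim” step is a one-line containment argument (standard facts about these classes):

```latex
% P sits inside BQP, so NP escaping BQP forces NP to escape P as well:
\mathsf{P} \subseteq \mathsf{BQP}
\;\Longrightarrow\;
\bigl(\mathsf{NP} \not\subseteq \mathsf{BQP}
\;\Rightarrow\; \mathsf{NP} \not\subseteq \mathsf{P}
\;\Rightarrow\; \mathsf{P} \neq \mathsf{NP}\bigr)
```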
It is also worth noting that even if P=NP in some strong sense, this doesn’t mean that recursive self-improvement can necessarily happen, although it does make it more likely: physical limits and other issues may still come into play. Even with really fast optimization algorithms, one cannot optimize beyond what is actually possible. To put it another way: if you have a bunch of needles in a haystack, then even if you can search haystacks really fast, you won’t find any gold needles if none of the needles are made of gold.
Replying separately (rather than editing), to make sure this comment gets seen.
It is worth noting that even if UGC is not true, weaker versions of it yield many similar results. For example, one can get most of the consequences of UGC if one believes that NP ⊆ P^U/poly, where U is a unique games oracle; one then gets similar results under the assumption that the polynomial hierarchy does not collapse. Also, if instead of believing that unique games is NP-hard one merely believes that it has no polynomial-time algorithm, one still gets some of the same results, or very similar ones.
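For context, the oracle-free prototype of this kind of implication is the Karp–Lipton theorem (a standard result, stated below, not something from the original comment); presumably a relativized variant of it is what connects the assumption above to the polynomial hierarchy not collapsing.

```latex
% Karp--Lipton theorem: if NP has polynomial-size circuits, then the
% polynomial hierarchy collapses to its second level.
\mathsf{NP} \subseteq \mathsf{P}/\mathrm{poly}
\;\Longrightarrow\;
\mathsf{PH} = \Sigma_2^{\mathsf{p}}
```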
Since many people don’t assign that high a probability to UGC, it may be worth asking what evidence should cause us to update to assign a higher probability to UGC (short of a proof). There seem to be four obvious lines of evidence:
First, showing that some problems expected to be NP-intermediate (e.g. factoring, graph isomorphism, ring isomorphism) can be reduced to unique games. (Remark: the obvious candidate here would be graph isomorphism. I’m not aware of such a reduction but my knowledge of the literature here is not very good.) This would be strong evidence for at least some of the weaker versions of UGC, and thus evidence for UGC.
Second, showing that the attempted methods for solving unique games fail for intrinsic reasons that cannot be overcome by those methods. Right now, the two noteworthy methods are Sums of Squares and QAOA. If there are fundamental barriers to variants of either method working, that would make UGC substantially more plausible.
Third, proving more of the inapproximability results implied by UGC by methods independent of UGC. There has been some progress on this, and it may turn out that the bounds implied by UGC are the correct ones even if UGC itself is false. At the same time, as these sorts of results are proved, they make UGC itself less directly useful for estimating whether recursive self-improvement can substantially occur.
Fourth, it may be possible to prove that some unexpected collapse occurs when one is given access to a unique games oracle, a collapse that would be unlikely unless UGC is true. Obvious candidates would be something like NP ⊆ co-NP^U or NP ⊆ P^U/poly.