While it is true that exponential improvements in computer speed and memory often have the sort of limited impact you are describing, algorithmic improvements are frequently much more helpful. When RSA-129 was published as a factoring challenge, it was estimated that even assuming Moore’s law it would take a very long time to factor (the classic estimate was that it would take on the order of 10^15 years, assuming one could do modular arithmetic operations at one per nanosecond; assuming steady progress of Moore’s law, one got an estimate of hundreds of years at minimum). However, it was factored only a few years later because new algorithms made factoring much, much easier. In particular, the quadratic sieve and the number field sieve are both subexponential. The analogy here is roughly to the jump in Go programs that occurred when the new Monte Carlo methods were introduced.
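To get a feel for why the sieves changed the picture, here is a rough back-of-the-envelope comparison in Python. The constants are omitted and the formulas are only the leading-order L-notation estimates for the quadratic sieve and number field sieve, so treat the outputs as order-of-magnitude illustrations, not precise costs.

```python
import math

# Rough operation counts for factoring an n-bit number (constants omitted,
# so these are only order-of-magnitude illustrations, not precise costs).
def trial_division_ops(bits):
    return 2 ** (bits / 2)          # ~sqrt(N) candidate divisors

def qs_ops(bits):                   # quadratic sieve: L_N[1/2, 1]
    ln_n = bits * math.log(2)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

def nfs_ops(bits):                  # number field sieve: L_N[1/3, ~1.92]
    ln_n = bits * math.log(2)
    return math.exp(1.92 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

bits = 429                          # RSA-129 is a 129-digit (~429-bit) number
for name, f in [("trial division", trial_division_ops),
                ("quadratic sieve", qs_ops),
                ("number field sieve", nfs_ops)]:
    ops = f(bits)
    years = ops / 1e9 / 3.15e7      # at one operation per nanosecond
    print(f"{name}: ~1e{math.log10(ops):.0f} ops, ~1e{math.log10(years):.0f} years")
```

The point of the exercise: doubling hardware speed shaves one year off a Moore's-law wait, while switching from an exponential to a subexponential algorithm knocks dozens of orders of magnitude off the operation count at this size.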
An AI that is a very good mathematician, and can come up with lots of good algorithms, might plausibly go FOOM. For example, if it has internet access and finds a practical polynomial-time factoring algorithm, it will control much of the internet quite quickly. This is not the only example of this sort of problem. Indeed, I place most of the probability mass of an AI going FOOM on the chance that P=NP and that there’s an actually practical fast way of solving NP-complete problems. (ETA: Although note that prior discussion with Cousin It has convinced me that the barrier that P != NP presents might be much smaller than I’d have estimated earlier. But the upshot is that if P=NP then FOOM really does seem plausible.)
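To see why a fast factoring algorithm hands over RSA immediately: once the modulus is factored, the private key falls out of the extended Euclidean algorithm. A toy sketch with textbook-sized numbers (real keys are hundreds of digits, but the arithmetic is identical):

```python
# Toy illustration (tiny numbers, purely schematic): once you can factor the
# public modulus, recovering the RSA private key is a one-liner.
n, e = 3233, 17                     # public key: n = 61 * 53
p, q = 61, 53                       # what a fast factoring algorithm hands you
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent: e^-1 mod phi(n)

message = 65
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)   # with d, the attacker reads everything
assert recovered == message
```

Everything hard about breaking RSA is concentrated in the factoring step; the rest is modular exponentiation.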
I’ll breathe much more easily if we show that P != NP.
The upshot is that improvement in raw computation is not the only thing that can potentially lead to FOOMing.
If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time), but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks. Your concern at this point isn’t really about an AI; it’s just that a universe where worst-case SAT can be practically solved is absolutely unlike the universe we are used to. If anyone has doubts about this assertion I would be happy to argue, although I think it’s an argument that has been made on the internet before.
So I guess maybe you shouldn’t breathe easy, but you should at least have a different set of worries.
In reality I would bet my life against a dollar on the assertion P != NP, but I don’t think this makes it difficult for FOOM to occur. I don’t think the possibility of a FOOMing AI existing on modern computers is even worth debating; the question is just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well.
As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don’t think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.
If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time),
Yes, I’m familiar with these arguments. I find them suggestive but not nearly as persuasive as others seem to. I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update; this number used to be much smaller, and I am willing to discuss my reasons if anyone cares), and a 97% chance that P != NP. Since this is close to my area of expertise, I think I can make these estimates fairly safely.
but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks.
Absolutely not. Humans can’t do good FOOM. We evolved in circumstances where we very rarely had to solve NP-hard or NP-complete problems, and our self-modification system is essentially unconscious. There’s little historical evolutionary incentive to take advantage of fast SAT solving. If one doesn’t believe this, just look at how much trouble humans have with all sorts of very tiny instances of simple computational problems, like multiplying small numbers or factoring small integers (say, under 10 digits).
In reality I would bet my life against a dollar on the assertion P != NP,
Really? In that case we have sharply different probability estimates. Would you care to make an actual bet? Is it fair to say that you are putting an estimate of less than 10^-6 on P=NP?
In reality I would bet my life against a dollar on the assertion P != NP, but I don’t think this makes it difficult for FOOM to occur. I don’t think the possibility of a FOOMing AI existing on modern computers is even worth debating, its just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well.
As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don’t think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.
If an AI can make quantum computers that can do that, then it has so much matter-manipulation ability that it has likely already won (although I doubt that even a reasonably powerful AI could necessarily do this, simply because quantum computers are so finicky and unstable).
But if P=NP in a practical way, RSA cracking is just one of the many things the AI will have fun with. Many cryptosystems, not just RSA, will be vulnerable. The AI might quickly control many computer systems, increasing its intelligence and data input drastically. Many sensitive systems will likely fall under its control. And if P=NP then the AI also has shortcuts to all sorts of other things that could help it, like designing new circuits for it to use (and chip factories are close to automated at this point), and lots of neat biological tricks (protein folding becomes a lot easier, although there seems to be some disagreement, from a computational perspective, about what complexity class general protein folding falls into). And of course, all those secure systems which are on the net but shouldn’t be become far more vulnerable (nuclear power plants, particle accelerators, hydroelectric dams), as do lots of commercial and military satellites. And those are just a handful of the things that my little human mind comes up with without being very creative. Harry James Potter Evans-Verres would do a lot better. (Incidentally, I didn’t remember how to spell his name, so I started typing “Harry James Potter” into Google, and at “Harry James Pot” the third suggestion was for Evans-Verres. Apparently HPMoR is frequently googled.) And neither Harry nor I is as smart as a decently intelligent AI.
I now agree that I was overconfident in P != NP. I was thinking only of failures where my general understanding of and intuition about math and computer science are correct. In fact most of the failure probability comes from the case where I (and most computer scientists) are completely off base and don’t know at all what is going on. I think that worlds like this are unlikely, but probably not 1 in a million.
We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000.
This bet is particularly attractive to me, because if such a fast algorithm for SAT appears I will probably cease to care about the million dollars. My actual probability estimate is somewhere in the ballpark of 10^-6, though it’s hard to be precise about probabilities so small.
Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don’t imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency. I think the step I assume is possible that you don’t is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver.
A similar thought experiment my friends and I have occasionally contemplated is: given a machine which can run any C program in exactly 1 second (or report an infinite loop), how many seconds would it take you to ?
Replying a second time to remind you of this subthread, in case you still have any interest in making a bet. If we change it to degree 7 rather than degree 4, and change the monetary aspects as I outlined, I’m OK with the bet.
We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000.
We may need to adjust the terms. The most worrisome parts of that bet are twofold: 1) I don’t have 10^5 dollars, so on my end, paying $1000 with probability 1/100000, which has the same expectation, probably makes more sense. 2) I’m not willing to agree to that O(n^4), simply because there are many problems in P for which the best known algorithm is much worse than that. For example, the AKS primality test is O(n^6), and deterministic Miller-Rabin might be O(n^4), but only if one makes strong assumptions corresponding to a generalized Riemann hypothesis.
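For anyone checking the arithmetic of the adjusted terms, the two expectations really do match (using exact rationals to avoid floating-point quibbles):

```python
from fractions import Fraction

# The P != NP side's payment under the original and adjusted wager terms.
original = Fraction(10_000) / 1_000_000    # $10000 at probability 1/1000000
adjusted = Fraction(1_000) / 100_000       # $1000 at probability 1/100000
assert original == adjusted == Fraction(1, 100)   # one cent either way
```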
Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don’t imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency.
Not necessarily. While such an algorithm would be useful, many of the more effective uses would only last as long as one kept quiet that one had such a fast SAT solver. So taking full advantage requires some subtlety.
I think the step I assume is possible that you don’t is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver.
There’s a limit to how much you can do with this, since general questions about the properties of algorithms are still undecidable.
I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecideable in ZFC (this is a fairly recent update. This number used to be much smaller. I am willing to discuss reasons for it if anyone cares.) and a 97% chance that P !=NP.
I’m not sure I’m parsing that correctly. Is that 2% for undecidable or undecidable+true? Don’t most people consider undecidability evidence against?
But if P=NP in a practical way, … Many crypto systems not just RSA will be vulnerable.
All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.
I’m not sure I’m parsing that correctly. Is that 2% for undecidable or undecidable+true? Don’t most people consider undecidability evidence against?
2% is undecidable in general. Most of that probability mass is for “there is no polynomial-time algorithm for an NP-complete problem, but that is not provable in ZFC” (obviously one then needs to be careful about what one means by saying such a thing doesn’t exist, but I don’t want to deal with those details here). A tiny part of that 2% is the possibility that there’s a polynomial-time algorithm for some NP-complete problem, but one can’t prove in ZFC that the algorithm runs in polynomial time. That’s such a weird option that I’m not sure what probability to assign it, other than “very unlikely.”
All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.
Actually, no. There are some that would not. For example, one-time pads have been deployed on computer systems (among other methods using USB flash drives to deliver the secure bits.) One-time pads are provably secure. But all public key cryptography would be vulnerable, which means most forms of modern crypto.
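A minimal sketch of why the one-time pad is immune even to a fast SAT solver: the ciphertext is consistent with every plaintext of the same length, so there is nothing to search for.

```python
import secrets

# Minimal one-time pad: XOR with a truly random key as long as the message.
# Provably secure *only* if the key is random, secret, and never reused.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))          # delivered e.g. by USB drive
ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
assert decrypted == message
# A SAT solver doesn't help an eavesdropper: for ANY guessed plaintext of the
# same length, there exists a key consistent with the observed ciphertext.
```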
I forgot about one-time pads, which certainly are deployed, but which I don’t think of as crypto in the sense of “turning small shared secrets into large shared secrets.” My point was that it breaks not just public-key cryptography, but also symmetric cryptography, which tends to be formalizable as equivalent to one-way functions.
P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, if P = NP (in a practical way), then FOOMs are a dime a dozen.
And assuming P != NP, the fact that NP-complete problems aren’t efficiently solvable in the general case doesn’t mean paperclipping the universe is the least bit difficult.
Nothing to do with it at all? I’m curious as to how you reach that conclusion.
P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, If P = NP (in a practical way), then FOOMs are a dime a dozen.
No. There are many fairly natural problems which are not in P. For example, given a specific Go position, does white or black have a win? This is in EXP, and in fact EXP-hard, so it is provably outside P.
The difficulty of FOOMing for most entities, such as humans, is still severe even given practical P=NP. See my discussion with Paul.
And assuming P != NP, the fact that NP-complete problems aren’t efficiently solvable in the general case doesn’t mean paperclipping the universe is the least bit difficult.
I’m puzzled at how you can reach such a conclusion. Many natural problems that an entity trying to FOOM would want to solve are NP-complete. For example, graph coloring comes up in memory optimization, and traveling salesman comes up in circuit design. Now, in a prior conversation I had with Cousin It, he made the excellent point that a FOOMing AI might not need to deal with worst-case instances of these problems. But that is a more subtle issue than what you seem to be claiming.
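To make the graph-coloring example concrete, here is a sketch of the standard reduction of 3-coloring to SAT, solved below by brute force over a toy instance; an entity with a genuinely fast SAT solver would feed the same kind of clauses to it at scale.

```python
from itertools import product

# Encode 3-coloring of a small graph as CNF-SAT. Variable lit(v, c) asserts
# "vertex v gets color c". Brute force stands in for the hypothetical fast
# SAT solver; the clause construction is what actually scales.
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]        # a triangle plus one extra edge
V, C = 4, 3

def lit(v, c):                                   # DIMACS-style variable number
    return v * C + c + 1

clauses = [[lit(v, c) for c in range(C)] for v in range(V)]   # each vertex colored
for u, v in edges:
    for c in range(C):
        clauses.append([-lit(u, c), -lit(v, c)])              # endpoints differ

def brute_force_sat(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        value = lambda l: bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1]
        if all(any(value(l) for l in clause) for clause in clauses):
            return bits
    return None

model = brute_force_sat(clauses, V * C)
assert model is not None                         # this graph is 3-colorable
```

Reading a satisfying assignment back off the variables gives a valid coloring directly, which is the sense in which a fast SAT solver subsumes these optimization subproblems.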
I’m confused or about to display my ignorance. When you write:
There are many fairly natural problems which are not in P. For example, given a specific Go position does white or black have a win? This is in EXP
Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine? I thought that any given string of moves to the end of the game could be evaluated in polynomial (N^2?) time. I thought that the full set of possible strings of moves could be evaluated (naively) in exponential time on a deterministic machine and polynomial time on a nondeterministic machine—so I thought Go position evaluation would be in NP. I think you are saying that it is more costly than that. Are you saying that, and what am I getting wrong?
“Does this position win?” has a structure like “Is there a move such that, for each possible reply there is a move such that, for each possible reply… you win.”—where existential and universal quantifiers alternate in the nesting. In a SAT problem on the other hand you just have a nest of existentials. I don’t know about Go specifically, but that’s my understanding of the usual core difference between games and SAT.
Much appreciated! So the NP solution to SAT is basically an OR over all of the possible assignments of the variables, where here (or for alternating move games in general), we’ve got alternating ORs and ANDs on sequential moves.
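That structure is easy to see in code. A sketch contrasting the pure existential search of SAT with the alternating exists/forall evaluation of a game tree, on a toy two-choice game (the win condition below is made up purely for illustration):

```python
# SAT is one big OR over assignments; a game alternates "there exists a move"
# (OR) with "for every reply" (AND) down the tree.

def sat_wins(choices_left, position, win):
    # Pure existential nesting: is there ANY assignment that wins?
    if choices_left == 0:
        return win(position)
    return any(sat_wins(choices_left - 1, position + [b], win) for b in (0, 1))

def game_wins(moves_left, position, win, my_move=True):
    # Alternating quantifiers: my moves are ORs, the opponent's are ANDs.
    if moves_left == 0:
        return win(position)
    branches = (game_wins(moves_left - 1, position + [b], win, not my_move)
                for b in (0, 1))
    return any(branches) if my_move else all(branches)

win_all_ones = lambda pos: all(pos)
print(sat_wins(3, [], win_all_ones))    # True: the assignment 1,1,1 works
print(game_wins(3, [], win_all_ones))   # False: the opponent just plays a 0
```

The single opposing universal quantifier is enough to flip the answer, which is why game evaluation is believed strictly harder than SAT.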
I’m not sure what you are getting wrong. My initial inclination was to think as you do. But then I thought of this:
You are probably imagining N to be the number of moves deep that need to be searched. Which you probably think is roughly the square of the board size (nominally, 19). The trouble is, what N really means is the size of the specific problem instance. So, it is possible to imagine Go problems on 1001x1001 boards where the number of stones already played is small. I.e. N is much less than a million, but the amount of computation needed to search the tree is on the order of 1000000^1000000.
ETA: This explanation is wrong. Darius got it right.
Much appreciated! I was taking N to be the number of squares on the board. My current thought is that, as you said, the number of possible move sequences on an N square board is of the order of N^N (actually, I think slightly smaller: N!). As you said, N may be much larger than the number of stones already played.
My current understanding is that board size is fixed for any given Go problem. Is that true or false? If it is true, then I’d think that the factor-of-N branching at each step in the tree of moves is just what gets swept into the nondeterministic part of NP.
There are many fairly natural problems which are not in P. For example, given a specific Go position does white or black have a win? This is in EXP
Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine?
If one assumes everything that is conjectured, then yes. To say that it is EXP-hard is to say that it takes exponential time on a deterministic machine. This does not immediately say how much time it takes on a non-deterministic machine. It is not ruled out that NP=EXP, but it is extremely implausible. Also doubted, though more plausible, is that PSPACE=EXP. PSPACE doesn’t care about determinism.
I’m not completely sure what you mean. Darius’s response seems relevant (in particular, you may want to look at the difference between a general non-deterministic Turing machine and an alternating Turing machine). However, there seems to be possibly another issue here: When mathematicians and computer scientists discuss polynomial time, they are talking about polynomials of the length of the input, not polynomials of the input (similarly, for exponential time and other classes). Thus, for example, to say that PRIMES is in P we mean that there’s an algorithm that answers whether a given integer is prime that is time bounded by a polynomial of log_2 p (assuming p is written in base 2).
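To illustrate “polynomial in the length of the input”: trial division on a k-bit number does roughly 2^(k/2) work, while Miller-Rabin does only polynomially many multiplications of k-bit numbers per witness. A sketch (the witness set below is known to make the test deterministic for n below roughly 3.4 * 10^14):

```python
def trial_division_is_prime(n):      # exponential in the bit length of n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def miller_rabin_is_prime(n, witnesses=(2, 3, 5, 7, 11, 13, 17)):
    # Each witness costs O(log n) modular multiplications, i.e. polynomial
    # in the number of digits of n, not in n itself.
    if n < 2:
        return False
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness to compositeness
    return True
```

Both functions answer the same yes/no question; the difference in how their cost grows with the *length* of n is exactly the P-versus-not-P distinction being described.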
There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle’s answers.
There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle’s answers.
How do you know that? What makes you reach that conclusion?
Do you mean an AI that has already FOOMed? If so, this is sort of a trivial claim.
If you are talking about a pre-FOOM AI then I don’t see why you think this is such an obvious conclusion. And the issue at hand is precisely what the AI could do leading up to and during FOOM.
Determining whether white or black wins at GO is certainly not in P (in fact certainly not in NP, I think, if the game can be exponentially long), but in the real world you don’t care whether white or black wins. You care about whether you win in the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against).
There is a compelling argument to be made that any problem you care about is in NP, although in general the constants will be impractical in the same sense that building a computer to simulate the universe is impractical, even if the problem is in P. In fact, this question doesn’t really matter, because P is not actually the class of problems which can be solved in practice. It is a convenient approximation which allows us to state some theorems we have a chance of proving in the next century.
I think the existence of computational lower bounds is clearly of extreme importance to anything clever enough to discover optimal algorithms (and probably also to humans in the very long term for similar reasons). P != NP is basically the crudest such question, and even though I am fairly certain I know which way that question goes the probability of an AI fooming depends on much subtler problems which I can’t even begin to understand. In fact, basically the only reason I personally am interested in the P vs NP question is because I think it involves techniques which will eventually help us address these more difficult problems.
Determining whether white or black wins at GO is certainly not in P (in fact certainly not in NP, I think, if the game can be exponentially long), but in the real world you don’t care whether white or black wins. You care about whether you win in the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against).
Huh? I don’t follow this at all. The question of who would win any fixed game is trivially in NP, because it is doable in constant time. Any single question is always answerable in constant time. Am I misunderstanding you?
Suppose I want to choose how to behave to achieve some goal. Either what genes to put in a cell I am growing or what moves to play in a game of go or etc. Presumably I can determine whether any fixed prescription will cause me to attain my goal—I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P. Thus finding one with the desired property is in NP. The same applies to determining how to build a cell with desired properties, or how to beat the world’s best go player, etc. None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.
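The framing here, that being able to check a candidate in polynomial time makes finding one an NP search problem, can be sketched with subset-sum as a stand-in for “a plan that achieves my goal”:

```python
from itertools import combinations

# Verification vs. search: checking one candidate is cheap and polynomial;
# finding a good candidate is (naively) an exponential search over them.
weights = [3, 34, 4, 12, 5, 2]
target = 9

def verify(certificate):                 # polynomial-time check of one candidate
    return sum(certificate) == target

def search():                            # brute force over all candidates
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            if verify(subset):
                return subset
    return None

solution = search()
assert solution is not None and verify(solution)
```

A fast worst-case SAT algorithm would collapse the gap between `verify` and `search`, which is the sense in which it converts any efficiently checkable goal into an efficiently achievable one.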
I’m having some trouble parsing what you have written.
Presumably I can determine whether any fixed prescription will cause me to attain my goal—I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P.
I don’t follow this line of reasoning at all. Whether a problem is in P is a statement about the length of time it takes in general to solve instances. Also, a problem for this purpose is a collection of questions of the form “for given input N, does N have property A?”
None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.
I’m not sure what you mean by this. First of all, the general consensus is that P != NP. Second of all, in no interpretation is P somehow precisely the set of problems that an AI can solve. It seems you are failing to distinguish between instances of problems and the general problems. Thus, for example, the traveling salesman problem is NP-complete. Even if P != NP, I can still solve individual traveling salesman problems (you can probably solve any instance with fewer than five nodes more or less by hand without too much effort). Similarly, even if factoring turns out not to be in P, it doesn’t mean anyone is going to have trouble factoring 15.
Right, but one of the reasons for the curve of capability is the general version of Amdahl’s law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore’s Law.
But I will grant you that a constructive proof of P=NP would… be interesting. I don’t know that it would enable AI to go foom (that would still have other difficulties to overcome), but it would be of such wide applicability that my argument from the curve of capability against AI foom, would be invalidated.
I share the consensus view that P=NP doesn’t seem to be on the cards, but I agree it would be better to have a proof of P!=NP.
There are plenty of instances where an algorithmic breakthrough didn’t just apply to 5% of a problem, but the major part of it. My field (applied math) is riddled with such breakthroughs.
Yeah, but it’s fractal. For every such problem, there is a bigger problem such that the original one was a 5% subgoal. This is one of the reasons why the curve of capability manifests so consistently: you always find yourself hitting the general version of Amdahl’s law in the context of a larger problem.
For every such problem, there is a bigger problem such that the original one was a 5% subgoal.
Sure, you can always cook up some ad hoc problem such that my perfect 100% solution to problem A is just a measly 5% subcomponent of problem B. That doesn’t change the fact that I’ve solved problem A, with all the ramifications that come along with it. You’re just relabeling things to automate a moving goalpost. Luckily, an algorithm by any other name would still smell as sweet.
I think the point he was trying to make was that the set of expanding subgoals that an AI would have to make its way through would be sufficient to slow it down to within the exponential we’ve all been working with.
Phrased this way, however, it’s a much stronger point and I think it would require more discussion to be meaningful.
Bear in mind that if we zoom in to sufficiently fine granularity, we can get literally infinite speedup—some of my most productive programming days have been when I’ve found a way to make a section of code unnecessary, and delete it entirely.
To show this part of my argument is not infinitely flexible, I will say one algorithmic breakthrough that would invalidate it, would be a constructive proof of P=NP (or a way to make quantum computers solve NP-complete problems in polynomial time—I’m told the latter has been proven impossible, but I don’t know how certain it is that there aren’t any ways around that).
There are no known strong results concerning the relationship between BQP (roughly the analogue of P for quantum computers) and NP. There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.
Right, but one of the reasons for the curve of capability is the general version of Amdahl’s law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore’s Law.
May I suggest that the reasons the proverbial person on the street has heard of Moore’s Law are more that 1) it is easier to understand, and 2) it has a more visibly obvious impact on their lives?
Edit: Also, regarding the 5%, sometimes the entire problem is just in the algorithm. For example, in the one I gave, factoring, the entire problem is whether you can factor integers quickly.
While it is true that exponential improvements in computer speed and memory often have the sort of limited impact you are describing, algorithmic improvements are frequently much more helpful. When RSA-129 was published as a factoring challenge, it was estimated that even assuming Moore’s law it would take a very long time to factor (the classic estimate was that it would take on the order of 10^15 years assuming that one could do modular arithmetic operations at 1 per a nanosecond. Assuming a steady progress of Moore’s law one got an estimate in the range of hundreds of years at minimum.) However, it was factored only a few years later because new algorithms made factoring much much easier. In particular, the quadratic sieve and the number field sieve were both subexponential. The analogy here is roughly to the jump in Go programs that occurred when the new Monte Carlo methods were introduced.
An AI that is a very good mathematician, and can come up with lots of good algorithms might plausibly go FOOM. For example, if it has internet access and has finds a practical polynomial time factoring algorithm it will control much of the internet quite quickly. This is not the only example of this sort of problem. Indeed, I place most of the probability mass of an AI going FOOM on the chance that P=NP and that there’s an actually practical fast way of solving NP complete problems. (ETA: Although note that prior discussion with Cousin It has convinced me that the level of barrier that P !=NP would be might be much smaller than I’d have estimated earlier. But the upshot is that if P=NP then FOOM really does seem plausible.)
I’ll breathe much more easily if we show that P != NP.
The upshot is that improvement in raw computation is not the only thing that can potentially lead to FOOMing.
If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time), but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks. Your concern at this point isn’t really about an AI, its just that a universe where worst-case SAT can be practically solved is absolutely unlike the universe we are used to. If anyone has doubts about this assertion I would be happy to argue, although I think its an argument that as been made on the internet before.
So I guess maybe you shouldn’t breathe easy, but you should at least have a different set of worries.
In reality I would bet my life against a dollar on the assertion P != NP, but I don’t think this makes it difficult for FOOM to occur. I don’t think the possibility of a FOOMing AI existing on modern computers is even worth debating, its just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well.
As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don’t think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.
Yes, I’m familiar with these arguments. I find them suggestive but not nearly as persuasive as others seem to. I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update; this number used to be much smaller, and I am willing to discuss reasons for it if anyone cares), and a 97% chance that P != NP. Since this is close to my area of expertise, I think I can make these estimates fairly safely.
Absolutely not. Humans can’t do good FOOM. We evolved in circumstances where we very rarely had to solve NP-hard or NP-complete problems, and our self-modification system is essentially unconscious. There’s little historical evolutionary incentive to take advantage of fast SAT solving. If one doesn’t believe this, just look at how much trouble humans have with all sorts of very tiny instances of simple computational problems, like multiplying small numbers or factoring small integers (say, under 10 digits).
Really? In that case we have a sharply different probability estimate. Would you care to make an actual bet? Is it fair to say that you are putting an estimate of less than 10^-6 that P=NP?
If an AI can make quantum computers that can do that, then it has so much matter-manipulation ability that it has likely already won (although I doubt that even a reasonably powerful AI could necessarily do this, simply because quantum computers are so finicky and unstable).
But if P=NP in a practical way, RSA cracking is just one of the many things the AI will have fun with. Many cryptosystems, not just RSA, will be vulnerable. The AI might quickly control many computer systems, increasing its intelligence and data input drastically. Many sensitive systems will likely fall under its control. And if P=NP then the AI also has shortcuts to all sorts of other things that could help it, like designing new circuits for it to use (and chip factories are close to automated at this point), and lots of neat biological tricks (protein folding becomes a lot easier, although there seems to be some disagreement about which computational class general protein folding falls into). And of course, all those secure systems on the net which shouldn’t be there become far more vulnerable (nuclear power plants, particle accelerators, hydroelectric dams), as do lots of commercial and military satellites. And those are just a handful of the things that my little human mind comes up with without being very creative. Harry James Potter Evans-Verres would do a lot better. (Incidentally, I didn’t remember how to spell his name, so I started typing “Harry James Potter” into Google, and at “Harry James Pot” the third suggestion is for Evans-Verres. Apparently HPMoR is frequently googled.) And neither Harry nor I is as smart as a decently intelligent AI.
I now agree that I was overconfident in P != NP. I was thinking only of failures where my general understanding of and intuition about math and computer science are correct. In fact most of the failure probability comes from the case where I (and most computer scientists) are completely off base and don’t know at all what is going on. I think that worlds like this are unlikely, but probably not 1 in a million.
We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000.
This bet is particularly attractive to me, because if such a fast algorithm for SAT appears I will probably cease to care about the million dollars. My actual probability estimate is somewhere in the ballpark of 10^-6, though its hard to be precise about probabilities so small.
Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don’t imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency. I think the step I assume is possible that you don’t is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver.
A similar thought experiment my friends and I have occasionally contemplated is: given a machine which can run any C program in exactly 1 second (or report an infinite loop), how many seconds would it take you to ?
Replying a second time to remind you of this subthread in case you still have any interest in making a bet. If we change it to degree 7 rather than degree 4 and changed the monetary aspects as I outlined I’m ok with the bet.
We may need to adjust the terms. The most worrisome parts of that bet are twofold: 1) I don’t have 10^5 dollars, so on my end, paying $1000 with probability 1/100000, which has the same expectation, probably makes more sense. 2) I’m not willing to agree to the O(n^4) bound, simply because there are many problems in P where the best known algorithm is much worse than that. For example, the AKS primality test is O(n^6), and deterministic Miller–Rabin might be O(n^4), but only if one makes strong assumptions corresponding to a generalized Riemann hypothesis.
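(For the record, the adjusted stakes leave the expectation of the wager unchanged; a quick check with exact rationals to avoid floating-point noise:)

```python
from fractions import Fraction

# Original side: pay $10000 with probability 1/1000000.
original_side = Fraction(10_000, 1_000_000)
# Adjusted side: pay $1000 with probability 1/100000.
adjusted_side = Fraction(1_000, 100_000)

assert original_side == adjusted_side == Fraction(1, 100)  # one cent either way
```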
Not necessarily. While such an algorithm would be useful, many of the more effective uses would only last as long as one kept quiet about having such a fast SAT solver. So taking full advantage requires some subtlety.
There’s a limit to how much you can do with this, since general questions about the properties of algorithms are still undecidable.
I’m not sure I’m parsing that correctly. Is that 2% for undecidable or undecidable+true? Don’t most people consider undecidability evidence against?
All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.
2% is undecidable in general. Most of that probability mass is for “there’s no polynomial-time solver for any NP-complete problem, but that is not provable in ZFC” (obviously one then needs to be careful about what one means by saying such a thing doesn’t exist, but I don’t want to have to deal with those details). A tiny part of that 2% is the possibility that there’s a polynomial-time algorithm for solving some NP-complete problem, but one can’t prove in ZFC that the algorithm runs in polynomial time. That’s such a weird option that I’m not sure how small a probability to assign to it, other than “very unlikely.”
Actually, no. There are some that would not. For example, one-time pads have been deployed on computer systems (among other methods using USB flash drives to deliver the secure bits.) One-time pads are provably secure. But all public key cryptography would be vulnerable, which means most forms of modern crypto.
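A minimal sketch of why: one-time-pad encryption is just XOR with a uniformly random key as long as the message, so its security rests on no computational hardness assumption at all.

```python
import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # used once, as long as the message

ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
assert decrypted == message

# Without the key, every plaintext of this length "explains" the ciphertext
# equally well, so no fast SAT solver (or anything else) helps an attacker.
```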
I forgot about one-time pads, which certainly are deployed, but which I don’t think of as crypto in the sense of “turning small shared secrets into large shared secrets.” My point was that breaks not just public-key cryptography, but also symmetric cryptography, which tends to be formalizable as equivalent to one-way functions.
Agreed. (Though it would be kinda cool in the long run if P = NP.)
P vs NP has nothing to do with AI FOOM.
P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, If P = NP (in a practical way), then FOOMs are a dime a dozen.
And assuming P != NP, the fact that NP-complete problems aren’t efficiently solvable in the general case doesn’t mean paperclipping the universe is the least bit difficult.
Nothing to do with it at all? I’m curious as to how you reach that conclusion.
No. There are many fairly natural problems which are not in P. For example: given a specific Go position, does white or black have a win? This is in EXP (and generalized Go is in fact EXPTIME-complete under Japanese rules, so it provably lies outside P).
The difficulty of FOOMing for most entities such as humans even given practical P=NP is still severe. See my discussion with Paul.
I’m puzzled at how you can reach such a conclusion. Many natural problems that an entity trying to FOOM would want to do are NP complete. For example, graph coloring comes up in memory optimization and traveling salesman comes up in circuit design. Now, in a prior conversation I had with Cousin It he made the excellent point that a FOOMing AI might not need to actually deal with worst case instances of these problems. But that is a more subtle issue than what you seem to be claiming.
I’m confused or about to display my ignorance. When you write:
Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine? I thought that any given string of moves to the end of the game could be evaluated in polynomial (N^2?) time. I thought that the full set of possible strings of moves could be evaluated (naively) in exponential time on a deterministic machine and polynomial time on a nondeterministic machine—so I thought Go position evaluation would be in NP. I think you are saying that it is more costly than that. Are you saying that, and what am I getting wrong?
“Does this position win?” has a structure like “Is there a move such that, for each possible reply there is a move such that, for each possible reply… you win.”—where existential and universal quantifiers alternate in the nesting. In a SAT problem on the other hand you just have a nest of existentials. I don’t know about Go specifically, but that’s my understanding of the usual core difference between games and SAT.
Much appreciated! So the NP solution to SAT is basically an OR over all of the possible assignments of the variables, where here (or for alternating move games in general), we’ve got alternating ORs and ANDs on sequential moves.
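A two-variable toy in Python makes the difference concrete (illustrative only): take the “formula” x1 != x2. With one block of existentials it is satisfiable, but with alternating quantifiers the second player can always refute it.

```python
def phi(x1, x2):
    return x1 != x2  # a toy "formula": the two choices differ

# SAT: one block of existentials -- does ANY assignment satisfy phi?
sat = any(phi(x1, x2) for x1 in [False, True] for x2 in [False, True])

# Game: quantifiers alternate -- player 1 picks x1, THEN player 2 picks x2.
game_win = any(all(phi(x1, x2) for x2 in [False, True]) for x1 in [False, True])

# phi is satisfiable (x1=True, x2=False), yet player 1 has no winning
# strategy, because player 2 can always copy player 1's choice.
```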
I’m not sure what you are getting wrong. My initial inclination was to think as you do. But then I thought of this:
You are probably imagining N to be the number of moves deep that need to be searched. Which you probably think is roughly the square of the board size (nominally, 19). The trouble is, what N really means is the size of the specific problem instance. So, it is possible to imagine Go problems on 1001x1001 boards where the number of stones already played is small. I.e. N is much less than a million, but the amount of computation needed to search the tree is on the order of 1000000^1000000.
ETA: This explanation is wrong. Darius got it right.
Much appreciated! I was taking N to be the number of squares on the board. My current thought is that, as you said, the number of possible move sequences on an N square board is of the order of N^N (actually, I think slightly smaller: N!). As you said, N may be much larger than the number of stones already played.
My current understanding is that board size is fixed for any given Go problem. Is that true or false? If it is true, then I’d think that the factor-of-N branching at each step in the tree of moves is just what gets swept into the nondeterministic part of NP.
If one assumes everything that is conjectured, then yes. To say that it is EXP-hard is to say that it takes exponential time on a deterministic machine. This does not immediately say how much time it takes on a non-deterministic machine. It is not ruled out that NP=EXP, but it is extremely implausible. Also doubted, though more plausible, is that PSPACE=EXP. PSPACE doesn’t care about determinism.
I’m not completely sure what you mean. Darius’s response seems relevant (in particular, you may want to look at the difference between a general non-deterministic Turing machine and an alternating Turing machine). However, there seems to be possibly another issue here: when mathematicians and computer scientists discuss polynomial time, they are talking about polynomials in the length of the input, not in the input itself (and similarly for exponential time and other classes). Thus, for example, to say that PRIMES is in P means that there’s an algorithm that answers whether a given integer p is prime in time bounded by a polynomial in log_2 p (assuming p is written in base 2).
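To make that distinction concrete in Python (using a hypothetical 20-bit prime as the example input):

```python
p = 1000003                      # a 20-bit prime
n_bits = p.bit_length()          # the "input length" that complexity bounds refer to

# Trial division needs ~sqrt(p) = 2^(n_bits/2) steps: polynomial in the
# magnitude p, but exponential in the input length n_bits.  "PRIMES is in P"
# asserts a bound polynomial in n_bits, which is a far stronger statement.
trial_steps = int(p ** 0.5)
```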
There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle’s answers.
How do you know that? What makes you reach that conclusion?
Do you mean an AI that has already FOOMed? If so, this is sort of a trivial claim.
If you are talking about a pre-FOOM AI then I don’t see why you think this is such an obvious conclusion. And the issue at hand is precisely what the AI could do leading up to and during FOOM.
Determining whether white or black wins at Go is certainly not in P (in fact not in NP either, I think, if the game can be exponentially long), but in the real world you don’t care whether white or black wins in general. You care about whether you win the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against).
There is a compelling argument to be made that any problem you care about is in NP, although in general the constants will be impractical in the same sense that building a computer to simulate the universe is impractical, even if the problem is in P. In fact, this question doesn’t really matter, because P is not actually the class of problems which can be solved in practice. It is a convenient approximation which allows us to state some theorems we have a chance of proving in the next century.
I think the existence of computational lower bounds is clearly of extreme importance to anything clever enough to discover optimal algorithms (and probably also to humans in the very long term for similar reasons). P != NP is basically the crudest such question, and even though I am fairly certain I know which way that question goes the probability of an AI fooming depends on much subtler problems which I can’t even begin to understand. In fact, basically the only reason I personally am interested in the P vs NP question is because I think it involves techniques which will eventually help us address these more difficult problems.
Huh? I don’t follow this at all. For any fixed game, the question of who wins is trivially in NP, because it is answerable in constant time. Any single question is always answerable in constant time. Am I misunderstanding you?
Suppose I want to choose how to behave to achieve some goal. Either what genes to put in a cell I am growing or what moves to play in a game of go or etc. Presumably I can determine whether any fixed prescription will cause me to attain my goal—I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P. Thus finding one with the desired property is in NP. The same applies to determining how to build a cell with desired properties, or how to beat the world’s best go player, etc. None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.
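Here is a toy Python version of that reduction, using subset-sum as a stand-in for the goal (hypothetical numbers): checking a proposed certificate is one cheap pass, and “solving” is nothing but a search over exponentially many certificates.

```python
from itertools import combinations

weights, target = [3, 34, 4, 12, 5, 2], 9

def verify(candidate):
    """Checking a certificate is easy: one pass over it (the 'in P' part)."""
    return sum(candidate) == target

# The search ranges over all 2^len(weights) subsets (the 'in NP' part):
# exponentially many candidates, each cheap to check.
witness = next(
    (subset
     for r in range(len(weights) + 1)
     for subset in combinations(weights, r)
     if verify(subset)),
    None)
```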
I’m having some trouble parsing what you have written.
I don’t follow this line of reasoning at all. Whether a problem is in P is a statement about the length of time it takes in general to solve instances. Also, a problem for this purpose is a collection of questions of the form “for given input N, does N have property A?”
I’m not sure what you mean by this. First of all, the general consensus is that P != NP. Second of all, under no interpretation is P precisely the set of problems that an AI can solve. It seems you are failing to distinguish between instances of problems and the general problems. Thus, for example, the traveling salesman problem is NP-complete. Even if P != NP, I can still solve individual traveling salesman instances (you can probably solve any instance with fewer than five nodes more or less by hand, without too much effort). Similarly, even if factoring turns out not to be in P, it doesn’t mean anyone is going to have trouble factoring 15.
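For instance, a five-node traveling salesman instance (hypothetical distances) falls to brute force in a few lines of Python, even though the (n-1)! growth of the same search makes it hopeless for large n:

```python
from itertools import permutations

# Symmetric distance matrix for 5 cities (made-up numbers).
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_length(tour):
    """Total length of a closed tour visiting every city once."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# Fix city 0 as the start: only 4! = 24 tours to try at this size.
best = min(([0] + list(p) for p in permutations(range(1, 5))), key=tour_length)
```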
Right, but one of the reasons for the curve of capability is the general version of Amdahl’s law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore’s Law.
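In symbols: if a fraction p of the work gets a speedup of s, the overall speedup is 1/((1-p) + p/s), capped at 1/(1-p) no matter how large s is. A quick Python check:

```python
def overall_speedup(p, s):
    """Amdahl's law: fraction p of the work is sped up by a factor of s."""
    return 1 / ((1 - p) + p / s)

# An effectively infinite speedup on a 5% subtask still yields barely
# a 5% overall gain, since the other 95% of the work is untouched.
cap = 1 / (1 - 0.05)                 # ~1.0526, the best possible
huge = overall_speedup(0.05, 1e12)
assert abs(huge - cap) < 1e-9
```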
But I will grant you that a constructive proof of P=NP would… be interesting. I don’t know that it would enable AI to go foom (that would still have other difficulties to overcome), but it would be of such wide applicability that my argument from the curve of capability against AI foom, would be invalidated.
I share the consensus view that P=NP doesn’t seem to be on the cards, but I agree it would be better to have a proof of P!=NP.
There are plenty of instances where an algorithmic breakthrough didn’t just apply to 5% of a problem, but the major part of it. My field (applied math) is riddled with such breakthroughs.
Yeah, but it’s fractal. For every such problem, there is a bigger problem such that the original one was a 5% subgoal. This is one of the reasons why the curve of capability manifests so consistently: you always find yourself hitting the general version of Amdahl’s law in the context of a larger problem.
Sure, you can always cook up some ad hoc problem such that my perfect 100% solution to problem A is just a measly 5% subcomponent of problem B. That doesn’t change the fact that I’ve solved problem A, with all the ramifications that come along with it. You’re just relabeling things to automate a moving goal post. Luckily, an algorithm by any other name would still smell as sweet.
I think the point he was trying to make was that the set of expanding subgoals that an AI would have to make its way through would be sufficient to slow it down to within the exponential we’ve all been working with.
Phrased this way, however, it’s a much stronger point and I think it would require more discussion to be meaningful.
Bear in mind that if we zoom in to sufficiently fine granularity, we can get literally infinite speedup—some of my most productive programming days have been when I’ve found a way to make a section of code unnecessary, and delete it entirely.
To show this part of my argument is not infinitely flexible, I will say one algorithmic breakthrough that would invalidate it, would be a constructive proof of P=NP (or a way to make quantum computers solve NP-complete problems in polynomial time—I’m told the latter has been proven impossible, but I don’t know how certain it is that there aren’t any ways around that).
There are no known strong results concerning the relationship between BQP (roughly the analogue of P for quantum computers) and NP. There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.
Presumably because P = NP would imply that NP is contained in BQP, so you can’t believe the first of your statements without believing the second.
It’s not even known if NP contains BQP?
No. The best we can do is that both contain BPP and are contained in PP, as far as I recall.
And there exist oracles relative to which BQP is not contained in MA (which contains NP).
May I suggest that the reasons the proverbial person on the street has heard of Moore’s Law are more that 1) it is easier to understand, and 2) it has a more visibly obvious impact on their lives?
Edit: Also, regarding the 5%: sometimes the entire problem is just in the algorithm. For example, in the one I gave, factoring, the entire problem is whether you can factor integers quickly.