So, I can think of several different things that could all be the ‘protein folding problem.’
Figure out the trajectory an unfolded protein takes towards a folded protein, with a known starting state. (P)
Given a known folded protein, find local minima that unfolded proteins starting with random start states might get stuck in. (NP, probably)
Given a desired chemical reaction, find a folded protein that will catalyze the reaction. (Not sure, probably NP.)
Given a desired chemical reaction, find a folded protein that will catalyze the reaction and that is the local minimum reached by most arbitrary unfolded positions for that protein. (Optimal is definitely NP, but I suspect feasible is too.)
Others. (Here, the closest I get to molecular nanotech is ‘catalyze reactions,’ but I imagine the space for ‘build a protein that looks like X’ might actually be smaller.)
It looks to me like the problems here that have significant returns are NP.
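To make concrete why the search framings above blow up, here is a minimal illustrative sketch (mine, not any commenter’s; the 2D lattice is a deliberately crude stand-in for real protein chemistry): counting self-avoiding walks as a proxy for the number of conformations an n-link chain could adopt.

```python
# Toy illustration, not a real protein model: count self-avoiding walks on a
# 2D lattice as a crude proxy for the number of conformations an n-link chain
# could adopt. The only point is the growth rate of the search space.

def count_self_avoiding_walks(n, pos=(0, 0), visited=frozenset([(0, 0)])):
    """Count self-avoiding walks of length n starting at pos."""
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_self_avoiding_walks(n - 1, nxt, visited | {nxt})
    return total

for n in range(1, 13):
    print(n, count_self_avoiding_walks(n))
# 4, 12, 36, 100, ... roughly 2.64**n: enumerating conformations is hopeless
# long before chains reach biological lengths.
```

The exponential count is why the “search for the global (or right local) minimum” variants look intractable in general; it says nothing about how hard folding is for the particular sequences biology actually uses.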
Finding new useful proteins tends to occur via point mutations so that can’t be NP-complete either.
It’s not at all clear to me what you mean by this. I mean, take the traveling salesman problem. It’s NP-Hard*, but you can get decent solutions by using genetic algorithms to breed solutions given feasible initial solutions. Most improvements to the route will be introduced by mutations, and yet the problem is still NP-hard.
That is, it’s not clear to me that you’re differentiating between three things: finding an optimal solution being NP-hard, it taking superpolynomial time to find a ‘decent’ solution, and a given algorithm requiring superpolynomial time to finish running.
(The second is rarely true for things like the traveling salesman problem, but is often true for practical problems where you throw in tons of constraints.)
* A variant is NP-Complete, which is what I originally wrote.
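To illustrate the “decent solutions by breeding with mutations” point, here is a bare-bones sketch (illustrative only, not the commenter’s code; the instance is random): a hill climber that applies random 2-opt “mutations” to a tour and keeps whichever mutations shorten it.

```python
# Minimal sketch of "decent solutions via mutation" for TSP: keep a tour,
# repeatedly apply a random 2-opt reversal (the "mutation"), and accept it
# whenever it shortens the tour. Good routes come quickly; certifying the
# optimal route would still be NP-hard.
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    """2-opt mutation: reverse a random slice of the tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def evolve(cities, steps=20000):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best = tour_length(tour, cities)
    for _ in range(steps):
        candidate = mutate(tour)
        length = tour_length(candidate, cities)
        if length < best:  # greedy selection: keep only improving mutations
            tour, best = candidate, length
    return tour, best

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
print("tour length found:", round(evolve(cities)[1], 3))
```

This is exactly the gap being pointed at: getting a respectable tour this way is easy, while guaranteeing the optimal tour is the part that is NP-hard.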
Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn’t have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = “did not physically happen”. “Physically happened” = not NP-hard.
I’ve seen you say this a couple of times, and your interlocutors seem to understand you, even when they dispute your conclusion. But my brain keeps returning an error when I try to parse your claim.
Read literally, “NP-hard” is not a predicate that can be meaningfully applied to individual events. So, in that sense, trivially, nothing that happens (physically or otherwise, if “physically” is doing any work here) can be NP-hard. But you are evidently not making such a trivial claim.
So, what would it look like if the physical universe “solved an NP-hard problem”? Presumably it wouldn’t just mean that some actual salesman found a way to use existing airline routes to visit a bunch of pre-specified cities without revisiting any one of them. Presumably it wouldn’t just mean that someone built a computer that implements a brute-force exhaustive search for a solution to the traveling salesman problem given an arbitrary graph (a search that the computer will never finish before the heat death of the universe if the example is large). But I can’t think of any other interpretation to give to your claim.
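For a sense of scale on the “brute-force exhaustive search” reading, here is a back-of-the-envelope sketch (my arithmetic, with an assumed 10**9 tour orderings checked per second):

```python
# An n-city exhaustive search must examine (n - 1)! orderings. At an
# (optimistic) 10**9 orderings per second, the wall-clock time explodes for
# quite small n, long before the heat death of the universe is the binding
# constraint.
import math

SECONDS_PER_YEAR = 3.15e7

for n in (10, 15, 21, 31):
    orderings = math.factorial(n - 1)
    years = orderings / 1e9 / SECONDS_PER_YEAR
    print(f"{n:2d} cities: {orderings:.2e} orderings, {years:.2e} years")
# 10 cities finish in well under a millisecond; 21 cities already take about
# 77 years; 31 cities take about 8e15 years.
```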
Here’s Scott Aaronson describing people (university professors in computer science and cognitive science at RPI) who claim that the physical universe efficiently solves NP-hard problems:
It was only a matter of time before someone put the pieces together. Last summer Bringsjord and Taylor [24] posted a paper entitled “P=NP” to the arXiv. This paper argues that, since (1) finding a Steiner tree is NP-hard, (2) soap bubbles find a Steiner tree in polynomial time, (3) soap bubbles are classical objects, and (4) classical physics can be simulated by a Turing machine with polynomial slowdown, it follows that P = NP.
My immediate reaction was that the paper was a parody. However, a visit to Bringsjord’s home page suggested that it was not. Impelled, perhaps, by the same sort of curiosity that causes people to watch reality TV shows, I checked the discussion of this paper on the comp.theory newsgroup to see if anyone recognized the obvious error. And indeed, several posters pointed out that, although soap bubbles might reach a minimum-energy configuration with a small number of pegs, there is no “magical” reason why this should be true in general. By analogy, a rock in a mountain crevice could reach a lower-energy configuration by rolling up first and then down, but it is not observed to do so.
In other news, Bringsjord also claims to show by a modal argument, similar to the theistic modal argument (which he also endorses), that human brains are capable of hypercomputation: “it’s possible humans are capable of hypercomputation, so they are capable of hypercomputation.” For this reason he argues that superhumanly intelligent Turing machines/Von Neumann computers are impossible and belief in their possibility is fideistic.
Here’s Scott Aaronson describing people (university professors at RPI) who claim that the physical universe efficiently solves NP-hard problems.
This doesn’t refute what you are responding to. Saying the universe can’t solve arbitrary instances of an NP-hard problem in polynomial time is not the same thing as saying the universe cannot possibly solve specific instances of NP-complete problems, which is Tyrrell_McAllister’s point, as far as I can parse it. In general, the traveling salesman problem is NP-complete; however, there are lots of cases where heuristics get the job done in polynomial time, even if those heuristics would run away if they were given the wrong case.
To use Aaronson’s soap bubbles, sometimes the soap bubble finds a Steiner tree, sometimes it doesn’t. When it DOES, it has solved one instance of an NP-complete problem fairly quickly.
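A small illustrative experiment in the same spirit (mine; the instances are random and tiny): a polynomial-time heuristic compared against exhaustive search. On some instances the heuristic happens to return the exact optimum, i.e. it “solved one instance of an NP-hard problem fairly quickly”; on others it does not.

```python
# Compare a polynomial-time heuristic (nearest neighbour) with exhaustive
# search on tiny random TSP instances, and count how often the cheap method
# happens to hit the true optimum anyway.
import itertools
import math
import random

def length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts):
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimum(pts):
    return min(length((0, *perm), pts)
               for perm in itertools.permutations(range(1, len(pts))))

random.seed(1)
hits, trials = 0, 50
for _ in range(trials):
    pts = [(random.random(), random.random()) for _ in range(8)]
    if length(nearest_neighbour(pts), pts) - optimum(pts) < 1e-9:
        hits += 1
print(f"heuristic matched the exact optimum on {hits} of {trials} instances")
```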
Existence of God, not existence of god [or gods]; my sincere apologies for pedantry, it’s just that LW folk generally misunderstand theology and so it seems important to be especially careful when discussing it. The distinction between God and god is quite important and the words look so similar, it’s unfortunate. ETA: Retracting because Carl edited the comment. (Thanks Carl!)
In many contexts where a quite important distinction between two terms exists, it is helpful to say what it is, rather than simply assert that there is one.
Admittedly, I have no reason to believe this is one such context.
I’m pretty sure the distinction Will was making was as follows.
A “god” is a being with superhuman powers and so forth, possibly existing somehow “on another plane of existence”, but still in some sense the same kind of thing as us: another thing among all the things there are.
“God” is not in any way the same sort of thing as we are. God is not just “another thing” but, in theologian-speak, the Ground Of All Being, a sort of precondition for everything else.
However, it’s possible that he’s making a different distinction, wherein “God” means a unique divine being of infinite power who created everything and “god” means lesser beings like the Greek gods.
(If he’s making the first distinction, I would remark that (1) most theists’ notions of God do in fact fall under the description of “god”, and (2) the radically different notion of “God” is of very doubtful coherence. If he’s making the second, I’m unconvinced that the distinction is as important as he suggests it is.)
(Hastily written:) I agree with your (1), but think that’s all the more reason to make clear distinctions instead of further muddying the epistemic waters; it’s much as if creationists didn’t bother to distinguish between something like Pokemon evolution and actual evolutionary biology, because after all most evolutionists can’t tell the difference.
Mildly disagree with your (2): I can see how the coherence of the God idea is somewhat doubtful, but there aren’t actually any overwhelmingly strong arguments in terms of metaphysics that that is the case, and most atheists take a different route by more or less rejecting all of metaphysics and instead placing emphasis on epistemology. (Then there are weaksauce theists like Kant and William James to argue with but I don’t think that’s as challenging.) Although I’m sympathetic to skepticism of metaphysics we should keep in mind that the obvious attempts to banish it have failed (e.g. logical positivism), and we should also keep in mind that though LessWrong is (somewhat justifiably) allergic to the word “metaphysics”, metaphysics actually shows up here quite a bit in the guise of computationalism/simulationism and in some semi-epistemic rules like Eliezer’s GAZP. So to reject metaphysics entirely would be inconsistent; from there, charitably engaging the actual metaphysical arguments of philosophical theists would be necessary, and I see this done very rarely.
In the meantime I think assigning probabilities below, say, 1% to philosophical theism would be premature, especially when the motivations for doing so seem largely to be desires to reverse the stupidity of religious thinking, when philosophical theism stems from Socrates, Plato, and Aristotle and isn’t easily dismissed as the rationalizations of a Christian hegemony the way that most atheists seem to assume in practice.
(ETA: It occurs to me that David Chalmers, who is LW-friendly and the editor of Metametaphysics, would be a good person to ask about the tenability of philosophical theism, from a metaphilosophical perspective. I might send him an email / LW message.)
Did you ever end up doing this, and if so, would you mind sharing the response?
Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard.
I don’t understand why you think new physics is required to solve hard instances of NP-complete problems. We routinely solve the hard instances of NP-hard problems in practice on computers—just not on large instances of the problem. New physics might be required to solve those problems quickly, but if you are willing to wait exponentially long, you can solve the problems just fine.
If you want to argue that actual practical biological folding of proteins isn’t NP-hard, the argument can’t start from “it happens quickly”—you need to say something about how the time to fold scales with the length of the amino acid strings, and in particular in the limit for very large strings.
Similarly, I don’t see why biological optimization couldn’t have solved hard cases of NP-complete problems. If you wait long enough for evolution to do its thing, the result could be equivalent to an exhaustive search. No new physics required.
I don’t understand why you think new physics is required to solve hard instances of NP-complete problems. We routinely solve the hard instances of NP-hard problems in practice on computers—just not on large instances of the problem.
Eliezer already conceded that trivial instances of such problems can be solved. (We can assume that before he made that concession he thought it went without saying.)
New physics might be required to solve those problems quickly, but if you are willing to wait exponentially long, you can solve the problems just fine.
The physics and engineering required to last sufficiently long may be challenging. I hear it gets harder to power computers once the stars have long since burned out. As far as I know the physics isn’t settled yet.
(In other words, I am suggesting that “just fine” is something of an overstatement when it comes to solving seriously difficult problems by brute force.)
The physics and engineering required to last sufficiently long may be challenging. I hear it gets harder to power computers once the stars have long since burned out. As far as I know the physics isn’t settled yet.
That counterargument is a bit too general, since it applies not only to NP problems, but even to P problems (such as deciding whether a number is the GCD of two other numbers), or even to any arbitrary algorithm modified by a few lines of code such that its result is unaffected, merely delayed until after the stars have burned out, or whatever limit we postulate.
For both NP problems and, e.g., P problems, given how we understand the universe, there is only a finite number of inputs that are tractable, and an infinite number that aren’t. The finite number is very different in the two cases, but as a fraction of all “possible”, or rather well-defined (let’s avoid that ambiguity cliff), inputs, it is the same.
Cue “We all live in a Finite State Machine, Finite State Machine, Finite State Machine …”
Eliezer already conceded that trivial instances of such problems can be solved. (We can assume that before he made that concession he thought it went without saying.)
The point can’t be confined to “trivial instances”. For any NP-complete problem on some reasonable computing platform that can solve small instances quickly, there will be instance sizes that are non-trivial (take appreciable time to solve) but do not require eons to solve. There is absolutely no mathematical reason for assuming that for “natural” NP-complete problems, interesting-sized instances can’t be solved on a timescale of months/years/centuries by natural processes.
The dichotomy between “trivial” and “impossible to solve in a useful time-frame” is a false one.
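To put rough numbers on that middle band (my arithmetic, assuming an exhaustive search costs 2**n steps at 10**9 steps per second):

```python
# Rough numbers behind "non-trivial but not eons": assume an exhaustive search
# over n binary choices costs 2**n steps at 10**9 steps per second.
SECONDS_PER_YEAR = 3.15e7

for n in (20, 40, 50, 60, 80):
    seconds = 2 ** n / 1e9
    print(f"n = {n}: {seconds:.3g} seconds (~{seconds / SECONDS_PER_YEAR:.3g} years)")
# n = 20 is about a millisecond, n = 40 about twenty minutes, n = 50 about two
# weeks, n = 60 a few decades, n = 80 tens of millions of years. The middle of
# that range is exactly the months/years/centuries regime described above.
```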
ETA: this is a side point.
I agree with your parse error. It looks like EY has moved away from the claim made in the grandparent, though.
That seems a little strongly put—NP-hard scales very poorly, so no real process can take N up to large numbers. I can solve the traveling salesman problem in my head with only modest effort if there are only 4 stops. And it’s trivial if there are 2 or 3 stops.
Conceded.
Um… doesn’t it take exponential time in order to simulate quantum mechanics on a classical computer?
Yes. (At least that’s the general consensus among complexity theorists, though it hasn’t been proved.) This doesn’t contradict anything Eliezer said in the grandparent. The following are all consensus-but-not-proved:
P⊂BQP⊂EXP
P⊂NP⊂EXP
BQP≠NP (Neither is confidently predicted to be a subset of the other, though BQP⊂NP is at least plausible, while NP⊆BQP is not.)
If you don’t measure any distinctions finer than P vs EXP, then you’re using a ridiculously coarse scale. There are lots of complexity classes strictly between P and EXP, defined by limiting resources other than time-on-a-classical-computer. Some of them are tractable under our physics and some aren’t.
Is that just that we don’t know any better algorithms, or is there a proof that exptime is needed?
I really don’t know; some Wikipedia browsing suggests that there’s a proof, but I’d rather have a statement from someone who actually knows.
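For intuition on where the exponential cost comes from in the obvious method, here is a sketch of naive statevector simulation (it shows only that the straightforward approach is exponential, not that nothing better exists, which is exactly the open question here; numpy assumed):

```python
# Naive classical simulation of n qubits: store 2**n complex amplitudes, and
# even a single one-qubit gate touches every amplitude. Memory and per-gate
# work therefore scale as 2**n in this approach.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the target qubit of an n-qubit statevector."""
    state = state.reshape([2] * n)           # one axis per qubit
    state = np.moveaxis(state, target, 0)    # bring the target axis to front
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(2 ** n)

n = 20                                        # already ~a million amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                # start in |00...0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, hadamard, q, n)
print(state.size, "amplitudes;", abs(state[0]) ** 2, "probability of |00...0>")
# Adding ten qubits multiplies the storage and per-gate work by about 1000.
```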
Presumably quantum suicide is a part of “whatever”.