Complexity theorists don’t know anything, but they at least know that it’s impossible to solve all NP problems in O(N^300) time. In fact they know it’s impossible to solve all P problems in O(N^300) time.
http://en.wikipedia.org/wiki/Time_hierarchy_theorem
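To spell out how the linked theorem supports the claim above: for any time-constructible bound f, the deterministic time hierarchy theorem says (hedging on constants and machine-model details, which the Wikipedia article covers)

```latex
\mathrm{DTIME}\bigl(o(f(n)/\log f(n))\bigr) \subsetneq \mathrm{DTIME}\bigl(f(n)\bigr)
```

Taking f(n) = n^301 gives a language decidable in O(n^301) time, hence in P, that no O(n^300)-time algorithm decides, since n^300 = o(n^301 / log n^301). So not even all of P fits in O(N^300).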
I think Yudkowsky meant big Omega, i.e. a lower bound of Ω(N^300) rather than an upper bound.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what the complexity theorists know; it certainly implies P = NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
I don’t think that interpretation is necessary. A better one is that even if all NP problems could be solved in O(N^300) time, we’d still need mathematicians.
Sewing-Machine correctly pointed out, above, that this contradicts what we already know.
Are you saying that the counterfactual implication contradicts what we already know, or that the antecedent of the counterfactual implication contradicts what we already know?
I’d be surprised by the former, and the latter is obvious from the fact that it is a counterfactual.
I’m not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can’t picture a universe in which the math is different.
I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, one that is not obviously counterfactual given the current state of the art.
To elaborate on this a little bit, you can think of the laws of physics as a differential equation, and the universe as a solution. You can imagine what would happen if the universe passed through a different state (just solve the differential equation again, with new initial conditions), or even different physics (solve the new differential equation), but how do you figure out what happens when calculus changes?
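The differential-equation analogy can be made concrete with a toy sketch. The function names and the choice of equation below are mine, purely for illustration: a fixed-step Euler integrator stands in for "the laws of physics," and re-running it with a new initial condition or a new right-hand side stands in for the two kinds of imaginable changes. The third kind of change, altering calculus itself, has no slot in this setup at all.

```python
def euler(f, x0, t0, t1, steps=1000):
    """Integrate dx/dt = f(x) from t0 to t1 with a fixed-step Euler method."""
    x = x0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        x += f(x) * dt
    return x

# "Physics" as a differential equation: exponential decay, dx/dt = -x.
decay = lambda x: -x

# Same physics, different state: just re-solve with new initial conditions.
print(euler(decay, x0=1.0, t0=0.0, t1=1.0))  # roughly e^-1 ≈ 0.368
print(euler(decay, x0=2.0, t0=0.0, t1=1.0))  # roughly 2e^-1 ≈ 0.736

# Different physics: solve a different equation, dx/dt = -2x.
print(euler(lambda x: -2 * x, x0=1.0, t0=0.0, t1=1.0))  # roughly e^-2 ≈ 0.135

# But "different calculus" corresponds to neither a new f nor a new x0;
# there is nothing here to vary.
```

The point of the sketch is that both imaginable counterfactuals are parameters of the solver, while the integration rule itself is the fixed background.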
When I first read that TDT was about counterfactuals involving logically impossible worlds, I was uncomfortable with that but wasn’t sure why; and when I first read about the five-and-ten problem, I dismissed it as wordplay. But then it dawned on me that the five-and-ten problem is indeed what you get if you allow counterfactuals to range over logically impossible worlds.
I was having trouble articulating why I was uncomfortable reasoning under a mathematical counterfactual, but more comfortable reasoning under a mathematical hypothesis that might turn out to be false. This comment helped me clarify that for myself.
I’ll explain my reasoning using Boolean logic, since it’s easier to understand that way, but obviously the same problem must occur with Bayesian logic, since Bayes generalizes Boole.
Suppose we are reasoning about the effects of a mathematical conjecture P, and conclude that P → Q, and ¬P → Q’. Let’s assume Q and Q’ can’t both be true, because we’re interested in the difference between how the world would look if P were true, and how the world would look if ¬P were true. Let’s assume we don’t have any idea which of P or ¬P is true. We can’t also have concluded P → Q’, because then the contradiction would allow us to conclude ¬P, and for the same reason we can’t also have concluded ¬P → Q. When we assume P, we only have one causal chain leading us to distinguish between Q and Q’, so we have an unambiguous model of how the universe will look under assumption P. This is true even if P turns out to be false, because we are aware of a chain of causal inferences beginning with P leading to only one conclusion.
However, the second we conclude ¬P, we have two contradictory causal chains starting from P: P → Q, and P → ¬P → Q’, so our model of a universe where P is true is confused. We can no longer make sense of this counterfactual, because we are no longer sure which causal inferences to draw from the counterfactual.
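The two situations can be checked mechanically with a brute-force model count. The encoding below is my own toy illustration (three Boolean variables, with Q’ written `qp`): while ¬P is unknown, assuming P picks out exactly one class of worlds, but once ¬P is added to the theory, the counterfactual assumption P leaves no consistent worlds at all.

```python
from itertools import product

def models(constraints):
    """Enumerate truth assignments (p, q, qp) satisfying every constraint."""
    return [(p, q, qp)
            for p, q, qp in product([True, False], repeat=3)
            if all(c(p, q, qp) for c in constraints)]

implies = lambda a, b: (not a) or b

# Background theory: P -> Q, not-P -> Q', and Q, Q' mutually exclusive.
theory = [
    lambda p, q, qp: implies(p, q),
    lambda p, q, qp: implies(not p, qp),
    lambda p, q, qp: not (q and qp),
]

# Hypothetically assuming P: a unique class of worlds, with Q and not Q'.
print(models(theory + [lambda p, q, qp: p]))
# → [(True, True, False)]

# After concluding not-P, the counterfactual "assume P" has no models left:
print(models(theory + [lambda p, q, qp: not p, lambda p, q, qp: p]))
# → []
```

The empty model set is the formal counterpart of the confusion described above: with both chains P → Q and P → ¬P → Q’ available, no assignment of truth values can satisfy the counterfactual.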