“Unless you think it’s possible to program a conscious being in Haskell.”
Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!
Also, can we just conclude that “consciousness” is the leakiest surface generalization ever? If I one day get the cog-psy skills, I am going to run a stack-trace on what makes us say “consciousness” without knowing diddly about what it is.
As a budding AI researcher, I am frankly offended by philosophers pretending to be wise like that. No. There is no such thing as “consciousness” because it is not even a bucket to put things in. It’s metal shreds. You are some sort of self-introspective algorithm implemented on a biochemical computing substrate, so let’s make the blankness of our maps self-evident by calling it “magic” or something.
Glaring redundancy aside, isn’t “self-introspective” just as intensionally valid or void as “conscious”?
Yes, probably. It is a really good idea to taboo any and all of “conscious,” “self-,” “introspective,” “thinking,” and so on when doing AI work, or so I heard.
I don’t think we even need Turing-completeness, really. Take a total language and give it a loop-timeout of, say, 3^^^^3. It can compute anything we care about, and is still guaranteed to terminate in bounded time.
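(A minimal sketch of the loop-timeout idea in Haskell: thread an explicit fuel parameter through a possibly-divergent recursion. The Collatz loop here is just a hypothetical stand-in for an arbitrary computation; with fuel = 3^^^^3 this computes anything we’d care about, yet termination is trivially guaranteed because fuel strictly decreases.)

    -- Count the steps a Collatz trajectory takes to reach 1, giving up
    -- (but still halting!) once the fuel budget is exhausted.
    collatzBounded :: Integer -> Integer -> Maybe Integer
    collatzBounded fuel n
      | fuel <= 0 = Nothing                               -- timed out, but halted
      | n == 1    = Just 0                                -- done: steps to reach 1
      | even n    = succ <$> collatzBounded (fuel - 1) (n `div` 2)
      | otherwise = succ <$> collatzBounded (fuel - 1) (3 * n + 1)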
(I’m reminded of Gödel’s letter discussing P=NP—should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with fewer than a few million symbols; anything that couldn’t be proved with that many symbols would be of little to no interest to us.)
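(For illustration, a sketch of the machine Gödel describes, assuming a hypothetical proof checker `checks proof statement` for the formal system in question. The point of the letter is that this brute-force enumeration is exponential in the length bound; a fast algorithm for the search problem would change everything.)

    import Control.Monad (replicateM)
    import Data.List (find)

    -- Enumerate every candidate proof up to maxLen symbols and return the
    -- first one the (assumed) checker accepts.
    searchProofs :: (String -> String -> Bool) -> Int -> String -> Maybe String
    searchProofs checks maxLen statement =
      find (`checks` statement) (concatMap (`replicateM` symbols) [1 .. maxLen])
      where
        symbols = ['a' .. 'z']  -- stand-in for the system's actual alphabet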
Or, at least, they could be replaced by people who can understand and seek out the proofs that are relevant to us out of the arbitrarily large number of useless ones. So, mathematicians, basically.
Unless for any NP problem there exists an algorithm which solves it in just O(N^300) time.
NP=P is not the same as NP=Small P
Likewise, EXPTIME doesn’t mean Large EXPTIME—an algorithm running in exp(1e-15*N) seconds is asymptotically slower than one running in N^300 seconds, but it is faster for pretty much all practical purposes.
I once read a Usenet post or Web page along the lines of “There are two kinds of numbers: those smaller than Graham’s number and those larger than Graham’s number. Computational complexity theory traditionally only concerns itself with the latter, but only the former are relevant to real-world problems.”
Do you mean faster?
No, slower. N^300 is polynomial, exp(1e-15*N) is exponential.
Maybe it’s easier to understand them if you take logs: log(N^300) = 300 * log(N) and log(exp(1e-15 * N)) = 1e-15 * N.
Whatever positive multiplicative constants a and b you pick, a * N will at some point become bigger (so, slower) than b * log(N). Here, that happens roughly once N gets somewhat above 10^15; around 10^20, according to a simple estimate.
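(A quick numerical check of that estimate, comparing 1e-15 * N against 300 * log N at powers of ten:)

    -- Find the first power of ten where the exponential's exponent overtakes
    -- the polynomial's log, i.e. where 1e-15 * n > 300 * log n.
    crossover :: Double
    crossover = head [ n | k <- [1 .. 30 :: Int]
                         , let n = 10 ^^ k
                         , 1e-15 * n > 300 * log n ]
    -- crossover == 1.0e20, matching the guess above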
Complexity theorists don’t know anything, but they at least know that it’s impossible to solve all NP problems in O(N^300) time. In fact they know it’s impossible to solve all P problems in O(N^300) time.
http://en.wikipedia.org/wiki/Time_hierarchy_theorem
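(For reference, the deterministic time hierarchy theorem from that page: for any time-constructible f, DTIME(o(f(n) / log f(n))) is strictly contained in DTIME(f(n)). Taking f(n) = n^301, since n^300 is o(n^301 / log n^301), there are problems decidable in O(n^301) time, hence in P, that no O(n^300)-time algorithm decides.)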
I think Yudkowsky meant big Omega.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what the complexity theorists know, it certainly implies P=NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
I don’t think that interpretation is necessary. A better one is that even if all NP problems could be solved in O(N^300) time, we’d still need mathematicians.
Sewing-Machine correctly pointed out, above, that this contradicts what we already know.
Are you saying that the counterfactual implication contradicts what we already know, or that the antecedent of the counterfactual implication contradicts what we already know?
I’d be surprised by the former, and the latter is obvious from the fact that it is a counterfactual.
I’m not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can’t picture a universe in which the math is different.
I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, which is not obviously counterfactual given the current state of the art.
To elaborate on this a little bit, you can think of the laws of physics as a differential equation, and the universe as a solution. You can imagine what would happen if the universe passed through a different state (just solve the differential equation again, with new initial conditions), or even different physics (solve the new differential equation), but how do you figure out what happens when calculus changes?
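(To make the analogy concrete, a toy integrator sketch: the “laws” are a vector field f, a “universe” is one trajectory, and the only knobs available are the initial condition and f itself; nothing corresponds to changing the calculus.)

    -- Fix the laws (a vector field f); vary the universe (the trajectory)
    -- by varying the initial condition x0. Forward Euler with step size dt.
    euler :: (Double -> Double) -> Double -> Double -> Int -> [Double]
    euler f dt x0 steps = take steps (iterate next x0)
      where next x = x + dt * f x

    -- Same physics, different universe:  euler negate 0.01 1.0 100
    --                             versus euler negate 0.01 2.0 100
    -- Different physics: pass a different f. "Different calculus" has no knob.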
When I first read that TDT was about counterfactuals involving logically impossible worlds, I was uncomfortable with that but wasn’t sure why; and when I first read about the five-and-ten problem, I dismissed it as wordplay. But then it dawned on me that the five-and-ten problem is indeed what you get if you allow counterfactuals to range over logically impossible worlds.
I was having trouble articulating why I was uncomfortable reasoning under a mathematical counterfactual, but more comfortable reasoning under a mathematical hypothesis that might turn out to be false. This comment helped me clarify that for myself.
I’ll explain my reasoning using Boolean logic, since it’s easier to understand that way, but obviously the same problem must occur with Bayesian logic, since Bayes generalizes Boole.
Suppose we are reasoning about the effects of a mathematical conjecture P, and conclude that P → Q and ¬P → Q’. Let’s assume Q and Q’ can’t both be true, because we’re interested in the difference between how the world would look if P were true and how it would look if ¬P were true. Let’s also assume we have no idea which of P or ¬P is true. We can’t have concluded P → Q’ as well, because then the contradiction would allow us to conclude ¬P, and for the same reason we can’t have concluded ¬P → Q. So when we assume P, we have only one causal chain leading us to distinguish between Q and Q’, and hence an unambiguous model of how the universe would look under assumption P. This holds even if P turns out to be false, because we are aware of a chain of causal inferences beginning with P and leading to only one conclusion.
However, the second we conclude ¬P, we have two contradictory causal chains starting from P: P → Q, and P → ¬P → Q’, so our model of a universe where P is true is confused. We can no longer make sense of this counterfactual, because we are no longer sure which causal inferences to draw from the counterfactual.
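(The two chains can be made precise in Haskell via Curry-Howard, reading a proof of ¬P as a function from P to the empty type Void. Once we hold such a proof, a chain from P to any conclusion at all becomes derivable:)

    import Data.Void (Void, absurd)

    type Not p = p -> Void

    -- Once "not P" is established, P implies anything -- including Q',
    -- the conclusion we had reserved for the "not P" branch. This is the
    -- second chain P -> not P -> Q' that confuses the counterfactual.
    explosion :: Not p -> p -> q'
    explosion notP p = absurd (notP p)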
Are you sure? Maybe there exist concise, powerful theorems that have really long proofs.
Oh good grief, since everyone here is intent on nitpicking the observation to death, here is his bloody letter: http://rjlipton.wordpress.com/the-gdel-letter/