Is this your complete response? I guess I could expand this to “I expect all the problems an AI needs to solve on the way to an intelligence explosion to be easy in principle but hard in practice,” and I guess I could expand your other comments to “the problem sizes an AI will need to deal with are small enough that asymptotic statements about difficulty won’t come into play.” Both of these claims seem like they require justification.
It’s not meant as a response to everything, just noting that protein structure prediction can’t be NP-hard. More generally, I tend to take P!=NP as a background assumption; I can’t say I’ve worried too much about how the universe would look different if P=NP. I never thought superintelligences could solve NP-hard problems to begin with, since they’re made out of wavefunction and quantum mechanics can’t do that. My model of an intelligence explosion just doesn’t include anyone trying to do anything NP-hard at any point, unless it’s in the trivial sense of doing it for N=20 or something. Since I already expect things to go FOOM locally even with P!=NP, assuming P=NP doesn’t seem to change much, even if the polynomial itself is small. Though Scott Aaronson seems to think there’d be long-term fun-theoretic problems because it would make so many challenges uninteresting. :)
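(A minimal sketch of the “trivial sense of doing it for N=20” point, not part of the original exchange: brute-forcing an NP-hard problem like SUBSET-SUM is instantaneous at N=20 because 2^20 is only about a million subsets, while the very same loop at N=60 would need on the order of 10^18 iterations. The problem instance below is made up purely for illustration.)

```python
# Illustrative only: exhaustive search over all subsets is O(2^n),
# which is trivial for n = 20 but hopeless for n = 60.
from itertools import combinations
import random

def subset_sum_bruteforce(nums, target):
    """Return a subset of nums summing to target, or None (checks all 2^n subsets)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

random.seed(0)
nums = [random.randint(1, 1000) for _ in range(20)]  # n = 20: ~10^6 subsets
target = sum(random.sample(nums, 7))                  # guaranteed to have a solution
print(subset_sum_bruteforce(nums, target))            # finishes in well under a second
```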
Suppose there is a single A.I. with a ‘Devote x % of resources to Smartening myself’ directive. Suppose further that the A.I. is already operating with David Lewis ‘elite eligible’ ways of carving up the world at its joints- i.e. it is climbing the right hill. Presumably, the Smartening module faces a race-hazard-type problem in deciding whether it is smarter to devote resources to evaluating returns to smartness or to just release resources back to existing operations. I suppose it could internally breed its own heuristics for Karnaugh-map-type pattern recognition so as to avoid falling into an NP problem. However, if NP-hard problems are like predators, there has to be a heuristic to stop the A.I. from avoiding them to the extent of roaming only uninteresting space and breeding only ‘Spiegelman monster’ or trivial or degenerate results. In other words, the A.I.’s ‘smarten yourself’ module is now doing just enough to justify its upkeep but not so much as to endanger its own survival.
At this point it is enough for there to be some exogenous shock or random discontinuity in the morphology of the fitness landscape for some sort of sexual dimorphism and sexual selection to start taking place within the A.I., with speciation events and so on. However, this opens an exploit for parasites- i.e. humans- so FOOM cashes out as …oh fuck, it’s the episode of South Park with the cat saying ‘Oh Long Johnson’.
Beenakker’s solution to Hempel’s dilemma was wrong- http://en.wikipedia.org/wiki/Hempel’s_dilemma- The boundary between physics and metaphysics is NOT ‘the boundary between what can and what cannot be computed in the age of the universe’, because South Park has resolved every philosophical puzzle in the space of what?- a few hundred hours?