Suppose there is a single A.I. with a ‘devote x% of resources to smartening myself’ directive. Suppose further that the A.I. is already operating with David Lewis’s ‘elite eligible’ ways of carving up the world at its joints- i.e. it is climbing the right hill. Presumably, the Smartening module faces a race-hazard-type problem in deciding whether it is smarter to devote resources to evaluating the returns to smartness or to just release resources back to existing operations. I suppose it could internally breed its own heuristics for Karnaugh-map-style pattern recognition so as to avoid falling into an NP-hard problem. However, if NP-hard problems are like predators, there has to be a heuristic that stops the A.I. from avoiding them to the point of roaming uninteresting space and breeding only ‘Spiegelman monster’ type trivial or degenerate results. In other words, the A.I.’s ‘smarten yourself’ module is now doing just enough to justify its upkeep but not so much as to endanger its own survival.

At this point it is enough for some exogenous shock or random discontinuity in the morphology of the fitness landscape to occur for some sort of sexual dimorphism and sexual selection to start taking place within the A.I., with speciation events and so on. However, this opens an exploit for parasites- i.e. humans- so FOOM cashes out as… oh fuck, it’s the episode of South Park with the cat saying ‘Oh Long Johnson’. Beenakker’s solution to Hempel’s dilemma- http://en.wikipedia.org/wiki/Hempel’s_dilemma- was wrong: the boundary between physics and metaphysics is NOT ‘the boundary between what can and what cannot be computed in the age of the universe’, because South Park has resolved every philosophical puzzle in the space of- what?- a few hundred hours?
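(A toy numeric sketch of that ‘just enough to justify its upkeep’ equilibrium, in Python. The log returns curve, the linear upkeep cost, and every constant here are my own illustrative assumptions- nothing in the setup above pins them down- but they show the interior optimum where the marginal return to smartening equals its marginal upkeep.)

```python
# Toy sketch of the 'devote x% of resources to smartening' trade-off.
# All curves and constants are hypothetical, chosen only so that an
# interior equilibrium exists and is easy to see.

import math

def returns_to_smartness(x: float) -> float:
    """Diminishing returns from self-improvement (assumed log curve)."""
    return math.log1p(10 * x)

def upkeep_cost(x: float) -> float:
    """Linear cost of diverting a fraction x of resources from operations."""
    return 3 * x

def net_benefit(x: float) -> float:
    return returns_to_smartness(x) - upkeep_cost(x)

# Grid-search the equilibrium allocation: the module settles where the
# marginal return to smartening equals its marginal upkeep, i.e. it does
# just enough to justify its own existence.
best_x = max((i / 1000 for i in range(1001)), key=net_benefit)
print(f"equilibrium allocation x* = {best_x:.3f}, "
      f"net benefit = {net_benefit(best_x):.3f}")
```

With these made-up numbers the module parks itself at x* of roughly 0.23: more smartening than that costs more than it returns, less fails to justify the module’s upkeep.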
- polypubs 1 May 2013 23:04 UTC, −8 points, in reply to: Eliezer Yudkowsky’s comment on: New report: Intelligence Explosion Microeconomics
You may be aware of the use of negative probabilities in machine learning and quantum mechanics and, of course, Economics. For the last, the existence of a Matrix Lord has such a large negative probability that it swamps his proffer (perhaps because it is altruistic?) and no money changes hands. In other words, there is nothing interesting here- it’s just that some types of decision theory haven’t incorporated negative probabilities yet. The reverse situation- Job’s complaint against God- is more interesting. It shows why variables with negative probabilities tend to disappear out of discourse, to be replaced by the difference between two independent ‘normal’ variables- in this case Cosmic Justice is replaced by the I-Thou relationship of ‘God’ & ‘Man’.
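(A minimal sketch of that bookkeeping, assuming the standard Jordan decomposition of a signed measure: any signed ‘quasi-probability’ distribution splits into the difference of two ordinary, normalised distributions- which is exactly the sense in which the negatively weighted variable disappears from discourse. The payoff numbers and the −0.05 weight below are hypothetical, chosen only to make the arithmetic visible.)

```python
# Signed 'quasi-probability' distribution over outcomes of the proffer.
# Weights sum to 1.0 but one of them is negative.
quasi = {"pay_off_honoured": 0.15, "nothing_happens": 0.90,
         "matrix_lord_exists": -0.05}
utility = {"pay_off_honoured": 1e6, "nothing_happens": 0.0,
           "matrix_lord_exists": 1e9}

# Expected utility computed directly with the signed weights: the large
# negatively weighted payoff swamps the proffer, so no money changes hands.
ev = sum(quasi[o] * utility[o] for o in quasi)
print(f"signed-measure expected utility: {ev:,.0f}")

# Jordan decomposition: quasi = a*P_plus - b*P_minus, where P_plus and
# P_minus are proper probability distributions. The 'negative' variable
# disappears, replaced by the difference of two normal ones.
pos = {o: w for o, w in quasi.items() if w > 0}
neg = {o: -w for o, w in quasi.items() if w < 0}
a, b = sum(pos.values()), sum(neg.values())
p_plus = {o: w / a for o, w in pos.items()}
p_minus = {o: w / b for o, w in neg.items()}

ev_plus = sum(p_plus[o] * utility[o] for o in p_plus)
ev_minus = sum(p_minus[o] * utility[o] for o in p_minus)
print(f"decomposed: {a:.2f}*{ev_plus:,.0f} - {b:.2f}*{ev_minus:,.0f} "
      f"= {a * ev_plus - b * ev_minus:,.0f}")  # equals the signed EV
```

Both lines print −49,850,000 here: the signed calculation and the difference-of-two-distributions calculation agree, which is why the negative-probability variable can quietly drop out of the discourse.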