The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of creating all such universes.
That’s an interesting point! At least, it’s more interesting than Tipler’s way of arriving at that conclusion.
If you accept the reasonable assumption that progress continues, then AIXI implies that we almost certainly live in a simulation right now.
See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a human. I don’t know if this holds water, but basically: If the amount of information processing done by entities operating in the root world (excluding those running simulations) is much greater than the amount of information processing dedicated to simulation, then one “unit of consciousness” is more likely to find itself in the root world.
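To make that weighting concrete, here is a toy back-of-envelope sketch in Python; the compute figures are invented assumptions, not estimates of anything real:

```python
# Toy anthropic weighting: if observer-moments are weighted by the information
# processing that implements them, the chance of finding yourself in the root
# world is just the root world's share of the total. Illustrative numbers only.

root_compute = 1e40   # ops/s devoted to minds running natively in the root world (assumed)
sim_compute  = 1e36   # ops/s devoted to simulated minds (assumed)

p_root = root_compute / (root_compute + sim_compute)
print(f"P(you are in the root world) ~ {p_root:.4f}")   # ~0.9999 under these assumptions
```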
The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes—complete with all the entailed trappings such as conscious simulated entities.
No. However much computational power you have, there is a tradeoff between resolution and the complexity of the simulated entity. What you’re saying is like a bacterium arguing that the universe is full of finely detailed simulations of bacteria.
Also, AIXI is a tool for exploring the structure of the universe of algorithms. No one would ever actually run AIXI, unless they had infinite computational power per second. They would always be better off using more-efficient algorithms to get more done.
That’s an interesting point! At least, it’s more interesting than Tipler’s way of arriving at that conclusion.
I found his earlier work with Barrow, The Anthropic Cosmological Principle, to be filled with interesting, useful knowledge, if somewhat overly detailed: almost a history of science in one book. But then with his next two books you can just follow the convulsions as he runs for the diving board and goes off the deep end.
His take on extending Teilhard de Chardin’s Omega Point idea with computationalism isn’t all that bad in itself, but he really stretches logic unnecessarily to make it all fit neatly into some prepackaged orthodox Christian memetic happy meal. That being said, there is an interesting connection between the two, but I don’t buy Tipler’s take on it.
See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a human. I don’t know if this holds water, but basically: If the amount of information processing done by entities operating in the root world (excluding those running simulations) is much greater than the amount of information processing dedicated to simulation, then one “unit of consciousness” is more likely to find itself in the root world.
Read the response involving the ants and stuff. I don’t give much weight to that train of thought. I agree that consciousness is a fluid, fuzzy concept, but I also know, based on my own capacities and understanding of physics, that an ant colony probably could not encode me (I’m not entirely sure, but I’m highly skeptical). Also, the end point of your argument leads to the realization that ants have less intelligence per unit mass than humans, and it’s actually much less than your basic analysis suggests, because you have to factor in connectivity measures.
No. However much computational power you have, there is a tradeoff between resolution and the complexity of the simulated entity. What you’re saying is like a bacterium arguing that the universe is full of finely detailed simulations of bacteria.
There’s always a tradeoff, this is true, but one should avoid grossly overestimating the computational cost of simulations at various levels of fidelity. For instance, we know that a nearly perfect deterministic simulation of a current computer requires vastly less computation than a molecular-level simulation, provided you have full knowledge of its exact organization. Once we understand the human mind’s algorithms, we should be able to simulate human minds to fairly high accuracy using computers only slightly more complex than the brain itself.
Take that principle and combine it with a tentative estimate from simulation theory and computer graphics that the ultimate observer-relative simulation algorithm requires time and space proportional only to the intelligence and sensory capacity of the observer’s mind, roughly independent of the size of the simulated world.
If you then work out the math (which would take an article-length discussion of its own), you could simulate an entire earth with billions of humans using a laptop-sized ultimate computer. And you could simulate millennia in seconds.
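As a rough sketch of the kind of arithmetic involved (not the longer derivation alluded to above), assume a Lloyd-style ‘ultimate laptop’ bound of roughly 10^50 ops/s for a kilogram of matter, a human-level mind costing on the order of 10^16 ops/s, and an observer-relative simulation cost dominated by the minds themselves; all three figures are assumptions, but the conclusion is not very sensitive to them:

```python
# Back-of-envelope arithmetic behind the "earth on an ultimate laptop" claim.
# All inputs are rough assumptions, not measurements.

ultimate_ops_per_s = 1e50    # physical limit for a ~1 kg "ultimate laptop" (order of magnitude)
ops_per_mind       = 1e16    # assumed cost of one human-level mind per simulated second
num_minds          = 1e10    # billions of humans

earth_ops_per_sim_second = ops_per_mind * num_minds        # ~1e26 ops per simulated second
speedup = ultimate_ops_per_s / earth_ops_per_sim_second    # simulated seconds per real second

seconds_per_millennium = 1000 * 365.25 * 24 * 3600         # ~3.2e10
print(f"speedup factor: {speedup:.1e}")                               # ~1e24
print(f"real seconds per simulated millennium: {seconds_per_millennium / speedup:.1e}")
```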
A key point, though, is that beyond general vague approximate statistical simulation, you eventually reach a point where you need to simulate real intelligences. And real intelligences have complexity similar to computers: approximate simulation stays very rough until you reach this critical point of scale separation, at which point you can simulate them near perfectly.
At this point of scale separation, the simulated computer or mind becomes isomorphic to, and indistinguishable from, the original system, and this is necessarily always the case. Simulations of complex systems suddenly become vastly more effective per unit of computational cost once you hit the scale-separation level.
This falls out of computation theory directly, but just think of what the ultimate simulation of your desktop is: an equivalent program that has all the data and software, with the molecules completely irrelevant. But that ultimate simulation is an emulation, an equivalent copy of the software itself.
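A toy illustration of that point: a few lines of Python can emulate a tiny made-up register machine exactly, without ever representing a transistor or a molecule, because once you have scale separation the software level is all that matters.

```python
# A tiny made-up register machine (dec, add, jnz), emulated exactly at the
# software level. Nothing below models electrons or molecules, yet the
# emulation reproduces the machine's behavior perfectly.

def run(program, regs):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "dec":                      # regs[a] -= 1
            regs[args[0]] -= 1
        elif op == "add":                    # regs[a] += regs[b]
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":                    # jump to instruction t if regs[a] != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# Multiply 5 * 3 by repeated addition.
prog = [("add", "acc", "x"), ("dec", "y"), ("jnz", "y", 0)]
print(run(prog, {"x": 5, "y": 3, "acc": 0})["acc"])   # 15
```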
Also, AIXI is a tool for exploring the structure of the universe of algorithms. No one would ever actually run AIXI, unless they had infinite computational power per second. They would always be better off using more-efficient algorithms to get more done.
Right: AIXI is a limit theorem, but it’s a dumb algorithm. We already do far better (in terms of efficiency) with mammalian cortex and current computers, and we have only just started down a long, long exponential road.
If you then work out the math (which would take an article-length discussion of its own), you could simulate an entire earth with billions of humans using a laptop-sized ultimate computer. And you could simulate millennia in seconds.
I don’t think you got what I was trying to say about bacteria. If you have enough computing power to simulate more than a universe full of humans, you would likely use it instead to simulate a smaller number of much more complex beings. You can always use more computing power to increase the complexity of the thing you’re studying; hence, you never use AIXI. Your original argument implied that once you’ve gotten to the human level there is nowhere further up to go, so you simulate vast quantities of humans in exact detail.
I never intended to imply that future hyper-intelligences will spend all of their computational power simulating humans, not even close. But nor will they spend it all on simulating bacteria or other hyper-intelligences.
In general, a universal hyper-intelligence will simulate slices of the entire universe from the Big Bang to the end of time. It will certainly not simulate all regions of space-time at equal fidelity, of course. I expect observer-relative simulation, with fidelity falling off nonlinearly with distance from observers, in keeping with the locality of physics.
The other principle I expect them to use is more difficult to quantify precisely, but it amounts to what I would call future-historical priority. This is the notion that not all physical events in the same region of space-time have equal importance. In fact, the importance is massively non-uniform, and this notion is related to complexity theory itself. Simulating complex things (such as humans, computers, and so on) at high accuracy is vastly more important for future accuracy than simulating the interior of the earth, the sun, bacteria, and the like. The complexity and cost of accurate simulation will increase with time and technological progress.
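A minimal sketch of how those two principles might combine; the falloff law and the importance weights below are invented purely for illustration:

```python
# Observer-relative fidelity with an importance weight: fidelity decays
# nonlinearly with distance from the nearest observer and is scaled by how
# much the region matters for future history. Invented numbers throughout.

def fidelity(distance, importance, base=1.0, falloff=2.0):
    return base * importance / (1.0 + distance) ** falloff

regions = {                                   # (distance, importance), arbitrary units
    "inside a human skull":     (0.0, 1.0),
    "city around the observer": (1.0, 0.5),
    "interior of the earth":    (1.0, 1e-6),
    "far side of the galaxy":   (1e9, 1e-3),
}
for name, (d, w) in regions.items():
    print(f"{name:26s} fidelity ~ {fidelity(d, w):.3e}")
```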
So in general, I expect future hyper-intelligences to use what we could call Universal Approximate Simulation. More speculatively, I also expect that the theory of UAS relates directly to practical AGI. Tentatively, you can imagine UAS as a spectrum of algorithms and sub-algorithms. On one side of this spectrum are arbitrarily accurate, physics-inspired approaches such as those we use in large physics simulations and computer graphics; on the other side are data-driven statistical approximations using learning techniques (similar to what mammalian cortex uses). I expect future hyper-intelligences will have a better theoretical understanding of this spectrum and of where and when the different simulation approaches are more efficient.
Tentatively, I expect that we will eventually find that there is no single most-efficient algorithm for UAS, and that it is instead a vast space of algorithms, with different points in that space having varying utility depending on the desired simulation fidelity and scale. I draw this tentative conclusion from what we can currently learn from graphics research and simulation theory, and from the observation that statistical learning approaches are increasingly taking up residence in the space of useful algorithms for general world simulation.
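As a toy illustration of the two ends of that spectrum (everything here is illustrative, not a claim about how a real UAS would be organized), the same simple system can be handled either by a physics-style integrator or by a cheap statistical surrogate fit to data from a few integrator runs:

```python
# One toy system, two ends of the spectrum: (a) step-by-step physics-style
# integration of falling with quadratic drag, (b) a cheap polynomial surrogate
# fit to a handful of integrator runs and reused for later queries.

import numpy as np

def physics_fall(t, dt=1e-3, g=9.81, k=0.1):
    x, v, elapsed = 0.0, 0.0, 0.0
    while elapsed < t:
        v += (g - k * v * v) * dt
        x += v * dt
        elapsed += dt
    return x

sample_times = np.linspace(0.5, 5.0, 8)
samples = [physics_fall(t) for t in sample_times]
surrogate = np.poly1d(np.polyfit(sample_times, samples, deg=3))   # the "learned" end

t_query = 3.3
print("integrator:", physics_fall(t_query))
print("surrogate :", float(surrogate(t_query)))
```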
At this point people usually start talking about chaos and turbulence. Simulating chaos to high accuracy is always a waste of computation. In theory a butterfly in Moscow could alter the weather and change history, but the ratio of historical utility to computational cost makes that astronomically less efficient than simulating, say, the interiors of human skulls to high fidelity.
This also implies that one run through your simulated history just gives you one sample of the space, but with Monte Carlo simulation over a number of runs the random events tend to balance out and you get a good idea of the future evolution of the system.
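A minimal sketch of that Monte Carlo point, with a made-up toy system: any single run is dominated by its particular draw of ‘weather’ noise, but the ensemble average over many runs recovers the underlying trend.

```python
# Each run samples one possible history; averaging many runs with independent
# random 'weather' noise gives a stable picture of the ensemble.

import random

def one_history(seed, steps=1000):
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += 0.01 + rng.gauss(0.0, 0.1)   # small trend plus chaotic noise
    return state

runs = [one_history(seed) for seed in range(500)]
print("single run:      ", runs[0])                  # one noisy sample of history
print("ensemble average:", sum(runs) / len(runs))    # close to the trend value of 10
```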
Random events such as the weather have had great historical impact, but most of the big ones are in distant history, and as we enter the modern era their effect is increasingly damped.
The big ‘weather’ random events, if you were going to rank them, would start with, say, K-T-level impacts (which are among the single most important events in history), then progress down to glaciation events, volcanoes, and so on, with earthquakes and hurricanes lower on the list.