I never intended to imply that future hyper-intelligences will spend all of their computational power to simulate humans, not even close. But nor will they spend it all on simulating bacteria or other hyper-intelligences.
In general, a universal hyper-intelligence will simulate slices of the entire universe from the Big Bang to the end of time, but it certainly will not simulate all regions of space-time at equal fidelity. I expect observer-relevant simulation, with fidelity falling off nonlinearly with distance from the observer, corresponding to the locality of physics.
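To make the falloff idea concrete, here is a minimal sketch in Python of an observer-relative level-of-detail rule; the inverse-power-law form and all of the constants are purely illustrative, not a claim about how a real simulator would weight things.

```python
def fidelity(distance: float, falloff: float = 2.0, scale: float = 1.0) -> float:
    """Toy observer-relative level-of-detail rule.

    Returns a fidelity weight in (0, 1] that falls off nonlinearly
    (here, as an inverse power law) with distance from the observer.
    The exponent and scale are illustrative placeholders.
    """
    return 1.0 / (1.0 + (distance / scale) ** falloff)

# Nearby matter is simulated at near-full fidelity, distant regions at a
# vanishing fraction of it (distances in meters, for the sake of the example).
print(fidelity(0.5))   # ~0.8    -- the observer's immediate surroundings
print(fidelity(1e7))   # ~1e-14  -- the other side of the planet
```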
The other principle I expect them to use is harder to quantify precisely, but it amounts to what I would call future-historical priority: the notion that not all physical events in the same region of space-time have equal importance. In fact, importance is massively non-uniform, and this notion is related to complexity theory itself. Simulating complex things (such as humans and computers) at high accuracy is vastly more important for future accuracy than simulating the interior of the earth, the sun, bacteria, and so on. The complexity and cost of accurate simulation will increase with time and technological progress.
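As a toy illustration of future-historical priority, the sketch below splits a compute budget in proportion to hand-assigned importance scores; the region names and numbers are invented for the example, not estimates.

```python
def allocate_compute(importance: dict[str, float], total_flops: float) -> dict[str, float]:
    """Split a compute budget in proportion to each region's importance score.

    'Importance' here stands for how much a region's fine details matter to
    the future trajectory; the scores passed in are hand-assigned guesses.
    """
    total = sum(importance.values())
    return {region: total_flops * score / total for region, score in importance.items()}

budget = allocate_compute(
    {"human brains": 1e6, "computers": 1e5, "earth's interior": 1.0, "sun's interior": 1.0},
    total_flops=1e30,
)
# Brains and computers get essentially the whole budget; the planetary and
# solar interiors get a comparatively negligible slice.
```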
So in general, I expect future hyper-intelligences to use what we could call Universal Approximate Simulation. More speculatively, I also expect that the theory of UAS relates directly to practical AGI. Tentatively, you can imagine UAS as a spectrum of algorithms and sub-algorithms. On one side of this spectrum are arbitrarily accurate physics-inspired approaches, such as those we use in large physics simulations and computer graphics; on the other side are data-driven statistical approximations using learning techniques (similar to what the mammalian cortex uses). I expect future hyper-intelligences will have a better theoretical understanding of this spectrum, and of where and when different simulation approaches are more efficient.
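A crude way to picture that spectrum: a per-region step function that routes high-importance regions through an accurate but expensive physics update and everything else through a cheap learned surrogate. Both update rules below are stand-ins made up for illustration, not a proposal for how the real split would work.

```python
import math
import random

def physics_step(x: float, dt: float) -> float:
    """Stand-in for an expensive, physics-inspired integrator (exact exponential decay)."""
    return x * math.exp(-dt)

def learned_step(x: float, dt: float) -> float:
    """Stand-in for a cheap learned/statistical surrogate (noisy linear approximation)."""
    return x * (1.0 - dt) + random.gauss(0.0, 0.01)

def uas_step(x: float, dt: float, required_fidelity: float) -> float:
    """Pick a point on the accuracy/cost spectrum for this region and step."""
    if required_fidelity > 0.9:   # threshold chosen arbitrarily for the example
        return physics_step(x, dt)
    return learned_step(x, dt)
```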
I expect that we will eventually find there is no single most-efficient algorithm for UAS, but rather a vast space of algorithms, with different points in that space having varying utility depending on the desired fidelity and scale of simulation. I draw this tentative conclusion from what we can currently learn from graphics research and simulation theory, and from the observation that statistical learning approaches are increasingly taking up residence in the space of useful algorithms for general world simulation.
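One way to see why no single point in that space dominates: with made-up costs and fidelities for a handful of methods, the cheapest adequate method changes as the fidelity target changes. The method names and numbers below are placeholders, not real estimates.

```python
# Illustrative (cost, fidelity) pairs; none of these numbers are measurements.
METHODS = {
    "molecular dynamics": {"cost": 1e9, "fidelity": 0.999},
    "finite elements":    {"cost": 1e6, "fidelity": 0.99},
    "graphics-style LOD": {"cost": 1e3, "fidelity": 0.9},
    "learned surrogate":  {"cost": 1e1, "fidelity": 0.7},
}

def cheapest_adequate(target_fidelity: float) -> str:
    """Return the lowest-cost method that still meets the fidelity target."""
    candidates = [(m["cost"], name) for name, m in METHODS.items()
                  if m["fidelity"] >= target_fidelity]
    return min(candidates)[1]

print(cheapest_adequate(0.95))  # finite elements
print(cheapest_adequate(0.6))   # learned surrogate
```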
At this point people usually start talking about chaos and turbulence. Simulating chaos to high accuracy is always a waste of computation. In theory a butterfly in Moscow could alter the weather and change history, but the historical utility divided by the computational cost makes it astronomically less efficient than simulating, say, the interiors of human skulls at high fidelity.
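Here is the utility-per-cost argument in back-of-the-envelope form, with numbers that are purely illustrative rather than actual estimates:

```python
# Invented figures, just to show the shape of the comparison.
butterfly_scale_weather = {"utility": 1e-6, "cost": 1e30}  # resolving global turbulence down to butterflies
human_skulls            = {"utility": 1.0,  "cost": 1e25}  # high-fidelity simulation of the brains driving history

def efficiency(option: dict) -> float:
    """Historical utility gained per unit of computation spent."""
    return option["utility"] / option["cost"]

print(efficiency(human_skulls) / efficiency(butterfly_scale_weather))  # ~1e11x more efficient
```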
Chaos also implies that one run through your simulated history gives you only one sample of the space, but with Monte Carlo simulation over a number of runs, the random events tend to average out and you get a good picture of the system's future evolution.
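A minimal Monte Carlo sketch of that point: each rollout of a toy "history" is deterministic drift plus random shocks, so any single run can land far from the trend, but the average over many runs recovers it. The dynamics are invented purely for the example.

```python
import random

def run_history(n_steps: int, drift: float = 0.01, noise: float = 1.0) -> float:
    """One rollout: deterministic drift plus random shocks (the 'weather')."""
    x = 0.0
    for _ in range(n_steps):
        x += drift + random.gauss(0.0, noise)
    return x

# A single run wanders by roughly +/-32 around the trend, but the mean over
# many runs converges to the deterministic drift of ~10.
runs = [run_history(1_000) for _ in range(10_000)]
print(sum(runs) / len(runs))
```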
Random events such as the weather have had great historical impact, but most of the big ones lie in the distant past, and as we enter the modern era their effect is increasingly damped.
If you were to rank the big ‘weather’ random events, the list would start with K-T-level impacts (some of the single most important events in history), then progress down through glaciation events and volcanoes, with earthquakes and hurricanes lower on the list.