Why do we write books about Roman History and debate what really happened? Why do we make television shows or movies out of it?
Consider this just the evolution of what we already do today, for much the same reasons, but amplified by astronomically greater intelligence/computation generating thought/simulation.
This is the kind of naive forward extrapolation that gets you sci fi dystopias. Most of the things we do today don’t bear extrapolating to logical extremes, certainly not this.
Calculations of the likely outcomes of certain events are the mental equivalents of thermostat operations—they are the types of things you do and think about when you lack hyperintelligence.
Eventually you want a nice canonical history. Not a book, not a movie, but the complete data set and recreation.
No I don’t. I think you should try asking more people if this is actually something they would want, with knowledge of the things they could be doing instead, rather than assuming it’s a logical extrapolation of things that they do want. If I could do that, it wouldn’t even make the bottom of the list of things I’d want to do with that power.
Put another way, there is a limit where you can know absolutely every conceivable thing there is to know about your history, and this necessitates lots of massively super-detailed thinking about it—aka simulation.
The simulation doesn’t teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it’s different in every way that we do not enforce similarity on it. The simulation doesn’t contribute to knowing everything you could possibly know about your history, that’s a prerequisite, if you want the simulation to be faithful.
The simulation doesn’t teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it’s different in every way that we do not enforce similarity on it. The simulation doesn’t contribute to knowing everything you could possibly know about your history, that’s a prerequisite, if you want the simulation to be faithful.
This would be true if we were equally ignorant about all of history. However, there are some facts regarding history we can be quite confident about, particularly recent history and the present. You can then check possible hypotheses about history (starting from what is hopefully an excellent estimation of starting conditions) against those facts you do have. Given how contingent the genetic make-up of a human is on the timing of their conception, and how strongly genetics influences who we are, it seems plausible that a physical simulation of this part of the universe could radically narrow the space of possibilities, given enough computing power. Of course parts of the simulation might remain under-determined, but it seems implausible that a simulation would tell us nothing new about history, as a simulation should be more proficient than humans at assessing the necessary consequences and antecedents of any known event.
Radically narrow, but given just how vast the option space is, it takes a whole lot more than radically narrowing before you can winnow it down to a manageable set of possibilities.
This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms. In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space (that’s not to say that even an appreciable fraction of the matter on Earth is free to vary through all possible states, but the numbers are mind-boggling enough even if we’re only dealing with a few kilograms). Every unknown configuration is a potential confounding factor which could lead to cascading changes. The space is so phenomenally vast that you could narrow it by a billion orders of magnitude, and it would still occupy approximately the same place on the scale of sheer incomprehensibility. You would have to actively and continuously enforce similarity on the simulation to keep it from diverging more and more widely from the original.
This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms.
Said reference post by AndrewHickey starts with a ridiculous assumption:
Assume, for a start, that all the information in your brain is necessary to resurrect you, down to the quantum level.
This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can’t possibly be true—because the vast, vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. Thus there is no single quantum state that corresponds uniquely to a mind; rather, there is a vast configuration space.
You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?
There is a single minimal representation of a computer—it reduces exactly down to its circuit diagram and the current values it holds in its memory/storage.
If you don’t buy into the idea that a human mind ultimately reduces down to some functionally equivalent computer program, then of course the entire Simulation Argument won’t follow.
In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space.
Who cares?
There could be infinite detail in the universe—we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn’t matter in the slightest.
You only need as much detail in the simulation as... you want detail in the simulation.
Some details at certain spatial scales are more important than others based on their causal leverage—such as the bit values in computers, or the synaptic weights in brains.
A simulation at the human-level scale would only need enough detail to simulate conscious humans, which will probably mean simulating down to rough approximations of synaptic-net equivalents. I doubt you would even simulate every cell in the body, for example—unless that itself was what you were really interested in.
There is another significant mistake in the typical feasibility critique of simulationism: assuming that your current knowledge of algorithmic simulation is the absolute state of the art from now to eternity, the final word, and that superintelligences won’t improve on it in the slightest.
As a starting example, AndrewHickey and you both appear to be assuming that the simulation must maintain full simulation fidelity across the entire spatio-temporal field. This is a primitive algorithm. A better approach is to adaptively subdivide space-time and simulate at multiple scales at varying fidelity using importance sampling, for example.
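To make that concrete, here is a minimal Python sketch of one such scheme (not anything from the thread, just an illustration): cells whose local gradient, a stand-in importance score, is high get integrated with many fine sub-steps, while quiet cells get one cheap coarse update, so the simulation effort concentrates where the causal leverage is.

```python
# Toy adaptive-fidelity scheme: each cell is stepped either with a cheap
# coarse update or, if an importance score is high, with many fine sub-steps.
# We count the work done to show effort concentrating where it matters.
# The dynamics, importance score, and thresholds are illustrative assumptions.

FINE_SUBSTEPS = 16
work = {"coarse": 0, "fine": 0}

def importance(values, i):
    # Proxy for causal leverage: local gradient magnitude around cell i.
    left = values[max(i - 1, 0)]
    right = values[min(i + 1, len(values) - 1)]
    return abs(values[i] - left) + abs(values[i] - right)

def step(values, threshold=0.05, rate=0.4):
    new = values[:]
    for i in range(len(values)):
        target = (values[max(i - 1, 0)] + values[min(i + 1, len(values) - 1)]) / 2
        if importance(values, i) > threshold:
            # High-leverage region: integrate with many small sub-steps.
            work["fine"] += FINE_SUBSTEPS
            x = values[i]
            for _ in range(FINE_SUBSTEPS):
                x += (rate / FINE_SUBSTEPS) * (target - x)
            new[i] = x
        else:
            # Quiet region: one cheap coarse update is good enough.
            work["coarse"] += 1
            new[i] = values[i] + rate * (target - values[i])
    return new

if __name__ == "__main__":
    values = [1.0 if i == 25 else 0.0 for i in range(50)]
    for _ in range(100):
        values = step(values)
    print("fine sub-steps:", work["fine"], " coarse steps:", work["coarse"])
    # Fine effort stays concentrated around the spreading disturbance, while
    # most of the domain only ever receives coarse updates.
```

The only point of the sketch is the concentration of effort; a real scheme along these lines would subdivide space and time recursively rather than just switching step sizes per cell.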
This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can’t possibly be true—because the vast, vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. Thus there is no single quantum state that corresponds uniquely to a mind; rather, there is a vast configuration space.
That assumption is not part of my argument. The states of objects outside the people you’re simulating ultimately affect everything else once the changes propagate far enough through the simulation.
You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?
Underestimating the importance of glial cells could get you a pretty bad model of the brain. But my point isn’t simply about the thoughts you’d have to simulate; remove one glial cell from a person’s brain, and the gravitational effects mean that if they throw a superball really hard, after enough bounces it’ll end up somewhere entirely different than it would have (calculating the trajectories of superballs is one of the best ways to appreciate the propagation of small changes.)
Who cares?
There could be infinite detail in the universe—we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn’t matter in the slightest.
You only need as much detail in the simulation as... you want detail in the simulation.
Why would you want as much detail in the simulation as we observe in our reality?
I wonder what kind of cascade effect there actually is; perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications. Perhaps that could be done to initially narrow the answer space, and then the precise simulation could be sped up by not having to simulate those answers that contradict the simplified model?
I wonder how a hidden variable theory of quantum mechanics being true would affect the prospects for simulation, assuming a superintelligence could leverage that fact somehow (which is admittedly unlikely).
I wonder what kind of cascade effect there actually is; perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications.
Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.
Simulating down to the quantum level is overkill to the thousandth degree in most cases, unless you have some causal amplifier—such as a human observing phenomena down at the quantum scale. In that situation the quantum-scale events have a massive impact, so the simulation subdivides space-time down to that scale in those regions. Similar techniques are already employed today in state-of-the-art simulation in computer graphics.
There will always be divergences in chaotic systems, but this isn’t important.
You will never get some exact recreation of our actual history, that’s impossible—but you can converge on a set of close traces through the Everett branches. It may even be possible to force them to ‘connect’ to an approximation of our current branch (although this may take some manual patching).
Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.
Not with great accuracy. And that’s only a week; making accurate predictions gets exponentially more difficult the further into the future you go. And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather. The weather is just one of the chaos factors in human society.
Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.
And that’s only a week; making accurate predictions gets exponentially more difficult the further into the future you go.
I’m not sure about this in general—why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?
And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather.
Yes and no. Human society is largely determined by stuff going on in human brains. Brains are complex systems, but like computers and other circuits they can be simulated extremely accurately at a particular level of detail where they exhibit scale separation, while being essentially chaotic when simulated at coarser levels of detail.
Turbulence in fluid systems, important in weather, has no scale separation level and is chaotic all the way down.
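A toy Python illustration of what scale separation buys you (the full adder, the noise level, and the logic threshold are made-up stand-ins, not anything from the thread): as long as analog perturbations stay inside the logic noise margin, the gate-level simulation reproduces the circuit's behavior exactly, which is precisely the property turbulence lacks.

```python
import random

# Toy illustration of scale separation: a full adder simulated at the logic
# level gives identical results whether or not we model small analog
# perturbations, because sub-threshold noise never propagates upward.
# Noise level and logic threshold are illustrative assumptions.

THRESHOLD = 0.5      # normalized volts; logic high if voltage > threshold
NOISE_STD = 0.05     # analog perturbation, well inside the noise margin

def noisy(bit):
    """Analog voltage for a logical bit, with a small random perturbation."""
    return float(bit) + random.gauss(0.0, NOISE_STD)

def to_bit(voltage):
    """Coarse-grain an analog voltage back to a logic level."""
    return 1 if voltage > THRESHOLD else 0

def full_adder(a, b, carry_in):
    """Exact gate-level (scale-separated) model."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def full_adder_analog(a, b, carry_in):
    """Same circuit with analog noise injected at every input."""
    a, b, c = to_bit(noisy(a)), to_bit(noisy(b)), to_bit(noisy(carry_in))
    return full_adder(a, b, c)

if __name__ == "__main__":
    random.seed(0)
    mismatches = sum(
        full_adder(a, b, c) != full_adder_analog(a, b, c)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
        for _ in range(1000)
    )
    print("mismatches out of 8000 trials:", mismatches)  # expected: 0
```

Push NOISE_STD up past the threshold and mismatches appear: the logic level stops screening off the analog detail, which is the situation a turbulent fluid is in at every scale.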
I’m not sure about this in general—why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?
Basic principle of chaos theory. Small-scale disturbances propagate up into large-scale ones, while tiny-scale disturbances propagate to the small scale, and then to the large scale. If you try to calculate the trajectory of a superball, you can project it for a couple of bounces just by modeling mass, elasticity and wind resistance. A couple more? You need detailed information on air turbulence. One article, which I am having a hard time locating, calculated that somewhere in the teens of bounces you would need to integrate the positions of particles across the observable universe due to their gravitational effects.
A kid throws a superball. Bounce, bounce, bounce, bounce, bounce, bounce, bounce, bounce, crash. It bounces out into the street, and they’re hit by a car while chasing after it. In a matter of seconds, deviations on a particulate level have propagated to the societal level. The lives of everyone the kid would have interacted with will be affected, and by extension, the lives of everyone that those people would have interacted with, and so on. The course of history will be dramatically different than if you had calculated those slight turbulence effects that would have sent the ball off in an entirely different direction. You can expect many history-altering deviations like this to occur every minute.
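A minimal numerical illustration of that kind of sensitivity, using the logistic map in its chaotic regime as a stand-in for the bouncing ball (the map and the size of the perturbation are just convenient choices):

```python
# Sensitive dependence on initial conditions, using the logistic map in its
# chaotic regime as a stand-in for the superball: a perturbation of 1e-12
# grows until the two trajectories are effectively unrelated.
# The map and the perturbation size are illustrative choices only.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # identical except for one "removed glial cell"
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}  |x - y| = {abs(x - y):.3e}")
# The gap grows on average by roughly a factor of two per step, reaching
# order 1 (complete disagreement) after about 40 steps.
```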
I’m aware of the error propagation issues, and in some phenomena they can be magnified up the spatial scales. A roll of the dice in Vegas is probably a better example of that than your ball.
I should point out though that this is all somewhat tangential to our original discussion.
But nonetheless...
None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.
Intuitively it seems to make sense—as each particle’s state is dependent on a few other particles it interacts with at each timestep, the information dependency fans out exponentially over time. However, intuitions in these situations can often be wrong, and this is nothing like a formal proof.
Getting back to the original discussion, none of this is especially relevant to my main points.
Many of the important questions we want to answer are probabilistic—how unlikely was that event? For example, to truly understand the likelihood of life elsewhere in the galaxy and get a good model of galactic development, we will want to understand the likelihood of pivotal events in earth’s history—such as the evolution of hominids or the appearance of early life itself.
You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.
In some cases, especially in initial simulations, one can focus on the branches that match most closely to known history, and even intervene or at least prune to enforce this. But eventually you want to explore the entire space.
You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.
While this is a good way to get such data, it isn’t the only way. If we expand enough to look at a large number of planets in the galaxy, we should arrive at decent estimates simply based on empirical data.
Certainly expanding our observational bubble and looking at other stars will give us valuable information. Simulation is a way of expanding on that.
However, it’s questionable when or if we will ever make it out to the stars.
Lightyears are vast distances for humans, but the years they take to traverse will feel even vaster to posthuman civilizations that think thousands or millions of times faster than us.
It could be that the vast cost of travelling out into space is never worthwhile and those resources are always best used towards developing more local intelligence. John Smart makes a pretty good case for inward expansion always trumping outward expansion.
If you do probabilistic estimates based on large numbers of simulations though, you can cut down on the fidelity of the simulations dramatically. I know that this is something you’re arguing for, but really, there’s no good reason to make the simulations as detailed as the universe we observe.
To take forest succession modeling programs (something I have more experience with than most types of computer modeling) as an example: there are some ecological mechanisms that, if left out, will completely change the trends of the simulation, and some that won’t. You can leave the ones that don’t out entirely, because your uncertainty margins stay pretty much the same whether you integrate them or not. If you created a computer simulation of the forest with such fidelity that it contained animals with awareness, you’d use up a phenomenal amount of computing power, but it wouldn’t do you any good as far as accuracy is concerned.
If you care about the lives of the people in the past for their own sake, and are capable of creating high fidelity recreations of their personality from the data available to you, why not upload them into the present so you can interact with them? That, if possible, is something that people actually seem to want to do.
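Returning to the forest-modeling point above, here is a rough sketch of that leave-it-out test, with an invented toy stand model rather than a real succession program: compare the shift in the simulated trend caused by dropping a mechanism against the run-to-run spread you already have, and only drop mechanisms whose shift is small relative to that spread.

```python
import random
import statistics

# Toy version of the leave-it-out test: drop a mechanism only if doing so
# shifts the simulated trend by less than the run-to-run spread you already
# have. The growth model, the two "mechanisms", and every parameter here are
# invented for illustration.

def simulate_stand(years=100, browsing=True, fire=True, seed=None):
    rng = random.Random(seed)
    biomass = 10.0
    for _ in range(years):
        growth = 0.05 * biomass * (1.0 - biomass / 200.0)  # logistic growth
        if browsing:
            growth -= 0.0005 * biomass                     # small-effect mechanism
        biomass += growth + rng.gauss(0.0, 2.0)            # weather noise
        if fire and rng.random() < 0.05:                   # large-effect mechanism
            biomass *= 0.3
        biomass = max(biomass, 1.0)
    return biomass

def trend(**mechanisms):
    runs = [simulate_stand(seed=i, **mechanisms) for i in range(200)]
    return statistics.mean(runs), statistics.stdev(runs)

if __name__ == "__main__":
    full_mean, spread = trend(browsing=True, fire=True)
    no_browse_mean, _ = trend(browsing=False, fire=True)
    no_fire_mean, _ = trend(browsing=True, fire=False)
    print(f"full model:       {full_mean:6.1f}  (run-to-run spread {spread:.1f})")
    print(f"without browsing: {no_browse_mean:6.1f}")
    print(f"without fire:     {no_fire_mean:6.1f}")
    # Decision rule: drop a mechanism only if its shift in the trend is small
    # relative to the run-to-run spread.
```

The decision rule, not the particular numbers, is the point of the sketch.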
But nonetheless...
None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.
That’s true, they don’t constitute a formal proof. Maybe a proof already exists and I’m not aware of it, or maybe not, but regardless, given the information available to us in this conversation, right now, the weight of evidence is clearly on the side of such a simulation not being possible over it being possible. You don’t get high probability future predictions by imagining ways in which our understanding of chaos theory maybe gets overhauled.
What about genetic mutations from stray cosmic rays? Would evolution have occurred the same way? Would my genetic code be one allele different?
I feel like the quantum level would matter a lot more the earlier you started your simulation.
I’m worried about how motivated my cognition is. I really want this to be possible for very personal reasons, so I am liable to grasp tightly to any plausible argument for close-enough simulation of dead people.
Well, if you started a sim back a billion years ago, then yes, I expect you’d get a very different earth.
How different is an interesting open problem. Even if hominid-like creatures develop say 10% of the time after a billion years (reasonable), all of history would likely be quite different each time.
For a sim built for the purpose of resurrection, you’d want to start back just a little earlier—perhaps just before the generation in question was born.
Getting the DNA right might actually be the easiest sub-problem. Simulating biological development may be tougher than simulating a mind, although I suspect it would get easier as development slows.
Hopefully we don’t have to simulate all of the 10^13 cells in a typical human body at full detail, let alone the 10^14 symbiotes in the human gut.
It’s still an open question whether it’s even possible in principle to create a conscious mind from scratch. Currently, complex neural net systems must be created through training—there is no shortcut to just fill in the data (assuming you don’t already have it from a scan or something, which of course is inapplicable in this case).
So even a posthuman god may only have the ability to create conscious infants. If that’s the case, you’d get the DNA right and then have to carefully simulate the entire history of inputs to create the right mind.
You’d probably have to start with some actors (played by AIs or posthumans) to kickstart the thing. If that’s the general approach, then you could also force a lot of stuff—intervene continuously to keep the sim events as close to known history as possible (perhaps actors play important historical roles even while it’s running? that’s an open question). Active intervention would of course make it much more feasible to get minds closer to the ones you’d want.
Would they be the same? I think that will be an open philosophical issue for a while, but I suspect that you could create minds this way that are close enough.
This is interesting enough that it could make a nice follow up paper to the current SA/simulism stuff—or perhaps somebody has already written about it, not sure.
I’m worried about how motivated my cognition is. I really want this to be possible for very personal reasons, so I am liable to grasp tightly to any plausible argument for close-enough simulation of dead people.
It’s good you are conscious of that which you wish to be true.
If uploading is possible, then this too should be possible as they rely on the same fundamental assumption.
If there is a computer program data set that recreates (is equivalent to) the consciousness of a particular person, then such a data set also exists for all possible people, including all dead people.
Thus the problem boils down to finding a particular data set (or range) out of many. This may be a vast computational problem for a mind of 10^15 bits, but it should be at least possible in principle.
This is the kind of naive forward extrapolation that gets you sci fi dystopias. Most of the things we do today don’t bear extrapolating to logical extremes, certainly not this.
Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.
There is a natural evolutionary progression:
dreams/daydreams/visualizations → oral stories/mythologies → written stories/plays/art → movies/television → CG/virtual reality/games → large-scale simulations
It isn’t ‘extrapolating to logical extremes’, it is future prediction based on extrapolation of system evolution.
The simulation doesn’t teach us more than we already know about history.
Of course it does. What is our current knowledge about history? It consists of some rough beliefs stored in the low precision analog synapses of our neural networks and a bunch of word-symbols equivalent to the rough beliefs.
With enough simulation we could get concise probability estimates or samples of the full configuration of particles on earth every second for the last billion years—all stored in precise digital transistors, for example.
What we already know about history sets the upper bound on how similar we can make it [the simulation].
This is true only for some initial simulation, but each successive simulation refines knowledge, expands the belief network, and improves the next simulation. You recurse.
The simulation doesn’t contribute to knowing everything you could possibly know about your history, that’s a prerequisite, if you want the simulation to be faithful.
Not at all. Given an estimate of the state of a system at time T and the rules of the system’s time evolution (physics), simulation can derive values for all subsequent time steps. The generated data is then analyzed, and it confirms or adjusts theories. You can then iteratively refine.
For a quick primitive example, perhaps future posthumans want to understand in more detail why the Roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand-in actors for an initial generation of parents).
Running this model forward may reveal that lead poisoning had little effect, that previous models of some Roman military formations don’t actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.
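A minimal sketch of that refine-by-simulation loop (the toy dynamics, the sparse noisy observations, and the coarse-to-fine search are all illustrative assumptions): guess the state at the starting time, run the known dynamics forward, score the disagreement with the few observations you do have, and zoom the search in on better guesses.

```python
# Sketch of the refine-by-simulation loop described above: guess the system's
# state at time T, run the (known) dynamics forward, compare against the few
# observations you do have, and tighten the estimate. The dynamics, the
# observations, and the coarse-to-fine search are illustrative assumptions.

def evolve(x0, steps, rate=0.9):
    """Known 'physics': simple exponential relaxation toward zero."""
    states = [x0]
    for _ in range(steps):
        states.append(states[-1] * rate)
    return states

# Sparse, noisy observations of the true history (true x0 = 7.3, unknown to us).
TRUE_HISTORY = evolve(7.3, 50)
OBSERVATIONS = {10: TRUE_HISTORY[10] + 0.02, 35: TRUE_HISTORY[35] - 0.01}

def mismatch(x0):
    """How badly a candidate initial state disagrees with what we observed."""
    sim = evolve(x0, 50)
    return sum((sim[t] - obs) ** 2 for t, obs in OBSERVATIONS.items())

def refine(lo=0.0, hi=20.0, rounds=6, candidates=11):
    """Coarse-to-fine search: each round zooms in around the best candidate."""
    for _ in range(rounds):
        grid = [lo + (hi - lo) * i / (candidates - 1) for i in range(candidates)]
        best = min(grid, key=mismatch)
        span = (hi - lo) / (candidates - 1)
        lo, hi = best - span, best + span
    return best

if __name__ == "__main__":
    estimate = refine()
    print(f"best-fitting initial state: {estimate:.2f} (true value was 7.3)")
```

With a chaotic system the same loop still constrains the estimate; it just needs far more observations and far finer search, which is where the rest of this disagreement comes in.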
Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.
As wedrifid says, in the light of hindsight bias. Instead of looking at the past and seeing how reliably it seems to lead to the present, try looking at people who actually tried to predict the future. “Future prediction based on extrapolation of system evolution” has reliably failed to make predictions about the direction of human society that were both accurate and meaningful.
For a quick primitive example, perhaps future posthumans want to understand in more detail why the Roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand-in actors for an initial generation of parents).
Running this model forward may reveal that lead poisoning had little effect, that previous models of some Roman military formations don’t actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.
Or you could very easily find them removing the lead from their pipes and wine, and changing their military formations. If you don’t already know what their crop harvest in 32BC was like, you can practically guarantee that it won’t be the same in the simulation. This is exactly the kind of use that, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn’t need to.
If you don’t already know what their crop harvest in 32BC was like, you can practically guarantee that it won’t be the same in the simulation. This is exactly the kind of use that, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn’t need to.
I’ll just reiterate my response then:
Any information about a physical system at time T reveals information about that system at all other times—it places constraints on its configuration. Physics is a set of functions that describe the exact relations between system states across time steps, i.e. the temporal evolution of the system.
We developed physics in order to simulate physical systems and predict and understand their behavior.
This seems then to be a matter of details—how much simulation is required to produce how much knowledge from how much initial information about the system.
For example, with infinite computing power I could iterate through all simulations of earth’s history that are consistent with current observational knowledge.
This algorithm computes the probabilities of every fact about the system—the probability of a good crop harvest in 32BC in Egypt is just the fraction of the simulated multiverse for which this property is true.
This algorithm is in fact equivalent to the search procedure in the AIXI universal intelligence algorithm.
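Here is a toy, finite version of that procedure (the stochastic ‘world’, the present-day observation, and the queried historical fact are all made-up stand-ins): sample candidate histories, discard the ones inconsistent with what is observed today, and read off the probability of the past fact as the fraction of surviving histories in which it holds.

```python
import random

# Toy finite version of the procedure above: sample candidate histories,
# keep only those consistent with present-day observations, and estimate the
# probability of a past "fact" as the fraction of surviving histories in
# which it holds. The dynamics, observation, and fact are made-up stand-ins.

def run_history(initial_conditions, years=100):
    """A made-up 'world': the state drifts under shocks that are fully
    determined by the initial conditions (deterministic physics)."""
    rng = random.Random(initial_conditions)
    state = initial_conditions
    trace = []
    for _ in range(years):
        state += rng.uniform(-1.0, 1.0)
        trace.append(state)
    return trace

def consistent_with_observations(trace):
    # What we can observe about the "present": the final state is high.
    return trace[-1] > 5.0

def fact_holds(trace):
    # The historical question: was the state already high back at "year 30"?
    return trace[30] > 2.0

if __name__ == "__main__":
    random.seed(1)
    candidates = [run_history(random.uniform(0.0, 10.0)) for _ in range(20000)]
    survivors = [t for t in candidates if consistent_with_observations(t)]
    p = sum(fact_holds(t) for t in survivors) / len(survivors)
    print(f"{len(survivors)} of {len(candidates)} histories match observations")
    print(f"estimated P(fact | observations) = {p:.2f}")
```

This is just rejection sampling over initial conditions; the reported fraction is the same "fraction of the simulated multiverse" estimate described above, at toy scale.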
Why would you want as much detail in the simulation as we observe in our reality?
Good point. I’m reconsidering...
How different is an interesting open problem. Even if hominid-like creatures develop say 10% of the time after a billion years (reasonable), all of history would likely be quite different each time.
How on earth can we know that 10% is reasonable?
The “even if” and “say” should indicate the intent—it wasn’t even a guess, just an example used as an upper bound.
I’m not convinced the evolution of hominids is a black swan, but it’s not an issue I’ve researched much.
The (reasonable) assertion was what struck me.
Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.
With the help of hindsight bias.