Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.
And that’s only a week; making accurate predictions gets exponentially more difficult the further into the future you go.
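To spell out the textbook reasoning behind that claim: in a chaotic system a small initial error grows roughly exponentially, so the usable forecast horizon improves only logarithmically as the data get better. A minimal sketch of that picture, with the symbols (initial error, Lyapunov exponent, error tolerance) used purely for illustration rather than taken from any particular weather model:

```latex
% Standard error-growth picture for a chaotic system (illustrative):
% an initial uncertainty \delta_0 grows at the largest Lyapunov exponent \lambda,
\[
  \delta(t) \approx \delta_0 \, e^{\lambda t},
\]
% so the forecast horizon before errors exceed a tolerance \Delta is only
\[
  T \approx \frac{1}{\lambda} \ln \frac{\Delta}{\delta_0},
\]
% i.e. each extra unit of lead time demands exponentially finer initial data.
```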
I’m not sure about this in general—why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?
And human society is much more chaotic than the weather: it contains far more opportunities for small changes to multiply into large ones.
Yes and no. Human society is largely determined by what goes on in human brains. Brains are complex systems, but like computers and other circuits they exhibit scale separation: they can be simulated extremely accurately at a particular level of detail, yet look essentially random and chaotic when simulated at coarser levels of detail.
Turbulence in fluid systems, which matters a great deal for weather, has no such scale-separation level and is chaotic all the way down.
I’m not sure about this in general—why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?
Basic principle of chaos theory. Small-scale disturbances propagate up to large scales, and still smaller disturbances propagate up to the small scale and then to the large. If you try to calculate the trajectory of a superball, you can project it for a couple of bounces just by modeling mass, elasticity, and wind resistance. A couple more? You need detailed information on air turbulence. One article, which I am having a hard time locating, calculated that somewhere in the teens of bounces you would need to integrate the positions of particles across the observable universe due to their gravitational effects.
A kid throws a superball. Bounce, bounce, bounce, bounce, bounce, bounce, bounce, bounce, crash. It bounces out into the street, and the kid is hit by a car while chasing after it. In a matter of seconds, deviations at the particulate level have propagated to the societal level. The lives of everyone the kid would have interacted with will be affected, and by extension, the lives of everyone those people would have interacted with, and so on. The course of history will be dramatically different than if you had calculated those slight turbulence effects that would have sent the ball off in an entirely different direction. You can expect many history-altering deviations like this to occur every minute.
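The sensitivity being described here is easy to reproduce in a toy system. Below is a minimal sketch, assuming nothing about superballs in particular: it uses the chaotic logistic map as a stand-in for the ball’s dynamics, and the perturbation size and step counts are arbitrary choices for illustration.

```python
# Toy illustration of sensitive dependence on initial conditions:
# two logistic-map trajectories that start 1e-12 apart.
# The logistic map is a stand-in for the superball's dynamics, not a model of it.

def logistic(x, r=4.0):
    """One step of the chaotic logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12   # nearly identical starting states
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")

# The gap grows by roughly a constant factor per step, so after a few dozen
# steps the two trajectories are effectively unrelated -- the discrete
# analogue of needing ever finer data for each extra bounce.
```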
I’m aware of the error propagation issues, and in some phenomena they can be magnified up spatial scales. A roll of the dice in Vegas is probably a better example of that than your ball.
I should point out, though, that this is all somewhat tangential to our original discussion.
But nonetheless...
None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.
Intuitively it seems to make sense: since each particle’s state depends on a few other particles it interacts with at each timestep, the information dependency fans out exponentially over time. However, intuitions in these situations can often be wrong, and this is nothing like a formal proof.
Getting back to the original discussion, none of this is especially relevant to my main points.
Many of the important questions we want to answer are probabilistic: how unlikely was that event? For example, to truly understand the likelihood of life elsewhere in the galaxy and get a good model of galactic development, we will want to understand the likelihood of pivotal events in Earth’s history, such as the evolution of hominids or the appearance of early life itself.
You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.
In some cases, especially in initial simulations, one can focus on the branches that match known history most closely, and even intervene, or at least prune, to enforce this. But eventually you want to explore the entire space.
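As a minimal sketch of what running many branches buys you statistically: the run_branch routine below is a hypothetical stand-in for one low-fidelity simulation branch, and the 3% event probability is invented for illustration, not an estimate of anything.

```python
import random
from math import sqrt

def run_branch(rng, p_event=0.03):
    # Placeholder for "simulate one branch and report whether the pivotal
    # event (say, early life appearing) happened"; not a real model.
    return rng.random() < p_event

rng = random.Random(0)
n = 10_000
hits = sum(run_branch(rng) for _ in range(n))

p_hat = hits / n
stderr = sqrt(p_hat * (1 - p_hat) / n)
print(f"estimated probability: {p_hat:.4f} +/- {stderr:.4f} (one standard error)")

# The standard error shrinks like 1/sqrt(n), which is why many cheap branches
# can substitute for a few expensive ones when the question is probabilistic.
```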
You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.
While this is a good way to get such data, it isn’t the only way. If we expand enough to look at a large number of planets in the galaxy, we should arrive at decent estimates simply based on empirical data.
Certainly expanding our observational bubble and looking at other stars will give us valuable information. Simulation is a way of expanding on that.
However, it’s questionable when, or if, we will ever make it out to the stars.
Lightyears are vast distances for humans, but the travel times they imply will be even vaster for posthuman civilizations that think thousands or millions of times faster than us.
It could be that the vast cost of travelling out into space is never worthwhile and those resources are always best used towards developing more local intelligence. John Smart makes a pretty good case for inward expansion always trumping outward expansion.
If you do probabilistic estimates based on large numbers of simulations, though, you can cut down on the fidelity of the simulations dramatically. I know that this is something you’re arguing for, but really, there’s no good reason to make the simulations as detailed as the universe we observe.
To take forest succession modeling programs (something I have more experience with than most types of computer modeling) as an example: there are some ecological mechanisms that, if left out, will completely change the trends of the simulation, and some that won’t. You can leave the ones that don’t matter out entirely, because your uncertainty margins stay pretty much the same whether you integrate them or not. If you created a computer simulation of the forest with such fidelity that it contained animals with awareness, you’d use up a phenomenal amount of computing power, but it wouldn’t do you any good as far as accuracy is concerned.
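A minimal sketch of that kind of sensitivity check, using an invented toy growth model rather than any real succession program, just to show the comparison you would run before deciding a mechanism can be dropped:

```python
import random
from statistics import mean, stdev

# Toy stand-in for a forest-succession run: logistic biomass growth with a
# dominant mechanism (year-to-year variation in growth rate) and a minor
# mechanism (a small extra annual loss). All numbers are illustrative.
def run(include_minor_mechanism, rng, years=200):
    biomass, capacity = 5.0, 100.0
    for _ in range(years):
        rate = rng.gauss(0.08, 0.02)              # dominant: variable growth rate
        biomass += rate * biomass * (1 - biomass / capacity)
        if include_minor_mechanism:
            biomass -= rng.uniform(0.0, 0.05)     # minor: tiny extra loss
        biomass = max(biomass, 0.1)
    return biomass

rng = random.Random(1)
with_minor = [run(True, rng) for _ in range(500)]
without_minor = [run(False, rng) for _ in range(500)]

for label, runs in [("with minor mechanism", with_minor),
                    ("without minor mechanism", without_minor)]:
    print(f"{label:<24} mean {mean(runs):6.1f}  spread {stdev(runs):5.2f}")

# If the trend and the spread barely change, the minor mechanism can be left
# out; if they shift, it belongs in the model.
```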
If you care about the lives of the people in the past for their own sake, and are capable of creating high fidelity recreations of their personality from the data available to you, why not upload them into the present so you can interact with them? That, if possible, is something that people actually seem to want to do.
But nonetheless...
None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.
That’s true, they don’t constitute a formal proof. Maybe a proof already exists and I’m not aware of it, or maybe not; but regardless, given the information available to us in this conversation right now, the weight of evidence is clearly on the side of such a simulation not being possible rather than being possible. You don’t get high-probability predictions about the future by imagining ways in which our understanding of chaos theory might get overhauled.