In the long-run CDT case, why the assumption that people in a sim can’t affect people in the far future? At the very least, if we’re in a sim, we can affect people in the far future of our sim; and probably indirectly in baseline too: if we come up with a really good idea in the sim, those running the sim may take notice of it and implement it outside the sim.
As for the figures: I have a few thoughts about f. Let us assume that the far future consists of one base world, which runs a number of simulations, which in turn run sub-simulations (and those run sub-sub-simulations, and so on). Let us assume that, at any given moment, each simulation’s internal clock is set to a randomly determined year. Let us further assume that our universe is fairly typical in terms of population.
The number of humans who have ever lived, up until 2011, has been estimated at 107 billion. This means that, if all simulations are constrained to run up until 2014 only, the fraction of people in simulations (at any given moment) who believe that they are alive in 2014 will be approximately 7⁄107, since roughly 7 billion people are alive in 2014 (the baseline will not significantly affect this figure if the number of simulations is large). If the simulations are permitted to run longer (and I see no reason why they wouldn’t be), then that figure will of course be lower, and possibly significantly lower.
I can therefore conclude that, in all probability, f < 7⁄107.
At the same time, Nf >> 1 means that f >> 1/N. Of course, since N can be arbitrarily large, this tells us little; but it does at least imply that f > 0.
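Spelling out the arithmetic, purely as a sanity check (the only inputs are the two population figures above, and the 7 billion is just the approximate 2014 world population):

```python
# Back-of-envelope check of the 7/107 figure (illustrative only).
# Assumptions: ~7 billion people alive in 2014, ~107 billion humans ever born,
# and sims whose clocks stop at 2014, so the fraction of sim-people who believe
# it is 2014 is at most (alive in 2014) / (ever born).
alive_2014 = 7e9
ever_born = 107e9
print(f"upper bound on f for 2014: {alive_2014 / ever_born:.3f}")  # ~0.065
```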
Simulating humans near the singularity may be more interesting than simulating hunter-gatherers, so it may be that the fraction of sims around now is more than 7⁄107.
One reason not to expect the sims to go into the far future is that any far future with high altruistic import will have high numbers of computations, which would be expensive to simulate. It’s cheaper to simulate a few billion humans who have only modest computing power. For the same reason, it’s not clear that we’d have lots of sims within sims within sims, because those would get really expensive—unless computing power is so trivially cheap in the basement that it doesn’t matter.
That said, you’re right there could be at least a reasonable future ahead of us in a sim, but I’m doubtful many sims run the whole length of galactic history—again, unless the basement is drowning in computing power that it doesn’t know what to do with.
Interesting point about coming up with a really good idea. But one would tend to think that the superintelligent AIs in the basement would be much better at that. Why would they bother creating dumb little humans who go on to create their own superintelligences in the sim when they could just use superintelligences in the basement? If the simulators are interested in cognitive/evolutionary diversity, maybe that could be a reason.
Simulating humans near the singularity may be more interesting than simulating hunter-gatherers, so it may be that the fraction of sims around now is more than 7⁄107.
Possibly, but every 2014 needs to have a history; we can find evidence in our universe that around 107 billion people have existed, and I’m assuming that we’re fairly typical so far as universes go.
...annnnnd I’ve just realised that there’s no reason why someone in the future couldn’t run a simulation up to (say) 1800, save that, and then run several simulations from that date forwards, each with little tweaks (a sort of a Monte Carlo approach to history).
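Just to spell out what I mean, here’s a toy sketch (everything in it is invented purely for illustration; it isn’t anyone’s real simulation code):

```python
import copy
import random

# Toy "Monte Carlo history": simulate once up to a checkpoint year, save that
# state, then branch many runs forward from the save, each with a small tweak.

def simulate_until(state, year):
    """Stand-in for 'run the world forward'; here it just records the year."""
    new_state = dict(state)
    new_state["year"] = year
    return new_state

def monte_carlo_histories(initial_state, checkpoint_year, end_year, branches):
    base = simulate_until(initial_state, checkpoint_year)  # run once up to, say, 1800
    runs = []
    for _ in range(branches):
        branch = copy.deepcopy(base)           # reuse the saved 1800 state
        branch["tweak"] = random.gauss(0, 1)   # a little random perturbation
        runs.append(simulate_until(branch, end_year))  # re-run 1800 onwards
    return runs

print(len(monte_carlo_histories({"population": 10**9}, 1800, 2014, branches=5)))  # 5
```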
One reason not to expect the sims to go into the far future is that any far future with high altruistic import will have high numbers of computations, which would be expensive to simulate. It’s cheaper to simulate a few billion humans who have only modest computing power.
I question the applicability of this assertion to our universe. Yes, a game like Sid Meier’s Civilization is a whole lot easier to simulate than (say) a crate of soil at the level of individual grains—because there’s a lot of detail being glossed over in Civilization. The game does not simulate every grain of soil, every drop of water.
Our universe—whether it’s baseline or a simulation—seems to be running right down to the atomic level. That is, if we’re being simulated, then every individual atom, every electron and proton, is being simulated. Simulating a grain of sand at that level of detail is quite a feat of computing—but simulating a grain-of-sand-sized computer would be no harder. In each case, it’s the individual atoms that are being simulated, and atoms follow the same laws whether in a grain of sand or in a CPU. (They have to, or we’d never have figured out how to build the CPU).
So I don’t think the computing power required to simulate our universe has changed at all as human population and our own computing power have increased.
For the same reason, it’s not clear that we’d have lots of sims within sims within sims, because those would get really expensive—unless computing power is so trivially cheap in the basement that it doesn’t matter.
Sub-sims just need to be computationally simpler by a few orders of magnitude than their parent sims. If we create a sim, then computing power in that universe will be fantastically expensive as compared to ours; if we are a sim, then computing power in our parent universe must be sufficient to run our universe (and it is therefore fantastically cheap as compared to our universe). I have no idea how to tell whether we’re in a top-end one-of-a-kind research lab computer, or the one-universe-up equivalent of a smartphone.
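To put some made-up numbers on that (purely illustrative):

```python
# Toy arithmetic with invented numbers: if each sub-sim costs ~1/1000 of its
# parent and each level spawns 10 sims, all the nested levels together add
# only about 1% to the cost of the top-level sim.
ratio = 1e-3          # assumed cost of a sub-sim relative to its parent
sims_per_level = 10   # assumed number of sims spawned at each level
levels = 5            # nesting depth to tally

total_relative_cost = sum((sims_per_level * ratio) ** depth for depth in range(1, levels + 1))
print(f"nested sims cost ~{total_relative_cost:.4f} of one top-level sim")  # ~0.0101
```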
That said, you’re right there could be at least a reasonable future ahead of us in a sim, but I’m doubtful many sims run the whole length of galactic history—again, unless the basement is drowning in computing power that it doesn’t know what to do with.
You have a good point. If we’re a sim, we could be terminated unexpectedly at any time. Presumably as soon as the conditions of the sim are fulfilled.
Of course, the fact that our sim (if we are a sim) is running at all implies that the baseline must have the computing power to run us; in comparison with which, everything that we could possibly do with computing power is so trivial that it hardly even counts as a drain on resources. Of course, that doesn’t mean that there aren’t equivalently computationally expensive things that they might want to do with our computing resources (like running a slightly different sim, perhaps)...
Interesting point about coming up with a really good idea. But one would tend to think that the superintelligent AIs in the basement would be much better at that. Why would they bother creating dumb little humans who go on to create their own superintelligences in the sim when they could just use superintelligences in the basement?
Maybe we’re the sim that the superintelligence is using to test its ideas before introducing them to the baseline? If our universe fulfills its criteria better than any other, then the superintelligence acts in such a way as to make baseline more like our universe. (Whatever those criteria are...)
there’s no reason why someone in the future couldn’t run a simulation up to (say) 1800, save that, and then run several simulations from that date forwards, each with little tweaks
Yep, exactly. That’s how you can get more than 7⁄107 of the people in 2014.
That is, if we’re being simulated, then every individual atom, every electron and proton, is being simulated.
Probably not, though. In Bostrom’s simulation-argument paper, he notes that you only need the environment to be accurate enough that observers think the sim is atomically precise. For instance, when they perform quantum experiments, you make those experiments come out right, but that doesn’t mean you actually have to simulate quantum mechanics everywhere. Because superficial sims would be vastly cheaper, we should expect vastly more of them, so we’d probably be in one of them.
Many present-day computer simulations capture high-level features of a system without delving into all the gory details. Probably most sims could get by with intermediate levels of detail for physics and even minds. (E.g., maybe you don’t need to simulate every neuron, just their higher-level aggregate behaviors, except when neuroscientists look at individual neurons.)
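Roughly what I have in mind, as a toy sketch (all the names here are my own invention, just to illustrate detail-on-demand):

```python
# Toy detail-on-demand simulation: objects get a cheap aggregate update by
# default, and a fine-grained update only while someone is actually probing them.

class SimObject:
    def __init__(self, name):
        self.name = name
        self.under_observation = False

    def step(self):
        if self.under_observation:
            self.update_fine_grained()   # expensive: particle/neuron level
        else:
            self.update_aggregate()      # cheap: bulk, statistical behaviour

    def update_aggregate(self):
        pass  # e.g. treat the hill as one lump with a few summary properties

    def update_fine_grained(self):
        pass  # e.g. consult the full physics model for this region

hill = SimObject("hill")
hill.step()                     # cheap path, most of the time
hill.under_observation = True   # a scientist sets up an experiment on the hill
hill.step()                     # expensive path, only while observed
```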
Of course, the fact that our sim (if we are a sim) is running at all implies that the baseline must have the computing power to run us; in comparison with which, everything that we could possibly do with computing power is so trivial
This is captured by the N term in my rough calculations above. If the basement has gobs of computing power, that means N is really big. But N cancels out from the final action-relevant ie/f expression.
Probably not, though. In Bostrom’s simulation-argument paper, he notes that you only need the environment to be accurate enough that observers think the sim is atomically precise.
Hmmm. It’s a fair argument, but I’m not sure how well it would work out in practice.
To clarify, I’m not saying that the sim couldn’t be run like that. My claim is, rather, that if we are in a sim being run with varying levels of accuracy as suggested, then we should be able to detect it.
Consider, for the moment, a hill. That hill consists of a very large number of electrons, protons and neutrons. Assume for the moment that the hill is not the focus of a scientific experiment. Then, it may be that the hill is being simulated in some computationally cheaper manner than simulating every individual particle.
There are two options. Either the computationally cheaper manner is, in every single possible way, indistinguishable from simulating every individual particle. In this case, there is no reason to use the more computationally expensive method when a scientist tries to run an experiment which includes the hill; all hills can use the computationally cheaper method.
The alternative is that there is some way, however slight or subtle, in which the behaviour of the atoms in the hill differs from the behaviour of those same atoms when under scientific investigation. If so, the scientific laws deduced from experiments on the hill will, in some subtle way, not match the behaviour of hills in general. That means there must be a detectable difference; in effect, under certain circumstances hills are following a different set of physical laws, and sooner or later someone is going to notice that.

(Note that this can be avoided, to some degree, by saving the sim at regular intervals: if someone notices the difference between the approximation and a hill made out of properly simulated atoms, the simulation is reloaded from a save taken just before that difference appeared, and the approximation is updated to hide that detail. This can’t be done forever—after a few iterations the approximation’s computational complexity will begin to approach that of the atomic hill anyway, and you’ve now wasted a lot of cycles running sims whose only purpose was refining the approximation—but it could stave off discovery for a while, at least.)
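To make that save-and-reload trick concrete, here’s a toy sketch (entirely made up; the “glitch detection” is just a random draw standing in for observers noticing something):

```python
import copy
import random

# Toy save-and-reload loop: checkpoint regularly; if observers catch the cheap
# approximation out, reload the last checkpoint and refine the approximation.
random.seed(0)
world = {"year": 1800, "approximation_error": 0.5}   # invented starting state
checkpoint = copy.deepcopy(world)

for step in range(20):
    world["year"] += 1                               # advance the cheap simulation
    if step % 5 == 0:
        checkpoint = copy.deepcopy(world)            # regular save
    noticed = random.random() < world["approximation_error"] * 0.1  # did anyone spot a glitch?
    if noticed:
        world = copy.deepcopy(checkpoint)            # roll back to just before the glitch
        world["approximation_error"] *= 0.5          # refine the model so it's harder to catch next time
print(world)
```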
Having said that, though, another thought has occurred to me. There’s no guarantee (if we are in a sim) that the laws of physics are the same in our universe as they are in baseline; we may, in fact, have laws of physics specifically designed to be easier to compute. Consider, for example, the uncertainty principle. Now, I’m no quantum physicist, but as I understand it, the more precisely a particle’s position can be determined, the less precisely its momentum can be known—and, at the same time, the more precisely its momentum is known, the less precisely its position can be found.

Now, in terms of a simulation, the uncertainty principle means that the computer running the simulation need not keep track of the position and momentum of every particle at full precision. It may, instead, keep track of some single combined value (a real quantum physicist might be able to guess at what that value is, and how position and/or momentum can be derived from it). And given the number of atoms in the observable universe, the data storage saved by this is massive (and suggests that baseline’s storage space, while immense, is not infinite).
Of course, like any good simplification, the Uncertainty Principle is applied everywhere, whether a scientist is looking at the data or not.
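(For reference, and with the caveat that I’m no physicist, the textbook statement of the relation I’m describing is

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},$$

where ħ is the reduced Planck constant.)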
What is and isn’t simulated to a high degree of detail can be determined dynamically. If people decide they want to investigate a hill, some system watching the sim can notice that and send a signal that the sim needs to make the hill observations correspond with quantum/etc. physics. This shouldn’t be hard to do. For instance, if the theory predicts observation X +/- Y, you can generate some random numbers centered around X with std. dev. Y. Or you can shift or widen them somewhat if the accepted theory is wrong, and to account for model uncertainty.
If the scientists do lots of experiments that are connected in complex ways, such that consistency requires the results to stand in certain complex relationships, you’d need to get somewhat fancier about faking the measurements. Worst case, you can actually do a brute-force sim of that part of physics for the brief period required. And yeah, as you say, you can always revert to a previous state if you screw up and the scientists find something amiss, though you probably wouldn’t want to do that too often.
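The random-numbers-around-X step really is that simple; a minimal sketch (my own illustration, using NumPy):

```python
import numpy as np

# Minimal sketch of faking a measurement: if theory predicts X +/- Y for an
# observable, hand the observer samples drawn from a normal distribution
# around X, optionally shifted or widened for model uncertainty.
rng = np.random.default_rng(0)

def fake_measurements(predicted_x, sigma_y, n, bias=0.0, extra_spread=1.0):
    return rng.normal(loc=predicted_x + bias, scale=sigma_y * extra_spread, size=n)

print(fake_measurements(predicted_x=9.81, sigma_y=0.05, n=3))  # e.g. three fake g measurements
```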
There’s no guarantee (if we are in a sim) that the laws of physics are the same in our universe as they are in baseline; we may, in fact, have laws of physics specifically designed to be easier to compute.
Worst case, you can actually do a brute-force sim of that part of physics for the brief period required.
This is kind of where the trouble starts to come in. What happens when the scientist, instead of looking at hills in the present, turns to look at historical records of hills a hundred years in the past?
If he has actually found some complex interaction that the simplified model fails to cover, then he has a chance of finding evidence of living in a simulation; yes, the simulation can be rolled back a hundred years and then re-run from that point onwards, but is that really more computationally efficient than just running the full physics all the time? (Especially if you have to regularly keep going back to update the model).
This is where his fellow scientists call him a “crackpot” because he can’t replicate any of his experimental findings. ;)
More seriously, the sim could modify his observations to make him observe the right things. For instance, change the photons entering his eyes to be in line with what they should be, change the historical records a la 1984, etc. Or let him add an epicycle to his theory to account for the otherwise unexplainable results.
In practice, I doubt atomic-level effects are ever going to produce clearly observable changes outside of physics labs, so 99.99999% of the time the simulators wouldn’t have to worry about this as long as they simulated macroscopic objects to enough detail.
In practice, I doubt atomic-level effects are ever going to produce clearly observable changes outside of physics labs, so 99.99999% of the time the simulators wouldn’t have to worry about this as long as they simulated macroscopic objects to enough detail.
Well, yes, I’m not saying that this would make it easy to discover evidence that we are living in a simulation. It would simply make it possible to do so.