“(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation.”
Case 2 seems far, far more likely than case 3, and without a much more specific definition of “technological maturity”, I can’t make any statement on 1. Why does case 2 seem more likely than 3?
Energy. If we are to run an ancestral simulation that even remotely aims to correctly simulate phenomena as complex as weather, the simulation would probably need to be quite large in scale. We would definitely need to simulate the entire earth, moon, and sun, since the physical relationships between these three are deeply intertwined. Now, let’s focus on the sun for a moment, because it should provide all the evidence we need that such a simulation would be implausible.
The sun has a lot of energy, and simulating it would itself require a lot of energy. To simulate the sun exactly as we know it would take MORE energy than the sun contains, because the entire energy of the sun must be simulated, and as an engineering matter we must also account for energy lost to heat and other factors. So just to properly simulate the sun, we’d need to generate more energy than the sun has, which already seems very implausible, given that we can’t build a reactor larger than the sun here on earth. If we extend this argument to simulating the entire universe, it seems impossible that humans would ever have enough energy to simulate all the energy in the universe, so at best we could simulate only a part of the universe, or a smaller universe. This again follows from the premise that perfectly simulating something requires more energy than the thing being simulated.
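To state that premise explicitly (it is an assumption of my argument, not something I can derive): if E_sun is the total energy of the sun and some fraction ε > 0 of the simulating machine’s work is inevitably lost to heat and other overhead, then the machine needs E_host ≥ (1 + ε) × E_sun, which is strictly more energy than the sun itself has.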
“To simulate the sun exactly as we know it would take MORE energy than the sun contains, because the entire energy of the sun must be simulated, and as an engineering matter we must also account for energy lost to heat and other factors.”
I don’t understand this argument. If it’s appealing to a general principle that “simulating something with energy E requires energy at least E” then I don’t see any reason why that should be true. Why should it take twice as much energy to simulate a blue photon as a red photon, for instance?
(I am sympathetic to the overall pattern of your argument; I also do not expect civilizations like ours to run a lot of ancestral simulations and have never understood why they should be expected to, and I suspect that one reason why not is that the resources to do it well would be very large and even if it were possible there ought to be more useful things to do with those resources.)
Let me be a little more clear. Let’s assume that we’re in a simulation, and that the parent universe hosting ours is the top level (for whatever reason; this is just to avoid turtles all the way down). We know the energy of the sun can be harnessed: not only do plants use that energy to metabolize, but we can also capture it and convert it into electricity; energy can transfer.
Whatever machine we’re being simulated on must take these kinds of interactions into account and make them happen in some way. The machine must represent the sun somehow, perhaps as 0s and 1s. This encoding takes energy, and if we were to encode all the energy of the sun, the potential energy of the sun would have to exist somewhere in that machine. Even if the sun’s information is compressed, it would still have to be decompressed when used (or else we have a “lossy” sun, which is not good if you don’t want your simulations to figure out they’re in a simulation), and compressing/decompressing takes energy.
We know that even in a perfect simulation, the sun must have the same amount of energy as it does outside the simulation; otherwise it is not a perfect simulation. So if a blue photon has twice as much energy as a red photon, that fact is what causes twice as much energy to be encoded in a simulated blue photon. This energy encoding is necessary if and when the blue photon interacts with something.
Said another way: if, in our simulation, we encode the energy of each physical thing with the smallest number of bits that can describe it, and blue photons have twice as much energy as red photons, then it should take X bits to describe the energy of the red photon and 2*X bits to describe the blue photon.
As to the extra energy: first, as a practical (engineering) matter alone, it would take more energy to simulate a thing even after the encoding for that thing is done, because in our universe there are no perfect energy transfers; some energy is inevitably lost as heat, so extra energy is needed to overcome this loss. Second, if the simulation had any metadata, that would take extra information and hence extra energy.
I still don’t understand. (Less tactfully, I think what you’re saying is simply wrong; but I may be missing something.)
Suppose we have one simulated photon with 1000 units of energy and another with 2000 units of energy. Here is the binary representation of the number 1000: 1111101000. And here is the binary representation of the number 2000: 11111010000. The second number is longer—by one bit—and therefore may take a little more energy to do things with; but it’s only 10% bigger than the first number.
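A quick Python check of those bit lengths (purely illustrative; the energy units are arbitrary):

    # Compare how many bits it takes to write down 1000 vs. 2000 "units of energy".
    red_energy = 1000
    blue_energy = 2000  # twice the energy of the red photon

    print(bin(red_energy))          # 0b1111101000
    print(bin(blue_energy))         # 0b11111010000
    print(red_energy.bit_length())  # 10 bits
    print(blue_energy.bit_length()) # 11 bits: ~10% more, not 2x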
Now, if we imagine that eventually each of those photons gets turned into lots of little blobs carrying one unit of energy each, or in some other way has a bunch of interactions whose number is proportional to its energy, then indeed you end up with an amount of simulation effort proportional to the energy. But it’s not clear to me that that must be so. And if most interactions inside the simulation involve the exchange of a quantity of energy that’s larger than the amount of energy required to simulate one interaction—which seems kinda unlikely, which is one reason why I am sympathetic to your argument overall, but again I see no obvious way to rule it out—then even if simulation effort is proportional to energy the relevant constant of proportionality could be smaller than 1.
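To make that last point concrete, here is a toy calculation (Python; all the numbers are assumptions chosen for illustration, not measurements). If each simulated interaction exchanges more in-simulation energy than the host spends simulating it, the host’s total cost scales linearly with the simulated energy but stays below it.

    # Toy model: host cost = (cost to simulate one interaction) * (number of interactions),
    # where the number of interactions is proportional to the total simulated energy.
    total_simulated_energy = 1.0e6   # arbitrary units
    energy_per_interaction = 10.0    # in-simulation energy exchanged per interaction (assumed)
    host_cost_per_interaction = 1.0  # host-side energy to simulate one interaction (assumed)

    num_interactions = total_simulated_energy / energy_per_interaction
    host_cost = num_interactions * host_cost_per_interaction

    # Proportionality constant relating host cost to simulated energy:
    k = host_cost / total_simulated_energy
    print(k)          # 0.1 here: linear in energy, but with a constant < 1
    print(host_cost)  # 1.0e5 < 1.0e6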
I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:
http://rationalwiki.org/wiki/Simulated_reality#Feasibility
I think the feasibility argument described there better encapsulates what I’m trying to get at, and I’ll defer to that argument until I can state mine better (more mathematically).
“Yet the number of interactions required to make such a “perfect” simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way to solve this would be to assume “simulation” is an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer—and therefore it can “calculate” itself. But then, that doesn’t really say the same thing as “we exist in someone else’s simulation”.” (from the link).
This conclusion about the universe “simulating itself” is really what I’m trying to get at: it would take the same amount of energy to simulate the universe as there is energy in the universe, so a “self-simulating universe” is the most likely conclusion, which is of course just a base universe.
“Case 2 seems far, far more likely than case 3, and without a much more specific definition of ‘technological maturity’, I can’t make any statement on 1. Why does case 2 seem more likely than 3?”
“Technological maturity” as used in the first disjunct means “capable of running high-fidelity ancestor simulations”. So it sounds like you are arguing for the first disjunct (or something very close to it) rather than the second, since you are arguing that, due to energy constraints, a civilization like ours would be incapable of reaching technological maturity.
Yes, then I’m arguing that case 1 cannot happen. Although I find it a little tediously tautological (and even more so reductive) to define technological maturity solely as the technology that makes this disjunction make sense...