Let me be a little more clear. Let’s assume that we’re in a simulation, and that the parent universe hosting ours is the top level (for whatever reason, this is just to avoid turtles all the way down). We know that we can harness the energy of the sun, because not only do plants utilize that energy to metabolize, but we also can harness that energy and use it as electricity; energy can transfer.
The machine we’re being simulated on must account for these kinds of interactions and make them happen in some way. It must represent the sun somehow, perhaps as 0s and 1s. This encoding takes energy, and if we were to encode all the energy of the sun directly, the sun’s potential energy would have to exist somewhere in that machine. Even if the sun’s information were compressed, it would still have to be decompressed when used (or else we’d have a “lossy” sun, which is bad if you don’t want your simulations to figure out they’re in a simulation), and compressing/decompressing takes energy.
We know that even in a perfect simulation, the sun must have the same amount of energy as it does outside the simulation; otherwise it is not a perfect simulation. So if a blue photon has twice as much energy as a red photon, then twice as much energy must be encoded in the simulated blue photon. This energy encoding is necessary if and when the blue photon interacts with something.
Said another way: If, in our simulation, we encode the energy of physical things with the smallest number of bits possible to describe that thing, and blue photons have twice as much energy as red photons, then it should take X bits to describe the energy of the red photon and 2*X bits to describe the blue photon.
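One way to make the claimed scaling concrete: assume an encoding whose length is proportional to the energy carried, e.g. a tally of one bit per unit of energy. This is only a sketch of the assumption the argument seems to need; the helper name is hypothetical, not anything from the discussion.

```python
# Sketch: if energy is stored as a tally of discrete quanta (one bit
# per unit of energy), the bit count is proportional to the energy.
def unary_bits(energy_quanta: int) -> int:
    """Bits needed under a tally-style (one-bit-per-quantum) encoding."""
    return energy_quanta

red = unary_bits(1000)   # X bits for a red photon
blue = unary_bits(2000)  # the blue photon carries twice the energy
assert blue == 2 * red   # 2*X bits, as the argument requires
```

Note that this proportionality only holds for tally-style encodings; a positional (binary) encoding grows logarithmically instead, which is exactly where the disagreement lies.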
As to extra energy, as a practical (engineering) matter alone it would take more energy to simulate a thing even after the encoding for the thing is done: in our universe, there are no perfect energy transfers, some is inevitably lost as heat, so it would take extra energy to overcome this loss. Secondly, if the simulation had any meta-data, that would take extra information and hence extra energy.
I still don’t understand. (Less tactfully, I think what you’re saying is simply wrong; but I may be missing something.)
Suppose we have one simulated photon with 1000 units of energy and another with 2000 units. Here is the binary representation of the number 1000: 1111101000. And here is the binary representation of 2000: 11111010000. The second number is longer—by one bit—and therefore may take a little more energy to do things with; but its representation is only 10% longer than the first’s, even though the number itself is twice as big.
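The bit counts above are easy to check; a quick sketch (Python, chosen only for illustration) confirms that doubling the number adds exactly one bit to its binary representation:

```python
# Binary representations of the two energy values from the example.
red_energy, blue_energy = 1000, 2000

red_bits = bin(red_energy)[2:]    # strip the '0b' prefix
blue_bits = bin(blue_energy)[2:]

assert red_bits == "1111101000"    # 10 bits
assert blue_bits == "11111010000"  # 11 bits

# Doubling the energy lengthens the encoding by exactly one bit,
# because positional encodings grow logarithmically, not linearly.
assert len(blue_bits) == len(red_bits) + 1
```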
Now, if we imagine that each of those photons eventually gets turned into lots of little blobs carrying one unit of energy each, or in some other way has a number of interactions proportional to its energy, then indeed the simulation effort ends up proportional to the energy. But it’s not clear to me that that must be so. And suppose most interactions inside the simulation exchange more energy than it takes to simulate one interaction—which seems unlikely, and is one reason I’m sympathetic to your overall argument, but I see no obvious way to rule it out. Then even if simulation effort is proportional to energy, the relevant constant of proportionality could be smaller than 1.
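The contrast in that first scenario can be sketched directly: the number of one-unit interactions grows linearly with energy, while a positional encoding of the same energy grows only logarithmically. (Function names here are illustrative, not from the discussion.)

```python
def interactions_if_split_into_unit_blobs(energy: int) -> int:
    # If the photon's fate is to become `energy` one-unit blobs, the
    # simulator must handle a number of events proportional to energy.
    return energy

def bits_to_encode(energy: int) -> int:
    # A positional (binary) encoding needs only about log2(energy) bits.
    return energy.bit_length()

# Doubling the energy doubles the interaction count but adds one bit.
for e in (1000, 2000, 4000):
    print(e, interactions_if_split_into_unit_blobs(e), bits_to_encode(e))
```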
I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:
http://rationalwiki.org/wiki/Simulated_reality#Feasibility
I think the feasibility argument described here better encapsulates what I’m trying to get at, and I’ll defer to this argument until I can better (more mathematically) state mine.
“Yet the number of interactions required to make such a “perfect” simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way to solve this would be to assume “simulation” is an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer—and therefore it can “calculate” itself. But then, that doesn’t really say the same thing as “we exist in someone else’s simulation”.” (from the link).
This conclusion about the universe “simulating itself” is really what I’m trying to get at: it would take as much energy to simulate the universe as there is energy in the universe, so the most likely conclusion is a “self-simulating universe”, which is of course just a base universe.