Perhaps if you are an L-zombie and your decisions can influence the real world, then your decision doesn’t matter. Omega has selected you from the space of all possible minds, which is unimaginably large, and asked for one bit of information; since the space of all possible L-zombies has maximum entropy(?), there is a 50% chance that the bit will be 0 and a 50% chance that it will be 1. Or perhaps not.
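(To spell out the entropy step, under the assumption, and it is only an assumption, that “maximum entropy over one bit” really means a uniform distribution over the two possible answers:)

$$\max_{p \in [0,1]} \Big( -p\log_2 p - (1-p)\log_2(1-p) \Big) \ \text{is attained at}\ p = \tfrac{1}{2}, \ \text{giving}\ H = 1\ \text{bit,}$$

i.e. nothing in the distribution favors one answer over the other.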
More generally, I think the idea that if you are an L-zombie your decisions might affect the real world is a violation of the thought experiment. It’s presupposing something that doesn’t make sense internally. If logical zombies can affect the universe, in what sense can they be said to exist only logically?
The difference between an L-zombie, who only exists logically, and a physical simulation is that simulations have some measure, but that measure must be smaller than the measure of actual reality, because greater “resources of the universe” are required to provide the inputs to a simulation than are required to run an actually real person on the universe, which provides its own inputs.
If we ever developed a friendly AI, I’m pretty sure that I would want it to answer (A) to this question if it had reason to believe that simulations of its processing were physically impossible or technologically improbable, and (B) to this question if it had reason to believe that simulations were possible, and that there were people/aliens/gods likely to run these simulations and carry out the threats contained therein, which is exactly the sort of thing humans would also do. I’m not sure what we are actually supposed to take away from this thought experiment.
I’m not standing by anything said above with high certainty, as I find this very confusing to think about.
I completely agree that L-zombies, if the concept made sense, would not be able to impact our world. The simulation of the L-zombie used to make it change the world brings the L-zombie to life.
Can you explain to me this notion of “resources of the universe?” I would have thought that simulated brains would have the same measure as actual brains.
Thinking about it more carefully, I think the statement that they don’t have the same measure is broken (not even wrong, incoherent).
As far as resources go, I think the argument can be made in terms of either entropy or energy, but I will make it in terms of energy because it’s easier to understand. Suppose for the sake of argument that a perfect simulation of a brain requires the same amount of energy to run as a real brain (in actuality it would require more, because you either have to create a duplicate of the brain, which requires the same energy, or solve the field equations for the brain explicitly, which requires greater energy). In order to provide the simulated inputs to the brain, you have to spend energy to make sure that they are internally consistent, react properly to the outputs of the brain, and so on. So it’s impossible for a perfect simulation to require less energy than a real brain. If we are somewhere in a tower of simulated universes, either our simulation is being run imperfectly, or each universe in the tower must be smaller than the last, and probably dramatically smaller.
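(Sketching the inequality I have in mind, using the for-the-sake-of-argument assumption that simulating the brain itself costs what the real brain costs; here $E_{\text{inputs}} > 0$ is the extra cost of generating consistent inputs, $E_n$ is the energy budget available at depth $n$ of the tower, and $k > 1$ is an assumed per-level overhead factor. The symbols are mine, not anything standard:)

$$E_{\text{sim}} = E_{\text{brain}} + E_{\text{inputs}} > E_{\text{brain}} = E_{\text{real}}, \qquad E_{n+1} \le \frac{E_n}{k} \;\Rightarrow\; E_n \le \frac{E_0}{k^{n}}.$$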
Now, imagine that you have a solar system’s worth of energy, and that running a simulation incurs an overhead of three orders of magnitude to calculate the simulation and consistent inputs to it. Using that solar system’s energy, you can either support (warning: completely made-up numbers ahead) a trillion people accepting inputs from the universe at large, or a billion people running on efficient perfect simulations (perfect with respect to their brain activity, but obviously not perfect with respect to the universe, because that’s not possible without a greater-than-universe-sized source of energy).
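(A toy calculation with those made-up numbers, just to make the trade-off concrete; the 1000x overhead factor is pure assumption:)

```python
# Toy version of the made-up numbers above: a fixed energy budget either runs
# people "natively" (the universe supplies their inputs for free) or runs
# perfect simulations at an assumed 1000x overhead for computing the
# simulation plus internally consistent inputs.
ENERGY_PER_REAL_PERSON = 1        # arbitrary units
SIMULATION_OVERHEAD = 10**3       # assumed cost multiplier (completely made up)
budget = 10**12                   # enough for a trillion real people

real_people = budget // ENERGY_PER_REAL_PERSON
simulated_people = budget // (ENERGY_PER_REAL_PERSON * SIMULATION_OVERHEAD)

print(real_people, simulated_people)  # 1000000000000 1000000000
```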
Measure with respect to minds is related to probability, so it really relates an existing consciousness to its futures. If I step into a duplicator, the measure of each of my future selves is 1⁄2 with respect to my current self, because I have a 50% probability of ending up in either of those future bodies, but from the perspective of my duplicates post-duplication, their measure is once again one. Bearing this in mind, the measure of a simulation currently running is 1, from its own perspective, and the measure of any given individual is also 1 if they currently exist.
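(Put in terms of conditional probability, writing $m(A \mid B)$ for the measure of $A$ relative to reference point $B$; the notation is mine, nothing standard:)

$$m(\text{copy}_i \mid \text{pre-duplication self}) = \tfrac{1}{2} \ \text{for } i \in \{1,2\}, \qquad m(\text{copy}_i \mid \text{copy}_i\text{’s own post-duplication perspective}) = 1.$$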
> Thinking about it more carefully, I think the statement that they don’t have the same measure is broken (not even wrong, incoherent).
So you agree with me then, that they have the same measure?
As for resources: I really don’t think that the amount of energy and matter used to compute a mind has any bearing on the measure of that mind. What matters is whether or not the energy and matter instantiate the correct program; if they do, then the mind exists there, and if they don’t, then it doesn’t.
True, the quantity of minds matters (probably) for measure. So a mind with a trillion copies has greater measure than a mind with a billion copies. If we think that the relevant level of detail for implementation is exactly the fundamental level for our brains, then yes this would mean we should expect ourselves, other things equal, to be brains rather than simulations. But I’d say it is highly likely that the relevant level of detail for implementation is much higher—the neuron level, say—and thus simulations quite possibly outnumber brains by a great deal.
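(Under the usual count-the-copies style of reasoning, and writing $N_{\text{sim}}$ and $N_{\text{brains}}$ for the numbers of simulated and biological instantiations, the quantity at stake would be something like:)

$$P(\text{I am a simulation}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{brains}}},$$

which stays small if only fundamental-level implementations count, and can approach 1 if neuron-level implementations count and are cheap to run in bulk.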
Of course, either way, it comes down to more than just the resource requirements—it also comes down to e.g. how likely it is that a posthuman society would create large numbers of ancestor simulations.