Very interesting article. Most of my objections have been covered by previous commentators, except:
1a. Implicit in the usual definition of the word ‘simulation’ is approximation, or ‘data compression’ as Michaël Trazzi characterises it. It doesn’t seem fair to claim that a real system and its simulation are identical but for the absence of consciousness in the latter, if the latter is only an approximation. A weather forecasting algorithm, no matter how sophisticated and accurate, will never be as accurate as waiting to see what the real weather does, because some data have been discarded in its input and processing stages. Equally, a lossy simulation of a conscious human mind is unlikely to be conscious.
1b. What Bostrom, Tegmark and other proponents of substrate-independent consciousness (and therefore the possibility of qualia in simulations) have in mind is more like the emulators or ‘virtual machines’ of computer science: lossless software reproductions of specific (hardware and software) systems, running on arbitrary hardware. Given any input state, and the assumption that both emulator and emulated system are working correctly, the emulator will always return the same output state as the emulated system. In other words, emulation is bit-perfect simulation.
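The determinism of bit-perfect emulation can be made concrete with a toy sketch. Nothing here models a real machine: the "hardware" is just a hypothetical deterministic state-transition function, and the emulator is any implementation of the same function on other hardware.

```python
# Toy illustration of 'bit-perfect' emulation: a hypothetical machine
# modelled as a pure state-transition function. An emulator that
# implements the same transition rule exactly must return identical
# outputs for every input state, step after step.

def machine_step(state: int) -> int:
    """The 'real' machine: an arbitrary deterministic update rule."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def emulator_step(state: int) -> int:
    """The emulator: the same rule, running on different 'hardware'."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

state_real = state_emu = 42
for _ in range(1000):
    state_real = machine_step(state_real)
    state_emu = emulator_step(state_emu)

assert state_real == state_emu  # bit-identical after any number of steps
```

The point is that "lossless" is a structural property: once the transition rule is reproduced exactly, divergence is impossible, which is what distinguishes emulation from the lossy weather-forecast case in 1a.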
1c. Even if brains are analogue computers, one can emulate them accurately enough to invoke consciousness (if it can be invoked) in a digital system with sufficiently high spatial and temporal resolution: real brains must have their own error correction to account for environmental noise, so the emulator’s precision merely needs to exceed the brain’s. Against Roger Penrose’s vague, rather superstitious claim that quantum effects unique to meat brains are necessary for consciousness, much the same argument holds: brain electrochemistry operates at a vastly larger scale than the quantum upper limit, and even if some mysterious sauce does bubble up from the quantum to the classical realm in brains and not in silicon, that sauce is made of the same quarks and gluons that comprise brains and silicon, so can also be understood and built as necessary into the emulator.
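The precision argument in 1c can be sketched numerically. The figures below are purely illustrative assumptions, not measured properties of neurons: the claim being demonstrated is only that when a digital grid is much finer than a system's intrinsic noise, the rounding error disappears into noise the system already tolerates.

```python
# Sketch of the precision argument: if a digital emulator's rounding
# error is far smaller than the analogue system's own noise floor, the
# discretisation is lost in the noise. NOISE_AMPLITUDE and QUANTUM are
# illustrative assumptions, not neuroscience.

import random

NOISE_AMPLITUDE = 1e-3   # assumed intrinsic noise of the 'analogue' signal
QUANTUM = 1e-6           # emulator's resolution: 1000x finer than the noise

def quantise(x: float) -> float:
    """Round a value onto the emulator's digital grid."""
    return round(x / QUANTUM) * QUANTUM

random.seed(0)
signal = [random.gauss(0.0, NOISE_AMPLITUDE) for _ in range(10_000)]
max_rounding_error = max(abs(x - quantise(x)) for x in signal)

# Worst-case rounding error is bounded by half a quantum, orders of
# magnitude below the noise the system must already correct for.
assert max_rounding_error <= QUANTUM / 2
assert max_rounding_error < NOISE_AMPLITUDE / 100
```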
2. You say that “People in universe A can act upon (change their conscious experiences) people in Universe B, at least by shutting the computer down.” But shutting the computer down in Universe A does NOT change the conscious experiences of the people in the emulated Universe B, because they are only conscious while the emulator is running. Imagine being a conscious entity in Universe B. You are in the middle of a sneeze, or a laugh. An operator in Universe A shuts down the machine. Then imagine one of three scenarios taking place: the operator never restarts the machine; after a year the operator restarts the machine and the simulation continues from where it was paused; or the operator reboots the machine from its initial state. In none of these scenarios is your conscious experience affected. In the first, you are gone. You experience nothing, not even being annihilated from a higher magisterium, since the emulator has to be running for you to experience anything. In the second, you continue your laugh or sneeze, having felt no discontinuity. (Time is, after all, being emulated in Universe B along with every other quale.) In the third, the continuous ‘you’ from before the system was rebooted is gone. Another emulated consciousness begins the emulation again and, if no settings are changed, will have exactly the experience you had up to the reboot. But that new ‘you’ will have no memory of the previous run, nor of being shut down, nor of the reboot or anything subsequent.
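The pause-and-resume scenario above can be sketched as a checkpointing exercise. The "universe" here is a hypothetical deterministic toy, but it makes the second scenario's claim precise: if the emulator's full state is saved, a run halted for any wall-clock interval and then resumed is step-for-step identical to an uninterrupted run, so nothing inside the simulation can register the gap.

```python
# Toy sketch of pause-and-resume: a run interrupted by a checkpoint and
# an arbitrarily long shutdown produces exactly the same history as an
# unbroken run. The emulated 'universe' is a hypothetical toy state.

import copy

def step(state: dict) -> dict:
    """One tick of a hypothetical emulated universe."""
    return {"tick": state["tick"] + 1,
            "value": (state["value"] * 31 + 7) % 1000}

initial = {"tick": 0, "value": 1}

# Uninterrupted run: 100 steps straight through.
s = copy.deepcopy(initial)
for _ in range(100):
    s = step(s)

# Interrupted run: 40 steps, then a checkpoint and shutdown, then 60 more.
t = copy.deepcopy(initial)
for _ in range(40):
    t = step(t)
checkpoint = copy.deepcopy(t)  # operator shuts the machine down here
# ... a year passes in Universe A ...
t = checkpoint                 # operator restores the state and restarts
for _ in range(60):
    t = step(t)

assert s == t  # the resumed history is indistinguishable from the unbroken one
```

The third scenario (reboot from the initial state) corresponds to restoring from `initial` rather than `checkpoint`: the rerun reproduces the first 40 ticks exactly, but carries no state from anything after them.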
3. Against your argument that stripping away inputs and outputs from the simulation constitutes a reductio ad absurdum of the premise that emulations can be conscious: this is true of meat brains too. To nourish an embryonic brain-in-a-jar to physical ‘maturity’ (whatever that might mean in this context), in the absence of all communication with the outside world (including its own body), and expect it to come close to being conscious, is absurd as well as ghoulish. Moreover—relating this with 1. above—to say that you have strictly emulated a brain-centred system whose consciousness you are trying to establish, you would have to include a sphere of emulated space of radius greater than tc around the brain’s sensory organs, where t is the length of time you want to emulate and c is the speed of light in a vacuum (assuming no strong simulated gravitational fields). This is because information from anywhere in the tc-radius sphere of real space could conceivably affect the brain you’re trying to emulate within that time interval.
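The tc bound is simple arithmetic, but the numbers are worth seeing: even short emulated durations demand astronomically large emulated regions.

```python
# Back-of-envelope for the tc light-cone bound: to strictly emulate a
# brain for a duration t, the emulated region must have radius > c*t,
# since anything within that distance could causally influence the
# brain inside the interval (ignoring strong gravitational fields).

C = 299_792_458  # speed of light in a vacuum, m/s

def required_radius_m(t_seconds: float) -> float:
    """Minimum radius of emulated space for t seconds of strict emulation."""
    return C * t_seconds

assert required_radius_m(1.0) == 299_792_458  # one second: ~3e8 m
# One emulated day already needs a sphere wider than the Earth-Sun distance:
assert required_radius_m(86_400) > 1.496e11
```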