wolfgang proposed a similar example on Scott’s blog:
I wonder if we can turn this into a real physics problem:
1) Assume a large-scale quantum computer is possible (thinking deep thoughts, but not really self-conscious as long as its evolution is fully unitary).
2) Assume there is a channel which allows enough photons to escape in such a way as to enable consciousness.
3) However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the de Sitter universe.
4) As you can guess we use a radioactive device to set the mirror into c-ON or c-OFF position with 50% probability.
Will the quantum computer now experience i) a superposition of consciousness and unconsciousness, or ii) will it always have a “normal” conscious experience, or iii) will it have a conscious experience in 50% of the cases?
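To pin down what option (i) would even mean here, the post-trigger state can be written schematically (my notation, not wolfgang's; "QC" is the quantum computer):

\[
|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\Big( |\text{c-OFF}\rangle \otimes |\text{QC, photons reflected back in}\rangle \;+\; |\text{c-ON}\rangle \otimes |\text{QC}\rangle \otimes |\text{photons in de Sitter space}\rangle \Big)
\]

In the c-OFF branch the machine's evolution stays unitary and in principle reversible; in the c-ON branch the escaped photons carry away records that no local operation can undo. Whether that second branch alone "counts" is exactly what Scott's reply turns on.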
Scott responded:
I tend to gravitate toward an option that’s not any of the three you listed. Namely: the fact that the system is set up in such a way that we could have restored unitarity, seems like a clue that there’s no consciousness there at all—even if, as it turns out, we don’t restore unitarity.
This answer is consistent with my treatment of other, simpler cases. For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.
His example is different in a very particular way:
His conscious entity gets to dump photons into de Sitter space directly, and only if you open the channel. This makes Scott's counter-claim prima facie plausible: if your putative consciousness involves only reversible actions, is it really conscious?
But I specifically drew a line between Alice and Alice's Room, and specified that Alice's normal operations are irreversible, which is precisely why they must dump entropy into the Room: each operation takes in one of its 0 bits and returns something that might be 1 or 0. And if you feed her a 1 bit, she dies of waste heat (maybe she has some degree of tolerance for 1s, but as the density of 1s approaches 50% she cannot survive).
If you were to just leave the Room open all the time, always resetting its qubits to 0, Alice would operate exactly the same, aside from having no risk of heatstroke. (In that case, of course, running the simulation backwards would not return you to where you started, but to catastrophe.)
I think this is a pretty crucial distinction.
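To make the Room's bookkeeping concrete, here is a toy sketch in Python. All the names and numbers are mine, not part of the original setup, and the "density approaching 50%" failure mode is replaced by a simple tolerance counter; it just illustrates Alice's irreversible step eating a 0 bit and handing back a maybe-1, versus the open Room resetting everything for free:

```python
import random

def run_alice(room_size=1000, steps=50_000, waste_rate=0.3,
              tolerance=20, room_open=False):
    """Toy model: each step Alice draws a bit from the Room; her
    irreversible operation returns a bit that is 1 (waste heat)
    with probability waste_rate. Drawing 1s eats her tolerance;
    if the Room is open, every bit she touches is reset to 0."""
    room = [0] * room_size
    ones_eaten = 0
    for step in range(steps):
        slot = random.randrange(room_size)
        if room[slot] == 1:
            ones_eaten += 1                      # fed a 1 bit
            if ones_eaten > tolerance:
                return f"heatstroke at step {step}"
        # the irreversible step: 0 in, maybe-1 out (unless Room is open)
        room[slot] = 0 if room_open else int(random.random() < waste_rate)
    return f"alive; Room is {100 * sum(room) / room_size:.0f}% ones"

random.seed(0)
print(run_alice())                # sealed Room: waste 1s pile up, kill her
print(run_alice(room_open=True))  # open Room: no heatstroke risk
```

The sealed run dies of heatstroke once waste 1s accumulate; the open run never does, but its history, run backwards, no longer leads to where you started.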
...
At least that find explains why the comment disappeared without a ripple: it triggered an “I’ve seen this before” reaction.