I would say that, by the time you get to C, there probably isn’t any problem anymore. You’re not actually computing the torture; or, rather, you already did that.
Scenario C is actually this:
> You scan John Smith’s brain, run a detailed simulation of his being tortured while streaming the intermediate stages to disk, and then stream the disk state back to memory (for no good reason).
There is torture there, to be sure; it’s in the “detailed simulation” step. I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness. Scenarios D and E are the same. Now, if you manage to construct scenario B in a homomorphic encryption system, then I’d have to admit to some real uncertainty.
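To make that distinction concrete, here is a minimal sketch (all names hypothetical): `simulate` does the actual computation once while streaming every intermediate state to disk, and `replay` merely streams those states back into memory, never running the dynamics.

```python
import json

def step(state: dict) -> dict:
    """Stand-in for one tick of the (expensive) detailed simulation."""
    return {"t": state["t"] + 1, "value": state["value"] * 1.01}

def simulate(initial: dict, ticks: int, path: str) -> None:
    """First phase: compute, streaming each intermediate state to disk."""
    state = initial
    with open(path, "w") as f:
        for _ in range(ticks):
            state = step(state)               # the real computation happens here
            f.write(json.dumps(state) + "\n")

def replay(path: str) -> None:
    """Second phase: stream the recorded states back into memory.
    step() is never called; bytes move, but no dynamics run."""
    with open(path) as f:
        for line in f:
            _ = json.loads(line)              # copying, not computing

simulate({"t": 0, "value": 1.0}, ticks=100, path="states.jsonl")
replay("states.jsonl")
```

On this view, whatever moral weight the process carries attaches to the `step` calls inside `simulate`; `replay` is just a copy.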
> I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness.
That’s the key observation here, I think. There’s a good case to be made that scenario B has consciousness. But does scenario C have it? It’s not so obvious anymore.
> Now, if you manage to construct scenario B in a homomorphic encryption system, then I’d have to admit to some real uncertainty.
I don’t think that’s different even if we throw away the private key before beginning the simulation. It’s akin to sending spaceships beyond the observable edge of the universe, or otherwise hiding parts of reality from ourselves. In fact, I think it may be beneficial to live in a homomorphically encrypted environment that is essentially immune to outside manipulation. With very high probability, it could be made either to work flawlessly or to acquire near-maximum entropy at every time step, with nearly as much measure in the “works flawlessly” region as a traditional simulation has.
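For concreteness, here is a toy sketch of that setup, using a from-scratch Paillier-style additively homomorphic scheme (tiny, insecure parameters; purely illustrative, not any real library’s API). The point is that `add_encrypted` keeps working on ciphertexts after the private key is discarded, so the computation inside proceeds correctly while becoming unreadable from outside.

```python
import math, random

p, q = 1009, 1013                      # toy primes; real keys are ~1024-bit
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)           # private key
mu = pow(lam, -1, n)                   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return ((1 + m * n) * pow(r, n, n2)) % n2   # Paillier with g = n + 1

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    """Homomorphic addition: multiply ciphertexts, never touching plaintexts."""
    return (c1 * c2) % n2

c = add_encrypted(encrypt(20), encrypt(22))
assert decrypt(c) == 42                # works while we still hold the key

del lam, mu                            # "throw away the private key":
# add_encrypted still runs fine on c, but decrypt is now impossible;
# the computation continues, sealed off from outside inspection.
```

Scaling this from one addition to a whole simulation is what fully homomorphic encryption promises, which is where the real uncertainty above comes in.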