Could the relevant moral change happen going from B to C, perhaps? That is, maybe a mind needs to actually be physically/causally computed in order to experience things. Then the torture would have occurred when John’s mind was first simulated, but not for subsequent “replays,” where you’re just reloading data.
Check out “Counterfactuals Can’t Count” for a response to this. Basically, the only difference between replaying a recording and running the computation live lies in the counterfactuals: code paths that would handle other inputs but never actually execute on this one. So if a recording experiences something different from a live run, then two computations that perform the exact same sequence of steps must experience things differently merely because one of them contains bits of code that never run.
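To make that concrete, here is a toy sketch (my own illustration, not anything from the paper; the function names are made up): two programs whose executed steps are identical on every input we actually give them, differing only in a branch that never fires.

```python
# Toy illustration: two programs whose executed steps are identical
# on these inputs; they differ only in a branch that never runs.

def step_sequence_plain(x):
    """Double x, then add one, recording each executed step."""
    trace = []
    x = x * 2
    trace.append(("doubled", x))
    x = x + 1
    trace.append(("incremented", x))
    return x, trace

def step_sequence_with_dead_branch(x):
    """Same computation, plus a counterfactual branch that never
    fires for any non-negative x we actually feed it."""
    trace = []
    x = x * 2
    trace.append(("doubled", x))
    if x < 0:        # never true here: x * 2 >= 0 for x >= 0
        x = -x       # dead code on every actual run
    x = x + 1
    trace.append(("incremented", x))
    return x, trace

# For every non-negative input the executed traces are identical, so any
# experiential difference would have to hang on code that never ran.
for x in range(5):
    assert step_sequence_plain(x) == step_sequence_with_dead_branch(x)
```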
The reference is a good one—thanks! But I don’t quite understand the rest of your comments. Can you rephrase more clearly?
Sorry, I was just trying to paraphrase the paper in one sentence.

The point of the paper is that there is something wrong with computationalism. It attempts to prove that two systems with the same sequence of computational states must have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain and transforming it, always via computationally equivalent steps, into a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment experiencing everything that can be experienced, or something is wrong with computationalism. If we take the second option, it means that two systems with the exact same behavior and computational structure can have different perceptual consciousness.
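A crude way to see the flavor of the construction (this is just my gloss, not the paper’s actual mapping, and the names are hypothetical): once a single run’s state sequence is fixed, a bare counter indexing a table of those states steps through the exact same computational states as the original program did.

```python
# Toy gloss on the clock construction: record one run's state sequence,
# then "replay" it with nothing but a counter indexing a table.

def live_run(x, n_steps=4):
    """A genuine computation: produce a sequence of states."""
    states = []
    for _ in range(n_steps):
        x = (x * 3 + 1) % 17   # arbitrary update rule
        states.append(x)
    return states

recorded = live_run(5)

class ClockDrivenReplay:
    """Steps through the recorded states; the only 'computation'
    left is an incrementing counter, i.e. a clock."""
    def __init__(self, table):
        self.table = table
        self.tick = 0

    def step(self):
        state = self.table[self.tick]
        self.tick += 1
        return state

replay = ClockDrivenReplay(recorded)
assert [replay.step() for _ in recorded] == recorded
# Same state sequence, but the causal work has been offloaded to a table;
# the dilemma is whether that still suffices for the original experience.
```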