The confusion seems to stem from the fact that I'm not talking about the pseudo-causal structure of the modeling units that make up the simulation, but about the causal structure of the underlying physical substrate of the computer running the simulation.
The natural objection is, why would the physical substrate matter?
Let's assume you replace somebody's brain with a von Neumann computer running a simulation of that person's brain. You get something that behaves like a conscious person, and even claims to be a conscious person if asked. Would you say that this thing is not conscious?
If you think it is not conscious, then what does "conscious" actually mean in epistemic terms? If I tell you that X is conscious, how does that update your posterior beliefs about the outcomes of future observations of X?