Therefore a WBE will have a different consciousness (i.e. qualitatively different experiences), though one very similar to the corresponding human consciousness.
That would depend on the granularity of the WBE, which has not been specified, and on the nature of the supervenience of experience on brain states, which is unknown.
The truth of the claim, or the degree of difference? The claim is that identity obtains only in the limit: in any practical scenario there wouldn't be identity between the experiences of a biological brain and a WBE, only similarity. OTOH, identity between WBEs can obviously be obtained.
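One possible way to state that reading precisely (the fidelity parameter ε and the experience-distance d are my own illustrative notation, nothing standard):

```latex
% B = biological brain, E_eps = a WBE built at granularity/fidelity eps,
% Exp(.) = the experience a system has, d = a notional distance on experiences.
% "Identity only in the limit": every practical emulation differs, but the
% difference can be made arbitrarily small.
\forall \varepsilon > 0:\; d\bigl(\mathrm{Exp}(B), \mathrm{Exp}(E_\varepsilon)\bigr) > 0,
\qquad
\lim_{\varepsilon \to 0} d\bigl(\mathrm{Exp}(B), \mathrm{Exp}(E_\varepsilon)\bigr) = 0.
```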
The claim then rules out computationalism.
Isn’t it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism (to accommodate some aspects of identity theory), not to deny it outright.
You seem to be suggesting that there are properties of the system that are relevant to the quality of its experiences but are not computational properties. To get clearer on this, what kind of physical details do you have in mind, specifically?
I do not strongly believe the claim; I am just laying it out for discussion. I do not claim that experiences fail to supervene on computations: they have observable, long-term behavioral effects, which follow from the computable laws of physics. I claim only that, in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and their lack of relevance to rationality and to the range of experiences the subjects report. Experiences can differ yet have roughly the same heterophenomenology, with behavior diverging only statistically or over the long term.
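For the "diverging only statistically or over the long term" point, here is a minimal toy sketch in Python (assuming numpy; the chaotic map is a stand-in of my own choosing, not a model of neural dynamics): a simulation that truncates fine detail tracks the full-precision one step-for-step at first, then diverges in its exact trajectory while its long-run statistics remain close.

```python
import numpy as np

# Purely illustrative toy (nothing to do with actual brains): two runs of
# the same chaotic map at different "granularities". The coarse run rounds
# its state each step, standing in for an emulation that drops fine-grained
# physical detail for resource reasons.

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

steps = 10_000
fine = coarse = 0.2
fine_traj, coarse_traj = [], []
for _ in range(steps):
    fine = logistic(fine)
    coarse = round(logistic(coarse), 6)  # coarse granularity: keep 6 decimals
    fine_traj.append(fine)
    coarse_traj.append(coarse)

fine_traj = np.array(fine_traj)
coarse_traj = np.array(coarse_traj)

# Short-term behavior matches, then the trajectories diverge...
first_divergence = int(np.argmax(np.abs(fine_traj - coarse_traj) > 0.1))
print(f"trajectories stay within 0.1 for ~{first_divergence} steps")

# ...yet long-run statistics (the "reportable" coarse behavior) stay close.
print(f"long-run mean: fine={fine_traj.mean():.3f}, coarse={coarse_traj.mean():.3f}")
```

The analogy is loose, but it shows how two processes can be behaviorally indistinguishable over any short window and statistically alike over long runs while never being identical in fine-grained detail.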