Maybe I should clarify a bit. I have two intuitions about the relation between consciousness and computation. The first is that the abstract existence of a computation, in the mathematical sense (where “X exists” basically means that the definition of X is free of contradictions), doesn’t guarantee consciousness. The computation should be physically implemented somewhere, by which I mean there should be a physical structure isomorphic to the abstract process of computation.
The second intuition is that the specific qualities of consciousness should be invariant under some transformations of the physical implementation. One can get the quale of hearing a high-pitched sound by actually hearing it, or from dozens of physically different causes, many of which lie inside the brain. And because not all details of the physical structure matter, there must be some property shared by systems with indistinguishable qualia, and there is a non-negligible chance that this property is computational isomorphism between those systems. So I am not expressing the same objection as you in different language, since I think there is a non-negligible probability that a simulation could be isomorphic to the real world to a degree which enables the same qualia.
(Edit: even if the qualia of the simulated agents differ from the qualia of the real agents, how does this constitute an argument against us being in a simulation? If we are simulated, we know only our simulated qualia and not the real ones, and so we can’t compare.)
The most confusing question to me is how the boundaries between different conscious systems are set, i.e. why there aren’t two or more consciousnesses in one brain, or one consciousness spread across several brains. The question is not only confusing but probably confused; still, I don’t see a resolution. But this is off topic here anyway.
I would not bet much money on any of the above positions.