Therefore, every phenomenon that a physical human brain can produce, can be produced by any Turing-complete computer.
You’re continuing to confuse reasoning about a physical phenomenon with causing a physical phenomenon. By the Church-Turing thesis, which I am in full agreement with, a Turing machine can reason about any physical phenomenon. That does not mean a Turing machine can cause any physical phenomenon. A PC running a program which reasons about Jupiter’s gravity cannot cause Jupiter’s gravity.
From inside the simulation, the simulation’s “reasoning” about a phenomenon cannot be distinguished from actually causing that phenomenon. From my point of view, gravity inside a two-body simulator is real for all bodies inside the simulator.

If you separate “reasoning” from “happening” only because you can tell one from the other from your point of view, why not say that all the workings of our world could be “reasoning” rather than real phenomena, so long as there are entities who can distinguish our “simulated workings” from their own “real” universe?
For a two-body simulator we can just use the Newtonian equation F = G m1 m2 / r^2, right? You aren’t claiming we need any sort of computing apparatus to make gravity real for “all bodies inside the simulator”?
I don’t get the question, frankly. A simulation, in my opinion, is not a single formula but a means of knowing the state of a system at a particular time. In this case we need an “apparatus”, even if it’s only a piece of paper, a crayon, and our own brain. It will be a very simple simulator, yes.
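A paper-and-crayon apparatus of this kind can be sketched directly. The toy below is purely illustrative (the function name, step size, and units are all invented for this example): it just iterates Newton’s F = G m1 m2 / r^2 with Euler steps to produce the state of the system at later times.

```python
# Toy two-body simulator: repeatedly apply Newton's F = G*m1*m2/r^2
# with simple Euler integration. Units and constants are arbitrary.

def simulate_two_body(p1, p2, v1, v2, m1, m2, G=1.0, dt=1e-3, steps=500):
    """Return the final positions of two point masses after `steps` steps."""
    for _ in range(steps):
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        r2 = dx * dx + dy * dy              # squared separation
        r = r2 ** 0.5
        f = G * m1 * m2 / r2                # force magnitude
        fx, fy = f * dx / r, f * dy / r     # force on body 1, toward body 2
        v1 = (v1[0] + fx / m1 * dt, v1[1] + fy / m1 * dt)
        v2 = (v2[0] - fx / m2 * dt, v2[1] - fy / m2 * dt)
        p1 = (p1[0] + v1[0] * dt, p1[1] + v1[1] * dt)
        p2 = (p2[0] + v2[0] * dt, p2[1] + v2[1] * dt)
    return p1, p2

# Two unit masses released from rest fall toward each other.
a, b = simulate_two_body((0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 0.0), 1.0, 1.0)
```

Whether the gravity in those five hundred steps is “real” for the two tuples of floats is, of course, exactly the question under dispute.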
Basically I’m asking: is gravity “real for all bodies inside the system” or “real for all bodies inside the simulator”?
If the former, then we have Tegmark IV.
If ONLY the latter, then you’re saying that a system requires a means to be made known by someone outside the system, in order to have gravity “be real” for it. That’s not substrate independence; we’re no longer talking about its point of view, as it only becomes “real” when it informs our point of view, and not before.
Oh, I see now what you mean by “Tegmark IV” here, from another answer of yours. Then it’s more complicated and depends on our definition of “existence” (there can be many, I presume).
I think gravity is “real” for any bodies that it affects. For the person running the simulator it’s “real” too, but in some other sense — it’s not affecting the person physically but it produces some information for him that wouldn’t be there without the simulator (so we cannot say they’re entirely causally disconnected). All this requires further thinking :)
Also, English is not my main language, so there may be some misunderstanding on my part :)

Okay, I have pondered this question for some time and the preliminary conclusions are strange. Either “existence” is physically meaningless, or it should be split into at least three terms with slightly different meanings. Or “existence” is a purely subjective thing, and we can’t meaningfully argue about the “existence” of things that are causally disconnected from us.
I’m asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.
Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.
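That substrate independence is easy to exhibit. In the sketch below (both “substrates” are illustrative stand-ins, not real hardware), the same addition is carried out on machine integers and on an abacus-like row of beads, and yields the same abstract result:

```python
# One computation, two "substrates".

def add_machine(a, b):
    """Substrate 1: the CPU's native integer arithmetic."""
    return a + b

def add_abacus(a, b):
    """Substrate 2: an abacus-like unary encoding -- each number is a
    row of beads, and addition is pushing the two rows together."""
    return len(["bead"] * a + ["bead"] * b)
```

The physical processes differ entirely; the computation they embody does not.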
I know what it means to compute “2+2” on an abacus. I know what it means to compute “2+2” on a computer. I know what it means to simulate “2+2 on an abacus” on a computer. I even know what it means to simulate “2+2 on a computer” on an abacus (although I certainly wouldn’t want to have to actually do so!). I do not know what it means to simulate “2+2” on a computer.
You simulate physical phenomena—things that actually exist. You compute combinations of formal symbols, which are abstract ideas. 2 and 4 are abstract; they don’t exist. To claim that qualia are purely computational is to claim that they don’t exist.
“Computation exists within physics” is not equivalent to ” “2″ exists within physics.”
If computation doesn’t exist within physics, then we’re communicating supernaturally.
If qualia aren’t computations embodied in the physical substrate of a mind, then I don’t know what they are.
Computation does not exist within physics; it’s a linguistic abstraction of things that do exist within physics, such as the behavior of a CPU. Similarly, “2” is an abstraction of a pair of apples, a pair of oranges, etc. To say that the actions of one physical medium necessarily have the same physical effect (the production of qualia) as the actions of another physical medium, just because they abstractly embody the same computation, is analogous to saying that two apples produce the same qualia as two oranges, because they’re both “2”.
This is my last reply for tonight. I’ll return in the morning.
If computation doesn’t exist because it’s “a linguistic abstraction of things that exist within physics”, then CPUs, apples, oranges, qualia, “physical media” and people don’t exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don’t think this definition of existence is particularly useful in context.
As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I’m trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.
See you in the morning! :)
Not quite reductionist enough, actually: physics is made of the rules relating configurations of spacetime, which exist independently of any formal model of them that gives us concepts like “quark” and “lepton”. But digging deeper into this linguistic rathole won’t clarify my point any further, so I’ll drop this line of argument.
If you started perceiving two apples identically to the way you perceive two oranges, without noticing their difference in weight, smell, etc., then you or at least others around you would conclude that you were quite ill. What is your justification for believing that being unable to distinguish between things that are “computationally identical” would leave you any healthier?
I didn’t intend to start a reductionist “race to the bottom,” only to point out that minds and computations clearly do exist. “Reducible” and “non-existent” aren’t synonyms!
Since you prefer the question in your edit, I’ll answer it directly:

if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that “embodying the same computation” is somehow a privileged concept in this regard—that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed—what is your justification for believing this?
Computation is “privileged” only in the sense that computationally identical substitutions leave my mind, preferences, qualia, etc. intact; because those things are themselves computations. If you replaced my brain with a computationally equivalent computer weighing two tons, I would certainly notice a difference and consider myself harmed. But the harm wouldn’t have been done to my mind.
I feel like there must be something we’ve missed, because I’m still not sure where exactly we disagree. I’m pretty sure you don’t think that qualia are reified in the brain—that a surgeon could go in with tongs and pull out a little lump of qualia—and I think you might even agree with the analogy that brains:hardware::minds:software. So if there’s still a disagreement to be had, what is it? If qualia and other mental phenomena are not computational, then what are they?
I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them, any more than he could go in with tongs and remove your recognition of your grandmother.
They’re a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of Brownian motion. See here and here for one reason why I think the computational view falls somewhere in between problematic and not-even-wrong, inclusive.
ETA: The “grandmother cell” might have been a poorly chosen counterexample, since apparently there’s some research that sort of actually supports that notion with respect to face recognition. I learned the phrase as identifying a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.
See for instance this report (http://www.scientificamerican.com/article.cfm?id=one-face-one-neuron) on this paper (http://www.nature.com/nature/journal/v435/n7045/full/nature03687.html).
There they find apparent “Jennifer Aniston” and “Halle Berry” cells. The former is a little bit muddled, as it doesn’t fire when a picture contains both her and Brad Pitt. The latter fires both for pictures of her and for the text of her name.
Do we know enough to tell for sure?
Do you mean, “know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?” No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
Depending on various details, this might well be impossible. Rice’s theorem comes to mind—if it’s impossible to perfectly determine any interesting property for arbitrary Turing machines, that doesn’t bode well for similar questions for Turing-equivalent substrates.
Brains, like PCs, aren’t actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they’d need something equivalent to a Turing machine’s infinite tape. There’s nothing analogous to Rice’s theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn’t mean tractable.
It is true that you can run finite state machines until they either terminate or start looping or run past the Busy Beaver for that length of tape; but while you may avoid Rice’s theorem by pointing out that ‘actually brains are just FSMs’, you replace it with another question, ‘are they FSMs decidable within the length of tape available to us?’
Given how fast the Busy Beaver grows, the answer is almost surely no—there is no runnable algorithm. Leading to the dilemma that either there are insufficient resources (per above), or it’s impossible in principle (if there are unbounded resources there likely are unbounded brains and Rice’s theorem applies again).
(I know you understand this because you pointed out ‘Of course, decidable doesn’t mean tractable.’ but it’s not obvious to a lot of people and is worth noting.)
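The run-until-it-repeats procedure above can be written down directly. A sketch for deterministic machines given as a transition table (the encoding and names are assumptions of this example): by the pigeonhole principle, a run longer than the number of states must revisit one, so halting is decidable—at the cost of remembering every state seen.

```python
# Decide halting for a deterministic finite-state machine: run it,
# remembering every state seen. A revisited state means a loop, so the
# machine never halts; reaching a halt state means it does.

def fsm_halts(transitions, start, halt_states):
    """transitions: dict mapping state -> next state."""
    seen = set()
    state = start
    while state not in seen:
        if state in halt_states:
            return True
        seen.add(state)
        state = transitions[state]
    return False  # looped without ever reaching a halt state

looper = {"a": "b", "b": "a"}                     # a <-> b forever
halter = {"a": "b", "b": "done", "done": "done"}  # reaches "done"
```

For a brain-sized state space, of course, `seen` is astronomically large—decidable, not tractable.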
This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as the Busy Beaver. The hardest natural problems concerning FSMs—for instance, deciding whether two regular expressions with squaring represent the same language—are EXPSPACE-complete (equivalence of plain regular expressions is “only” PSPACE-complete). This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
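For comparison, equivalence of two deterministic finite automata is cheap: search the product automaton for a reachable state pair on which the machines disagree. It is converting a regular expression into a DFA that can blow up exponentially. A hypothetical sketch (the DFA encoding is invented for this example):

```python
from collections import deque

# Each DFA is (start, accepting_states, transitions), where
# transitions maps (state, symbol) -> state.

def dfa_equal(d1, d2, alphabet):
    """True iff the two DFAs accept exactly the same language."""
    (s1, acc1, t1), (s2, acc2, t2) = d1, d2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        a, b = queue.popleft()
        if (a in acc1) != (b in acc2):
            return False  # some string reaches a disagreement
        for c in alphabet:
            pair = (t1[(a, c)], t2[(b, c)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# "Even number of 1s" over {0,1}, written two different ways:
even1 = ("e", {"e"}, {("e", "0"): "e", ("e", "1"): "o",
                      ("o", "0"): "o", ("o", "1"): "e"})
even2 = ("p", {"p"}, {("p", "0"): "p", ("p", "1"): "q",
                      ("q", "0"): "q", ("q", "1"): "p"})
odd1  = ("e", {"o"}, {("e", "0"): "e", ("e", "1"): "o",
                      ("o", "0"): "o", ("o", "1"): "e"})
```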
Yes
Potential, easily accessible concept space, not necessarily actually used concept space. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don’t see how they can serve as a replacement in your argument when we can’t identify them.
The only role that this example-of-an-idea is playing in my argument is as an analogy, to illustrate what I mean when I assert that qualia physically exist in the brain without there being such a thing as a “qualia cell”. You clearly already understand this concept, so is my particular choice of analogy so terribly important that it’s necessary to nitpick over it?
The very same uncertainty would also apply to qualia (assuming that is even a meaningful concept), only worse, because we understand them even less. If we can’t answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia, when we can’t even verify their existence in the first place?
You haven’t excluded a computational explanation of qualia by saying this. You haven’t even argued against it! Computations are physical phenomena that have meaningful consequences.
“Mental phenomena are a physical effect caused by the operation of a brain.”
“The image on my computer monitor is a physical effect caused by the operation of the computer.”
I’m starting to think you’re confused as a result of using language in a way that allows you to claim computations “don’t exist,” while qualia do.
As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it’s just me, but qualia don’t seem especially difficult to explain or understand. I don’t think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.
If I have in front of me four apples that appear to me to be identical, but a specific two of them are consistently referred to as oranges by sources I normally trust, then they are not computationally identical. If everyone perceived them as apples, I doubt I would be seen as ill.
I did a better job of phrasing my question in the edit I made to my original post than I did in my reply to Sideways that you responded to. Are you able to rephrase your response so that it answers the better version of the question? I can’t figure out how to do so.
Ok, I’ll give a longer response a go.
You seem to me to be fundamentally confused about the separation between the (at a minimum) two levels of reality being proposed. We have a simulation, and we have a real world. If you affect things in the simulation, such as replacing Venus with a planet twice the mass of Venus, then they are not the same; the gravitational field will be different and the simulation will follow a path different to the simulation with the original Venus. These two options are not “computationally the same”.
If, on the other hand, in the real world you replace your old, badly programmed Venus Simulation Chip 2000 with the new, shiny Venus Simulation Chip XD500, which does precisely the same thing as the old chip but in fewer steps so we in the real world have to sit around waiting for fewer processor cycles to end, then the simulation will follow the same path as it would have done before. Observers in the sim won’t know what Venus Chip we’re running, and they won’t know how many processor cycles it’s taking to simulate it. These two different situations are “computationally the same”.
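The chip swap has a direct software analogue (the “chips” below are made-up functions, not any real hardware): a slow implementation and a fast one computing the same function are indistinguishable from their outputs alone.

```python
# "Venus Simulation Chip 2000": sums 1..n one step at a time.
def chip_2000(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# "Venus Simulation Chip XD500": Gauss's closed form, one step.
def chip_xd500(n):
    return n * (n + 1) // 2
```

An observer who sees only outputs cannot tell which chip is installed; only someone outside, counting processor cycles, can.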
If, in the simulation world, you replaced half of my brain with an apple, then I would be dead. If you replaced half of my brain with a computer that mimicked perfectly my old meat brain, I would be fine. If we’re in the computation world then we should point out that again, the gravitational field of my brain computer will likely be different from the gravitational field of my meat brain, and so I would label these as “not computationally the same” for clarity. If we are interested in my particular experiences of the world, given that I can’t detect gravitational fields very well, then I would label them as “computationally the same” if I am substrate independent, and “computationally different” if not.
I grew up in this universe, and my consciousness is embedded in a complex set of systems, my human brain, which is designed to make things make sense at any cost. I feel purple whenever I go outside—that’s just how I’ve always felt. Purple makes sense. This is fatal for your argument.
(Now, if one day soon my qualia jump from one state to another, now that would be something interesting.)