Also, here’s a sufficient reason why this isn’t true. As far as I know, Integrated Information Theory is currently the only highly formalized theory of consciousness in the literature. It’s also a functionalist theory (at least according to my operationalization of the term). If you apply the formalism of IIT, it says that simulations on classical computers are minimally conscious at best, regardless of what software is run.
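To give a rough flavor of where that verdict comes from, here is a toy sketch of my own (it is not the actual Φ calculation, and the little causal graphs in it are made up for illustration). IIT assigns zero integrated information to purely feed-forward causal structures, because some bipartition can be cut without severing any feedback; the argument against classical computers is that their physical, gate-level causal structure decomposes into nearly reducible pieces no matter what program they run.

```python
from itertools import combinations

def reducible_by_feedforward_cut(nodes, edges):
    """Toy proxy only: call a system 'reducible' if some bipartition (A, B)
    has no causal edges going from B back to A, i.e. it can be cut in one
    direction without losing any feedback. The real IIT formalism computes
    Phi over cause-effect repertoires; this only illustrates the structural
    intuition."""
    nodes = list(nodes)
    for k in range(1, len(nodes)):
        for part in combinations(nodes, k):
            a, b = set(part), set(nodes) - set(part)
            if not any(src in b and dst in a for src, dst in edges):
                return True
    return False

# Made-up examples: a feed-forward pipeline vs. a recurrently coupled system.
pipeline = ([1, 2, 3], [(1, 2), (2, 3)])
recurrent = ([1, 2, 3], [(1, 2), (2, 3), (3, 1), (2, 1)])

print(reducible_by_feedforward_cut(*pipeline))   # True  -> reducible, toy "Phi" of zero
print(reducible_by_feedforward_cut(*recurrent))  # False -> irreducible in this toy sense
```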
Now I’m not saying IIT is correct; in fact, my actual opinion on IIT is “100% wrong, no relation to how consciousness actually works”. But nonetheless, if the only formalized proposal for consciousness doesn’t have the property that simulations preserve consciousness, then clearly the property is not guaranteed.
So why does IIT not have this property? Because IIT analyzes the information flow/computational steps of a system, abstracting away the physical details (which is why I’m calling it functionalist), and a simulation of a system performs completely different computational steps than the original system does. It’s the same thing I said in my other reply: a simulation does not do the same thing as the thing it’s simulating; it only arrives at the same outputs, so any theory that looks at computational steps will evaluate the two differently. They’re two different algorithms/computations/programs, which is the level of abstraction that is generally believed to matter on LW. I don’t know how else to put this.
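If it helps, here is a minimal toy illustration of that distinction (it has nothing to do with IIT specifically, and the little register machine below is made up). Both functions map n to n², but the sequences of computational steps they execute are completely different, which is exactly the level at which a step-by-step analysis would tell them apart.

```python
def square_direct(n):
    """One algorithm: a single multiplication."""
    return n * n

def square_via_simulated_machine(n):
    """A different algorithm: interpret a made-up little register machine
    that squares by repeated addition. Same input-output behaviour as
    square_direct, but a completely different sequence of steps."""
    registers = {"acc": 0, "count": n}
    trace = []
    while registers["count"] > 0:
        registers["acc"] += n           # ADD acc, n
        registers["count"] -= 1         # DEC count
        trace.append(dict(registers))   # record each interpreted step
    return registers["acc"], trace

print(square_direct(5))                          # 25
result, steps = square_via_simulated_machine(5)
print(result, len(steps))                        # 25 5 (same output, different steps)
```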
“100% wrong, no relation to how consciousness actually works”
Indeed. I think we should stop there, though. The fact that it’s so formalized is part of the absurdity of IIT. There are a bunch of equations that are completely meaningless and not based in anything empirical whatsoever.
The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is that if we can take even a single inch forward based on logical or axiomatic proofs, we can begin to narrow down our sea of endless speculative hypotheses, and those inches matter.
I don’t think that our inability to solve the hard problem yet, or to formulate a complete theory of consciousness, means we can’t make at least a couple of tiny inferences we can know with a high degree of certainty. I think it’s a disservice to this field that most high-profile efforts state a complete framework for the entirety of consciousness as theory, when it’s entirely possible to start moving forward one tiny step at a time without relying on speculation.
The fact that it’s so formalized is part of the absurdity of IIT. There are a bunch of equations that are completely meaningless and not based in anything empirical whatsoever. The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is that if we can take even a single inch forward based on logical or axiomatic proofs, we can begin to narrow down our sea of endless speculative hypotheses, and those inches matter.
I’m totally on board with everything you said here. But I didn’t bring up IIT as a rebuttal to anything you said in your post. In fact, your argument about swapping out neurons specifically avoids the problem I’m talking about in the comment above. The formalism of IIT actually agrees with you that swapping out neurons in a brain doesn’t change consciousness (given the assumptions I’ve mentioned in the other comment)!
I’ve brought up IIT as a response to a specific claim, which I’m just going to state again, since I feel like I keep getting misunderstood as making more vague/general claims than I’m in fact making. The claim (which I’ve seen made on LW before) is that we know for a fact that a simulation of a human brain on a digital computer is conscious because of the Turing thesis. Or at least, that we know this for a fact if we assume some very basic things about the universe, like that the laws of physics are complete and that functionalism is true. So the claim is that every theory of consciousness that agrees with these two premises also states that a simulation of a human brain has the same consciousness as that human brain.
Well, IIT is a theory that agrees with both of these premises (it’s a functionalist proposal that doesn’t postulate any violation of the laws of physics), and it says that simulations of human brains have a completely different consciousness from human brains themselves. Therefore, the above claim doesn’t seem true. This is my point; no more, no less. If there is a counter-example to an implication A⟹B, then the implication isn’t true; it doesn’t matter if the counter-example is stupid.
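To spell out the logical form, which is just a restatement of the point above: let A(T) say that theory T is functionalist and consistent with complete physical laws, and let B(T) say that T assigns a brain simulation the same consciousness as the brain. Then a single counter-example defeats the universal claim:

\[
\exists T\,\bigl(A(T)\wedge\neg B(T)\bigr)\;\Longrightarrow\;\neg\,\forall T\,\bigl(A(T)\Rightarrow B(T)\bigr),
\]

with IIT playing the role of the witness \(T\).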
Again, this does not apply to your post, because you talked about swapping neurons in a brain, which is different: IIT agrees with your argument but disagrees with green_leaf’s argument.