The fact that it’s so formalized is part of the absurdity of IIT. There are a bunch of equations that are completely meaningless and not grounded in anything empirical whatsoever. The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is this: if we can take even a single inch forward based on logical or axiomatic proofs, and thereby begin to narrow down our sea of endless speculative hypotheses, then those inches matter.
I’m totally on board with everything you said here. But I didn’t bring up IIT as a rebuttal to anything in your post. In fact, your argument about swapping out neurons specifically avoids the problem I’m talking about in the comment above. The formalism of IIT actually agrees with you that swapping out neurons in a brain doesn’t change consciousness (given the assumptions I’ve mentioned in the other comment)!
I’ve brought up IIT as a response to a specific claim—which I’m just going to state again, since I feel like I keep getting misunderstood as making vaguer and more general claims than I’m in fact making. The claim (which I’ve seen made on LW before) is that we know for a fact that a simulation of a human brain on a digital computer is conscious because of the Turing thesis. Or at least, that we know this for a fact if we assume some very basic things about the universe, such as that the laws of physics are complete and that functionalism is true. So the claim is that every theory of consciousness that accepts these two premises also states that a simulation of a human brain has the same consciousness as that human brain.
Well, IIT is a theory that accepts both of these premises—it’s a functionalist proposal that doesn’t postulate any violation of the laws of physics—and it says that simulations of human brains have a completely different consciousness than the human brains themselves. Therefore, the above claim doesn’t seem true. This is my point; no more, no less. If there is a counter-example to an implication A⟹B, then the implication isn’t true; it doesn’t matter if the counter-example is stupid.
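To make the logical structure explicit, here’s a minimal sketch in Lean. The predicate names (`Functionalist`, `PhysicsComplete`, `SimSameConsc`) are hypothetical stand-ins I’ve chosen for the informal premises and conclusion, not anything from IIT’s own formalism:

```lean
-- Sketch: one theory satisfying the premises but not the conclusion
-- refutes the universally quantified implication.
--   Functionalist T   : theory T is functionalist
--   PhysicsComplete T : theory T assumes the laws of physics are complete
--   SimSameConsc T    : theory T says a brain simulation has the same
--                       consciousness as the brain it simulates
example {Theory : Type}
    (Functionalist PhysicsComplete SimSameConsc : Theory → Prop)
    (IIT : Theory)
    (h1 : Functionalist IIT)          -- IIT is functionalist
    (h2 : PhysicsComplete IIT)        -- IIT assumes physics is complete
    (h3 : ¬ SimSameConsc IIT) :       -- yet IIT denies the conclusion
    ¬ ∀ T, Functionalist T ∧ PhysicsComplete T → SimSameConsc T := by
  intro h
  -- Instantiate the universal claim at the counter-example and contradict it.
  exact h3 (h IIT ⟨h1, h2⟩)
```

The proof just instantiates the universal claim at the counter-example, exactly as in the prose argument; whether the counter-example is a good theory plays no role.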
Again, this does not apply to your post, because you talked about swapping neurons within a brain, which is different—IIT agrees with your argument but disagrees with green_leaf’s argument.