So the idea is that you can describe the brain by treating each neuron as a little black box about which you just know its input/output behavior, and then describe the interactions between those little black boxes. Then, assuming you can implement the input/output behavior of your black boxes with a different substrate (i.e., an artificial neuron), you can swap them out without changing anything at the level of the whole brain.
This is guaranteed, because the universe (and any of its subsets) is computable (that means a classical computer can run software that acts the same way).
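A minimal sketch of that black-box picture (purely illustrative; the toy threshold rule and the function names are mine, nothing here is meant to be biologically accurate):

```python
# Toy illustration: two "neurons" with identical input/output behavior,
# implemented differently. Anything downstream that only sees inputs
# and outputs cannot tell which one it is wired to.

def biological_neuron(inputs):
    # stand-in for the messy biophysical process; all we record is the I/O rule
    return 1 if sum(inputs) > 2 else 0

def artificial_neuron(inputs):
    # different substrate/implementation, same input/output mapping
    return int(sum(inputs) > 2)

def downstream(neuron, inputs):
    # the "interaction between black boxes": it only consumes outputs
    return neuron(inputs) * 2

assert all(
    downstream(biological_neuron, x) == downstream(artificial_neuron, x)
    for x in [(1, 1, 1), (0, 1, 0), (2, 2, 0)]
)
```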
Also, here’s a sufficient reason why this isn’t true. As far as I know, Integrated Information Theory (IIT) is currently the only highly formalized theory of consciousness in the literature. It’s also a functionalist theory (at least according to my operationalization of the term). If you apply the formalism of IIT, it says that simulations on classical computers are minimally conscious at best, regardless of what software is run.
Now I’m not saying IIT is correct; in fact, my actual opinion on IIT is “100% wrong, no relation to how consciousness actually works”. But nonetheless, if the only formalized proposal for consciousness doesn’t have the property that simulations preserve consciousness, then clearly the property is not guaranteed.
So why does IIT not have this property? Because IIT analyzes the information flow/computational steps of a system, abstracting away the physical details (which is why I’m calling it functionalist), and a simulation of a system performs completely different computational steps than the original system. It’s the same thing I said in my other reply: a simulation does not do the same thing as the thing it’s simulating; it only arrives at the same outputs, so any theory that looks at computational steps will evaluate the two differently. They’re two different algorithms/computations/programs, which is the level of abstraction that is generally believed to matter on LW. Idk how else to put this.
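To make the “same outputs, different computational steps” point concrete, here’s a toy contrast (my own illustration, nothing to do with IIT’s actual formalism): both functions compute the exact same input/output mapping, but the steps they execute are completely different, so a theory that scores the steps rather than the outputs can score them differently.

```python
# Two programs with identical input/output behavior but entirely
# different computational steps.

def add_native(a, b):
    # "the thing itself": a single machine-level addition
    return a + b

def add_simulated(a, b, width=16):
    # a "simulation" of addition: ripple-carry, one bit at a time
    result, carry = 0, 0
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i
        carry = (x & y) | (carry & (x ^ y))
    return result

# same outputs everywhere on this range, radically different "steps"
assert all(add_native(a, b) == add_simulated(a, b)
           for a in range(32) for b in range(32))
```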
“100% wrong, no relation to how consciousness actually works”
Indeed. I think we should stop there, though. The fact that it’s so formalized is part of the absurdity of IIT: there are a bunch of equations that are completely meaningless and not grounded in anything empirical whatsoever.
The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is this: if we can move even a single inch forward based on logical or axiomatic proofs, and that inch begins to narrow down our sea of endless speculative hypotheses, then those inches matter.
Just because we have no way of solving the hard problem yet, or of formulating a complete theory of consciousness, doesn’t mean we can’t make at least a couple of tiny inferences that we can know with a high degree of certainty. I think it’s a disservice to this field that most high-profile efforts state a complete framework of the entirety of consciousness as theory, when it’s entirely possible to move forward one tiny step at a time without relying on speculation.
The fact that it’s so formalized is part of the absurdity of IIT: there are a bunch of equations that are completely meaningless and not grounded in anything empirical whatsoever. The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is this: if we can move even a single inch forward based on logical or axiomatic proofs, and that inch begins to narrow down our sea of endless speculative hypotheses, then those inches matter.
I’m totally on board with everything you said here. But I didn’t bring up IIT as a rebuttal to anything you said in your post. In fact, your argument about swapping out neurons specifically avoids the problem I’m talking about in the comment above. The formalism of IIT actually agrees with you that swapping out neurons in a brain doesn’t change consciousness (given the assumptions I’ve mentioned in the other comment)!
I’ve brought up IIT as a response to a specific claim, which I’m just going to state again since I feel like I keep getting misunderstood as making more vague/general claims than I’m in fact making. The claim (which I’ve seen made on LW before) is that we know for a fact that a simulation of a human brain on a digital computer is conscious because of the Turing thesis. Or at least, that we know this for a fact if we assume some very basic things about the universe, like that the laws of physics are complete and functionalism is true. So like, the claim is that every theory of consciousness that agrees with these two premises also states that a simulation of a human brain has the same consciousness as that human brain.
Well, IIT is a theory that agrees with both of these premises (it’s a functionalist proposal that doesn’t postulate any violation of the laws of physics), and it says that simulations of human brains have a completely different consciousness than the human brains themselves. Therefore, the above claim doesn’t seem true. This is my point; no more, no less. If there is a counter-example to an implication A⟹B, then the implication isn’t true; it doesn’t matter if the counter-example is stupid.
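Spelled out (my notation, just to make the logical shape explicit): write $P_1(T)$ for “theory $T$ is compatible with the laws of physics being complete”, $P_2(T)$ for “$T$ is functionalist”, and $S(T)$ for “$T$ says a simulation of a human brain has the same consciousness as that brain”. Then the claim and the counter-example are:

\[
\text{Claim:}\quad \forall T\,\bigl(P_1(T)\wedge P_2(T)\Rightarrow S(T)\bigr),
\qquad
\text{Counter-example:}\quad P_1(\mathrm{IIT})\wedge P_2(\mathrm{IIT})\wedge\neg S(\mathrm{IIT}).
\]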
Again, this does not apply to your post, because you talked about swapping neurons in a brain, which is different: IIT agrees with your argument but disagrees with green_leaf’s argument.
(that means a classical computer can run software that acts the same way).
No. Computability shows that you can have a classical computer that has the same input/output behavior, not that you can have a classical computer that acts the same way. Input/output behavior is generally not considered to be enough to guarantee the same consciousness, so this doesn’t give you what you need. Without arguing about the internal workings of the brain, a simulation of a brain is just a different physical process doing different computational steps that arrives at the same result. A GLUT (giant look-up table) is also a different physical process doing different computational steps that arrives at the same result, and Eliezer himself argued that a GLUT isn’t conscious.
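For concreteness, a GLUT in miniature (toy function and a tiny domain, obviously; a real GLUT over brain inputs would be astronomically large): it reproduces the input/output behavior of the original exactly while doing none of the original computation at runtime.

```python
# A giant look-up table (GLUT) in miniature: identical input/output
# behavior to `compute`, but the runtime "computation" is a single
# table lookup instead of the original process.

def compute(n):
    # the "original" process: actually does the work
    return sum(i * i for i in range(n))

DOMAIN = range(100)
GLUT = {n: compute(n) for n in DOMAIN}  # built once, "offline"

def glut_compute(n):
    return GLUT[n]  # retrieval only, no computation of squares at all

assert all(compute(n) == glut_compute(n) for n in DOMAIN)
```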
The “let’s swap neurons in the brain with artificial neurons” argument is actually much better than the “let’s build a simulation of the human brain on a different physical system” argument for this exact reason, and I don’t think it’s a coincidence that Eliezer used the former in his post.
Computability shows that you can have a classical computer that has the same input/output behavior
That’s what I mean (I’m talking about the input/output behavior of individual neurons).
Input/output behavior is generally not considered to be enough to guarantee the same consciousness
It should be, because it is, in fact, enough. (However, neither the post nor my comment requires that.)
Eliezer himself argued that a GLUT isn’t conscious.
Yes, and that’s false (but since that’s not the argument in the OP, I don’t think I should get sidetracked).
But nonetheless, if the only formalized proposal for consciousness doesn’t have the property that simulations preserve consciousness, then clearly the property is not guaranteed.
That’s false. If we assume for a second that IIT really is the only formalized theory of consciousness, it doesn’t follow that the property is not, in fact, guaranteed. It could also be that IIT is wrong and that, in actual reality, the property is, in fact, guaranteed.
That’s what I mean (I’m talking about the input/output behavior of individual neurons).
Ah, I see. Nvm then. (I misunderstood the previous comment to apply to the entire brain—idk why, it was pretty clear that you were talking about a single neuron. My bad.)