(that means a classical computer can run software that acts the same way).
No. Computability shows that you can have a classical computer with the same input/output behavior, not a classical computer that acts the same way. Input/output behavior is generally not considered enough to guarantee the same consciousness, so this doesn’t give you what you need. Without arguing about the internal workings of the brain, a simulation of a brain is just a different physical process performing different computational steps that arrives at the same result. A GLUT (giant look-up table) is also a different physical process performing different computational steps that arrives at the same result, and Eliezer himself argued that a GLUT isn’t conscious.
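The GLUT point can be made concrete with a toy sketch (hypothetical code, just to illustrate "same input/output behavior, different computational steps" — the function and its weights are made up):

```python
def neuron_like(inputs):
    """Computes its output via intermediate steps: a weighted sum plus a threshold."""
    weighted = sum(w * x for w, x in zip((0.5, -0.3, 0.8), inputs))
    return 1 if weighted > 0.2 else 0

# A GLUT over the same domain: every possible input is precomputed and stored.
# "Running" it is a single table lookup, with none of the original
# computational steps (no weighting, no summation, no thresholding).
all_inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
glut = {inp: neuron_like(inp) for inp in all_inputs}

# Externally, the two systems are indistinguishable:
assert all(glut[inp] == neuron_like(inp) for inp in all_inputs)
```

The two systems agree on every input, yet one computes and the other merely retrieves — which is exactly why identical input/output behavior alone doesn't settle the question.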
The “let’s swap neurons in the brain with artificial neurons” argument is actually much better than “let’s build a simulation of the human brain on a different physical system” for this exact reason, and I don’t think it’s a coincidence that Eliezer used the former argument in his post.
Computability shows that you can have a classical computer that has the same input/output behavior
That’s what I mean (I’m talking about the input/output behavior of individual neurons).
Input/Output behavior is generally not considered to be enough to guarantee same consciousness
It should be, because it is, in fact, enough. (However, neither the post nor my comment requires that.)
Eliezer himself argued that GLUT isn’t conscious.
Yes, and that’s false (but since that’s not the argument in the OP, I don’t think I should get sidetracked).
But nonetheless, if the only formalized proposal for consciousness doesn’t have the property that simulations preserve consciousness, then clearly the property is not guaranteed.
That’s false. If we assume for a second that IIT really is the only formalized theory of consciousness, it doesn’t follow that the property is not, in fact, guaranteed. It could also be that IIT is wrong and that in actual reality, the property is, in fact, guaranteed.
That’s what I mean (I’m talking about the input/output behavior of individual neurons).
Ah, I see. Nvm then. (I misunderstood the previous comment to apply to the entire brain—idk why, it was pretty clear that you were talking about a single neuron. My bad.)