PK, Phil Goetz, and Larry D’Anna are making a crucial point here, but I’m afraid it is somewhat getting lost in the noise. The point is (in my words) that lookup tables are a philosophical red herring. To emulate a human being they can’t just map external inputs to external outputs. They also have to map a big internal state to the next version of that big internal state. (That’s what Larry’s equations mean.)
If there were no internal state like this, a GLUT couldn’t emulate a person with any memory at all. But by hypothesis it does emulate a person (perfectly), so it must have this internal state.
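To make the point concrete, here is a minimal sketch (my own toy illustration, not Larry’s actual equations): a stateful GLUT is just a table keyed on (internal state, external input) that returns (external output, next internal state). The table `glut`, the `step` function, and the toy entries below are all made up for illustration; the real table would be astronomically large.

```python
# Hypothetical sketch: a lookup-table "person" that carries internal state.
# The table maps (internal_state, external_input) -> (external_output, next_internal_state).

glut = {
    ("blank", "hello"):                 ("hi there",            "met_you"),
    ("met_you", "hello"):               ("we already said hi",  "met_you"),
    ("met_you", "my name is X"):        ("nice to meet you, X", "knows_name"),
    ("knows_name", "what is my name?"): ("you said it was X",   "knows_name"),
}

def step(state, external_input):
    """One tick of the emulation: a single lookup on (state, input)."""
    return glut[(state, external_input)]

state = "blank"
for said in ["hello", "my name is X", "what is my name?"]:
    output, state = step(state, said)
    print(said, "->", output)

# Without threading `state` through each lookup, the table could only map
# inputs to outputs, and the second "hello" would have to get the same
# response as the first -- i.e., no memory at all.
```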
And given that a GLUT is maintaining a big internal state, it is equivalent to a Turing machine, as Phil says.
But that means it can implement any computationally well-defined process. If we believe that consciousness can be a property of some computation, then GLUTs can have consciousness. This isn’t even a stretch; it is totally unavoidable.
The whole reason that philosophers talk about GLUTs, or that Searle talks about the Chinese room, is to try to trick the reader into being overwhelmed by the intuition that “that can’t possibly be conscious” and to STOP THINKING.
Looking at this discussion, to some extent that works! Most people didn’t say “Hmmm, I wonder how a GLUT could emulate a human...” and then realize that it would need internal state, that the internal state would be supporting a complex computational process, that the GLUT would in effect be a virtual machine, etc.
This is like an argument where someone throws up examples so scary, or disgusting, or tear-jerking, or whatever, that we STOP THINKING and vote for whatever they are trying to sneak through. In other words, it does not deserve the honor of being called an argument.
This leaves the very interesting question of whether a computational process can support consciousness. I think the answer is yes, but that is a much richer discussion. GLUTs are a red herring and don’t lead much of anywhere.