Do you expect lookup tables to be able to demonstrate convincing consciousness-like behavior (à la Searle’s Chinese Room), while still not satisfying your lower bound?
If not, would encountering such a convincing GLUT-based system (that is, one that violated your expectations) change your opinions at all about where the lower bound actually is?
I ask because, in general, I agree with you that there exists a lower bound and that GLUTs don’t satisfy it; but I don’t think a GLUT can convincingly simulate consciousness, and if I encountered one that did (as I initially understood Peter to be proposing), I’d have to significantly update my beliefs in this whole area.
> Do you expect lookup tables to be able to demonstrate convincing consciousness-like behavior (à la Searle’s Chinese Room), while still not satisfying your lower bound?
I expect them to be theoretically able to exhibit conscious-like behavior, but I don’t endorse the idea that Searle’s Chinese Room is a lookup table, or that it is unconscious. Searle’s Chinese Room is carrying out algorithms, and Searle’s commentary on it is incoherent; I disagree with his definitions, assumptions, arguments, and conclusions.
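To make that distinction concrete, here is a minimal sketch (the table entries and rules are illustrative assumptions, not anything from the thread): a GLUT’s behavior is exhausted by a single lookup on the whole input history, whereas the Room’s rulebook describes a procedure that is carried out step by step.

```python
# Illustrative sketch only; the entries and rules are made up for the example.

# A GLUT: one precomputed output for every possible conversation history.
glut = {
    ("Hello",): "Hi there.",
    ("Hello", "How are you?"): "Fine, thanks.",
    # ...an entry for every history the table is meant to cover...
}

def glut_reply(history):
    """Answer by a single table lookup; no computation over the input."""
    return glut[tuple(history)]

# An algorithm: the output is produced by following rules at run time,
# which is closer to what the rulebook in the Chinese Room describes.
def rule_based_reply(history):
    last = history[-1]
    if last.endswith("?"):
        return "Good question."
    return "Go on."
```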
In practice, I don’t expect a lookup table to produce any such behavior until long after we have learned much more about consciousness. A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
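A rough back-of-the-envelope sketch of that capacity point, using parameters I am assuming purely for illustration (a plain-ASCII alphabet and a single 1,000-character input), not figures from the discussion:

```python
import math

# Illustrative assumptions, not figures from the discussion.
alphabet_size = 128     # plain ASCII input symbols
input_length = 1000     # one ~1,000-character conversational input

# A GLUT needs one entry per possible input string of this length,
# i.e. alphabet_size ** input_length entries.
exponent = input_length * math.log10(alphabet_size)
print(f"entries required ~ 10^{exponent:.0f}")   # ~10^2107

# The observable universe is commonly estimated to hold ~10^80 atoms,
# so even one atom per entry falls short by thousands of orders of magnitude.
```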
> A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
Yeah, that’s my expectation. So confirming the actual existence of a human-like GLUT would cause me to sharply revise many of my existing beliefs on the whole subject.
My confidence, in that scenario, that the GLUT was not conscious would not be very high.