(I know this is an old article; let me know if commenting on it is a faux pas of some sort)
I can’t recall ever seeing anyone claim that a GLUT is conscious.
Well, I’d definitely claim it. If we could somehow disregard all practical considerations, and conjure up a GLUT despite the unimaginably huge space requirements (sketched below)—then we could, presumably, hold conversations with it, read those philosophy papers that it writes, etc. How is that different from consciousness? Sure, the GLUT’s hardware is weird and inefficient, but if we agree that robots and zombies and such can be conscious, then why not GLUTs?
I can’t possibly be the only person in the world who’d ever made this observation...
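To put a rough number on those “unimaginably huge space requirements,” here’s a back-of-envelope sketch in Python. The symbol inventory and conversation length are purely hypothetical figures (nothing from the original post), chosen only to show the scaling.

```python
# Hypothetical numbers, only to illustrate how fast a GLUT keyed on full
# conversation history blows up.
alphabet_size = 100        # assumed size of the symbol inventory
max_history_len = 10_000   # assumed cap on conversation length, in symbols
entries = alphabet_size ** max_history_len
print(f"roughly 10^{len(str(entries)) - 1} table entries")
# ~10^20000 entries, versus roughly 10^80 atoms in the observable universe.
```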
My reluctance to treat GLUTs as conscious primarily has to do with the sense that, whatever conscious experience the GLUT might have, there is no reason it had to wait for the triggering event to have it; the data structures associated with that experience already existed inside the GLUT’s mind prior to the event, in a way that isn’t true for a system synthesizing new internal states that trigger/represent conscious experience.
That said, I’m not sure I understand why that difference should matter to the conscious/not-conscious distinction, so perhaps I’m just being parochial. (That is in general the conclusion I come to about most conscious/not-conscious distinctions, which mostly leads me to conclude that it’s a wrong question.)
there is no reason it had to wait for the triggering event to have it; the data structures associated with that experience already existed inside the GLUT’s mind prior to the event, in a way that isn’t true for a system synthesizing new internal states that trigger/represent conscious experience.
IMO that’s an implementation detail. The GLUT doesn’t need to synthesize new internal states because it already contains all possible states. Synthesizing new internal states is an optimization that our non-GLUT brains (and computers) use in order to get around the space requirements (as well as our lack of time-traveling capabilities).
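Here’s a minimal toy sketch of the two strategies being compared (hypothetical interfaces, nothing resembling a workable design): a GLUT that already holds every possible response before any input arrives, and an agent that synthesizes its internal state only when the triggering input shows up. Given a matching table and update/respond functions, the two are behaviorally indistinguishable.

```python
# Toy contrast between a precomputed lookup table and on-demand state synthesis.

class GLUT:
    def __init__(self, table):
        # `table` maps every possible conversation history (a tuple of
        # utterances) to a reply; all the "states" exist up front.
        self.table = table
        self.history = ()

    def reply(self, utterance):
        self.history += (utterance,)
        return self.table[self.history]


class SynthesizingAgent:
    def __init__(self, update, respond):
        self.state = ()          # internal state starts empty
        self.update = update     # (state, utterance) -> new state
        self.respond = respond   # state -> reply

    def reply(self, utterance):
        # The internal state representing this exchange is built only now,
        # in response to the triggering event.
        self.state = self.update(self.state, utterance)
        return self.respond(self.state)


# With matching definitions, the two give identical answers:
glut = GLUT({("hi",): "hello"})
agent = SynthesizingAgent(lambda s, u: s + (u,), lambda s: {("hi",): "hello"}[s])
assert glut.reply("hi") == agent.reply("hi") == "hello"
```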
That is in general the conclusion I come to about most conscious/not-conscious distinctions, which mostly leads me to conclude that it’s a wrong question.
Yeah, consciousness is probably just a philosophical red herring, as far as I understand...
Yeah, I don’t exactly disagree (though admittedly, I also think intuitions about whether implementation details matter aren’t terribly trustworthy when we’re talking about a proposed design that cannot conceivably work in practice). Mostly, I think what I’m talking about here is my poorly grounded intuitions, rather than about an actual thing in the world. Still, it’s sometimes useful to get clear about what my poorly grounded intuitions are, if only so I can get better at recognizing when they distort my perceptions or expectations.
though admittedly, I also think intuitions about whether implementation details matter aren’t terribly trustworthy when we’re talking about a proposed design that cannot conceivably work in practice
Yeah, the whole GLUT scenario is really pretty silly to begin with, so I don’t exactly disagree (as you’d say). Perhaps the main lesson here is that it’s rather difficult, if not impossible, to draw useful conclusions from silly scenarios.