In fact I don’t know why you have the intuition that you can’t simulate complex processing with a Giant Lookup Table. All you have to do is record a series of inputs and outputs from a piece of software, and there is your GLUT database.
That’s not a lookup table, that’s just a transcript. I only ever heard of one person believing a transcript is conscious. A lookup table gives the right answers for all possible inputs.
The reason we have the intuition that you can’t simulate complex processing with a lookup table is that it’s physically impossible—the size would be exponential in the amount of state, making it larger than the visible universe for anything nontrivial. But it is logically possible to have a lookup table for, say, a human mind for the duration of a human lifespan; and such a thing would, yes, contain consciousness.
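To put a rough number on “exponential in the amount of state” (my own back-of-the-envelope sketch; the parameters are arbitrary, not from the original comment): a table keyed on complete input histories needs one entry per possible history, so even a toy conversation of a thousand binary exchanges outruns the atom count of the visible universe.

    use strict;
    use warnings;
    use bigint;                      # exact big-integer arithmetic

    my $symbols   = 2;               # binary inputs (arbitrary illustrative choice)
    my $exchanges = 1000;            # length of the conversation (also arbitrary)
    my $entries   = $symbols ** $exchanges;

    # ~10^301 table entries, versus roughly 10^80 atoms in the
    # visible universe -- hence "physically impossible".
    print "$entries\n";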
Note that the piece of the original comment you don’t quote attempts to engage with this, by admitting that such a GLUT “will only be convincing if it is asked the right questions” and thus only simulates the original “up to a point.”
Which is trivially true even of a physically possible LUT. Heck, a one-line perl script that prints “Yes” every time it is given input simulates the behavior of any person you might care to name, as long as the questioner is sufficiently constrained.
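For concreteness, a minimal sketch of the kind of one-liner meant (the original doesn’t spell it out; this is just one plausible reading):

    # Answer "Yes" to every line of input, whatever the question was.
    print "Yes\n" while <STDIN>;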
Whether Peterdjones intends to generalize from that to a less trivial result, I don’t know.