One black box is equivalent to another so long as you don’t peek inside. So the outputs you get if, for instance, you X-ray it are not part of the subset of outputs under which they are equivalent.
If such official, at-the-edge outputs are all that matters for computationalism, then dumb-but-fast Look Up Tables could be conscious, which is a problem.
If the inner workings of black boxes count, then the Turing Test is flawed, for similar reasons.
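To make the “equivalent so long as you don’t peek inside” point concrete, here is a minimal sketch in Python (the particular function and names are illustrative, not anything from this thread): two boxes give identical at-the-edge outputs, and only inspecting their internals tells them apart.

    # Two "black boxes" with identical at-the-edge behaviour for 8-bit inputs:
    # one computes its answer, the other merely replays a precomputed table.
    def computing_box(x: int) -> int:
        # Derives the answer by running an algorithm at query time.
        return (x * x + 1) % 256

    _TABLE = {x: (x * x + 1) % 256 for x in range(256)}

    def lookup_box(x: int) -> int:
        # Returns the same answer by pure lookup -- no computation at query time.
        return _TABLE[x]

    # At the edge, the two are indistinguishable:
    assert all(computing_box(x) == lookup_box(x) for x in range(256))
    # Inspecting the internals (the "X-ray") does distinguish them, but those
    # observations are not among the outputs under which they are equivalent.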
Sincere question: why would this be a problem?
I mean, I get that LUTs violate our intuitions about what ought to be necessary to get genuine consciousness, but then they also violate my intuitions about what ought to be necessary to get a convincing simulation of it. If I throw out the latter intuitions to accept a convincing LUT, I’m not sure why I shouldn’t be willing to throw out the former intuitions as well.
Is there more here than just dueling intuitions?
See my lower bound for consciousness. Lookup tables don’t satisfy the lower bound. The lower bound is that point at which Quine’s theory of ontological relativity / confirmation holism is demonstrably false, and so “meaning” can exist.
Do you expect lookup tables to be able to demonstrate convincing consciousness-like behavior (à la Searle’s Chinese Room), while still not satisfying your lower bound?
If not, would encountering such a convincing GLUT-based system (that is, one that violated your expectations) change your opinions at all about where the lower bound actually is?
Because in general, I agree with you that there exists a lower bound and GLUTs don’t satisfy it, but I don’t think a GLUT can convincingly simulate consciousness, and if I encountered one that did (as I initially understood Peter to be proposing) I’d have to significantly update my beliefs in this whole area.
I expect them to be theoretically able to exhibit conscious-like behavior, but I don’t endorse the idea that Searle’s Chinese Room is a lookup table, or unconscious. Searle’s Chinese Room is carrying out algorithms, and Searle’s commentary on it is incoherent; I disagree with his definitions, assumptions, arguments, and conclusions.
In practice, I don’t expect a lookup table to produce any such behavior until long after we have learned much more about consciousness. A lookup table might be theoretically incapable of exhibiting human-like behavior due to the limited memory and computational capacity of this universe.
Yeah, that’s my expectation. So confirming the actual existence of a human-like GLUT would cause me to sharply revise many of my existing beliefs on the whole subject.
My confidence, in that scenario, that the GLUT was not conscious would not be very high.
You shouldn’t, because they are different intuitions. In fact, I don’t know why you have the intuition that you can’t simulate complex processing with a Giant Look Up Table. All you have to do is record a series of inputs and outputs from a piece of software, and there is the database for your GLUT. Of course, that GLUT will only be convincing if it is asked the right questions. If any software is Gluttable up to a point, the Consciousness Programme is Gluttable UTAP. But we don’t have to believe a programme that is spitting out pre-recorded digits of pi is calculating pi. We can keep that intuition.
That’s not a lookup table, that’s just a transcript. I only ever heard of one person believing a transcript is conscious. A lookup table gives the right answers for all possible inputs.
The reason we have the intuition that you can’t simulate complex processing with a lookup table is that it’s physically impossible—the size would be exponential in the amount of state, making it larger than the visible universe for anything nontrivial. But it is logically possible to have a lookup table for, say, a human mind for the duration of a human lifespan; and such a thing would, yes, contain consciousness.
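A back-of-envelope version of the “exponential in the amount of state” point, with figures that are assumptions for illustration rather than anything from the thread: a table that answers correctly for all possible inputs has to be keyed by the entire input history it could encounter, not by one recorded run.

    # Rough size estimate for a GLUT keyed by full input history.
    # The history size below is an assumed, illustrative figure.
    history_bits = 10_000                    # assume ~10 kilobits of relevant input history
    entries = 2 ** history_bits              # one table entry per possible history

    atoms_in_visible_universe = 10 ** 80     # commonly cited order-of-magnitude estimate
    print(entries > atoms_in_visible_universe)   # True, by an enormous margin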
Note that the piece of the original comment you don’t quote attempts to engage with this, by admitting that such a GLUT “will only be convincing if it is asked the right questions” and thus only simulates the original “up to a point.”
Which is trivially true with even a physically possible LUT. Heck, a one-line perl script that prints “Yes” every time it is given input simulates the behavior of any person you might care to name, as long as the questioner is sufficiently constrained.
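For concreteness, a Python analogue of that one-liner (the Perl original isn’t quoted, so this is just an illustration):

    import sys
    # Answers "Yes" to every line of input -- behaviourally identical to any
    # person, provided the questioner only asks questions that person would
    # answer "Yes" to.
    for _line in sys.stdin:
        print("Yes")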
Whether Peterdjones intends to generalize from that to a less trivial result, I don’t know.
If I say a GLUT can’t compute the output that is consciousness (suppose we have a consciousness-detecting machine; the output will be whatever causes the needle on that machine to jump) without a model of a person equivalent to a person, you’ll probably say I’m begging the question. I can’t think of a way around that, but if you could refute that thought of mine, that would probably resolve a lot for me.
I agree that if the questioner is sufficiently constrained, then a GLUT (or even a Tiny Lookup Table) can simulate any process’s responses to that questioner, however complex or self-referential the process.
So, yes, any process—including conscious processes—can be simulated UTAP by a simple look-up table, in the same sense that living biological systems can be simulated by rocks UTAP.
I’ve lost track of why that is important.
If the intuition that look-up is not sufficient computation for consciousness is correct, then a flaw in the Turing Test is exposed. If a complex Computation Programme could pass the TT, then a GLUT version must be able to as well.
Sure, I agree that with a sufficiently constrained questioner, the Turing Test is pretty much useless.
The values of the GLUT have to be populated somehow, which means that, by some means at some point in the past, an instance of the associated computation was matched against an identical stimulus. Intuitively it seems likely that a GLUT is too simple to instantiate consciousness on its own, but it seems better viewed as one component of a larger system that must in practice include a conscious agent, albeit one temporally and spatially removed from the thought experiment’s present.
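One way to picture that populating step, as a sketch only (original_agent and the exhaustive enumeration of histories are hypothetical stand-ins, not a claim about how such a table could actually be built):

    # Populating a GLUT by consulting the original (possibly conscious) process
    # on every input history it might encounter. Whatever "intelligence" the
    # finished table displays was contributed here, before the thought
    # experiment's present, by the generating process.
    def build_glut(original_agent, all_input_histories):
        table = {}
        for history in all_input_histories:
            # Each entry is fixed by matching the original computation
            # against that exact stimulus.
            table[history] = original_agent(history)
        return table

    def glut_agent(table, history):
        # At "runtime" the GLUT only replays what the original already decided.
        return table[history]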
Isn’t this basically a restatement of the Chinese Room?
That’s not what I claimed; in fact, I was trying to be careful to discredit that. I said the system can be arbitrarily divided, and replacing any part with a different part or black box that gives the same outputs as the original would have does not affect the rest of the system. Some patterns of replacement of parts remove the conscious parts. Some do not.
This is important because I am trying to establish “red” and other phenomena as relational properties of a system containing both me and a red object. This is something that I think distinguishes my answer from others’.
I’m distinguishing further between removing my eyes and the red object and replacing them with a black box sending inputs into my optic nerves, which preserves consciousness, and replacing my brain with a black box lookup table and keeping my eyes and the object intact, which removes the conscious subsystem of the larger system. Note that some form of the larger system is a requirement for seeing red.
My answer highlights how only some parts of the conscious system are necessary for the output we call consciousness, and makes sure we don’t confuse ourselves and think that all elements of the conscious computing system are essential to consciousness, or that all may be replaced.
The algorithm is sensitive to certain replacements of its parts with functions, but not to others.