Actually the whole idea of the GLUT machine (dubbed the ‘blockhead’ in Braddon-Mitchell and Jackson’s book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make within a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the “judge” could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.
silence and looping, which is discussed explicitly in the paper
I confess to having downloaded the paper recently and not given it more attention than was necessary to satisfy my usual habit of having primary sources at hand. I’ve gone back and read it more carefully, but it probably deserves still longer scrutiny.
(Welcome to Less Wrong, by the way. I don’t suppose you need to post an introduction, seeing as you have your own Wikipedia page. Nice to be chatting with you here!)
However, I’m not seeing where this is discussed explicitly, other than (this is perhaps what you mean) under the general heading of using “quantized stimulus parameters” as input to the GLUT-generating process. I grant that this does adequately deal with the most crude timing attacks imaginable.
There do seem to me to be other, more subtle attacks which would still prove fatal (per my earlier argument that having to go back to the drawing board each time such an attack is found leaves the GLUT critique of behaviourism ineffective). For instance, we can consider the teachability of the GLUT, which uncovers an entire class of attacks.
Suppose there is some theoretical concept, unknown to the putative human programmers of the GLUT (or perhaps we should call them conversation-authors, as the programming involved is minimal), but which can be taught to someone of normal intelligence. I don’t want to restrict my argument to any particular domain, but for illustrative purposes let’s pick the phenomenon of lasing light. This is a reasonable example, since the GLUT concept would have been implementable as early as Babbage’s time and the key insights date from Einstein’s.
In this scenario, the GLUT’s interviewer chooses as her conversation topic the theoretical background needed to build up to the concept of lasing light. The test comes when she (gender picked by flipping a coin) asks the GLUT to make specific predictions about a given experimental setup that extrapolates the relevant physical law into a domain not previously discussed, but where that law still applies.
By my earlier stipulation, the GLUT’s builders must discover, in the process of building the GLUT, the physical law of lasing light. They must also prune the conversation tree of “wrong” predictions, since leaving those in would alert the interviewer to the fact that the GLUT was “faking” understanding up to the point of the experimental test; this rules out the builders merely “covering all (conversational) bases”. They must truly understand the phenomenon themselves.
(One may object that it would take an inordinately long time to teach a person of merely normal intelligence about a phenomenon such as lasing light. But we have earlier stipulated that the length of the test can be extended to human lifespans; that is surely enough for a person of normal intelligence to eventually get there.)
We are led to what is (to me at least) a disturbing conclusion. Building a GLUT entails the builders discovering every experimentally discoverable physical law of our universe that can be taught to a person of normal intelligence in a reasonable finite lifespan.
I’m not a professional philosopher, so possibly this argument has holes.
Nevertheless it seems to me that this unpalatable conclusion points to one primordial flaw in the GLUT argument: it goes counter to the open-ended nature of the optimization process known as intelligence. You cannot optimize by covering all bases, for the same reason that a theory that can explain all conceivable events has no real content.
The original paper tried to anticipate this objection by offering as a general defense the stipulation that the GLUT should simulate a “desert island” type of castaway, so that the GLUT would be excused from having to converse fluently about current events. But the objection is more general, and its force becomes harder to avoid if the duration of the test is extended greatly: we need to imagine that the GLUT can be brought up to date with current events, and afterwards respond appropriately to them, as would a person of normal intelligence. This requires the GLUT builders to anticipate the future with enough precision to prune “inappropriate” responses, and so the defense that the builders would “cover all bases” is untenable.
The domain of physical law is the one where the consequences of the teachability test are brought into sharpest focus, but I suspect that “merely social” tests of the GLUT in everyday life would very quickly expose its supposed intelligence as a sham.
Behaviourism, or God-like GLUT builders: pick your poison.
There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: “Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on.” The general point is that there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
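For concreteness, here is a minimal sketch of the structure Block is describing, written as Python (the toy alphabet, the length and turn limits, and all names are my own illustrative choices, not anything in the paper): every string the judge could type at each turn is mapped, offline and by hand, to one sensible reply, and at test time the machine does nothing but look entries up.

```python
# Minimal sketch of the GLUT ("blockhead") as an exhaustively enumerated tree.
# Toy parameters and names are mine, purely for illustration.
from itertools import product

ALPHABET = "ab "   # toy stand-in for "letters and spaces"
MAX_LEN = 2        # toy limit on how much the judge can type per turn
MAX_TURNS = 2      # toy limit on the number of exchanges in the test

def all_strings():
    """Every non-empty string the judge could type in a single turn."""
    for n in range(1, MAX_LEN + 1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def build_glut(sensible_reply, history=(), turns_left=MAX_TURNS):
    """Offline construction: for EVERY possible judge string, the programmers
    hand-write one sensible reply (given the conversation so far), then recurse
    on the rest of the conversation."""
    if turns_left == 0:
        return {}
    table = {}
    for s in all_strings():
        reply = sensible_reply(history + (s,))   # written by hand, offline
        table[s] = (reply, build_glut(sensible_reply, history + (s, reply), turns_left - 1))
    return table

def run_glut(glut, judge_inputs):
    """At test time the machine only looks up canned replies; nothing is computed."""
    replies = []
    for s in judge_inputs:
        reply, glut = glut[s]
        replies.append(reply)
    return replies

# Example (the lambda stands in for the programmers' offline work):
# glut = build_glut(lambda conv: f"[a sensible reply to {conv[-1]!r}]")
# run_glut(glut, ["ab", "ba"])
```

All of the intelligence lives in sensible_reply, that is, in the programmers working offline; run_glut is the jukebox.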
there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion
That is plainly wrong. The “input” space (possible judge queries) is exhaustively covered, I’m getting that just fine. No such thing can be said about the “output” space: we’re requiring that the output consist of strings encoding responses that an intelligent person would emit. The judge is allowed to say random, possibly wrong, things, but the GLUT is not so allowed.
Consider an input string which consists of a correct explanation of quantum mechanics (which we assume the builders don’t know yet at build time), plus a question to the GLUT about what happens in a novel, never before encountered (by the GLUT) experimental setup. This input string is possible, and so must be considered by the builders (along with input strings that are incorrect explanations of QM plus questions about TV shows, but we needn’t concern ourselves with those, since an actual “judge from the builder’s future” will not emit them).
In order to construct even one sensible response to this input string, to respond “as an intelligent person would”, the GLUT builders must correctly predict the experimental result. An incorrect response will signal to the “judge” that the GLUT is responding by rote, without understanding. If the GLUT equivocates with “I don’t know”, the judge will press for an answer; we are assuming that the GLUT has answered all previous queries sensibly up to this point, that it has been a “good student” of QM. If the GLUT keeps dodging the judge’s request for a prediction, the game is up: the judge will flunk it on the Turing Test.
To correctly predict an experimental result, the builders must know and understand QM, but we have assumed they don’t. Assuming that the GLUT always passes the Turing Test leads us to a contradiction, so we must allow that there are some Turing Tests the GLUT is unable to pass: those that require it to learn something its builders didn’t know. The GLUT does not have the capacity you are claiming for it.
(If you disagree, and think I’m still not getting it, please kindly answer the following: considering only a single input string QM+NE, an explanation of quantum mechanics plus a novel experiment, how is a builder who doesn’t understand QM supposed to construct a sensible answer to that input string?)
You’re assuming that the GLUT is simulating a person of average intelligence, right? So they ask a person of average intelligence how they’d respond to that particular sentence, given various kinds of context, and program in the answer(s).
What you’re trying to get at, I think, is a situation for which the GLUT has no response, but that’s already ruled out by the fact that the hypothetical situation specifies that the programmers have to have systematically considered every possible situation and programmed in a response to it. (It doesn’t have to be a good response, just how a person of average intelligence would respond, so variations on ‘I don’t know’ or ‘that doesn’t make sense to me’ would be not just acceptable but actually correct in some situations.)
You’re assuming that the GLUT is simulating a person of average intelligence, right?
Heh. I’d claim that your use of “average” here is smuggling in precisely the kind of connotations that are relied on to make the GLUT concept plausible, but which do not stand up to scrutiny.
Let’s say I’m assuming the GLUT is simulating an intelligence “equivalent” to mine. And assume the GLUT builder is me, ten years ago, when I didn’t know about Brehme diagrams but was otherwise relatively smart. Assume the input string is the first few chapters of the Shadowitz text on special relativity I have recently gone through. Under these assumptions, “equivalent” intelligence consists of being able to answer the exercises as correctly as I recently did.
(Crucially, if the supposed-to-be-equivalent-to-mine intelligence turns out to be for some reason cornered into saying “I don’t know” or “I can’t make sense of this text”, I can tell for sure it’s not as smart as I am, and we have a contradiction.)
The GLUT intuition pump requires that the me-of-today can “teach” the me-of-ten-years-ago how to use Brehme diagrams, to the point where the me-of-ten-years-ago can correctly answer the kind of questions about time dilation that I can answer today.
We’re led to concluding one of the following:
that I can send information backwards in time
that the me-of-ten-years-ago did know about SR, contrary to stipulation
that the builders have another way of computing sensible answers, contrary to stipulation
that the “intelligence” exhibited by the GLUT is restricted to making passable conversational answers, and does not extend to acquiring new knowledge
My hunch is that this last is really what the fuzziness of the word “intelligence” allows someone thinking about GLUTs to get away with, and not realize it. The GLUT is a smarter ELIZA, but if we try to give it a specific, operational, predictive kind of intelligence of which humans are demonstrably capable, it is easily exposed as a dummy.
In the course of building the GLUT, while going through every possible input that the GLUT might need to respond to, you-of-10-years-ago would have to encounter the first few chapters of the book in question, and figure out a correct response to that particular input string. So you-of-10-years-ago would have to know about SR, not necessarily at the start of the project, but definitely by the end of it. (And the GLUT simulating you-of-10-years-ago would be able to simulate the responses that you-of-10-years-ago generated in the learning process, assuming that you-of-10-years-ago put them in as generated rather than programming the GLUT to react as if it already knew about SR.)
Going through every possible random string is an extremely inefficient way to gain new information, though.
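To put a rough number on that inefficiency (the figures below are toy assumptions of mine, not from the paper or the discussion above): even one turn of judge input explodes combinatorially.

```python
import math

# Toy assumptions: 27 symbols (26 letters plus space), and a judge who can
# type at most 10,000 characters in the remaining hour.
symbols = 27
chars_per_turn = 10_000

digits = chars_per_turn * math.log10(symbols)   # log10 of 27 ** 10_000
print(f"more than 10^{int(digits)} distinct judge strings of that length alone")
# Every one of them needs its own hand-written sensible reply, at every node
# of the conversation tree.
```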
So you-of-10-years-ago would have to know about SR,
So you agree with me: since there is nothing special about either the 10-year stipulation or about the theory in question, we’re requiring the GLUT builders to have discovered and understood every physical theory that will ever be discovered and can be taught to a person of my intelligence.
This is conceptually an even taller order than the already hard to swallow “impossible-but-conceptually-conceivable” machine. Where are they supposed to get the information from? This is—so we are led to conclude—a civilization which can take a stroll through the Library of Babel and pick out just those books which correspond to a sensible physical theory.
I think you misunderstood. You-of-10-years-ago doesn’t have to have figured out SR prior to building the GLUT; you-of-10-years-ago would learn about SR—and an unimaginable number of other things, many of them wrong—in the course of programming the GLUT. That’s implied in ‘going through every possible input’. Also, you-of-10-years-ago wouldn’t have to program the objectively-right answers into the GLUT, just their own responses to the various inputs, so no external data source is necessary.
The GLUT builder has to understand the given theory, and derive its implications for the novel experiment. But they don’t have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn’t have to care which is which.
I don’t get what you mean here. Please clarify?

If the tester gives the GLUT a plausible-sounding explanation of some event that is incorrect, but that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses into the GLUT: responses that treat the incorrect explanation of SR as correct, and responses that treat the correct explanation of SR as correct. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
I do not accept that a me-of-10-years-ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting that as true information. Conversely, if he started with the “true” Shadowitz he would have a hard time erasing that knowledge afterwards to give convincing answers to the “false” versions.

Not only would the me-of-10-years-ago be unable to convincingly reproduce, e.g., the excitement of learning new stuff and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as “equivalent” to mine.
I do not accept that a me-of-10-years-ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting that as true information.
That’s a trivial inconvenience. You can use a molecular assembler to build duplicates of your 10-years-ago self. Assuming that physicalism is correct and that consciousness involves no quantum effects, these doppelgängers will be conscious and you can feed each one a version of the Shadowitz book.
I was anticipating precisely this objection.

My answer is that this is nothing like a GLUT any more. We are postulating a process of construction which is functionally the same as hooking me up to a source of quantum noise and recording all of my Everett branches subsequent to that point. The so-called GLUT is the holographic sum of all these branches. The look-up consists of finding the branch which matches a given input.
What this GLUT in fact looks like is simply the universe as conceived of under the relative state interpretation of QM. (Whether the relative state interpretation is correct or not is immaterial.) So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
After having followed the line of reasoning that led us here, “looking inside” the GLUT has precisely the same informational structure as “looking inside” the relative-state universe (not as we do, confined to one particular Everett branch, but as would entities “outside” our universe, assuming for instance that we lived in a simulation).
The GLUT, assuming this process of construction, looks precisely like a timeless universe. And we have no reason to suppose that the minds inhabiting this universe are not conscious, and every reason to suppose that they are.
So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
You can look at the substrate of the GLUT. This is actually an excellent objection to computationalism, since an algorithm can be memoized to various degrees, a simulation can be more or less strict, and so on, so there’s no sharp difference in character between a GLUT and a simulation of the physical universe.
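As a sketch of the continuum being gestured at here (the function names and the stand-in some_reasoning are mine, purely illustrative): the same input/output behaviour can be produced with progressively less work done at call time, bottoming out in a pure lookup table.

```python
from functools import lru_cache

def some_reasoning(prompt: str) -> str:
    # Stand-in for whatever genuine computation produces a sensible reply.
    return f"a sensible reply to {prompt!r}"

def respond(prompt: str) -> str:
    """Fully 'live' algorithm: all the work happens at call time."""
    return some_reasoning(prompt)

@lru_cache(maxsize=None)
def respond_memoized(prompt: str) -> str:
    """Partly a lookup table: each answer is computed once, then replayed."""
    return some_reasoning(prompt)

# Fully precomputed: a pure GLUT over some finite set of prompts.
PROMPTS = ["hi", "what is a laser?"]           # toy stand-in for "every possible input"
GLUT = {p: some_reasoning(p) for p in PROMPTS}

def respond_glut(prompt: str) -> str:
    """No computation at call time; behaviourally identical on PROMPTS."""
    return GLUT[prompt]
```

Behaviourally the three are indistinguishable on the covered inputs, which is exactly why the substrate, not the behaviour, has to carry the weight of the argument.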
And claiming that the GLUT is conscious suffers from a particularly sharp version of the conscious-rock argument. Encrypt the GLUT with a random one-time pad, and neither the resulting data nor the key will be conscious; but you can plug both into a decrypter and consciousness is restored. This makes very little sense.
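The one-time-pad step can be made concrete (a toy sketch; the serialization and names are mine): XOR the stored GLUT with random bytes, and either half on its own is indistinguishable from noise, yet XORing the halves back together restores the table bit for bit.

```python
import secrets

def otp_encrypt(data: bytes) -> tuple[bytes, bytes]:
    """Return (ciphertext, key); each half alone is uniformly random noise."""
    key = secrets.token_bytes(len(data))
    ciphertext = bytes(d ^ k for d, k in zip(data, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XORing the two halves back together recovers the original exactly."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

glut_bytes = b'{"hello": "a sensible reply"}'   # stand-in for the serialized GLUT
ciphertext, key = otp_encrypt(glut_bytes)
assert otp_decrypt(ciphertext, key) == glut_bytes
```

If consciousness supervened on the stored table alone, it would have to vanish under the XOR and reappear only in the decrypter, which is the absurdity being pointed to.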