The GLUT builder has to understand the given theory and derive its implications for the novel experiment. But they don’t have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn’t have to care which is which.
I don’t get what you mean here. Please clarify?
If the tester gives the GLUT a plausible-sounding but incorrect explanation of some event, one that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR (special relativity) that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses into the GLUT: responses that treat the incorrect explanation of SR as correct, and responses that treat the correct explanation of SR as correct. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
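To make that structure concrete, here is a minimal sketch in Python of a GLUT as a plain lookup table; all the strings are invented placeholders, not anything from the thought experiment itself. The point it illustrates is that the table carries believing-responses for every plausible explanation, correct and incorrect alike, and nothing in the table records which entry is the true one:

```python
# Minimal GLUT sketch: a table from tester inputs to precomputed responses.
# All strings here are invented placeholders for illustration only.

# The builder precomputes a believing-response for EVERY plausible-sounding
# explanation, correct and incorrect alike. No flag in the table marks
# which entry corresponds to the explanation that is actually true.
glut = {
    "correct explanation of SR":   "Oh, I see -- that makes sense of the experiment!",
    "incorrect explanation of SR": "Oh, I see -- that makes sense of the experiment!",
    "incorrect account of event":  "Huh, I suppose that is what happened, then.",
}

def glut_respond(tester_input: str) -> str:
    """Pure lookup: no understanding happens at query time."""
    return glut.get(tester_input, "I don't follow; can you rephrase?")

# Only the tester's later choice of input selects the 'correct' branch;
# the table itself treats all plausible explanations identically.
print(glut_respond("correct explanation of SR"))
```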
I do not accept that a me-of-10-years-ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accept each one as true information. Conversely, if he started with the “true” Shadowitz, he would have a hard time erasing that knowledge afterwards to give convincing answers to the “false” versions.
Not only would the me-of-10-years-ago be unable to convincingly reproduce, e.g., the excitement of learning new stuff and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as “equivalent” to mine.
That’s a trivial inconvenience. You can use a molecular assembler to build duplicates of your 10-years-ago self. Assuming that physicalism is correct and that consciousness involves no quantum effects, these doppelgängers will be conscious, and you can feed each one a version of the Shadowitz book.
I was anticipating precisely this objection.
My answer is that this is nothing like a GLUT any more. We are postulating a process of construction that is functionally the same as hooking me up to a source of quantum noise and recording all of my Everett branches subsequent to that point. The so-called GLUT is the holographic sum of all these branches. The look-up consists of finding the branch that matches a given input.
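A toy way to picture this construction, under the stated assumptions: each recorded branch is the full transcript of one duplicate’s run, and “look-up” means finding the branch whose input history matches the query so far. A hedged Python sketch, with all branch data invented for illustration:

```python
# Toy model of the branch-sum construction. Each "branch" is the recorded
# (input, response) transcript of one duplicate's run under noise.
# All data here is invented illustration, not anything from the argument.

from typing import List, Tuple

branches: List[List[Tuple[str, str]]] = [
    [("read true Shadowitz", "this works!"), ("run experiment", "confirmed")],
    [("read false Shadowitz #1", "this works!"), ("run experiment", "anomaly?")],
    [("read false Shadowitz #2", "this works!"), ("run experiment", "anomaly?")],
]

def lookup(inputs_so_far: List[str]) -> str:
    """Find a recorded branch whose input history matches, and replay it."""
    for branch in branches:
        if [step[0] for step in branch[: len(inputs_so_far)]] == inputs_so_far:
            return branch[len(inputs_so_far) - 1][1]  # response at this step
    return "<no recorded branch matches>"

print(lookup(["read false Shadowitz #1", "run experiment"]))  # -> "anomaly?"
```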
What this GLUT in fact looks like is simply the universe as conceived of under the relative-state interpretation of QM. (Whether the relative-state interpretation is correct or not is immaterial.) So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
After having followed the line of reasoning that led us here, “looking inside” the GLUT has precisely the same informational structure as “looking inside” the relative-state universe (not as we do, confined to one particular Everett branch, but as entities “outside” our universe would, assuming for instance that we lived in a simulation).
The GLUT, assuming this process of construction, looks precisely like a timeless universe. And we have no reason to doubt that the minds inhabiting this universe are conscious, and every reason to suppose that they are.
You can look at the substrate of the GLUT. This is actually an excellent objection to computationalism: since an algorithm can be memoized to various degrees, a simulation can be more or less strict, and so on, there is no sharp difference in character between a GLUT and a simulation of the physical universe.
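The “memoized to various degrees” point can be made concrete with a short sketch; the function below is an arbitrary stand-in, and the point is only that the same input-output behavior can sit anywhere on a continuum from full computation to full table lookup:

```python
from functools import lru_cache

# An arbitrary stand-in computation; any deterministic function would do.
def compute(n: int) -> int:
    return sum(i * i for i in range(n))

# Degree 0: compute from scratch every time (a "strict" simulation).
def respond_compute(n: int) -> int:
    return compute(n)

# Intermediate degree: cache each result the first time it is computed.
@lru_cache(maxsize=None)
def respond_memoized(n: int) -> int:
    return compute(n)

# Full degree: precompute the whole input domain -- a pure GLUT.
TABLE = {n: compute(n) for n in range(1000)}
def respond_glut(n: int) -> int:
    return TABLE[n]

# All three are extensionally identical on their shared domain:
assert respond_compute(42) == respond_memoized(42) == respond_glut(42)
```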
And claiming that the GLUT is conscious suffers from a particularly sharp version of the conscious-rock argument. Encrypt the GLUT with a random one-time pad, and neither the resulting data nor the key will be conscious; but you can plug both into a decrypter and consciousness is restored. This makes very little sense.
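The one-time-pad move is easy to exhibit. In the sketch below, an arbitrary byte string stands in for the serialized GLUT: the ciphertext and the key are each statistically indistinguishable from random noise on their own, yet XORing them together restores the table exactly:

```python
import secrets

# An arbitrary byte string standing in for the serialized GLUT.
glut_bytes = b"input -> response; input -> response; ..."

# Encrypt with a one-time pad: XOR against uniformly random key bytes.
key = secrets.token_bytes(len(glut_bytes))
ciphertext = bytes(g ^ k for g, k in zip(glut_bytes, key))

# Alone, ciphertext and key each look like pure noise; neither carries
# the GLUT's structure by itself. Decryption is the same XOR, and it
# restores the table bit for bit:
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
assert decrypted == glut_bytes
```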