As far as I can tell from looking at those links, both Searle and Pearce would deny the possibility of simulating a person with a conventional computer. I understand that position, and while I think it is probably wrong, it is not obviously wrong and could turn out to be true. It seems that this is also Penrose’s position.
From the Chinese Room Wikipedia entry for example:
Searle accuses strong AI of dualism, the idea that the mind and the body are made up of different “substances”. He writes that “strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn’t matter.” He rejects any form of dualism, writing that “brains cause minds” and that “actual human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains”, a position called “biological naturalism” (as opposed to alternatives like behaviourism, functionalism, identity theory and dualism).
From the Pearce link you gave:
Secondly, why is it that, say, an ant colony or the population of China or (I’d argue) a digital computer—with its classical serial architecture and “von Neumann bottleneck”—don’t support a unitary consciousness beyond the aggregate consciousness of its individual constituents, whereas a hundred billion (apparently) discrete but functionally interconnected nerve cells of a waking/dreaming vertebrate CNS can generate a unitary experiential field? I’d argue that it’s the functionally unique valence properties of the carbon atom that generate the macromolecular structures needed for unitary conscious mind from the primordial quantum minddust.
So I still wonder whether anyone actually believes that you could simulate a human mind with a computer but that it would not be conscious.
both Searle and Pearce would deny the possibility of simulating a person with a conventional computer.
They would deny that a conventional computer simulation can create subjective experience. However, the Church-Turing thesis implies that if physicalism is true then conscious beings can be simulated. AFAICT, it is only Penrose who would deny this.
Do you mean the Church-Turing-Deutsch principle? It appears to me that Pearce at least in the linked article is making a claim which effectively denies that principle—his claim implies that physics is not computable.
It appears to me that Pearce at least in the linked article is making a claim which effectively denies that principle—his claim implies that physics is not computable.
Why? Pearce is a physicalist, not a computationalist; he ought to accept the possibility of a computation which is behaviorally identical to consciousness but has no conscious experience.
he ought to accept the possibility of a computation which is behaviorally identical to consciousness but has no conscious experience.
What sense of ‘ought’ are you using here? That seems like a very odd thing to believe to me. If you think that’s what he actually believes you’re going to have to point me to some evidence.
That seems like a very odd thing to believe to me.
So that means you are a computationalist? Fine, but why do you think physicalism may be incoherent?
If you think that’s what he actually believes you’re going to have to point me to some evidence.
It’s hard to fish for evidence in a single interview, but Pearce says:
The behaviour of the stuff of the world is exhaustively described by the universal Schrödinger equation (or its relativistic generalization). This rules out dualism (causal closure) or epiphenomenalism (epiphenomenal qualia would lack the causal efficacy to talk about their own existence). But theoretical physics is completely silent on the intrinsic nature of the stuff of the world; physics describes only its formal structure.
To me, this reads as an express acknowledgement of the CT thesis (unless quantum gravity turns out to be uncomputable, in which case the CTT is just plain false).
So that means you are a computationalist? Fine, but why do you think physicalism may be incoherent?
The distinction seems to hinge on whether physics is computable. I suspect the Church-Turing-Deutsch principle is true, and if it is then it is possible to simulate a human mind using a classical computer and that simulation would be conscious. If it is false, however, then it is possible that consciousness depends on some physical process that cannot be simulated in a computer. That seems to me to be what Pearce is claiming, and that is not incoherent. If we live in such a universe, however, then it is not possible to simulate a human using a classical computer / universal Turing machine, and so it is incoherent to claim that you could simulate a human but that the simulation would not be conscious, since you can’t simulate a human in the first place.
To me, this reads as an express acknowledgement of the CT thesis (unless quantum gravity turns out to be uncomputable, in which case the CTT is just plain false).
I honestly don’t see how you make that connection. It seems clear to me that Pearce is implying that consciousness depends on non-computable physical processes.
if it is then it is possible to simulate a human mind using a classical computer and that simulation would be conscious.
You seem to be begging the question: I suspect that we simply have different models of what the “problem of consciousness” is.
Regardless, physicalism seems to be the most parsimonious theory; computationalism implies that any physical system instantiates all conscious beings, which makes it a non-starter.
Basically, the interpretation of a physical system as implementing a computation is subjective, and a sufficiently complex interpretation can interpret it as implementing any computation you want, or at least any up to the size of the physical system. AKA the “conscious rocks” or “joke interpretations” problem.
Basically, the interpretation of a physical system as implementing a computation is subjective, and a sufficiently complex interpretation can interpret it as implementing any computation you want, or at least any up to the size of the physical system.
I can see why someone might think that, but surely the requirement that any interpretation be a homomorphism from the computation to the processes of the object would be a strong restriction on the set of computations that it is instantiating?
surely the requirement that any interpretation be a homomorphism from the computation to the processes of the object would be a strong restriction on the set of computations that it is instantiating
Intriguing. Could you elaborate? Apparently “homomorphism” is a very general term.
I think the idea is that you can’t pick a different interpretation for the rock implementing a specific computation for each instant of time. A convincing narrative of the physical processes in a rock instantiating a consciousness would require a mapping from rock states to the computational process of the consciousness that remains stable over time. With the physical processes going on in rocks being pretty much random, you wouldn’t get the moment-to-moment coherence you’d need for this even if you can come up with interpretations for single instants.
One intuition here is that once you come up with a good interpretation, the physical system needs to be able to come up with correct results from computations that go on longer than where you extrapolated doing your interpretation. If you try to get around the single instant thing and make a tortured interpretation of rock states representing the computation of, say, 100 consecutive computations of the consciousness, the interpretation is going to have the rock give you garbage for computation 101. You’re just doing the computation yourself now and painstakingly fitting things to random physical noise in the rock.
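To make that concrete, here is a toy sketch (purely illustrative Python, with made-up “rock” and “computation” dynamics of my own invention) of why an interpretation fitted state-by-state to past noise carries no predictive structure of its own:

```python
import random

def computation_step(state):
    """Deterministic toy 'consciousness' computation: a fixed next-state rule."""
    return (3 * state + 1) % 97

def rock_step(_state):
    """Toy model of the rock: its microstate wanders essentially at random."""
    return random.randrange(10**6)

# Fit an interpretation by brute force: map each rock state we happened to
# observe onto whatever the computation happened to be doing at that moment.
comp_state, rock_state = 1, random.randrange(10**6)
interpretation = {}
for _ in range(100):
    interpretation[rock_state] = comp_state
    comp_state = computation_step(comp_state)
    rock_state = rock_step(rock_state)

# The fitted table "works" for the 100 moments it was fitted to, but it has
# no predictive power: the rock's next state is almost certainly not in the
# table at all, so the interpretation cannot tell us what step 101 of the
# computation is. All the structure lives in the fitting, not in the rock.
print(interpretation.get(rock_step(rock_state), "no interpretation for step 101"))
```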
A homomorphism is a “structure preserving map”, and is quite general until you specify what is preserved.
From my brief reading of Chalmers, he’s basically captured my objection. As Risto_Saarelma says, the point is that a mapping merely of states should not count. As long as the sets of object states are not overlapping, there’s a mapping into the abstract computation. That’s boring. To truly instantiate the computation, what has to be put in is the causal structure, the rules of the computation, and these seem to be far more restrictive than one trace of possible states.
Chalmers’s “clock and dial” seems to get around this in that it can enumerate all possible traces, which seems to be equivalent to capturing the rules, but still feels decidedly wrong.
Having printed it out and read it, it seems that “any physical system instantiates all conscious beings” is fairly well refuted, and what is left reduces to the GLUT problem.
I remember seeing the Chalmers paper before, but never reading far enough to understand his reasoning—I should probably print it out and see if I can understand it on paper.
Edit: Yes, I know that he’s criticizing the argument—I’m just saying I got lost last time I tried to read it.
So do you think there is a meaningful difference between computationalism and physicalism if the Church-Turing-Deutsch principle is true? If so, what is it?
So do you think there is a meaningful difference between computationalism and physicalism if the Church-Turing-Deutsch principle is true?
Basically, physicalism need not be substrate-independent. For instance, it could be that Pearce is right: subjective experience is implemented by a complex quantum state in the brain, and our qualia, intentionality and other features of subjective experience are directly mapped to the states of this quantum system. This would account for the illusion that our consciousness is “just” our brain, while dramatically simplifying the underlying ontology.
Is that a yes or a no? It seems to me that saying physicalism is not substrate-independent is equivalent to saying the Church-Turing-Deutsch principle is false. In other words, that a Turing machine cannot simulate every physical process. My question is whether you think there is a meaningful difference between physicalism and computationalism if the Church-Turing-Deutsch principle is true. There is obviously a difference if it is false.
In other words, that a Turing machine cannot simulate every physical process.
Why would this be? Because of free will? Even if free will exists, just replace the input of free will with a randomness oracle and your Turing machine will still be simulating a conscious system, albeit perhaps a weird one.
I don’t think free will is particularly relevant to the question. Pearce seems to be claiming that some kind of quantum effects in the brain are essential to consciousness and that a simulation of a brain in a computer therefore cannot be conscious. If you could simulate the quantum processes then the argument falls apart. It only makes sense if the Church-Turing-Deutsch principle is false and there are physical processes that cannot be simulated by a Turing machine. I think that is unlikely but possible and a coherent position.
If all physical processes can be simulated by a Turing machine then I don’t see a meaningful difference between physicalism and computationalism. I still don’t know what your answer is to that question. If you do think there is still a meaningful difference then please share.
a simulation of a brain in a computer therefore cannot be conscious. If you could simulate the quantum processes then the argument falls apart.
*sigh* You seem to be so committed to computationalism that you’re unable to understand competing theories.
Simulating quantum processes on a classical computer is not the same as instantiating them in the real world. And physicalism commits us to giving a special status to the real world, since it’s what our consciousness is made of. (Perhaps other “consciousnesses” exist which are made out of something else entirely, but physicalism is silent on this issue.) Hence, consciousness is not invariant under simulation; a classical simulation of a conscious system is similar to a zombie in that it behaves like a conscious being but has no subjective experience.
ETA: I think you are under the mistaken impression that a theory of consciousness needs to explain your heterophenomenological intuitions, i.e. what kinds of beings your brain would model as conscious. These intuitions are a result of evolution, and they must necessarily have a functionalist character, since your models of other beings have no input other than the general form of said beings and their behavior. Philosophy of mind mostly seeks to explain subjective experience, which is just something entirely different.
So you do think there is a difference between physicalism and computationalism even if the Church-Turing-Deutsch principle is true? And this difference is something to do with a special status held by the real world vs. simulations of the real world? I’m trying to understand what these competing theories are but there seems to be a communication problem that means you are failing to convey them to me.
And this difference is something to do with a special status held by the real world vs. simulations of the real world?
That’s what it means to say that physicalism is substrate-dependent. There is a (simple) psycho-physical law which states that subjective experience is implemented on a specific substrate.
It just so happens that evolution has invented some analog supercomputers called “brains” and optimized them for computational efficiency. At some point, it hit on a “trick” for running quantum computations with larger and larger state spaces, and started implementing useful algorithms such as reinforcement learning, aversive learning, perception, cognition etc. on this substrate. As it turns out, the most efficient physical implementations of such quantum algorithms have subjective experience as a side effect, or perhaps as a crucial building block. So subjective awareness got selected for and persisted in the population to this day.
It seems a fairly simple story to me. What’s wrong with it?
That’s what it means to say that physicalism is substrate-dependent. There is a (simple) psycho-physical law which states that subjective experience is implemented on a specific substrate.
So is one of the properties of that specific substrate (the physical world) that it cannot be simulated by a Turing machine? I don’t know why you can’t just give a yes/no answer to that question. I’ve stated it explicitly enough times now that you just come across as deliberately obtuse by not answering it.
I think I’ve been fairly clear that I don’t deny the possibility that consciousness depends on non-computable physics. I don’t think it is the most likely explanation but it doesn’t seem to be clearly ruled out given our current understanding of the universe. Your story might be something close to the truth if the Church-Turing-Deutsch principle is false. It appears to me to be incoherent if it is true however.
I think the Church-Turing-Deutsch principle is probably true but I don’t think we can rule out the possibility that it is false. If it is true then it seems a simulation of a human running on a conventional computer would be just as conscious as a real human. If it is false then it is not possible to simulate a human being on a conventional computer and it therefore doesn’t make sense to say that such a simulation cannot be conscious because a simulation cannot be created. What if anything do you disagree with from those claims?
Your story might be something close to the truth if the Church-Turing-Deutsch principle is false. It appears to me to be incoherent if it is true however.
Because it implies the possibility of zombies, or for some other reason?
Because it implies the possibility of zombies, or for some other reason?
Basically, yes. Slightly more explicitly, it appears to say that two contradictory things are true: that a Turing machine can simulate every physical process but that there are properties arising from physical processes running directly on their ‘native’ hardware that do not arise when those same processes are simulated. That suggests either that the simulation is actually incomplete (it is missing inputs or algorithms that account for the difference) or that there is some kind of dualism going on: a mysterious and unidentifiable ‘something’ that accounts for consciousness existing in a human brain but not in a perfect simulation of a human brain.
If the missing something is not part of physics then we’re really back to dualism and not physicalism at all. It seems like an attempt to sneak dualism back in without admitting to being a dualist in polite company.
there are properties arising from physical processes running directly on their ‘native’ hardware that do not arise when those same processes are simulated.
Is subjective experience a “property”? By assumption, all the features of subjective experience have physical correlates which are preserved by the simulation. It’s just that the ‘native’ process fits a “format” that allows it to actually be experienced, whereas the simulated version does not. It seems weird to call this a dualist theory when the only commonality is an insistence on taking the problem of subjective experience seriously.
Well, I don’t think it really matters what you call it but I assume we agree that it is a something. Do you believe that it is in principle possible to differentiate between an entity that has that something and an entity that does not?
By assumption, all the features of subjective experience have physical correlates which are preserved by the simulation.
This sounds like your answer to my previous question is ‘no’. So is your position that it is not possible in principle to distinguish between a simulation of a human brain and a ‘real’ human brain, but that the latter differs in that it possesses a ‘something’ that is not a function of the laws of physics and is inaccessible to any form of investigation other than introspection by the inhabitant of that brain, but that is nonetheless in some sense a meaningful distinction? That sounds a lot like dualism to me.
Do you believe that it is in principle possible to differentiate between an entity that has that something and an entity that does not?
Perhaps not. ‘That something’ may be simply a model which translates the aforementioned physical properties into perceptual terms which are more familiar to us. But this begs the question of why we would be familiar with perception in the first place; “we have subjective experience, and by extension so does anything which is implemented in the same substrate as us” is a good way to escape that dilemma.
the latter differs in that it possesses a ‘something’ that is not a function of the laws of physics and is inaccessible to any form of investigation other than introspection by the inhabitant of that brain
The whole point of physicalism is that subjective experience is a function of the laws of physics, and in fact a fairly low-level function. If you want to avoid any hint of dualism, just remove the “inhabitant” (a misnomer) and the “psycho-physical bridging laws” from the model and enjoy your purely physicalistic theory. Just don’t expect it to do a good job of talking about phenomenology or qualia: physicalist theories are just weird like that.
As the saying goes, those who do not know dualism are doomed to reinvent it, poorly. Beware this tendency.
Are you saying that there is some extra law (on top of the physical laws that explain how our brains implement our cognitive algorithms) that maps our cognitive algorithms, or a certain way of implementing them, to consciousness? So that, in principle, the universe could have not had that law, and we would do all the same things, run all the same cognitive algorithms, but not be conscious? Do you believe that p-zombies are conceptually possible?
The psycho-physical law is not really an extra law “on top of the laws of physics”, so much as a correspondence between quantum state spaces and subjective experiences—ideally, the correspondence would be as simple as possible.
You could build a version of the universe which was not endowed with any psycho-physical laws, but it’s not something anyone would ever experience; it would be one formal system plucked out seemingly at random from the set of computational structures. It is as logically possible as anything else, but whether it makes sense to regard such a bizarre thing as “conceptually possible” is another matter.
You could build a version of the universe which was not endowed with any psycho-physical laws, but it’s not something anyone would ever experience;
But would this universe look the same as our universe to an outside observer who cannot directly observe subjective experience, but only the physical states that subjective experience supposedly corresponds to?
We’re assuming that physicalism is true, so yes it would look the same. The inhabitants would be p-zombies, but all physical correlates of subjective experience would exist.
So, since in this alternate universe without subjective experience, people have the same discussions about subjective experience as their analogs in this universe, the subjective experience is not the cause of these discussions. So what explains the fact that this physical stuff people are made out of, which only obeys physical laws and can’t be influenced by subjective experience, discusses subjective experience? Where did that improbability come from?
So what explains the fact that this physical stuff people are made out of, which only obeys physical laws and can’t be influenced by subjective experience, discusses subjective experience?
First of all, physical stuff can be influenced by the physical correlates of subjective experience. Since the zombie universe was obtained by removing subjective experience from a universe where it originally existed, it’s not surprising that these physical correlates would show some of the same properties.
The properties which subjective experience and its physical correlates have in this universe could be well explained by a combination of (1) anthropic principles (2) the psycho-physical bridging law (3) the properties of our perceptions and other qualia. Moreover, the fact that we’re having this discussion screens out the possibility that people might have no inclination at all to talk about subjective experience.
First of all, physical stuff can be influenced by the physical correlates of subjective experience.
If the physical properties of the physical correlates of subjective experience are sufficient to explain why we talk about subjective experience even without a bridging law, then why are they not enough to also explain the subjective experiences without a bridging law?
Subjective experience is self-evident enough to need no explanation. What needs to be explained is how its content as perceived by us (i.e. qualia, beliefs, thoughts etc.) relates to formally modeled physics: hence, the bridging law maps between the conceptual description and the complex quantum system which is physically implemented in the brain.
Subjective experience is self-evident enough to need no explanation.
No, subjective experience is self-evident enough that we do not need to argue about whether it exists; we can easily agree that it does. (Though, you seem to believe that in the zombie world, we would incorrectly come to the same agreement.) But agreeing that something exists is not the same as understanding how or why it exists. This part is not self-evident and we disagree about it. You seem to believe that the explanation requires macroscopic quantum superpositions and some bridging law that somewhat arbitrarily maps these quantum superpositions onto subjective experiences. I believe that if we had sufficient computing power and knew fully the arrangement of neurons in a brain, we could explain it using only classical approximations of physics.
But agreeing that something exists is not the same as understanding how or why it exists.
We don’t understand why, but then again we don’t know why anything exists. In practice, something as basic as subjective experience is always taken as a given. As for how, our inner phenomenology reveals far more about subjective experience than physics ever could.
Nevertheless, we do also want to know how the self might relate to our physical models; and contrary to what might be expected, macroscopic quantum superposition is actually the parsimonious hypothesis here for a wide variety of reasons.
Unless QM as we know it is badly wrong, it just doesn’t fit our models of physical reality that anything resembling “the self” would be instantiated in a hugely complicated classical system (a brain with an arrangement of brain regions and billions of neurons? Talk about an arbitrary bridging law!) as opposed to a comparatively simple quantum state.
Moreover, it is eminently plausible that evolution should have found some ways of exploiting quantum computation in the brain during its millions-of-years-long development. The current state of neuroscience is admittedly unsatisfactory, but this shouldn’t cause us to shed too much confidence.
We don’t understand why, but then again we don’t know why anything exists.
I am talking about why subjective experience exists given that the physical universe exists. Are you being deliberately obtuse?
Unless QM as we know it is badly wrong, it just doesn’t fit our models of physical reality that anything resembling “the self” would be instantiated in a hugely complicated classical system (a brain with an arrangement of brain areas and billions of neurons? Talk about an arbitrary bridging law!) as opposed to a comparatively simple quantum state.
You are failing to address my actual position, which is that there is no arbitrary bridging law, but a mapping from the mathematical structure of physical systems to subjective experience, because that mathematical structure is the subjective experience, and it mathematically has to be that way. The explanation of why and how I am talking about is an understanding of that mathematical structure, and how physical systems can have that structure.
If you believe that we evolved systems for maintaining stable macroscopic quantum superposition without decoherence, and that we have not noticed this when we study the brain, then QM as you know it is badly wrong.
I am talking about why subjective experience exists given that the physical universe exists.
Interesting. How do you know that the physical universe exists, though? Could it be that your certainty about the physical universe has something to do with your subjective experience?
a mapping from the mathematical structure of physical systems
“The mathematical structure of physical systems” means either physical law, or else something so arbitrary that a large rock can be said to instantiate all human consciousnesses.
If you believe that we evolved systems for maintaining stable macroscopic quantum superposition without decoherence, and that we have not noticed this when we study the brain, then QM as you know it is badly wrong.
Evidence please. Quantum biology is an active research topic, and models of quantum computation differ in how resilient they are to decoherence.
I’m confused about what you mean by “simulating a person”. Presumably you don’t mean simulating in a way that is conscious/has mental states (since that would make the claim under discussion trivially, uninterestingly inconsistent), so presumably you do mean just simulating the physics/neurology and producing the same behavior. While AFAIK neither explicitly says so in the links, Searle and Pearce both seem to me to believe the latter is possible. (Searle in particular has never, AFAIK, denied that an unconscious Chinese Room would be possible in principle; and by “strong AI” Searle means the possibility of AI with an ‘actual mind’/mental states/consciousness, not just generally intelligent behavior.)
so presumably you do mean just simulating the physics/neurology and producing the same behavior.
Yes. Equivalently, is uploading possible with conventional computers?
It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I’m not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?
It appears to me that this is equivalent to a claim that physics is not computable but I’m not totally confident of that equivalence.
I don’t think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it’s not surprising for philosophers to endorse positions that imply the possibility of zombies.
Can you point to a quote that makes you think otherwise?
Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.
I don’t think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them.
Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement “you can’t fully simulate a person without the simulation being conscious” to be false. I still don’t feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.
The only reason I asked in the first place is that I’ve tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can’t assume that and should ask if I want to know.
A huge look-up table could always “in principle” provide the innards governing any behavioral regularities whatever, and intuition proclaims that we would not consider anything controlled by such a mere look-up table to have psychological states. (If I discovered that you were in fact controlled by such a giant look-up table, I would conclude that you were not a person at all, but an elaborate phony.) But as Alan Turing recognized when he proposed his notoriously behavioristic imitation game, the Turing Test, this “in principle” possibility is not really a possibility at all. A look-up table larger than the visible universe, accessed at speeds trillions of times in excess of the speed of light, is not a serious possibility, and nothing less than that would suffice. What Turing realized is that for real time responsivity in an unrestricted Turing Test, there is only one seriously conceivable architecture: one that creates its responses locally, on the fly, by processes that systematically uncover the meaning of the inputs, given its previous history, etc., etc
The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that “something mechanical that passed the Turing Test might nevertheless not be conscious”.
It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would flounder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.
There’s no way that I can see of going around this, other than to “extend” the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn’t credible.
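For what it’s worth, a quick back-of-envelope calculation (with illustrative numbers of my own choosing, not anything from Block or Dennett) shows the scale involved:

```python
import math

# Suppose each conversational turn is at most 30 characters drawn from a
# 27-symbol alphabet, and the table must index 10 turns of history.
symbols, chars_per_turn, turns = 27, 30, 10
log10_histories = chars_per_turn * turns * math.log10(symbols)
print(f"~10^{log10_histories:.0f} distinct histories to store")   # ~10^429
print("vs. roughly 10^80 atoms in the observable universe")
```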
Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it’s still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we’re dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
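To make the indexing explicit, here is a minimal sketch (a toy illustration only; the entries are hypothetical placeholders) of a table keyed on whole conversation histories rather than single utterances:

```python
# Keys are the judge's utterances so far (since the table is deterministic,
# these determine the whole conversation); values are the canned replies the
# programmers chose. A real table needs an entry for every possible sequence
# of inputs up to the time limit, which makes it finite but astronomically big.
GLUT = {
    ("Hello!",): "Hi there.",
    ("Hello!", "What's your name?"): "Bertha. And yours?",
    # ... one entry for every possible input sequence up to the limit ...
}

def glut_reply(judge_inputs):
    """Return the canned reply for this exact input history; None once the
    conversation runs past the length the table was built for."""
    return GLUT.get(tuple(judge_inputs), None)

print(glut_reply(["Hello!", "What's your name?"]))  # -> "Bertha. And yours?"
```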
Tut, tut. Assuming the confusion you claim to see is mine: you don’t get to tell me that my objection to an intuition pump is incoherent, you are required to show that it is incoherent, and it is preferable to avoid lullaby language in such argumentation.
Yes, the question “what is your index” exposes the GLUT as a confused intuition pump. I am at present looking at the Ned Block (1981) paper Psychologism and Behaviorism which (as best I could ascertain) is the original source for the GLUT concept. It makes a similar claim to yours, namely that “for a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length”.
But sauce for the goose is sauce for the gander: for a GLUT of any size, there is a Turing Test of sufficient duration that exposes the GLUT as not conscious, by looping back to the start of the conversation! This shows that the argument from a necessarily finite index does have force to counter the GLUT as an intuition pump.
It is flawed in other ways. You can’t blame Ned Block, who at the time of writing that paper can’t have spent a lot of time on IRC, but someone with that experience would tell you that indexing on character strings wouldn’t be enough to pass a 1-hour Turing test: the GLUT as originally specified would be vulnerable to timing attacks. It wouldn’t be able to spontaneously say something like “You haven’t typed anything back to me for thirty minutes, what’s wrong?”
“OK”, a GLUT advocate might reply, “we can in principle include timings in the index, to whatever timing resolution you are capable of detecting”.
It’s tempting to grant this “in principle” counter-objection, especially as I don’t have the patience to go to the literature and verify that the “timing attack” objection hasn’t been raised and countered before.
But the fact that the timing attack wasn’t anticipated by Ned Block is precisely what shows up the GLUT concept as a faulty intuition pump. You don’t get to “go back to the drawing board” on the GLUT concept each time an attack is found and iteratively improve it until its index has been generalized enough to cover all possible circumstances: that is tantamount to having an actual, live, intelligent human sit behind the keyboard and respond.
Actually the whole idea of the GLUT machine (dubbed the ‘blockhead’ in Braddon-Mitchell’s and Jackson’s book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make under a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the “judge” could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.
silence and looping, which is discussed explicitly in the paper
I confess to having downloaded the paper recently and not given it more attention than was necessary to satisfy my usual habit of having primary sources at hand. I’ve gone back and read it more carefully, but it probably deserves still longer scrutiny.
(Welcome to Less Wrong, by the way. I don’t suppose you need to post an introduction, seeing as you have your own Wikipedia page. Nice to be chatting with you here!)
However, I’m not seeing where this is discussed explicitly, other than (this is perhaps what you mean) under the general heading of using “quantized stimulus parameters” as input to the GLUT-generating process. I grant that this does adequately deal with the most crude timing attacks imaginable.
There do seem to me to be other, more subtle attacks which would still prove fatal (according to my earlier argument that having to go back to the drawing board each time such an attack is found leaves the GLUT critique of behaviourism ineffective). For instance, we can consider teachability of the GLUT, to uncover an entire class of attacks.
Suppose there is some theoretical concept, unknown to the putative human programmers of the GLUT (or perhaps we should call them conversation-authors, as the programming involved is minimal), but which can be taught to someone of normal intelligence. I don’t want to restrict my argument to any particular domain, but for illustrative purposes let’s pick the phenomenon of lasing light. This is a reasonable example, since the GLUT concept would have been implementable as early as Babbage’s time and the key insights date from Einstein’s.
In this scenario, the GLUT’s interviewer chooses as her conversation topic the theoretical background needed to build up to the concept of lasing light. The test comes when she (gender picked by flipping a coin) asks the GLUT to make specific predictions about a given experimental setup that extrapolates relevant physical law into a domain not previously discussed, but where that law still applies.
By my earlier stipulation, the GLUT’s builders must discover, in the process of building the GLUT, the physical law of lasing light. They must also prune the conversation tree of “wrong” predictions, since that would alert the interviewer to the fact that the GLUT was “faking” understanding up to the point of the experimental test; this rules out the builders merely “covering all (conversational) bases”. They must truly understand the phenomenon themselves.
(One may object that it would take an inordinately long time to teach a person of merely normal intelligence about a phenomenon such as lasing light. But we have earlier stipulated that the length of the test can be extended to human lifespans; that is surely enough for a person of normal intelligence to eventually get there.)
We are led to what is (to me at least) a disturbing conclusion. The building of a GLUT entails the discovery by the builders of all experimentally discoverable physical laws of our universe that can be taught to a person of normal intelligence in a reasonable finite lifespan.
I’m not a professional philosopher, so possibly this argument has holes.
Nevertheless it seems to me that this unpalatable conclusion points to one primordial flaw in the GLUT argument: it goes counter to the open-ended nature of the optimization process known as intelligence. You cannot optimize by covering all bases, for the same reason that a theory that can explain all conceivable events has no real content.
The original paper tried to anticipate this objection by offering as a general defense the stipulation that the GLUT should simulate a “desert island” type of castaway, so that the GLUT would be relieved of the need to converse fluently about current events. But the objection is more general and its force becomes harder to avoid if the duration of the test is extended greatly: we need to imagine that the GLUT can be brought up to date with current events, and afterwards respond appropriately to them, as would a person of normal intelligence. This requires the GLUT builders to anticipate the future with enough precision to prune “inappropriate” responses, and so the defense that the builders would “cover all bases” is untenable.
The domain of physical law is the one where the consequences of the teachability test are brought into sharpest focus, but I suspect that “merely social” tests of the GLUT in everyday life would very quickly expose its supposed intelligence as a sham.
Behaviourism, or God-like GLUT builders: pick your poison.
There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: “Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on.” The general point is that there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion
That is plainly wrong. The “input” space (possible judge queries) is exhaustively covered; I’m getting that just fine. No such thing can be said about the “output” space: we’re requiring that the output consist of strings encoding responses that an intelligent person would emit. The judge is allowed to say random, possibly wrong, things, but the GLUT is not so allowed.
Consider an input string which consists of a correct explanation of quantum mechanics (which we assume the builders don’t know yet at build time), plus a question to the GLUT about what happens in a novel, never before encountered (by the GLUT) experimental setup. This input string is possible, and so must be considered by the builders (along with input strings that are incorrect explanations of QM plus questions about TV shows, but we needn’t concern ourselves with those, an actual “judge from the builder’s future” will not emit them).
In order to construct even one sensible response to this input string, to respond “as an intelligent person would”, the GLUT builders must correctly predict the experimental result. An incorrect response will signal to the “judge” that the GLUT is responding by rote, without understanding. If the GLUT equivocates with “I don’t know”, the judge will press for an answer; we are assuming that the GLUT has answered all previous queries sensibly up to this point, that it has been a “good student” of QM. If the GLUT keeps dodging the judge’s request for a prediction, the game is up: the judge will flunk it on the Turing Test.
To correctly predict an experimental result, the builders must know and understand QM, but we have assumed they don’t. Assuming that the GLUT always passes the Turing Test leads us to a contradiction, so we must allow that there are some Turing Tests the GLUT is unable to pass: those that require it to learn something its builders didn’t know. The GLUT does not have the capacity you are claiming for it.
(If you disagree, and think I’m still not getting it, please kindly answer the following: considering only a single input string QM+NE—explanation of quantum mechanics plus novel experiment—how do you propose that a builder who doesn’t understand QM construct a sensible answer to that input string?)
You’re assuming that the GLUT is simulating a person of average intelligence, right? So they ask a person of average intelligence how they’d respond to that particular sentence, given various kinds of context, and program in the answer(s).
What you’re trying to get at, I think, is a situation for which the GLUT has no response, but that’s already ruled out by the fact that the hypothetical situation specifies that the programmers have to have systematically considered every possible situation and programmed in a response to it. (It doesn’t have to be a good response, just how a person of average intelligence would respond, so variations on ‘I don’t know’ or ‘that doesn’t make sense to me’ would be not just acceptable but actually correct in some situations.)
You’re assuming that the GLUT is simulating a person of average intelligence, right?
Heh. I’d claim that your use of “average” here is smuggling in precisely the kind of connotations that are relied on to make the GLUT concept plausible, but which do not stand up to scrutiny.
Let’s say I’m assuming the GLUT is simulating an intelligence “equivalent” to mine. And assume the GLUT builder is me, ten years ago, when I didn’t know about Brehme diagrams but was otherwise relatively smart. Assume the input string is the first few chapters of the Shadowitz text on special relativity I have recently gone through. Under these assumptions, “equivalent” intelligence consists of being able to answer the exercises as correctly as I recently did.
(Crucially, if the supposed-to-be-equivalent-to-mine intelligence turns out to be for some reason cornered into saying “I don’t know” or “I can’t make sense of this text”, I can tell for sure it’s not as smart as I am, and we have a contradiction.)
The GLUT intuition pump requires that the me-of-today can “teach” the me-of-ten-years-ago how to use Brehme diagrams, to the point where the me-of-ten-years-ago can correctly answer the kind of questions about time dilation that I can answer today.
We’re led to concluding one of the following:
that I can send information backwards in time
that the me-of-ten-years-ago did know about SR, contrary to stipulation
that the builders have another way of computing sensible answers, contrary to stipulation
that the “intelligence” exhibited by GLUT is restricted to making passable conversational answers but is limited in not being able to acquire new knowledge
My hunch is that this last is really what the fuzziness of the word “intelligence” allows someone thinking about GLUTs to get away with, and not realize it. The GLUT is a smarter ELIZA, but if we try to give it a specific, operational, predictive kind of intelligence of which humans are demonstrably capable, it is easily exposed as a dummy.
In the course of building the GLUT, you-of-10-years-ago would have to, in the course of going through every possible input that the GLUT might need to respond to, encounter the first few chapters of the book in question, and figure out a correct response to that particular input string. So you-of-10-years-ago would have to know about SR, not necessarily at the start of the project, but definitely by the end of it. (And the GLUT simulating you-of-10-years-ago would be able to simulate the responses that you-of-10-years-ago generated in the learning process, assuming that you-of-10-years-ago put them in as generated rather than programming the GLUT to react as if it already knew about SR.)
Going through every possible random string is an extremely inefficient way to gain new information, though.
So you-of-10-years-ago would have to know about SR,
So you agree with me: since there is nothing special about either the 10-year stipulation or about the theory in question, we’re requiring the GLUT builders to have discovered and understood every physical theory that will ever be discovered and can be taught to a person of my intelligence.
This is conceptually an even taller order than the already hard to swallow “impossible-but-conceptually-conceivable” machine. Where are they supposed to get the information from? This is—so we are led to conclude—a civilization which can take a stroll through the Library of Babel and pick out just those books which correspond to a sensible physical theory.
I think you misunderstood. You-of-10-years-ago doesn’t have to have figured out SR prior to building the GLUT; you-of-10-years-ago would learn about SR—and an unimaginable number of other things, many of them wrong—in the course of programming the GLUT. That’s implied in ‘going through every possible input’. Also, you-of-10-years-ago wouldn’t have to program the objectively-right answers into the GLUT, just their own responses to the various inputs, so no external data source is necessary.
The GLUT builder has to understand the given theory, and derive its implications to the novel experiment. But they don’t have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn’t have to care which is which.
If the tester gives the GLUT a plausible-sounding explanation of some event that is incorrect, but that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses—thinking that the incorrect explanation of SR is correct, and thinking that the correct explanation of SR is correct—into the GLUT. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
I do not accept that a me-of-10-years ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting that as true information. Conversely, if he started with the “true” Shadowitz he would have a hard time erasing that knowledge afterwards to give convincing answers to the “false” versions.
Not only would the me-of-10-years ago not be able to convincingly reproduce, e.g. the excitement of learning new stuff and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as “equivalent” to mine.
I do not accept that a me-of-10-years ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting that as true information.
That’s a trivial inconvenience. You can use a molecular assembler to build duplicates of your 10-years-ago self. Assuming that physicalism is correct and that consciousness involves no quantum effects, these doppelgänger will be conscious and you can feed each a version of the Shadowitz book.
My answer is that this is nothing like a GLUT any more. We are postulating a process of construction which is functionally the same as hooking me up to a source of quantum noise, and recording all of my Everett branches subsequent to that point. The so-called GLUT is the holographic sum of all these branches. The look-up consists of finding the branch which looks like a given input.
What this GLUT in fact looks like is simply the universe as conceived of under the relative state interpretation of QM. (Whether the relative state interpretation is correct or not is immaterial.) So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
After having followed the line of reasoning that led us here, “looking inside” the GLUT has precisely the same informational structure as “looking inside” the relative-state universe (not as we do, confined to one particular Everett branch, but as would entities “outside” our universe, assuming for instance that we lived in a simulation).
The GLUT, assuming this process of construction, looks precisely like a timeless universe. And we have no reason to doubt that the minds inhabiting this universe are conscious, and every reason to suppose that they are.
So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
You can look at the substrate of the GLUT. This is actually an excellent objection to computationalism, since an algorithm can be memoized to various degrees, a simulation can be more or less strict, etc. so there’s no sharp difference in character between a GLUT and a simulation of the physical universe.
And claiming that the GLUT is conscious suffers from a particularly sharp version of the conscious-rock argument. Encrypt the GLUT with a random one-time pad, and neither the resulting data nor the key will be conscious; but you can plug both into a decrypter and consciousness is restored. This makes very little sense.
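For concreteness, the construction alluded to here is just the standard XOR one-time pad (a sketch only; the byte string standing in for the GLUT is a placeholder):

```python
import secrets

glut_bytes = b"...the serialized GLUT would go here..."   # placeholder data
pad = secrets.token_bytes(len(glut_bytes))                # the one-time key
ciphertext = bytes(b ^ k for b, k in zip(glut_bytes, pad))

# Taken separately, `ciphertext` and `pad` are each statistically
# indistinguishable from random noise and carry none of the table's structure;
# XOR-ing them back together restores the original GLUT exactly.
restored = bytes(c ^ k for c, k in zip(ciphertext, pad))
assert restored == glut_bytes
```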
On a different level of objection, I for one would bite the functionalist bullet: something that could talk to me regularly for 80 years, sensibly, who could actually teach me things or occasionally delight me, all the while insisting that it wasn’t in fact conscious but merely a GLUT simulating my Aunt Bertha...
Well, I would call that thing conscious in spite of itself.
To simulate Aunt Bertha effectively, and to keep that up for 80 years, it would in all likelihood have to be encoded with Aunt Bertha’s memories, Aunt Bertha’s wonderful quirks of personality, Aunt Bertha’s concerns for my little domestic worries as I gradually moved through my own narrative arc in life, Aunt Bertha’s nuggets of wisdom that I would sometimes find deep as the ocean and other times silly relics of a different age, and so on and so forth.
The only difference with Aunt Bertha would be that, when I asked her (not “it”) why she thought she answered as she does, she’d tell me, “You know, dear nephew, I don’t want to deceive you, for all that I love you: I’m not really your Aunt Bertha, I’m just a GLUT programmed to act like her. But don’t fret, dear. You’re just an incredibly lucky boy who got handed the jackpot when drawing from the infinite jar of GLUTs. Isn’t that nice? Now, about your youngest’s allergies...”
Wasn’t an objection to these kinds of GLUTs that you’d basically have to make them by running countless actual, conscious copies of Aunt Bertha and record their incremental responses to each possible conversation chain? So you would be in a sense talking with a real, conscious human, although they might be long dead when you start indexing the table.
Though since each path is just a recording of a live person, it wouldn’t agree with being a GLUT unless the Aunt Bertha copies used to build the table would have been briefed earlier about just why they are being locked in a featureless white room and compelled to have conversation with the synthetic voice speaking mostly nonsense syllables at them from the ceiling.
(We can do the “the numbers are already ridiculous, so what the hell” maneuver again here, and replace strings of conversation with the histories of total sensory input Aunt Bertha’s mind can have received at each possible point in her life at a reasonable level of digitization, map these to a set of neurochemical outputs to her muscles and other outside-world affecting bits, and get a simulacrum we can put in a body with similar sensory capabilities and have it walking around, probably quite indistinguishable from the genuine, Turing-complete article. Although this would involve putting the considerably larger number of Bertha-copies used to build the GLUT into somewhat more unpleasant situations than being forced to listen to gibberish for ages.)
Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha’s behavior. How would you decide which one to ascribe to the GLUT?
If you asked me, “Is GAunt Bertha conscious”, I would confidently answer “yes”, for the same reason I would answer “yes” if asked that question about you. Namely, both you and she talk fluently about consciousness, about your inner lives, and the parsimonious explanation is that you have inner lives similar to mine.
In the case of GAunt Bertha, it is the parsimonious explanation despite her protestations to the contrary, even though they lower the prior.
In Bayesian terms, I would count those 80 years of correspondence as overwhelming evidence that she has an inner life similar to mine, and the GLUT hypothesis starts out burdened with such a large prior probability against it that the amount of evidence you would have to show me to convince me that Aunt Bertha was a GLUT all along would take ages longer to even convey to me.
In Bayesian terms, I would count those 80 years of correspondence as overwhelming evidence that she has an inner life similar to mine, and the GLUT hypothesis starts out burdened with such a large prior probability against it that the amount of evidence you would have to show me to convince me that Aunt Bertha was a GLUT all along would take ages longer to even convey to me.
Oh, sorry. I thought you were assuming Aunt Bertha was a GLUT (not just that she claimed to be), and claiming she would be conscious. I agree that if Bertha claims to be a GLUT, she’s ridiculously unlikely to actually be one, but I’m not sure why this is interesting.
Regardless....
Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha’s behavior. How would you decide which one to ascribe to the GLUT?
I’m not sure I even understand the question.
If something is conscious, it seems like there should be a fact of the matter as to what it is experiencing. (There might be multiple separate experiences associated with it, but then there should be a fact of the matter as to which experiences and with what relative amounts of reality-fluid.) (If you use UDT or some such theory under which ascription of consciousness is observer-dependent, there is still a subjectively objective fact of the matter here.)
Intuitively, it seems likely that behavior underdetermines experience for non-GLUTs: that, for some set of inputs and outputs that some conscious being exhibits, there are probably two different computations that have those same inputs and outputs but are associated with different experiences.
If the totality of Aunt Bertha’s possible inputs and outputs has this property — if different non-GLUT computations associated with different experiences could give rise to them — and if GBertha is conscious, which of these experiences (or what weighting over them) does GBertha have?
If something is conscious, it seems like there should be a fact of the matter as to what it is experiencing.
Well, going back to humans for a moment, there are two kinds of fact we can ascertain:
- how people behave under various experimental conditions, which include asking them what they are experiencing;
- how (what we very strongly suspect is) the material substrate of their conscious experience behaves under various experimental conditions, such as MRI, etc.
For anything else of which we have provisionally reached the conclusion that it is conscious, we can broadly make the same two categories of observation. (Sometimes these two categories of observation yield results that appear paradoxical when we compare them, for instance Libet’s experiments. These paradoxes may lead us to revise and refine our concept of consciousness.)
In fact the first kind is only a particular instance of the second; all our observations about conscious beings are mediated through experimental setups of some kind, formal or informal.
I’d go further and claim (based on cumulative refinements and revisions to the notion of consciousness as I understand it) that our observations about ourselves are mediated through the same kind of (decidedly informal) experimental setup. As the Luminosity sequence suggests, the way I know how I think is the same way I know how anybody else thinks: by jotting notes to an experimenter which happens to be myself.
The “multiplicity of possible conscious experiences” isn’t a question we could ask only about GBertha, but about anything that appears conscious, including ourselves.
So, what difference does it make to my objections to a GLUT scenario?
That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?
“No, no!” says the philosopher. “In the thought experiment, they aren’t randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain’s inputs and outputs! There! I’ve got you cornered now! You can’t play Follow-The-Improbability any further!”
Oh. So your specification is the source of the improbability here.
When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.
The point is that you have specified something so improbable that it is not going to actually happen, so I don’t have to explain it, like I don’t have to worry about how I would explain my arm being replaced by a purple tentacle.
Mitchell isn’t asking you to explain anything. He’s asking you to predict (effectively) what would happen, consciousness-wise, given a randomly generated GLUT. There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don’t currently have a theory that covers that situation (or that the situation is underspecified; maybe it depends on what sort of randomizer you use, or something).
There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don’t currently have a theory that covers that situation.
My theory does cover that situation; it says the GLUT will not be conscious. It also says that situation will not happen, because GLUTs that act like people come from entanglement with people. Things that don’t actually happen are allowed to violate general rules about things that do happen.
Okay. Why did you bother bringing up the tentacle, or the section you quoted from Eliezer’s post? Why insist on the improbability of a hypothetical when “least convenient possible world” has already been called?
Because I was challenging the applicability of Least Convenient Possible Worlds to this discussion. It is a fully general (and invalid) argument against any theory T to say: take some event A that T says is super improbable, and suppose that (in the Least Convenient Possible World) A happens, which would be overwhelming evidence against T. The tentacle arm replacement is one such event that would contradict a lot of theories. Would you ask someone defending the theory that their body does not drastically change overnight to consider the Least Convenient Possible World where they do wake up with a tentacle instead of an arm?
But you don’t actually need to resort to this dodge. You already said the lookup tables aren’t conscious; that in itself is a step which is troublesome for a lot of computationalists. You could just add a clause to your original statement, e.g.
“The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable.”
Voila, you now have an answer which covers all possible worlds and not just the probable ones. I think it’s what you wanted to say anyway.
“The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable.”
If that answer would have satisfied you, why did you ask about a scenario so improbable you felt compelled to justify it with an appeal to the Least Convenient Possible World?
Do you now agree that GLUT simulations do not imply the existence of zombies?
I thought you were overlooking the extremely-improbable case by mistake, rather than overlooking it on principle.
For me, the point of a GLUT is that it is a simulation of consciousness that is not itself conscious, a somewhat different concept from the usual philosophical notion of a zombie, which is supposed to be physically identical to a conscious being, but with the consciousness somehow subtracted. A GLUT is physically different from the thing it simulates, so it’s a different starting point.
The only reason I asked in the first place is that I’ve tended to assume someone who self-describes as a materialist would also believe that statement to be true.
I think your prior estimate for other people’s philosophical competence and/or similarity to you is way too high.
quantum properties of brains that cannot be simulated by a conventional computer.
To the best of our knowledge, any “quantum property” can be simulated by a classical computer with approx. exponential slowdown. Obviously, a classical computer is not going to instantiate these quantum properties.
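For a rough sense of where the slowdown comes from, here is a minimal state-vector sketch (a standard textbook simulation method, not anything specific to brains or to anyone’s proposal in this thread): an n-qubit state takes 2^n complex amplitudes to store, and every gate application touches all of them.

```python
import numpy as np

n = 20                       # qubits; storage is 2**n complex numbers (~16 MB here), doubling per extra qubit
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0               # start in |00...0>

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector."""
    h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = state.reshape([2] * n)                      # one axis per qubit
    psi = np.tensordot(h, psi, axes=([1], [target]))  # act on the target qubit
    psi = np.moveaxis(psi, 0, target)                 # restore axis order
    return psi.reshape(-1)

state = apply_hadamard(state, target=0, n=n)
# Each gate sweeps over all 2**n amplitudes, so time and memory grow exponentially in n.
```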
If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from ‘actual’ physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.
As far as I can tell from looking at those links both Searle and Pearce would deny the possibility of simulating a person with a conventional computer. I understand that position and while I think it is probably wrong it is not obviously wrong and it could turn out to be true. It seems that this is also Penrose’s position.
From the Chinese Room Wikipedia entry for example:
From the Pearce link you gave:
So I still wonder whether anyone actually believes that you could simulate a human mind with a computer but that it would not be conscious.
They would deny that a conventional computer simulation can create subjective experience. However, the Church-Turing thesis implies that if physicalism is true then conscious beings can be simulated. AFAICT, it is only Penrose who would deny this.
Do you mean the Church-Turing-Deutsch principle? It appears to me that Pearce at least in the linked article is making a claim which effectively denies that principle—his claim implies that physics is not computable.
Why? Pearce is a physicalist, not a computationalist; he ought to accept the possibility of a computation which is behaviorally identical to consciousness but has no conscious experience.
What sense of ‘ought’ are you using here? That seems like a very odd thing to believe to me. If you think that’s what he actually believes you’re going to have to point me to some evidence.
So that means you are a computationalist? Fine, but why do you think physicalism may be incoherent?
It’s hard to fish for evidence in a single interview, but Pearce says:
To me, this reads as an express acknowledgement of the CT thesis (unless quantum gravity turns out to be uncomputable, in which case the CTT is just plain false).
The distinction seems to hinge on whether physics is computable. I suspect the Church-Turing-Deutsch principle is true, and if it is, then it is possible to simulate a human mind using a classical computer, and that simulation would be conscious. If it is false, however, then it is possible that consciousness depends on some physical process that cannot be simulated in a computer. That seems to me to be what Pearce is claiming, and it is not incoherent. But if we live in such a universe, then it is not possible to simulate a human using a classical computer / universal Turing machine, and so it is incoherent to claim that you could simulate a human but that the simulation would not be conscious, because you can’t simulate a human in the first place.
I honestly don’t see how you make that connection. It seems clear to me that Pearce is implying that consciousness depends on non-computable physical processes.
You seem to be begging the question: I suspect that we simply have different models of what the “problem of consciousness” is.
Regardless, physicalism seems to be the most parsimonious theory; computationalism implies that any physical system instantiates all conscious beings, which makes it a non-starter.
Say again? Why should I believe this to be the case?
Basically, the interpretation of a physical system as implementing a computation is subjective, and a sufficiently complex interpretation can interpret it as implementing any computation you want, or at least any up to the size of the physical system. AKA the “conscious rocks” or “joke interpretations” problem.
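A toy illustration of the “joke interpretation” move (my own construction, not taken from the papers linked below): given any recorded sequence of distinct physical states, you can always write down a mapping that pairs them with the successive states of whatever computation you like; the mapping, not the rock, is doing all the work.

```python
# Successive "states" of a rock, assumed distinct (say, snapshots of thermal noise).
rock_states = ["r0", "r1", "r2", "r3", "r4"]

# Successive states of some target computation, e.g. a counter.
computation_states = [0, 1, 2, 3, 4]

# The "joke interpretation": a post-hoc dictionary pairing them up.
interpretation = dict(zip(rock_states, computation_states))

# Under this mapping the rock "implements" the computation's trace...
assert [interpretation[r] for r in rock_states] == computation_states
# ...but the mapping was built from the answer. It supports no counterfactuals:
# it says nothing about what the rock would have "computed" on any history we
# didn't bake in, which is the weakness the replies below press on.
```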
Paper by Chalmers criticizing this argument, citing defenses of it by Hilary Putnam and John Searle
Simpler presentation by Jaron Lanier
I can see why someone might think that, but surely the requirement that any interpretation be a homomorphism from the computation to the processes of the object would be a strong restriction on the set of computations that it is instantiating?
Intriguing. Could you elaborate? Apparently “homomorphism” is a very general term.
I think the idea is that you can’t pick a different interpretation for the rock implementing a specific computation for each instant of time. A convincing narrative of the physical processes in a rock instantiating a consciousness would require a mapping from rock states to the computational process of the consciousness that remains stable over time. With the physical processes going on in rocks being pretty much random, you wouldn’t get the moment-to-moment coherence you’d need for this even if you can come up with interpretations for single instants.
One intuition here is that once you come up with a good interpretation, the physical system needs to be able to come up with correct results for computations that go on longer than the span you covered when constructing your interpretation. If you try to get around the single-instant thing and make a tortured interpretation of rock states representing, say, 100 consecutive steps of the consciousness’s computation, the interpretation is going to have the rock give you garbage for step 101. You’re just doing the computation yourself now and painstakingly fitting things to random physical noise in the rock.
A homomorphism is a “structure preserving map”, and is quite general until you specify what is preserved.
From my brief reading of Chalmers, he’s basically captured my objection. As Risto_Saarelma says, the point is that a mapping merely of states should not count. As long as the sets of object states are not overlapping, there’s a mapping into the abstract computation. That’s boring. To truly instantiate the computation, what has to be put in is the causal structure, the rules of the computation, and these seem to be far more restrictive than one trace of possible states.
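One way to make the causal-structure requirement concrete (a sketch of my reading, not Chalmers’s own formalism): demand that the interpretation f commute with the dynamics, f(step_phys(s)) == step_comp(f(s)) for every physical state s, rather than merely pairing up one observed trace of states.

```python
def is_homomorphism(f, step_phys, step_comp, phys_states):
    """Check that f maps the physical dynamics onto the computational dynamics:
    for every physical state s, f(step_phys(s)) == step_comp(f(s))."""
    return all(f[step_phys[s]] == step_comp[f[s]] for s in phys_states)

# Tiny example: a two-state physical system that genuinely implements a NOT-loop.
phys_states = ["up", "down"]
step_phys = {"up": "down", "down": "up"}   # the physical law
step_comp = {0: 1, 1: 0}                   # the abstract computation
f = {"up": 0, "down": 1}                   # candidate interpretation

print(is_homomorphism(f, step_phys, step_comp, phys_states))  # True

# A "joke" mapping fitted to a single observed trace will generally fail this
# check on states or steps outside the trace it was fitted to.
```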
Chalmers’s “clock and dial” seems to get around this in that it can enumerate all possible traces, which seems to be equivalent to capturing the rules, but still feels decidedly wrong.
Try bisimulation.
Having printed it out and read it, it seems that “any physical system instantiates all conscious beings” is fairly well refuted, and what is left reduces to the GLUT problem.
Thanks for the link.
I remember seeing the Chalmers paper before, but never reading far enough to understand his reasoning—I should probably print it out and see if I can understand it on paper.
Edit: Yes, I know that he’s criticizing the argument—I’m just saying I got lost last time I tried to read it.
So do you think there is a meaningful difference between computationalism and physicalism if the Church-Turing-Deutsch principle is true? If so, what is it?
Basically, physicalism need not be substrate-independent. For instance, it could be that Pearce is right: subjective experience is implemented by a complex quantum state in the brain, and our qualia, intentionality and other features of subjective experience are directly mapped to the states of this quantum system. This would account for the illusion that our consciousness is “just” our brain, while dramatically simplifying the underlying ontology.
Is that a yes or a no? It seems to me that saying physicalism is not substrate-independent is equivalent to saying the Church-Turing-Deutsch principle is false. In other words, that a Turing machine cannot simulate every physical process. My question is whether you think there is a meaningful difference between physicalism and computationalism if the Church-Turing-Deutsch principle is true. There is obviously a difference if it is false.
Why would this be? Because of free will? Even if free will exists, just replace the input of free will with a randomness oracle and your Turing machine will still be simulating a conscious system, albeit perhaps a weird one.
I don’t think free will is particularly relevant to the question. Pearce seems to be claiming that some kind of quantum effects in the brain are essential to consciousness and that a simulation of a brain in a computer therefore cannot be conscious. If you could simulate the quantum processes then the argument falls apart. It only makes sense if the Church-Turing-Deutsch principle is false and there are physical processes that cannot be simulated by a Turing machine. I think that is unlikely but possible and a coherent position.
If all physical processes can be simulated by a Turing machine then I don’t see a meaningful difference between physicalism and computationalism. I still don’t know what your answer is to that question. If you do think there is still a meaningful difference then please share.
*sigh* You seem to be so committed to computationalism that you’re unable to understand competing theories.
Simulating quantum processes on a classical computer is not the same as instantiating them in the real world. And physicalism commits us to giving a special status to the real world, since it’s what our consciousness is made of. (Perhaps other “consciousnesses” exist which are made out of something else entirely, but physicalism is silent on this issue.) Hence, consciousness is not invariant under simulation; a classical simulation of a conscious system is similar to a zombie in that it behaves like a conscious being but has no subjective experience.
ETA: I think you are under the mistaken impression that a theory of consciousness needs to explain your heterophenomenological intuitions, i.e. what kinds of beings your brain would model as conscious. These intuitions are a result of evolution, and they must necessarily have a functionalist character, since your models of other beings have no input other than the general form of said beings and their behavior. Philosophy of mind mostly seeks to explain subjective experience, which is just something entirely different.
So you do think there is a difference between physicalism and computationalism even if the Church-Turing-Deutsch principle is true? And this difference is something to do with a special status held by the real world vs. simulations of the real world? I’m trying to understand what these competing theories are but there seems to be a communication problem that means you are failing to convey them to me.
That’s what it means to say that physicalism is substrate-dependent. There is a (simple) psycho-physical law which states that subjective experience is implemented on a specific substrate.
It just so happens that evolution has invented some analog supercomputers called “brains” and optimized them for computational efficiency. At some point, it hit on a “trick” for running quantum computations with larger and larger state spaces, and started implementing useful algorithms such as reinforcement learning, aversive learning, perception, cognition etc. on this substrate. As it turns out, the most efficient physical implementations of such quantum algorithms have subjective experience as a side effect, or perhaps as a crucial building block. So subjective awareness got selected for and persisted in the population to this day.
It seems a fairly simple story to me. What’s wrong with it?
So is one of the properties of that specific substrate (the physical world) that it cannot be simulated by a Turing machine? I don’t know why you can’t just give a yes/no answer to that question. I’ve stated it explicitly enough times now that you just come across as deliberately obtuse by not answering it.
I think I’ve been fairly clear that I don’t deny the possibility that consciousness depends on non-computable physics. I don’t think it is the most likely explanation but it doesn’t seem to be clearly ruled out given our current understanding of the universe. Your story might be something close to the truth if the Church-Turing-Deutsch principle is false. It appears to me to be incoherent if it is true however.
I think the Church-Turing-Deutsch principle is probably true but I don’t think we can rule out the possibility that it is false. If it is true then it seems a simulation of a human running on a conventional computer would be just as conscious as a real human. If it is false then it is not possible to simulate a human being on a conventional computer and it therefore doesn’t make sense to say that such a simulation cannot be conscious because a simulation cannot be created. What if anything do you disagree with from those claims?
Because it implies the possibility of zombies, or for some other reason?
Basically, yes. Slightly more explicitly, it appears to say that two contradictory things are true: that a Turing machine can simulate every physical process but that there are properties arising from physical processes running directly on their ‘native’ hardware that do not arise when those same processes are simulated. That suggests either that the simulation is actually incomplete (it is missing inputs or algorithms that account for the difference) or that there is some kind of dualism going on: a mysterious and unidentifiable ‘something’ that accounts for consciousness existing in a human brain but not in a perfect simulation of a human brain.
If the missing something is not part of physics then we’re really back to dualism and not physicalism at all. It seems like an attempt to sneak dualism back in without admitting to being a dualist in polite company.
Is subjective experience a “property”? By assumption, all the features of subjective experience have physical correlates which are preserved by the simulation. It’s just that the ‘native’ process fits a “format” that allows it to actually be experienced, whereas the simulated version does not. It seems weird to call this a dualist theory when the only commonality is an insistence on taking the problem of subjective experience seriously.
Well, I don’t think it really matters what you call it but I assume we agree that it is a something. Do you believe that it is in principle possible to differentiate between an entity that has that something and an entity that does not?
This sounds like your answer to my previous question is ‘no’. So is your position that it is not possible in principle to distinguish between a simulation of a human brain and a ‘real’ human brain, but that the latter differs in that it possesses a ‘something’ that is not a function of the laws of physics, is inaccessible to any form of investigation other than introspection by the inhabitant of that brain, and yet is nonetheless in some sense a meaningful distinction? That sounds a lot like dualism to me.
Perhaps not. ‘That something’ may be simply a model which translates the aforementioned physical properties into perceptual terms which are more familiar to us. But this raises the question of why we would be familiar with perception in the first place; “we have subjective experience, and by extension so does anything which is implemented in the same substrate as us” is a good way to escape that dilemma.
The whole point of physicalism is that subjective experience is a function of the laws of physics, and in fact a fairly low-level function. If you want to avoid any hint of dualism, just remove the “inhabitant” (a misnomer) and the “psycho-physical bridging laws” from the model and enjoy your purely physicalistic theory. Just don’t expect it to do a good job of talking about phenomenology or qualia: physicalist theories are just weird like that.
As the saying goes, those who do not know dualism are doomed to reinvent it, poorly. Beware this tendency.
Do you ever answer a direct question?
Are you saying that there is some extra law (on top of the physical laws that explain how our brains implement our cognitive algorithms) that maps our cognitive algorithms, or a certain way of implementing them, to consciousness? So that, in principle, the universe could have not had that law, and we would do all the same things, run all the same cognitive algorithms, but not be conscious? Do you believe that p-zombies are conceptually possible?
The psycho-physical law is not really an extra law “on top of the laws of physics”, so much as a correspondence between quantum state spaces and subjective experiences—ideally, the correspondence would be as simple as possible.
You could build a version of the universe which was not endowed with any psycho-physical laws, but it’s not something anyone would ever experience; it would be one formal system plucked out seemingly at random from the set of computational structures. It is as logically possible as anything else, but whether it makes sense to regard such a bizarre thing as “conceptually possible” is another matter.
But would this universe look the same as our universe to an outside observer who cannot directly observe subjective experience, but only the physical states that subjective experience supposedly corresponds to?
We’re assuming that physicalism is true, so yes it would look the same. The inhabitants would be p-zombies, but all physical correlates of subjective experience would exist.
So, since in this alternate universe without subjective experience, people have the same discussions about subjective experience as their analogs in this universe, the subjective experience is not the cause of these discussions. So what explains the fact that this physical stuff people are made out of, which only obeys physical laws and can’t be influenced by subjective experience, discusses subjective experience? Where did that improbability come from?
First of all, physical stuff can be influenced by the physical correlates of subjective experience. Since the zombie universe was obtained by removing subjective experience from a universe where it originally existed, it’s not surprising that these physical correlates would show some of the same properties.
The properties which subjective experience and its physical correlates have in this universe could be well explained by a combination of (1) anthropic principles (2) the psycho-physical bridging law (3) the properties of our perceptions and other qualia. Moreover, the fact that we’re having this discussion screens out the possibility that people might have no inclination at all to talk about subjective experience.
If the physical properties of the physical correlates of subjective experience are sufficient to explain why we talk about subjective experience even without a bridging law, then why are they not enough to also explain the subjective experiences without a bridging law?
Subjective experience is self-evident enough to need no explanation. What needs to be explained is how its content as perceived by us (i.e. qualia, beliefs, thoughts etc.) relates to formally modeled physics: hence, the bridging law maps between the conceptual description and the complex quantum system which is physically implemented in the brain.
No, subjective experience is self-evident enough that we do not need to argue about whether it exists; we can easily agree that it does. (Though, you seem to believe that in the zombie world, we would incorrectly come to the same agreement.) But agreeing that something exists is not the same as understanding how or why it exists. This part is not self-evident and we disagree about it. You seem to believe that the explanation requires macroscopic quantum superpositions and some bridging law that somewhat arbitrarily maps these quantum superpositions onto subjective experiences. I believe that if we had sufficient computing power and knew fully the arrangement of neurons in a brain, we could explain it using only classical approximations of physics.
We don’t understand why, but then again we don’t know why anything exists. In practice, something as basic as subjective experience is always taken as a given. As for how, our inner phenomenology reveals far more about subjective experience than physics ever could.
Nevertheless, we do also want to know how the self might relate to our physical models; and contrary to what might be expected, macroscopic quantum superposition is actually the parsimonious hypothesis here for a wide variety of reasons.
Unless QM as we know it is badly wrong, it just doesn’t fit our models of physical reality that anything resembling “the self” would be instantiated in a hugely complicated classical system (a brain with an arrangement of brain regions and billions of neurons? Talk about an arbitrary bridging law!) as opposed to a comparatively simple quantum state.
Moreover, it is eminently plausible that evolution should have found some ways of exploiting quantum computation in the brain during its millions-of-years-long development. The current state of neuroscience is admittedly unsatisfactory, but this shouldn’t cause us to shed too much confidence.
I am talking about why subjective experience exists given that the physical universe exists. Are you being deliberately obtuse?
You are failing to address my actual position, which is that there is no arbitrary bridging law, but a mapping from the mathematical structure of physical systems to subjective experience, because that mathematical structure is the subjective experience, and it mathematically has to be that way. The explanation of why and how I am talking about is an understanding of that mathematical structure, and how physical systems can have that structure.
If you believe that we evolved systems for maintaining stable macroscopic quantum superposition without decoherence, and that we have not noticed this when we study the brain, then QM as you know it is badly wrong.
Interesting. How do you know that the physical universe exists, though? Could it be that your certainty about the physical universe has something to do with your subjective experience?
“The mathematical structure of physical systems” means either physical law, or else something so arbitrary that a large rock can be said to instantiate all human consciousnesses.
Evidence please. Quantum biology is an active research topic, and models of quantum computation differ in how resilient they are to decoherence.
Basically, what bogus said.
I’m confused about what you mean by “simulating a person”. Presumably you don’t mean simulating in a way that is conscious/has mental states (since that would make the claim under discussion trivially, uninterestingly inconsistent), so presumably you do mean just simulating the physics/neurology and producing the same behavior. While AFAIK neither explicitly says so in the links, Searle and Pearce both seem to me to believe the latter is possible. (Searle in particular has never, AFAIK, denied that an unconscious Chinese Room would be possible in principle; and by “strong AI” Searle means the possibility of AI with an ‘actual mind’/mental states/consciousness, not just generally intelligent behavior.)
Yes. Equivalently, is uploading possible with conventional computers?
It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I’m not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?
I don’t think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it’s not surprising for philosophers to endorse positions that imply the possibility of zombies.
Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.
Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement “you can’t fully simulate a person without the simulation being conscious” to be false. I still don’t feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.
The only reason I asked in the first place is that I’ve tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can’t assume that and should ask if I want to know.
Many people want to draw the line at lookup tables—they don’t believe simulation by lookup table would be conscious.
-- Daniel Dennett (from here)
The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that “something mechanical that passed the Turing Test might nevertheless not be conscious”.
It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would flounder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.
There’s no way that I can see of going around this, other than to “extend” the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn’t credible.
Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it’s still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we’re dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
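To make the indexing point concrete, a toy sketch (the entries and replies are made up, and a real GLUT would need an entry for every possible history up to the length cap): a table keyed on the last utterance alone repeats itself when a question repeats, while one keyed on the whole conversation history can answer the repeat differently, at the cost of a table whose size blows up combinatorially with conversation length.

```python
# Keyed on the last utterance only: the reply to a repeated question never changes.
last_utterance_glut = {"what's your name?": "Bertha."}

# Keyed on the whole conversation history so far (a tuple of utterances),
# so a repeated question can get a different, context-aware reply.
history_glut = {
    ("what's your name?",): "Bertha.",
    ("what's your name?", "Bertha.", "what's your name?"):
        "Still Bertha, dear. Why do you keep asking?",
}

print(last_utterance_glut["what's your name?"])    # the same string, every time
print(history_glut[("what's your name?", "Bertha.", "what's your name?")])

# The price of history-indexing: with an alphabet of ~30 characters and histories
# capped at L characters, the table needs on the order of 30**L entries.
```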
Tut, tut. Assuming the confusion you claim to see is mine: you don’t get to tell me that my objection to an intuition pump is incoherent, you are required to show that it is incoherent, and it is preferable to avoid lullaby language in such argumentation.
Yes, the question “what is your index” exposes the GLUT as a confused intuition pump. I am at present looking at the Ned Block (1981) paper Psychologism and Behaviorism which (as best I could ascertain) is the original source for the GLUT concept. It makes a similar claim to yours, namely that “for a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length”.
But sauce for the goose is sauce for the gander: for a GLUT of any size, there is a Turing Test of sufficient duration that exposes the GLUT as not conscious, by looping back to the start of the conversation! This shows that the argument from a necessarily finite index does have force to counter the GLUT as an intuition pump.
It is flawed in other ways. You can’t blame Ned Block, who at the time of writing that paper can’t have spent a lot of time on IRC, but someone with that experience would tell you that indexing on character strings wouldn’t be enough to pass a 1-hour Turing test: the GLUT as originally specified would be vulnerable to timing attacks. It wouldn’t be able to spontaneously say something like “You haven’t typed anything back to me for thirty minutes, what’s wrong?”
“OK”, a GLUT advocate might reply, “we can in principle include timings in the index, to whatever timing resolution you are capable of detecting”.
It’s tempting to grant this “in principle” counter-objection, especially as I don’t have the patience to go to the literature and verify that the “timing attack” objection hasn’t been raised and countered before.
But the fact that the timing attack wasn’t anticipated by Ned Block is precisely what shows up the GLUT concept as a faulty intuition pump. You don’t get to “go back to the drawing board” on the GLUT concept each time an attack is found and iteratively improve it until its index has been generalized enough to cover all possible circumstances: that is tantamount to having an actual, live, intelligent human sit behind the keyboard and respond.
Actually the whole idea of the GLUT machine (dubbed the ‘blockhead’ in Braddon-Mitchell’s and Jackson’s book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make under a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the “judge” could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.
I confess to having downloaded the paper recently and not given it more attention than was necessary to satisfy my usual habit of having primary sources at hand. I’ve gone back and read it more carefully, but it probably deserves still longer scrutiny.
(Welcome to Less Wrong, by the way. I don’t suppose you need to post an introduction, seeing as you have your own Wikipedia page. Nice to be chatting with you here!)
However, I’m not seeing where this is discussed explicitly, other than (this is perhaps what you mean) under the general heading of using “quantized stimulus parameters” as input to the GLUT-generating process. I grant that this does adequately deal with the most crude timing attacks imaginable.
There do seem to me to be other, more subtle attacks which would still prove fatal (according to my earlier argument that having to go back to the drawing board each time such an attack is found leaves the GLUT critique of behaviourism ineffective). For instance we can consider teachability of the GLUT, to uncover an entire class of attacks.
Suppose there is some theoretical concept, unknown to the putative human programmers of the GLUT (or perhaps we should call them conversation-authors, as the programming involved is minimal), but which can be taught to someone of normal intelligence. I don’t want to restrict my argument to any particular domain, but for illustrative purposes let’s pick the phenomenon of lasing light. This is a reasonable example, since the GLUT concept would have been implementable as early as Babbage’s time and the key insights date from Einstein’s.
In this scenario, the GLUT’s interviewer chooses as her conversation topic the theoretical background needed to build up to the concept of lasing light. The test comes when she (gender picked by flipping a coin) asks the GLUT to make specific predictions about a given experimental setup that extrapolates relevant physical law into a domain not previously discussed, but where that law still applies.
By my earlier stipulation, the GLUT’s builders must discover, in the process of building the GLUT, the physical law of lasing light. They must also prune the conversation tree of “wrong” predictions, since that would alert the interviewer to the fact that the GLUT was “faking” understanding up to the point of the experimental test; this rules out the builders merely “covering all (conversational) bases”. They must truly understand the phenomenon themselves.
(One may object that it would take an inordinately long time to teach a person of merely normal intelligence about a phenomenon such as lasing light. But we have earlier stipulated that the length of the test can be extended to human lifespans; that is surely enough for a person of normal intelligence to eventually get there.)
We are led to what is (to me at least) a disturbing conclusion. The building of a GLUT entails the discovery by the builders of all experimentally discoverable physical laws of our universe that can be taught to a person of normal intelligence in a reasonable finite lifespan.
I’m not a professional philosopher, so possibly this argument has holes.
Nevertheless it seems to me that this unpalatable conclusion points to one primordial flaw in the GLUT argument: it goes counter to the open-ended nature of the optimization process known as intelligence. You cannot optimize by covering all bases, for the same reason that a theory that can explain all conceivable events has no real content.
The original paper tried to anticipate this objection by offering as a general defense the stipulation that the GLUT should simulate a “desert island” type of castaway, so that the GLUT would be excused from having to converse fluently about current events. But the objection is more general and its force becomes harder to avoid if the duration of the test is extended greatly: we need to imagine that the GLUT can be brought up to date with current events, and afterwards respond appropriately to them, as would a person of normal intelligence. This requires the GLUT builders to anticipate the future with enough precision to prune “inappropriate” responses, and so the defense that the builders would “cover all bases” is untenable.
The domain of physical law is the one where the consequences of the teachability test are brought into sharpest focus, but I suspect that “merely social” tests of the GLUT in everyday life would very quickly expose its supposed intelligence as a sham.
Behaviourism, or God-like GLUT builders: pick your poison.
There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: “Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on.” The general point is that there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
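As I read the quoted construction, it amounts to something like the following sketch (toy alphabet, toy depth, and a placeholder sensible_response standing in for the human programmers; the code is my own illustration, not Block’s): every possible judge utterance is enumerated at every turn, one sensible reply is recorded for each, and the recursion bottoms out at the conversation-length cap, so nobody has to predict which branch a real judge will actually take.

```python
from itertools import product

ALPHABET = "ab "     # toy stand-in for "every sequence of letters and spaces"
MAX_UTTERANCE = 2    # toy cap on utterance length
MAX_TURNS = 2        # toy cap standing in for "the remaining segment of the hour"

def all_utterances():
    """Every possible judge utterance up to the length cap."""
    for length in range(1, MAX_UTTERANCE + 1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def sensible_response(history):
    """Placeholder for the intelligent programmers: given the conversation so far,
    they write down one sensible reply. Exhaustion, not prediction."""
    return f"[sensible reply to {history!r}]"

def build_glut(history=(), depth=0):
    """Recursively record one reply for every possible judge continuation.
    (The machine's own replies are a function of the judge-history, so the
    judge's utterances alone suffice as the key.)"""
    table = {}
    if depth == MAX_TURNS:
        return table
    for utterance in all_utterances():
        new_history = history + (utterance,)
        table[new_history] = sensible_response(new_history)
        table.update(build_glut(new_history, depth + 1))
    return table

glut = build_glut()
print(len(glut))   # 12 + 12**2 = 156 entries, even for this toy alphabet and depth
```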
That is plainly wrong. The “input” space (possible judge queries) is exhaustively covered, I’m getting that just fine. No such thing can be said about the “output” space: we’re requiring that the output consist of strings encoding responses that an intelligent person would emit. The judge is allowed to say random, possibly wrong, things, but the GLUT is not so allowed.
Consider an input string which consists of a correct explanation of quantum mechanics (which we assume the builders don’t know yet at build time), plus a question to the GLUT about what happens in a novel, never before encountered (by the GLUT) experimental setup. This input string is possible, and so must be considered by the builders (along with input strings that are incorrect explanations of QM plus questions about TV shows, but we needn’t concern ourselves with those, an actual “judge from the builder’s future” will not emit them).
In order to construct even one sensible response to this input string, to respond “as an intelligent person would”, the GLUT builders must correctly predict the experimental result. An incorrect response will signal to the “judge” that the GLUT is responding by rote, without understanding. If the GLUT equivocates with “I don’t know”, the judge will press for an answer; we are assuming that the GLUT has answered all previous queries sensibly up to this point, that it has been a “good student” of QM. If the GLUT keeps dodging the judge’s request for a prediction, the game is up: the judge will flunk it on the Turing Test.
To correctly predict an experimental result, the builders must know and understand QM, but we have assumed they don’t. Assuming that the GLUT always passes the Turing Test leads us to a contradiction, so we must allow that there are some Turing Tests the GLUT is unable to pass: those that require it to learn something its builders didn’t know. The GLUT does not have the capacity you are claiming for it.
(If you disagree, and think I’m still not getting it, please kindly answer the following: considering only a single input string QM+NE—explanation of quantum mechanics plus novel experiment—how do you propose that a builder who doesn’t understand QM construct a sensible answer to that input string?)
You’re assuming that the GLUT is simulating a person of average intelligence, right? So they ask a person of average intelligence how they’d respond to that particular sentence, given various kinds of context, and program in the answer(s).
What you’re trying to get at, I think, is a situation for which the GLUT has no response, but that’s already ruled out by the fact that the hypothetical situation specifies that the programmers have to have systematically considered every possible situation and programmed in a response to it. (It doesn’t have to be a good response, just how a person of average intelligence would respond, so variations on ‘I don’t know’ or ‘that doesn’t make sense to me’ would be not just acceptable but actually correct in some situations.)
Heh. I’d claim that your use of “average” here is smuggling in precisely the kind of connotations that are relied on to make the GLUT concept plausible, but which do not stand up to scrutiny.
Let’s say I’m assuming the GLUT is simulating an intelligence “equivalent” to mine. And assume the GLUT builder is me, ten years ago, when I didn’t know about Brehme diagrams but was otherwise relatively smart. Assume the input string is the first few chapters of the Shadowitz text on special relativity I have recently gone through. Under these assumptions, “equivalent” intelligence consists of being able to answer the exercises as correctly as I recently did.
(Crucially, if the supposed-to-be-equivalent-to-mine intelligence turns out to be for some reason cornered into saying “I don’t know” or “I can’t make sense of this text”, I can tell for sure it’s not as smart as I am, and we have a contradiction.)
The GLUT intuition pump requires that the me-of-today can “teach” the me-of-ten-years-ago how to use Brehme diagrams, to the point where the me-of-ten-years ago can correctly answer the kind of questions about time dilation that I can answer today.
We’re led to concluding one of the following:
- that I can send information backwards in time
- that the me-of-ten-years-ago did know about SR, contrary to stipulation
- that the builders have another way of computing sensible answers, contrary to stipulation
- that the “intelligence” exhibited by the GLUT is restricted to making passable conversational answers but is limited in not being able to acquire new knowledge
My hunch is that this last is really what the fuzziness of the word “intelligence” allows someone thinking about GLUTs to get away with, and not realize it. The GLUT is a smarter ELIZA, but if we try to give it a specific, operational, predictive kind of intelligence of which humans are demonstrably capable, it is easily exposed as a dummy.
In the course of building the GLUT, you-of-10-years-ago would have to, in the course of going through every possible input that the GLUT might need to respond to, encounter the first few chapters of the book in question, and figure out a correct response to that particular input string. So you-of-10-years-ago would have to know about SR, not necessarily at the start of the project, but definitely by the end of it. (And the GLUT simulating you-of-10-years-ago would be able to simulate the responses that you-of-10-years-ago generated in the learning process, assuming that you-of-10-years-ago put them in as generated rather than programming the GLUT to react as if it already knew about SR.)
Going through every possible random string is an extremely inefficient way to gain new information, though.
So you agree with me: since there is nothing special about either the 10-year stipulation or about the theory in question, we’re requiring the GLUT builders to have discovered and understood every physical theory that will ever be discovered and can be taught to a person of my intelligence.
This is conceptually an even taller order than the already hard to swallow “impossible-but-conceptually-conceivable” machine. Where are they supposed to get the information from? This is—so we are led to conclude—a civilization which can take a stroll through the Library of Babel and pick out just those books which correspond to a sensible physical theory.
I think you misunderstood. You-of-10-years-ago doesn’t have to have figured out SR prior to building the GLUT; you-of-10-years-ago would learn about SR—and an unimaginable number of other things, many of them wrong—in the course of programming the GLUT. That’s implied in ‘going through every possible input’. Also, you-of-10-years-ago wouldn’t have to program the objectively-right answers into the GLUT, just their own responses to the various inputs, so no external data source is necessary.
The GLUT builder has to understand the given theory, and derive its implications for the novel experiment. But they don’t have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn’t have to care which is which.
I don’t get what you mean here. Please clarify?
If the tester gives the GLUT a plausible-sounding explanation of some event that is incorrect, but that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses—thinking that the incorrect explanation of SR is correct, and thinking that the correct explanation of SR is correct—into the GLUT. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
I do not accept that a me-of-10-years ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting that as true information. Conversely, if he started with the “true” Shadowitz he would have a hard time erasing that knowledge afterwards to give convincing answers to the “false” versions.
Not only would the me-of-10-years ago not be able to convincingly reproduce, e.g. the excitement of learning new stuff and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as “equivalent” to mine.
That’s a trivial inconvenience. You can use a molecular assembler to build duplicates of your 10-years-ago self. Assuming that physicalism is correct and that consciousness involves no quantum effects, these doppelgänger will be conscious and you can feed each a version of the Shadowitz book.
I was anticipating precisely this objection.
My answer is that this is nothing like a GLUT any more. We are postulating a process of construction which is functionally the same as hooking me up to a source of quantum noise, and recording all of my Everett branches subsequent to that point. The so-called GLUT is the holographic sum of all these branches. The look-up consists of finding the branch which looks like a given input.
What this GLUT in fact looks like is simply the universe as conceived of under the relative state interpretation of QM. (Whether the relative state interpretation is correct or not is immaterial.) So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
After having followed the line of reasoning that led us here, “looking inside” the GLUT has precisely the same informational structure as “looking inside” the relative-state universe (not as we do, confined to one particular Everett branch, but as would entities “outside” our universe, assuming for instance that we lived in a simulation).
The GLUT, assuming this process of construction, looks precisely like a timeless universe. And we have no reason to suppose that the minds inhabiting this universe are not conscious, and every reason to suppose that they are.
You can look at the substrate of the GLUT. This is actually an excellent objection to computationalism, since an algorithm can be memoized to various degrees, a simulation can be more or less strict, etc. so there’s no sharp difference in character between a GLUT and a simulation of the physical universe.
And claiming that the GLUT is conscious suffers from a particularly sharp version of the conscious-rock argument. Encrypt the GLUT with a random one-time pad, and neither the resulting data nor the key will be conscious; but you can plug both into a decrypter and consciousness is restored. This makes very little sense.
On a different level of objection, I for one would bite the functionalist bullet: something that could talk to me regularly for 80 years, sensibly, who could actually teach me things or occasionally delight me, all the while insisting that it wasn’t in fact conscious but merely a GLUT simulating my Aunt Bertha...
Well, I would call that thing conscious in spite of itself.
To simulate Aunt Bertha effectively, and to keep that up for 80 years, it would in all likelihood have to be encoded with Aunt Bertha’s memories, Aunt Bertha’s wonderful quirks of personality, Aunt Bertha’s concerns for my little domestic worries as I gradually moved through my own narative arc in life, Aunt Bertha’s nuggets of wisdom that I would sometimes find deep as the ocean and other times silly relics of a different age, and so on and so forth.
The only difference with Aunt Bertha would be that, when I asked her (not “it”) why she thought she answered as she does, she’d tell me, “You know, dear nephew, I don’t want to deceive you, for all that I love you: I’m not really your Aunt Bertha, I’m just a GLUT programmed to act like her. But don’t fret, dear. You’re just an incredibly lucky boy who got handed the jackpot when drawing from the infinite jar of GLUTs. Isn’t that nice? Now, about your youngest’s allergies...”
Wasn’t an objection to these kinds of GLUTs that you’d basically have to make them by running countless actual, conscious copies of Aunt Bertha and record their incremental responses to each possible conversation chain? So you would be in a sense talking with a real, conscious human, although they might be long dead when you start indexing the table.
Though since each path is just a recording of a live person, it wouldn’t agree that it was a GLUT unless the Aunt Bertha copies used to build the table had been briefed beforehand about just why they were being locked in a featureless white room and compelled to converse with the synthetic voice speaking mostly nonsense syllables at them from the ceiling.
(We can do the “the numbers are already ridiculous, so what the hell” maneuver again here, and replace strings of conversation with the histories of total sensory input Aunt Bertha’s mind can have received at each possible point in her life, at a reasonable level of digitization; map these to a set of neurochemical outputs to her muscles and other outside-world-affecting bits; and get a simulacrum we can put in a body with similar sensory capabilities and have it walking around, probably quite indistinguishable from the genuine, Turing-complete article. Although this would involve putting the considerably larger number of Bertha-copies used to build the GLUT into somewhat more unpleasant situations than being forced to listen to gibberish for ages.)
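(To put some invented, deliberately rough numbers on “already ridiculous”:

```python
import math

bits_per_second = 10**7              # assumed digitization rate for total sensory input
seconds = 80 * 365 * 24 * 3600       # an 80-year lifetime
log10_histories = bits_per_second * seconds * math.log10(2)
print(f"on the order of 10**{log10_histories:.3g} possible input histories to key on")
```

Every one of those possible histories would need its own recorded output entry.)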
Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha’s behavior. How would you decide which one to ascribe to the GLUT?
I’m not sure I even understand the question.
If you asked me, “Is GAunt Bertha conscious?”, I would confidently answer “yes”, for the same reason I would answer “yes” if asked that question about you. Namely, both you and she talk fluently about consciousness, about your inner lives, and the parsimonious explanation is that you have inner lives similar to mine.
In the case of GAunt Bertha, it is the parsimonious explanation despite her protestations to the contrary, even though they lower the prior.
In Bayesian terms, I would count those 80 years of correspondence as overwhelming evidence that she has an inner life similar to mine, and the GLUT hypothesis starts out burdened with such a large prior probability against it that the amount of evidence you would have to show me to convince me that Aunt Bertha had been a GLUT all along would take ages just to convey to me.
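(A toy log-odds bookkeeping of that argument, with every number invented purely for illustration:

```python
import math

log10_prior_odds_glut = -1_000_000          # GLUT hypothesis starts absurdly disfavoured
exchanges = 80 * 365                        # roughly daily correspondence for 80 years
bits_per_exchange = 1                       # each sensible exchange ~1 bit for "inner life"

log10_posterior_odds_glut = (log10_prior_odds_glut
                             - exchanges * bits_per_exchange * math.log10(2))
print(log10_posterior_odds_glut)            # still astronomically negative
```

The point being that the daily evidence never comes close to cancelling a prior that size.)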
Oh, sorry. I thought you were assuming Aunt Bertha was a GLUT (not just that she claimed to be), and claiming she would be conscious. I agree that if Bertha claims to be a GLUT, she’s ridiculously unlikely to actually be one, but I’m not sure why this is interesting.
Regardless....
If something is conscious, it seems like there should be a fact of the matter as to what it is experiencing. (There might be multiple separate experiences associated with it, but then there should be a fact of the matter as to which experiences and with what relative amounts of reality-fluid.) (If you use UDT or some such theory under which ascription of consciousness is observer-dependent, there is still a subjectively objective fact of the matter here.)
Intuitively, it seems likely that behavior underdetermines experience for non-GLUTs: that, for some set of inputs and outputs that some conscious being exhibits, there are probably two different computations that have those same inputs and outputs but are associated with different experiences.
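(A toy illustration of “same inputs and outputs, different computations”, using sorting as a stand-in for whatever the relevant internal differences are:

```python
def sort_by_insertion(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_merging(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
assert sort_by_insertion(data) == sort_by_merging(data)  # same mapping, different process
```

Nothing in the input–output mapping tells you which internal process produced it.)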
If the totality of Aunt Bertha’s possible inputs and outputs has this property — if different non-GLUT computations associated with different experiences could give rise to them — and if GBertha is conscious, which of these experiences (or what weighting over them) does GBertha have?
Well, going back to humans for a moment, there are two kinds of fact we can ascertain:
- how people behave under various experimental conditions, which include asking them what they are experiencing;
- how (what we very strongly suspect is) the material substrate of their conscious experience behaves under various experimental conditions, such as MRI, etc.
For anything else of which we have provisionally reached the conclusion that it is conscious, we can broadly make the same two categories of observation. (Sometimes these two categories of observation yield results that appear paradoxical when we compare them, for instance Libet’s experiments. These paradoxes may lead us to revise and refine our concept of consciousness.)
In fact the first kind is only a particular instance of the second; all our observations about conscious beings are mediated through experimental setups of some kind, formal or informal.
I’d go further and claim (based on cumulative refinements and revisions to the notion of consciousness as I understand it) that our observations about ourselves are mediated through the same kind of (decidedly informal) experimental setup. As the Luminosity sequence suggests, the way I know how I think is the same way I know how anybody else thinks: by jotting notes to an experimenter who happens to be myself.
The “multiplicity of possible conscious experiences” isn’t a question we could ask only about GBertha, but about anything that appears conscious, including ourselves.
So, what difference does it make to my objections to a GLUT scenario?
The lookup tables are not conscious but the process that produced them was.
What about a randomly generated lookup table that just happens to simulate a person? (They can be found here.)
That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?
My specification is the reason we are talking about something improbable. It’s not the cause of the improbable thing itself.
The point is that you have specified something so improbable that it is not going to actually happen, so I don’t have to explain it, like I don’t have to worry about how I would explain my arm being replaced by a purple tentacle.
Mitchell isn’t asking you to explain anything. He’s asking you to predict (effectively) what would happen, consciousness-wise, given a randomly generated GLUT. There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don’t currently have a theory that covers that situation (or that the situation is underspecified; maybe it depends on what sort of randomizer you use, or something).
My theory does cover that situation; it says the GLUT will not be conscious. It also says that situation will not happen, because GLUTs that act like people come from entanglement with people. Things that don’t actually happen are allowed to violate general rules about things that do happen.
Okay. Why did you bother bringing up the tentacle, or the section you quoted from Eliezer’s post? Why insist on the improbability of a hypothetical when “least convenient possible world” has already been called?
Because I was challenging the applicability of Least Convenient Possible Worlds to this discussion. It is a fully general (and invalid) argument against any theory T to say: take this event A that T says is super-improbable, and suppose that (in the Least Convenient Possible World) A happens, which is overwhelming evidence against T. The tentacle arm replacement is one such event that would contradict a lot of theories. Would you ask someone defending the theory that their body does not drastically change overnight to consider the Least Convenient Possible World where they do wake up with a tentacle instead of an arm?
But you don’t actually need to resort to this dodge. You already said the lookup tables aren’t conscious; that in itself is a step which is troublesome for a lot of computationalists. You could just add a clause to your original statement, e.g.
“The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable.”
Voila, you now have an answer which covers all possible worlds and not just the probable ones. I think it’s what you wanted to say anyway.
If that answer would have satisfied you, why did you ask about a scenario so improbable you felt compelled to justify it with an appeal to the Least Convenient Possible World?
Do you now agree that GLUT simulations do not imply the existence of zombies?
I thought you were overlooking the extremely-improbable case by mistake, rather than overlooking it on principle.
For me, the point of a GLUT is that it is a simulation of consciousness that is not itself conscious, a somewhat different concept from the usual philosophical notion of a zombie, which is supposed to be physically identical to a conscious being, but with the consciousness somehow subtracted. A GLUT is physically different from the thing it simulates, so it’s a different starting point.
I think your prior estimate for other people’s philosophical competence and/or similarity to you is way too high.
To the best of our knowledge, any “quantum property” can be simulated by a classical computer with approx. exponential slowdown. Obviously, a classical computer is not going to instantiate these quantum properties.
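A minimal sketch (using numpy, with a toy three-qubit example) of why the slowdown shows up: the classical simulator has to track 2^n amplitudes explicitly, which doubles with every extra qubit.

```python
import numpy as np

n = 3                                          # qubits; raise this and watch memory blow up
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                 # |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
op = H
for _ in range(n - 1):                         # H on every qubit: H (x) H (x) H
    op = np.kron(op, H)

state = op @ state                             # uniform superposition over all 2**n basis states
print(len(state))                              # 2**n amplitudes to track
```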
Is that obvious?
If you think that there’s something to being an X besides having the causal structure of an X, then yes.
It should be. We can definitely build classical computers where quantum effects are negligible.
(For all we know, the individual transistors of these computers might have some subjective experience; but the computer as a whole won’t.)
If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from ‘actual’ physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.
The same formal structure will exist, but it will be wholly unrelated to what we mean by “subjective experience”. What’s dualistic about this claim?