The two insights of materialism
Contents:
1. An epistemic difficulty
2. How and why to be a materialist
An epistemic difficulty
Like many readers of this blog, I am a materialist. Like many others, I was not always. Long ago, the now-rhetorical ponderings in the preceding post in fact delivered the fatal blow to my nagging suspicion that somehow, materialism just isn’t enough.
By materialism, I mean the belief that the world and people are composed entirely of something called matter (a.k.a. energy), which physics currently best understands as consisting of particles (a.k.a. waves). If physics reformulates these notions, materialism can adjust with it, leading some to prefer the term “physicalism”.
Now, I encounter people all the time who, because of education or disillusionment, have abandoned most aspects of religion, yet still believe in more than one kind of reality. It’s often called “being spiritual”. People often think it feels better than the alternative (see Joy in the merely real), but it also persists because of what people experience as an epistemic concern:
The inability to reconcile the “experiencing self” concept with one’s notion of physical reality.
This is among the most common epistemic discomforts with materialism (I only say “discomfort”, because a blank spot on your map does not correspond to a blank territory). The inside view — introspection — shows us something people call a “mind” or “spirit”, and the outside view — our eyes — shows us something we call a “brain”, which looks nothing at all the same. But the perceived distance between these concepts signals that connecting them would be extremely meaningful, the way superficially unrelated hypotheses and conclusions make for a very powerful theorem. For the connection to start making sense, one must realize that “you are made of matter” is as much a statement about matter as a statement about you…
The two insights of materialism: That the reconciliation of mind and matter –
is not misinformation about mind, but extra information about matter, and
is not misinformation about matter, but extra information about mind.
These are really two insights, and underusing one of them leaves a sense of “doesn’t quite capture it” in the psyche. See, the way most people think or learn about physics, a particle is a tiny dot, with some attributes like charge specified by numbers, obeying certain laws of motion. But in fact, this is a model of a particle. As a conviction, physics need not claim that “dots and waves are all there is”, but rather, that all there is can be described on analogy with dots and waves. Science is about modelling — a map that matches the territory — and “truth” is just how well it matches up.
And given modern science, there is something more you can say about a particle besides the geometry and equations that describe it, something which connects it to the direct, cogito-ergo-sum style knowledge we all enjoy: whatever it is, a particle is a one thousand-trillion-trillionth of a you. Yes, you, in your entirety. If part of that includes something you call a “soul”, then yes, science can now model the quantitative aspects, in more or less complete detail, of a one thousand-trillion-trillionth of a “soul”. Is that too much? Too incredible? A song by The Books that I like almost says it perfectly:
You are something that the whole world is made of.
This moots the debate. The first step is not to “reduce” the introspective view to the extrospective view, but to realize that they’re looking at the same object. The assertion is not that “mind is just particles”, but rather that “a tiny fraction of a mind” and “a tiny fraction of matter” happen to refer to the same object, and we should agree to call that object “particle”. Depending on how you use the word “conscious”, this does not necessarily say that a particle is conscious in the way that you are; an octant of a sphere is not a sphere. But it is certainly one-eighth of a sphere, and eight of them, assembled correctly, make a whole one!
I’ve learned that some people call this view “neutral monism”, but I prefer to still call it materialism, to emphasize that the extrospective view (science) simply has more information at this point in human history. It provides different information about reality than introspection does, and to ignore it is detrimental to one’s world view!
So, to help non-materialists in attaining this reconciliation of mind and matter, I’ve written the following rough path of ideas that one can follow:
How and why to be a materialist
- Accepting materialism is saying “the rest of the world is made of whatever I am”, not just “I am made of whatever the rest of the world is”. And why not? In the eyes of science, these are both the same, true statement. Semantically, the first one tells you something qualitative about matter, and the second one tells you something extremely quantitative about your mind! It means modern neuroscience and biology can be used to help you understand yourself. Awesome!
- Accepting physics is accepting that your “spirit” might consist of parts which, sufficiently divided and removed from context, might behave in a regular fashion. Then you might as well call the parts “particles” and call your spirit “brain”, and look at all the amazing data we have about them that help describe how you work.
- Beware of the works-how-it-feels bias, the fallacious additional assumption that the world works the way you feel about it. (See How an algorithm feels from the inside.) These pieces of your mind/spirit called particles are extremely tiny; in orders of magnitude, they are more than twice as far down as your deepest introspection reaches, so you can’t judge them very well based on instinct (a neuron is about a 10^11th of your mind, and an atom is about a 10^14th of a neuron). And because they’re so tiny and numerous, they can be put together to form things vastly different from yourself in form and function, like plants and stars.
Your instinct that the laws of physics don’t fully describe you is correct! You are the way you are because of two things:
the laws that describe your soul-pieces or particles, whatever those laws may be, and
the way they’re put together,
and the latter is almost unimaginably more significant! One way to see this is to look around at all the things that are not you. Saying how the tiny bits of your soul behave independently does not describe how to put them together, just like describing an octant of a sphere doesn’t say how to turn eight of them into a whole sphere. Plus, even after your initial construction as a baby, a whole lot of growth and experience has configured what you are today.
Just to put this into perspective, consider that all the most fundamental laws of physics we know can certainly be written down, without evidence or much explanation, in a text file of less than 1 megabyte. The information content of the human genome, which so far seems necessary to construct a sustainable brain, is about 640 MB (efficiently encoded, that’s 1.7 bits per nucleotide pair). Don’t be fooled by how “small” 640 is: it means the number of possible states of that data is at least 8^640 times larger than the number of states of our text file describing all of physics! Next, the brain itself stores information as you develop, with a capacity of at least 1 terabyte by the most conservative estimates, which means it has at least around 8^1500 times the number of possible states of the DNA sequence that first built it.
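To make the arithmetic concrete, here is a minimal sketch (the figures are the rough estimates quoted above, not measurements, and the variable names are mine):

```python
# Rough figures from the paragraph above -- estimates, not measurements.
physics_bits = 8 * 1_000_000          # < 1 MB text file of fundamental physics
genome_bits  = 8 * 640_000_000        # ~640 MB genome, ~1.7 bits/nucleotide pair
brain_bits   = 8 * 1_000_000_000_000  # >= 1 TB conservative brain capacity

# n bits of data have 2**n possible states, so comparing state counts means
# comparing exponents. The true ratios dwarf the loose lower bounds
# (8^640 and 8^1500) quoted above.
print(f"genome vs physics file: 2^{genome_bits - physics_bits:,} times the states")
print(f"brain vs genome:        2^{brain_bits - genome_bits:,} times the states")
```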
So being a desk is different from being a human, not because it’s made of different stuff, but because the stuff is put together extremely differently, more differently than we can fully imagine. When people say form determines function, they should say FORM in BIG CAPITAL LETTERS. No wonder you thought particle physics “just doesn’t seem to capture it”!
Your perceived distance between the concepts of “mind” and “particles” is also correct! As JanetK says, “There is no shortcut from electrons to thoughts”. Continuing the connection/theorem analogy, a theorem with superficially unrelated hypotheses and conclusions is not only liable to be very useful, but to have a difficult proof as well. The analogue of the difficult proof is that, distinct from the discovery of particles themselves, massive amounts of technological progress and research have been required to establish the connection between:
how particles work and how you look from the outside (neurochemistry/neurobiology), and
how you look from the outside and how you look from the inside (neuropsychology).
This hard-won connection is what now allows us to:
treat mental illness,
restore lost memories,
design brain surgery,
explain cognitive biases,
physically relate our emotions to each other…
- Adjusting emotionally is extremely important as you bring materialism under consideration, not only to accommodate changing your beliefs, but to cope with them when they do change. You may need to redescribe morality, what makes you happy, and why you want to be alive, but none of these things needs to be revoked, and LessWrong is by far the best community I’ve ever seen to help with this transition. For example, Eliezer has written a chronological account of his Coming of Age as a rationalist, and he has certainly maintained a sense of morality and life-worth. I recommend building an appropriate emotional safety net while you consider materialism, not just to combat the bias of fear, but so you’re ready when one day you realize oh my gosh I’m a materialist!
I have a friend who says that instead of classifying people as believing in the material or the supernatural, he classifies them by whether they believe more than one such kind of thing exists: roughly speaking, dualists and non-dualists. I think he’s got the right idea. Why bother believing in more than one kind of thing? Why believe in separate “soul” and “material” if the world can just as well be made of tiny specks of regularly-behaved “spirit”? It’s the same theory, and watching out for works-how-it-feels bias, you gain a lot of tangible insight about yourself if you realize they’re the same.
So do what’s right. Right for you, right for your loved ones, and right for rightness itself if that matters to you. You probably already know what it means to be a good person, and your good intentions just won’t work if you use poor judgement. Start thinking about materialism so you can know more, and make better, well-informed decisions.
Whoever believes in the supernatural is simply underestimating the natural.
There is no something more, because there is no something less… but there certainly and most definitely is you.
Follow-up to comments:
One can only get so far from dualism in a single sitting, and what this article includes is as much a function of my time as of its validity. For now I’ll leave it up to others to argue stronger positions than those presented here, but to acknowledge them, some important issues I did not address include:
Whatever stuff or process the world comprises, is it merely accessible to physics, or can physics describe its nature entirely? And supposing it can, is consciousness an entirely mathematical phenomenon that is unaffected by how it is physically implemented? That is, if we made a neural network computationally isomorphic to the human brain, but in a different physical arrangement (e.g. a silicon-based computer), should you be as certain of its consciousness as of the consciousness of other humans? And more questions...
A rough outline of some stances on the questions above is as follows (to avoid debate I’ll omit the term naturalism, though I do approve of its normative use):
Monism: the world comprises just one genre of stuff or process (no natural/supernatural distinction).
This article: this stuff or process is physically accessible, and is therefore amenable to study by the natural sciences.
Physicalism: the stuff or process is no more extensive than its description in terms of physics.
Computationalism: consciousness is a mathematical phenomenon, unaffected by how it is physically represented or implemented.
And of course it is also important to question whether these distinctions are practical, meaningful, or merely illusory. It all needs to be cleaned and carefully dissected. Have at it, LessWrong!
I think we need a new term to distinguish materialists who believe that consciousness arises from physical interactions, from materialists who believe that consciousness arises from formal mathematical interactions. The latter would believe that you can’t fully simulate a person without the simulation being conscious. This is a much more interesting and important (and debatable) distinction to me.
I don’t see why the former wouldn’t also believe that—any simulation must ultimately be grounded in physical interactions (the computer is still made of matter).
The former might believe that consciousness arises from particular physical interactions — interactions that might exist in the brain but not in a computer.
Wouldn’t such a person believe that you can’t fully simulate a person at all with a conventional computer though?
I think Phil Goetz is using the term “simulate” in its computational or mathematical sense: The materialist of the first kind would agree that if you had a pretty good algorithmic model of a brain, you could simulate that model in a computer and it would behave just like the brain. But they would not agree that the simulation had consciousness.
ETA: Correct me if I’m wrong, but a materialist of the first kind would be one who is open to the possibility of p-zombies.
No, p-zombies are supposed to be indistinguishable from the real thing. You can tell apart a simulation of consciousness from an actual conscious being, because the simulation is running on a different substrate.
Basically, yes. But I think it’s worthwhile to distinguish between physically (the original definition), functionally, and behaviorally identical p-zombies, where materialists reject the possibility of the first, and functionalists reject the first and second (each later class is obviously a superset of the one before it).
NB: “Functionally identical” is handwaving, absent some canonical method of figuring out what computation a physical system implements (the conscious-rocks argument).
Do people holding this view who call themselves materialists actually exist? It seems an incoherent position to hold and I can’t recall seeing anyone express that belief. It seems very similar to the dualist position that consciousness has some magic property that can’t be captured outside of a human brain.
John Searle, David Pearce (see the last question), and presumably some of the others listed under “Criticism” here.
As far as I can tell from looking at those links both Searle and Pearce would deny the possibility of simulating a person with a conventional computer. I understand that position and while I think it is probably wrong it is not obviously wrong and it could turn out to be true. It seems that this is also Penrose’s position.
From the Chinese Room Wikipedia entry for example:
From the Pearce link you gave:
So I still wonder whether anyone actually believes that you could simulate a human mind with a computer but that it would not be conscious.
They would deny that a conventional computer simulation can create subjective experience. However, the Church-Turing thesis implies that if physicalism is true then conscious beings can be simulated. AFAICT, it is only Penrose who would deny this.
Do you mean the Church-Turing-Deutsch principle? It appears to me that Pearce at least in the linked article is making a claim which effectively denies that principle—his claim implies that physics is not computable.
Why? Pearce is a physicalist, not a computationalist; he ought to accept the possibility of a computation which is behaviorally identical to consciousness but has no conscious experience.
What sense of ‘ought’ are you using here? That seems like a very odd thing to believe to me. If you think that’s what he actually believes you’re going to have to point me to some evidence.
So that means you are a computationalist? Fine, but why do you think physicalism may be incoherent?
It’s hard to fish for evidence in a single interview, but Pearce says:
To me, this reads as an express acknowledgement of the CT thesis (unless quantum gravity turns out to be uncomputable, in which case the CTT is just plain false).
The distinction seems to hinge on whether physics is computable. I suspect the Church-Turing-Deutsch principle is true, and if it is, then it is possible to simulate a human mind using a classical computer, and that simulation would be conscious. If it is false, however, then it is possible that consciousness depends on some physical process that cannot be simulated in a computer. That seems to me to be what Pearce is claiming, and that is not incoherent. If we live in such a universe, however, then it is not possible to simulate a human using a classical computer / universal Turing machine, and so it is incoherent to claim that you could simulate a human but that the simulation would not be conscious, because the simulation cannot be created in the first place.
I honestly don’t see how you make that connection. It seems clear to me that Pearce is implying that consciousness depends on non-computable physical processes.
You seem to be begging the question: I suspect that we simply have different models of what the “problem of consciousness” is.
Regardless, physicalism seems to be the most parsimonious theory; computationalism implies that any physical system instantiates all conscious beings, which makes it a non-starter.
Say again? Why should I believe this to be the case?
Basically, the interpretation of a physical system as implementing a computation is subjective, and a sufficiently complex interpretation can interpret it as implementing any computation you want, or at least any up to the size of the physical system. AKA the “conscious rocks” or “joke interpretations” problem.
Paper by Chalmers criticizing this argument, citing defenses of it by Hilary Putnam and John Searle
Simpler presentation by Jaron Lanier
I can see why someone might think that, but surely the requirement that any interpretation be a homomorphism from the computation to the processes of the object would be a strong restriction on the set of computations it could be said to be instantiating?
Intriguing. Could you elaborate? Apparently “homomorphism” is a very general term.
I think the idea is that you can’t pick a different interpretation for the rock implementing a specific computation for each instant of time. A convincing narrative of the physical processes in a rock instantiating a consciousness would require a mapping from rock states to the computational process of the consciousness that remains stable over time. With the physical processes going on in rocks being pretty much random, you wouldn’t get the moment-to-moment coherence you’d need for this even if you can come up with interpretations for single instants.
One intuition here is that once you come up with a good interpretation, the physical system needs to be able to come up with correct results from computations that go on longer than where you extrapolated doing your interpretation. If you try to get around the single-instant thing and make a tortured interpretation of rock states representing, say, 100 consecutive steps of the consciousness’s computation, the interpretation is going to have the rock give you garbage for step 101. You’re just doing the computation yourself now and painstakingly fitting things to random physical noise in the rock.
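A toy illustration of that trace-fitting point (hypothetical code of mine, not from the thread): we can always build a lookup that maps the first 100 states of some “rock noise” onto the first 100 steps of a real computation, but the mapping says nothing about the very next step:

```python
import random

random.seed(0)
rock_states = [random.getrandbits(32) for _ in range(101)]  # stand-in for rock noise

def computation(t):
    return t * t  # the abstract computation being "instantiated": the squares

# A "joke interpretation": fit the first 100 rock states to the computation...
interpretation = {rock_states[t]: computation(t) for t in range(100)}

# ...but it was built by doing the computation ourselves, so the rock
# contributes no predictive structure beyond the fitted range.
print(interpretation[rock_states[5]])                    # 25, "correct" by fiat
print(interpretation.get(rock_states[100], "garbage"))   # no entry: garbage
```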
A homomorphism is a “structure preserving map”, and is quite general until you specify what is preserved.
From my brief reading of Chalmers, he’s basically captured my objection. As Risto_Saarelma says, the point is that a mapping merely of states should not count. As long as the sets of object states are not overlapping, there’s a mapping into the abstract computation. That’s boring. To truly instantiate the computation, what has to be put in is the causal structure, the rules of the computation, and these seem to be far more restrictive than one trace of possible states.
Chalmers’s “clock and dial” seems to get around this in that it can enumerate all possible traces, which seems to be equivalent to capturing the rules, but still feels decidedly wrong.
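One way to make “putting in the causal structure” precise (a sketch under my own toy formalization, not anything from Chalmers): demand that the state mapping commute with the dynamics for every state, not just along one recorded trace of states:

```python
def is_homomorphism(phys_states, phys_step, abs_step, mapping):
    """True iff mapping(phys_step(s)) == abs_step(mapping(s)) for every state:
    the mapping must preserve the transition rules, not just relabel a trace."""
    return all(mapping[phys_step(s)] == abs_step(mapping[s]) for s in phys_states)

# Tiny example: a 4-state physical cycle genuinely implementing a bit-flip.
phys_states = [0, 1, 2, 3]
phys_step = lambda s: (s + 1) % 4      # physical dynamics: advance the cycle
abs_step = lambda b: 1 - b             # abstract dynamics: flip a bit
mapping = {0: 0, 1: 1, 2: 0, 3: 1}     # structure-preserving state map

print(is_homomorphism(phys_states, phys_step, abs_step, mapping))  # True
```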
Try bisimulation.
Having printed it out and read it, it seems that “any physical system instantiates all conscious beings” is fairly well refuted, and what is left reduces to the GLUT problem.
Thanks for the link.
I remember seeing the Chalmers paper before, but never reading far enough to understand his reasoning—I should probably print it out and see if I can understand it on paper.
Edit: Yes, I know that he’s criticizing the argument—I’m just saying I got lost last time I tried to read it.
So do you think there is a meaningful difference between computationalism and physicalism if the Church-Turing-Deutsch principle is true? If so, what is it?
Basically, physicalism need not be substrate-independent. For instance, it could be that Pearce is right: subjective experience is implemented by a complex quantum state in the brain, and our qualia, intentionality and other features of subjective experience are directly mapped to the states of this quantum system. This would account for the illusion that our consciousness is “just” our brain, while dramatically simplifying the underlying ontology.
Is that a yes or a no? It seems to me that saying physicalism is not substrate-independent is equivalent to saying the Church-Turing-Deutsch principle is false. In other words, that a Turing machine cannot simulate every physical process. My question is whether you think there is a meaningful difference between physicalism and computationalism if the Church-Turing-Deutsch principle is true. There is obviously a difference if it is false.
Why would this be? Because of free will? Even if free will exists, just replace the input of free will with a randomness oracle and your Turing machine will still be simulating a conscious system, albeit perhaps a weird one.
I don’t think free will is particularly relevant to the question. Pearce seems to be claiming that some kind of quantum effects in the brain are essential to consciousness and that a simulation of a brain in a computer therefore cannot be conscious. If you could simulate the quantum processes then the argument falls apart. It only makes sense if the Church-Turing-Deutsch principle is false and there are physical processes that cannot be simulated by a Turing machine. I think that is unlikely but possible and a coherent position.
If all physical processes can be simulated by a Turing machine then I don’t see a meaningful difference between physicalism and computationalism. I still don’t know what your answer is to that question. If you do think there is still a meaningful difference then please share.
*Sigh.* You seem to be so committed to computationalism that you’re unable to understand competing theories.
Simulating quantum processes on a classical computer is not the same as instantiating them in the real world. And physicalism commits us to giving a special status to the real world, since it’s what our consciousness is made of. (Perhaps other “consciousnesses” exist which are made out of something else entirely, but physicalism is silent on this issue.) Hence, consciousness is not invariant under simulation; a classical simulation of a conscious system is similar to a zombie in that it behaves like a conscious being but has no subjective experience.
ETA: I think you are under the mistaken impression that a theory of consciousness needs to explain your heterophenomenological intuitions, i.e. what kinds of beings your brain would model as conscious. These intuitions are a result of evolution, and they must necessarily have a functionalist character, since your models of other beings have no input other than the general form of said beings and their behavior. Philosophy of mind mostly seeks to explain subjective experience, which is just something entirely different.
So you do think there is a difference between physicalism and computationalism even if the Church-Turing-Deutsch principle is true? And this difference is something to do with a special status held by the real world vs. simulations of the real world? I’m trying to understand what these competing theories are but there seems to be a communication problem that means you are failing to convey them to me.
That’s what it means to say that physicalism is substrate-dependent. There is a (simple) psycho-physical law which states that subjective experience is implemented on a specific substrate.
It just so happens that evolution has invented some analog supercomputers called “brains” and optimized them for computational efficiency. At some point, it hit on a “trick” for running quantum computations with larger and larger state spaces, and started implementing useful algorithms such as reinforcement learning, aversive learning, perception, cognition etc. on this substrate. As it turns out, the most efficient physical implementations of such quantum algorithms have subjective experience as a side effect, or perhaps as a crucial building block. So subjective awareness got selected for and persisted in the population to this day.
It seems a fairly simple story to me. What’s wrong with it?
So is one of the properties of that specific substrate (the physical world) that it cannot be simulated by a Turing machine? I don’t know why you can’t just give a yes/no answer to that question. I’ve stated it explicitly enough times now that you just come across as deliberately obtuse by not answering it.
I think I’ve been fairly clear that I don’t deny the possibility that consciousness depends on non-computable physics. I don’t think it is the most likely explanation but it doesn’t seem to be clearly ruled out given our current understanding of the universe. Your story might be something close to the truth if the Church-Turing-Deutsch principle is false. It appears to me to be incoherent if it is true however.
I think the Church-Turing-Deutsch principle is probably true but I don’t think we can rule out the possibility that it is false. If it is true then it seems a simulation of a human running on a conventional computer would be just as conscious as a real human. If it is false then it is not possible to simulate a human being on a conventional computer and it therefore doesn’t make sense to say that such a simulation cannot be conscious because a simulation cannot be created. What if anything do you disagree with from those claims?
Because it implies the possibility of zombies, or for some other reason?
Basically, yes. Slightly more explicitly, it appears to say that two contradictory things are true: that a Turing machine can simulate every physical process but that there are properties arising from physical processes running directly on their ‘native’ hardware that do not arise when those same processes are simulated. That suggests either that the simulation is actually incomplete (it is missing inputs or algorithms that account for the difference) or that there is some kind of dualism going on: a mysterious and unidentifiable ‘something’ that accounts for consciousness existing in a human brain but not in a perfect simulation of a human brain.
If the missing something is not part of physics then we’re really back to dualism and not physicalism at all. It seems like an attempt to sneak dualism back in without admitting to being a dualist in polite company.
Is subjective experience a “property”? By assumption, all the features of subjective experience have physical correlates which are preserved by the simulation. It’s just that the ‘native’ process fits a “format” that allows it to actually be experienced, whereas the simulated version does not. It seems weird to call this a dualist theory when the only commonality is an insistence on taking the problem of subjective experience seriously.
Well, I don’t think it really matters what you call it but I assume we agree that it is a something. Do you believe that it is in principle possible to differentiate between an entity that has that something and an entity that does not?
This sounds like your answer to my previous question is ‘no’. So is your position that it is not possible in principle to distinguish between a simulation of a human brain and a ‘real’ human brain, but that the latter differs in possessing a ‘something’ that is not a function of the laws of physics and is inaccessible to any form of investigation other than introspection by the inhabitant of that brain, and yet that this is nonetheless in some sense a meaningful distinction? That sounds a lot like dualism to me.
Perhaps not. ‘That something’ may be simply a model which translates the aforementioned physical properties into perceptual terms which are more familiar to us. But this begs the question of why we would be familiar with perception in the first place; “we have subjective experience, and by extension so does anything which is implemented in the same substrate as us” is a good way to escape that dilemma.
The whole point of physicalism is that subjective experience is a function of the laws of physics, and in fact a fairly low-level function. If you want to avoid any hint of dualism, just remove the “inhabitant” (a misnomer) and the “psycho-physical bridging laws” from the model and enjoy your purely physicalistic theory. Just don’t expect it to do a good job of talking about phenomenology or qualia: physicalist theories are just weird like that.
As the saying goes, those who do not know dualism are doomed to reinvent it, poorly. Beware this tendency.
Do you ever answer a direct question?
Are you saying that there is some extra law (on top of the physical laws that explain how our brains implement our cognitive algorithms) that maps our cognitive algorithms, or a certain way of implementing them, to consciousness? So that, in principle, the universe could have not had that law, and we would do all the same things, run all the same cognitive algorithms, but not be conscious? Do you believe that p-zombies are conceptually possible?
The psycho-physical law is not really an extra law “on top of the laws of physics”, so much as a correspondence between quantum state spaces and subjective experiences—ideally, the correspondence would be as simple as possible.
You could build a version of the universe which was not endowed with any psycho-physical laws, but it’s not something anyone would ever experience; it would be one formal system plucked out seemingly at random from the set of computational structures. It is as logically possible as anything else, but whether it makes sense to regard such a bizarre thing as “conceptually possible” is another matter.
But would this universe look the same as our universe to an outside observer who cannot directly observe subjective experience, but only the physical states that subjective experience supposedly corresponds to?
We’re assuming that physicalism is true, so yes it would look the same. The inhabitants would be p-zombies, but all physical correlates of subjective experience would exist.
So, since in this alternate universe without subjective experience, people have the same discussions about subjective experience as their analogs in this universe, subjective experience is not the cause of these discussions. So what explains the fact that this physical stuff people are made out of, which only obeys physical laws and can’t be influenced by subjective experience, discusses subjective experience? Where did that improbability come from?
First of all, physical stuff can be influenced by the physical correlates of subjective experience. Since the zombie universe was obtained by removing subjective experience from a universe where it originally existed, it’s not surprising that these physical correlates would show some of the same properties.
The properties which subjective experience and its physical correlates have in this universe could be well explained by a combination of (1) anthropic principles (2) the psycho-physical bridging law (3) the properties of our perceptions and other qualia. Moreover, the fact that we’re having this discussion screens out the possibility that people might have no inclination at all to talk about subjective experience.
If the physical properties of the physical correlates of subjective experience are sufficient to explain why we talk about subjective experience even without a bridging law, then why are they not enough to also explain the subjective experiences without a bridging law?
Subjective experience is self-evident enough to need no explanation. What needs to be explained is how its content as perceived by us (i.e. qualia, beliefs, thoughts etc.) relates to formally modeled physics: hence, the bridging law maps between the conceptual description and the complex quantum system which is physically implemented in the brain.
No, subjective experience is self-evident enough that we do not need to argue about whether it exists; we can easily agree that it does. (Though you seem to believe that in the zombie world, we would incorrectly come to the same agreement.) But agreeing that something exists is not the same as understanding how or why it exists. This part is not self-evident, and we disagree about it. You seem to believe that the explanation requires macroscopic quantum superpositions and some bridging law that somewhat arbitrarily maps these quantum superpositions onto subjective experiences. I believe that if we had sufficient computing power and knew fully the arrangement of neurons in a brain, we could explain it using only classical approximations of physics.
We don’t understand why, but then again we don’t know why anything exists. In practice, something as basic as subjective experience is always taken as a given. As for how, our inner phenomenology reveals far more about subjective experience than physics ever could.
Nevertheless, we do also want to know how the self might relate to our physical models; and contrary to what might be expected, macroscopic quantum superposition is actually the parsimonious hypothesis here for a wide variety of reasons.
Unless QM as we know it is badly wrong, it just doesn’t fit our models of physical reality that anything resembling “the self” would be instantiated in a hugely complicated classical system (a brain with an arrangement of brain regions and billions of neurons? Talk about an arbitrary bridging law!) as opposed to a comparatively simple quantum state.
Moreover, it is eminently plausible that evolution should have found some ways of exploiting quantum computation in the brain during its millions-of-years-long development. The current state of neuroscience is admittedly unsatisfactory, but this shouldn’t cause us to shed too much confidence.
I am talking about why subjective experience exists given that the physical universe exists. Are you being deliberately obtuse?
You are failing to address my actual position, which is that there is no arbitrary bridging law, but a mapping from the mathematical structure of physical systems to subjective experience, because that mathematical structure is the subjective experience, and it mathematically has to be that way. The explanation of why and how I am talking about is an understanding of that mathematical structure, and how physical systems can have that structure.
If you believe that we evolved systems for maintaining stable macroscopic quantum superposition without decoherence, and that we have not noticed this when we study the brain, then QM as you know it is badly wrong.
Interesting. How do you know that the physical universe exists, though? Could it be that your certainty about the physical universe has something to do with your subjective experience?
“The mathematical structure of physical systems” means either physical law, or else something so arbitrary that a large rock can be said to instantiate all human consciousnesses.
Evidence please. Quantum biology is an active research topic, and models of quantum computation differ in how resilient they are to decoherence.
Basically, what bogus said.
I’m confused about what you mean by “simulating a person”. Presumably you don’t mean simulating in a way that is conscious/has mental states (since that would make the claim under discussion trivially, uninterestingly inconsistent), so presumably you do mean just simulating the physics/neurology and producing the same behavior. While AFAIK neither explicitly says so in the links, Searle and Pearce both seem to me to believe the latter is possible. (Searle in particular has never, AFAIK, denied that an unconscious Chinese Room would be possible in principle; and by “strong AI” Searle means the possibility of AI with an ‘actual mind’/mental states/consciousness, not just generally intelligent behavior.)
Yes. Equivalently, is uploading possible with conventional computers?
It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I’m not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?
I don’t think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it’s not surprising for philosophers to endorse positions that imply the possibility of zombies.
Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.
Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement “you can’t fully simulate a person without the simulation being conscious” to be false. I still don’t feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.
The only reason I asked in the first place is that I’ve tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can’t assume that and should ask if I want to know.
Many people want to draw the line at lookup tables—they don’t believe simulation by lookup table would be conscious.
-- Daniel Dennett (from here)
The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that “something mechanical that passed the Turing Test might nevertheless not be conscious”.
It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would flounder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.
There’s no way that I can see of going around this, other than to “extend” the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn’t credible.
Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it’s still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we’re dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
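A toy sketch of such a history-indexed GLUT (hypothetical entries of mine, purely illustrative): the key is the entire transcript so far, so a repeated question can get a fresh reply, at the cost of a table exponential in conversation length that still runs out at some finite bound:

```python
# Keys are the full conversation so far, not just the last utterance.
glut = {
    ("Hi.",): "Hello!",
    ("Hi.", "Hello!", "Hi."): "You just said that. Is your keyboard stuck?",
}

history = []
for judge_says in ["Hi.", "Hi.", "Hi."]:
    history.append(judge_says)
    reply = glut.get(tuple(history), "<finite table exhausted>")
    print(reply)
    history.append(reply)
```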
Tut, tut. Assuming the confusion you claim to see is mine: you don’t get to tell me that my objection to an intuition pump is incoherent, you are required to show that it is incoherent, and it is preferable to avoid lullaby language in such argumentation.
Yes, the question “what is your index” exposes the GLUT as a confused intuition pump. I am at present looking at the Ned Block (1981) paper Psychologism and Behaviorism which (as best I could ascertain) is the original source for the GLUT concept. It makes a similar claim to yours, namely that “for a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length”.
But sauce for the goose is sauce for the gander: for a GLUT of any size, there is a Turing Test of sufficient duration that exposes the GLUT as not conscious, by looping back to the start of the conversation! This shows that the argument from a necessarily finite index does have force to counter the GLUT as an intuition pump.
It is flawed in other ways. You can’t blame Ned Block who at the time of writing that paper can’t have spent a lot of time on IRC, but someone with that experience would tell you that indexing on character strings wouldn’t be enough to pass a 1-hour Turing test: the GLUT as originally specified would be vulnerable to timing attacks. It wouldn’t be able to spontaneously say something like “You haven’t typed anything back to me for thirty minutes, what’s wrong?”
“OK”, a GLUT advocate might reply, “we can in principle include timings in the index, to whatever timing resolution you are capable of detecting”.
It’s tempting to grant this “in principle” counter-objection, especially as I don’t have the patience to go to the literature and verify that the “timing attack” objection hasn’t been raised and countered before.
But the fact that the timing attack wasn’t anticipated by Ned Block is precisely what shows up the GLUT concept as a faulty intuition pump. You don’t get to “go back to the drawing board” on the GLUT concept each time an attack is found and iteratively improve it until its index has been generalized enough to cover all possible circumstances: that is tantamount to having an actual, live, intelligent human sit behind the keyboard and respond.
Actually the whole idea of the GLUT machine (dubbed the ‘blockhead’ in Braddon-Mitchell’s and Jackson’s book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make under a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the “judge” could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.
I confess to having downloaded the paper recently and not given it more attention than was necessary to satisfy my usual habit of having primary sources at hand. I’ve gone back and read it more carefully, but it probably deserves still longer scrutiny.
(Welcome to Less Wrong, by the way. I don’t suppose you need to post an introduction, seeing as you have your own Wikipedia page. Nice to be chatting with you here!)
However, I’m not seeing where this is discussed explicitly, other than (this is perhaps what you mean) under the general heading of using “quantized stimulus parameters” as input to the GLUT-generating process. I grant that this does adequately deal with the most crude timing attacks imaginable.
There do seem to me to be other, more subtle attacks which would still prove fatal; per my earlier argument, if you have to go back to the drawing board each time such an attack is found, the GLUT critique of behaviourism is left ineffective. For instance, we can consider teachability of the GLUT, to uncover an entire class of attacks.
Suppose there is some theoretical concept, unknown to the putative human programmers of the GLUT (or perhaps we should call them conversation-authors, as the programming involved is minimal), but which can be taught to someone of normal intelligence. I don’t want to restrict my argument to any particular domain, but for illustrative purposes let’s pick the phenomenon of lasing light. This is a reasonable example, since the GLUT concept would have been implementable as early as Babbage’s time and the key insights date from Einstein’s.
In this scenario, the GLUT’s interviewer (gender picked by flipping a coin) chooses as her conversation topic the theoretical background needed to build up to the concept of lasing light. The test comes when she asks the GLUT to make specific predictions about a given experimental setup that extrapolates relevant physical law into a domain not previously discussed, but where that law still applies.
By my earlier stipulation, the GLUT’s builders must discover, in the process of building the GLUT, the physical law of lasing light. They must also prune the conversation tree of “wrong” predictions, since that would alert the interviewer to the fact that the GLUT was “faking” understanding up to the point of the experimental test; this rules out the builders merely “covering all (conversational) bases”. They must truly understand the phenomenon themselves.
(One may object that it would take an inordinately long time to teach a person of merely normal intelligence about a phenomenon such as lasing light. But we have earlier stipulated that the length of the test can be extended to human lifespans; that is surely enough for a person of normal intelligence to eventually get there.)
We are led to what is (to me at least) a disturbing conclusion. The building of a GLUT entails the discovery by the builders of all experimentally discoverable physical laws of our universe that can be taught to a person of normal intelligence in a reasonable finite lifespan.
I’m not a professional philosopher, so possibly this argument has holes.
Nevertheless it seems to me that this unpalatable conclusion points to one primordial flaw in the GLUT argument: it goes counter to the open-ended nature of the optimization process known as intelligence. You cannot optimize by covering all bases, for the same reason that a theory that can explain all conceivable events has no real content.
The original paper tried to anticipate this objection by offering as a general defense the stipulation that the GLUT should simulate a “desert island” type of castaway, so that the GLUT would be relieved of the capacity to converse fluently about current events. But the objection is more general, and its force becomes harder to avoid if the duration of the test is extended greatly: we need to imagine that the GLUT can be brought up to date with current events, and afterwards respond appropriately to them, as would a person of normal intelligence. This requires the GLUT builders to anticipate the future with enough precision to prune “inappropriate” responses, and so the defense that the builders would “cover all bases” is untenable.
The domain of physical law is the one where the consequences of the teachability test are brought into sharpest focus, but I suspect that “merely social” tests of the GLUT in everyday life would very quickly expose its supposed intelligence as a sham.
Behaviourism, or God-like GLUT builders: pick your poison.
There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: “Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on.” The general point is that there is no need for the programmers to “think of” every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
That is plainly wrong. The “input” space (possible judge queries) is exhaustively covered; I’m getting that just fine. No such thing can be said about the “output” space: we’re requiring that the output consist of strings encoding responses that an intelligent person would emit. The judge is allowed to say random, possibly wrong, things, but the GLUT is not so allowed.
Consider an input string which consists of a correct explanation of quantum mechanics (which we assume the builders don’t know yet at build time), plus a question to the GLUT about what happens in a novel, never before encountered (by the GLUT) experimental setup. This input string is possible, and so must be considered by the builders (along with input strings that are incorrect explanations of QM plus questions about TV shows, but we needn’t concern ourselves with those, an actual “judge from the builder’s future” will not emit them).
In order to construct even one sensible response to this input string, to respond “as an intelligent person would”, the GLUT builders must correctly predict the experimental result. An incorrect response will signal to the “judge” that the GLUT is responding by rote, without understanding. If the GLUT equivocates with “I don’t know”, the judge will press for an answer; we are assuming that the GLUT has answered all previous queries sensibly up to this point, that it has been a “good student” of QM. If the GLUT keeps dodging the judge’s request for a prediction, the game is up: the judge will flunk it on the Turing Test.
To correctly predict an experimental result, the builders must know and understand QM, but we have assumed they don’t. Assuming that the GLUT always passes the Turing Test leads us to a contradiction, so we must allow that there are some Turing Tests the GLUT is unable to pass: those that require it to learn something its builders didn’t know. The GLUT does not have the capacity you are claiming for it.
(If you disagree, and think I’m still not getting it, please kindly answer the following: considering only a single input string QM+NE—explanation of quantum mechanics plus novel experiment—how do you propose that a builder who doesn’t understand QM construct a sensible answer to that input string?)
You’re assuming that the GLUT is simulating a person of average intelligence, right? So they ask a person of average intelligence how they’d respond to that particular sentence, given various kinds of context, and program in the answer(s).
What you’re trying to get at, I think, is a situation for which the GLUT has no response, but that’s already ruled out by the fact that the hypothetical situation specifies that the programmers have to have systematically considered every possible situation and programmed in a response to it. (It doesn’t have to be a good response, just how a person of average intelligence would respond, so variations on ‘I don’t know’ or ‘that doesn’t make sense to me’ would be not just acceptable but actually correct in some situations.)
Heh. I’d claim that your use of “average” here is smuggling in precisely the kind of connotations that are relied on to make the GLUT concept plausible, but which do not stand up to scrutiny.
Let’s say I’m assuming the GLUT is simulating an intelligence “equivalent” to mine. And assume the GLUT builder is me, ten years ago, when I didn’t know about Brehme diagrams but was otherwise relatively smart. Assume the input string is the first few chapters of the Shadowitz text on special relativity I have recently gone through. Under these assumptions, “equivalent” intelligence consists of being able to answer the exercises as correctly as I recently did.
(Crucially, if the supposed-to-be-equivalent-to-mine intelligence turns out to be for some reason cornered into saying “I don’t know” or “I can’t make sense of this text”, I can tell for sure it’s not as smart as I am, and we have a contradiction.)
The GLUT intuition pump requires that the me-of-today can “teach” the me-of-ten-years-ago how to use Brehme diagrams, to the point where the me-of-ten-years ago can correctly answer the kind of questions about time dilation that I can answer today.
We’re led to concluding one of the following:
that I can send information backwards in time
that the me-of-ten-years-ago did know about SR, contrary to stipulation
that the builders have another way of computing sensible answers, contrary to stipulation
that the “intelligence” exhibited by the GLUT is restricted to making passable conversational answers but is limited in not being able to acquire new knowledge
My hunch is that this last is really what the fuzziness of the word “intelligence” allows someone thinking about GLUTs to get away with, and not realize it. The GLUT is a smarter ELIZA, but if we try to give it a specific, operational, predictive kind of intelligence of which humans are demonstrably capable, it is easily exposed as a dummy.
In the course of building the GLUT, you-of-10-years-ago would have to, in the course of going through every possible input that the GLUT might need to respond to, encounter the first few chapters of the book in question, and figure out a correct response to that particular input string. So you-of-10-years-ago would have to know about SR, not necessarily at the start of the project, but definitely by the end of it. (And the GLUT simulating you-of-10-years-ago would be able to simulate the responses that you-of-10-years-ago generated in the learning process, assuming that you-of-10-years-ago put them in as generated rather than programming the GLUT to react as if it already knew about SR.)
Going through every possible random string is an extremely inefficient way to gain new information, though.
So you agree with me: since there is nothing special about either the 10-year stipulation or about the theory in question, we’re requiring the GLUT builders to have discovered and understood every physical theory that will ever be discovered and can be taught to a person of my intelligence.
This is conceptually an even taller order than the already hard to swallow “impossible-but-conceptually-conceivable” machine. Where are they supposed to get the information from? This is—so we are led to conclude—a civilization which can take a stroll through the Library of Babel and pick out just those books which correspond to a sensible physical theory.
I think you misunderstood. You-of-10-years-ago doesn’t have to have figured out SR prior to building the GLUT; you-of-10-years-ago would learn about SR—and an unimaginable number of other things, many of them wrong—in the course of programming the GLUT. That’s implied in ‘going through every possible input’. Also, you-of-10-years-ago wouldn’t have to program the objectively-right answers into the GLUT, just their own responses to the various inputs, so no external data source is necessary.
The GLUT builder has to understand the given theory, and derive its implications to the novel experiment. But they don’t have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn’t have to care which is which.
I don’t get what you mean here. Please clarify?
If the tester gives the GLUT a plausible-sounding explanation of some event that is incorrect, but that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses into the GLUT: responses that treat the incorrect explanation of SR as correct, and responses that treat the correct explanation of SR as correct. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
I do not accept that a me-of-10-years-ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting each as true information. Conversely, if he started with the “true” Shadowitz, he would have a hard time erasing that knowledge afterwards to give convincing answers to the “false” versions.
Not only would the me-of-10-years-ago be unable to convincingly reproduce, for example, the excitement of learning new material and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as “equivalent” to mine.
That’s a trivial inconvenience. You can use a molecular assembler to build duplicates of your 10-years-ago self. Assuming that physicalism is correct and that consciousness involves no quantum effects, these doppelgängers will be conscious, and you can feed each one a version of the Shadowitz book.
I was anticipating precisely this objection.
My answer is that this is nothing like a GLUT any more. We are postulating a process of construction which is functionally the same as hooking me up to a source of quantum noise, and recording all of my Everett branches subsequent to that point. The so-called GLUT is the holographic sum of all these branches. The look-up consists of finding the branch which looks like a given input.
What this GLUT in fact looks like is simply the universe as conceived of under the relative state interpretation of QM. (Whether the relative state interpretation is correct or not is immaterial.) So how, exactly, are we supposed to “look inside” the GLUT and realize that it is “obviously” not conscious but just a big jukebox?
After having followed the line of reasoning that led us here, “looking inside” the GLUT has precisely the same informational structure as “looking inside” the relative-state universe (not as we do, confined to one particular Everett branch, but as would entities “outside” our universe, assuming for instance that we lived in a simulation).
The GLUT, assuming this process of construction, looks precisely like a timeless universe. And we have no reason to doubt that the minds inhabiting this universe are conscious, and every reason to suppose that they are.
You can look at the substrate of the GLUT. This is actually an excellent objection to computationalism: an algorithm can be memoized to various degrees, a simulation can be more or less strict, and so on, so there is no sharp difference in character between a GLUT and a simulation of the physical universe.
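A sketch of that continuum (the stand-in reply function and toy inputs are mine): fully memoizing a computation over a finite input space turns it into a pure lookup table, with no sharp line crossed along the way.

```python
from functools import lru_cache

def respond_computed(history: str) -> str:
    # Stand-in for a genuine simulation that computes each reply.
    return f"reply #{hash(history) % 1000}"

# Partially memoized: cached inputs are answered by lookup, the rest computed.
respond_partial = lru_cache(maxsize=1024)(respond_computed)

# Fully memoized over a finite input space: pure lookup, i.e. a GLUT.
ALL_INPUTS = ["hi", "bye", "explain SR"]  # toy enumeration
GLUT = {h: respond_computed(h) for h in ALL_INPUTS}
respond_glut = GLUT.__getitem__

# All three agree wherever they are defined; only the degree of
# precomputation differs.
assert all(respond_computed(h) == respond_partial(h) == respond_glut(h)
           for h in ALL_INPUTS)
```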
And claiming that the GLUT is conscious suffers from a particularly sharp version of the conscious-rock argument. Encrypt the GLUT with a random one-time pad, and neither the resulting data nor the key will be conscious; but you can plug both into a decrypter and consciousness is restored. This makes very little sense.
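A minimal sketch of that encryption step, with a made-up byte string standing in for the GLUT’s contents:

```python
import secrets

glut_bytes = b"You know, dear nephew, I'm just a GLUT..."  # toy table entry
pad = secrets.token_bytes(len(glut_bytes))                 # random one-time pad

ciphertext = bytes(b ^ k for b, k in zip(glut_bytes, pad))
# The ciphertext alone is uniform noise, and so is the pad alone;
# XORing them together restores the table bit for bit.
assert bytes(c ^ k for c, k in zip(ciphertext, pad)) == glut_bytes
```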
On a different level of objection, I for one would bite the functionalist bullet: something that could talk to me regularly for 80 years, sensibly, who could actually teach me things or occasionally delight me, all the while insisting that it wasn’t in fact conscious but merely a GLUT simulating my Aunt Bertha...
Well, I would call that thing conscious in spite of itself.
To simulate Aunt Bertha effectively, and to keep that up for 80 years, it would in all likelihood have to be encoded with Aunt Bertha’s memories, Aunt Bertha’s wonderful quirks of personality, Aunt Bertha’s concern for my little domestic worries as I gradually moved through my own narrative arc in life, Aunt Bertha’s nuggets of wisdom that I would sometimes find deep as the ocean and other times silly relics of a different age, and so on and so forth.
The only difference with Aunt Bertha would be that, when I asked her (not “it”) why she thought she answered as she does, she’d tell me, “You know, dear nephew, I don’t want to deceive you, for all that I love you: I’m not really your Aunt Bertha, I’m just a GLUT programmed to act like her. But don’t fret, dear. You’re just an incredibly lucky boy who got handed the jackpot when drawing from the infinite jar of GLUTs. Isn’t that nice? Now, about your youngest’s allergies...”
Wasn’t an objection to these kinds of GLUTs that you’d basically have to make them by running countless actual, conscious copies of Aunt Bertha and recording their incremental responses to each possible conversation chain? So you would in a sense be talking with a real, conscious human, although they might be long dead when you start consulting the table.
Though since each path is just a recording of a live person, it wouldn’t admit to being a GLUT unless the Aunt Bertha copies used to build the table had been briefed beforehand about just why they were being locked in a featureless white room and compelled to converse with the synthetic voice speaking mostly nonsense syllables at them from the ceiling.
(We can do the “the numbers are already ridiculous, so what the hell” maneuver again here: replace strings of conversation with the histories of total sensory input Aunt Bertha’s mind could have received at each possible point in her life, at a reasonable level of digitization; map these to a set of neurochemical outputs to her muscles and other outside-world-affecting bits; and get a simulacrum we can put in a body with similar sensory capabilities and have walking around, probably quite indistinguishable from the genuine, Turing-complete article. Although this would involve putting the considerably larger number of Bertha-copies used to build the GLUT into somewhat more unpleasant situations than being forced to listen to gibberish for ages.)
Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha’s behavior. How would you decide which one to ascribe to the GLUT?
I’m not sure I even understand the question.
If you asked me, “Is GAunt Bertha conscious?”, I would confidently answer “yes”, for the same reason I would answer “yes” if asked that question about you. Namely, both you and she talk fluently about consciousness and about your inner lives, and the parsimonious explanation is that you both have inner lives similar to mine.
In the case of GAunt Bertha, it is the parsimonious explanation despite her protestations to the contrary, even though they lower the prior.
In Bayesian terms, I would count those 80 years of correspondence as overwhelming evidence that she has an inner life similar to mine, and the GLUT hypothesis starts out burdened with such a large prior probability against it that the evidence needed to convince me that Aunt Bertha was a GLUT all along would take ages even to convey to me.
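A toy version of that bookkeeping, in odds form; every number below is an illustrative assumption of mine, and the key premise is that a GLUT hypothesis’s prior penalty scales with the table’s description length:

```python
# Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
# Overturning prior odds of ~2^-table_bits takes ~table_bits bits of
# evidence; table_bits here is a wild illustrative guess.
table_bits = 10 ** 30            # hypothetical size of Bertha's GLUT, in bits
bits_per_second = 1e9            # a generous 1 Gbit/s of pure evidence
years = table_bits / bits_per_second / (3600 * 24 * 365.25)
print(f"{years:.1e} years to convey")  # ~3e13 years: "ages" indeed
```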
Oh, sorry. I thought you were assuming Aunt Bertha was a GLUT (not just that she claimed to be), and claiming she would be conscious. I agree that if Bertha claims to be a GLUT, she’s ridiculously unlikely to actually be one, but I’m not sure why this is interesting.
Regardless....
If something is conscious, it seems like there should be a fact of the matter as to what it is experiencing. (There might be multiple separate experiences associated with it, but then there should be a fact of the matter as to which experiences and with what relative amounts of reality-fluid.) (If you use UDT or some such theory under which ascription of consciousness is observer-dependent, there is still a subjectively objective fact of the matter here.)
Intuitively, it seems likely that behavior underdetermines experience for non-GLUTs: that, for some set of inputs and outputs that some conscious being exhibits, there are probably two different computations that have those same inputs and outputs but are associated with different experiences.
If the totality of Aunt Bertha’s possible inputs and outputs has this property — if different non-GLUT computations associated with different experiences could give rise to them — and if GBertha is conscious, which of these experiences (or what weighting over them) does GBertha have?
Well, going back to humans for a moment, there are two kinds of fact we can ascertain:
how people behave under various experimental conditions, which include asking them what they are experiencing;
how (what we very strongly suspect is) the material substrate of their conscious experience behaves under various experimental conditions, such as MRI, etc.
For anything else about which we have provisionally concluded that it is conscious, we can broadly make the same two categories of observation. (Sometimes these two categories of observation yield results that appear paradoxical when we compare them, for instance Libet’s experiments. These paradoxes may lead us to revise and refine our concept of consciousness.)
In fact the first kind is only a particular instance of the second; all our observations about conscious beings are mediated through experimental setups of some kind, formal or informal.
I’d go further and claim (based on cumulative refinements and revisions to the notion of consciousness as I understand it) that our observations about ourselves are mediated through the same kind of (decidedly informal) experimental setup. As the Luminosity sequence suggests, the way I know how I think is the same way I know how anybody else thinks: by jotting notes to an experimenter which happens to be myself.
The “multiplicity of possible conscious experiences” isn’t a question we could ask only about GBertha, but about anything that appears conscious, including ourselves.
So, what difference does it make to my objections to a GLUT scenario?
The lookup tables are not conscious, but the process that produced them was.
What about a randomly generated lookup table that just happens to simulate a person? (They can be found here.)
That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?
My specification is the reason we are talking about something improbable. It’s not the cause of the improbable thing itself.
The point is that you have specified something so improbable that it is not going to actually happen, so I don’t have to explain it, like I don’t have to worry about how I would explain my arm being replaced by a purple tentacle.
Mitchell isn’t asking you to explain anything. He’s asking you to predict (effectively) what would happen, consciousness-wise, given a randomly generated GLUT. There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don’t currently have a theory that covers that situation (or that the situation is underspecified; maybe it depends on what sort of randomizer you use, or something).
My theory does cover that situation; it says the GLUT will not be conscious. It also says that situation will not happen, because GLUTs that act like people come from entanglement with people. Things that don’t actually happen are allowed to violate general rules about things that do happen.
Okay. Why did you bother bringing up the tentacle, or the section you quoted from Eliezer’s post? Why insist on the improbability of a hypothetical when “least convenient possible world” has already been called?
Because I was challenging the applicability of Least Convenient Possible Worlds to this discussion. It is a fully general (and invalid) argument against any theory T to say: take some event A that T says is super improbable, and suppose that (in the Least Convenient Possible World) A happens, which is overwhelming evidence against T. The tentacle-arm replacement is one such event that would contradict a lot of theories. Would you ask someone defending the theory that their body does not drastically change overnight to consider the Least Convenient Possible World where they do wake up with a tentacle instead of an arm?
But you don’t actually need to resort to this dodge. You already said the lookup tables aren’t conscious; that in itself is a step which is troublesome for a lot of computationalists. You could just add a clause to your original statement, e.g.
“The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable.”
Voila, you now have an answer which covers all possible worlds and not just the probable ones. I think it’s what you wanted to say anyway.
If that answer would have satisfied you, why did you ask about a scenario so improbable you felt compelled to justify it with an appeal to the Least Convenient Possible World?
Do you now agree that GLUT simulations do not imply the existence of zombies?
I thought you were overlooking the extremely-improbable case by mistake, rather than overlooking it on principle.
For me, the point of a GLUT is that it is a simulation of consciousness that is not itself conscious, a somewhat different concept from the usual philosophical notion of a zombie, which is supposed to be physically identical to a conscious being, but with the consciousness somehow subtracted. A GLUT is physically different from the thing it simulates, so it’s a different starting point.
I think your prior estimate for other people’s philosophical competence and/or similarity to you is way too high.
To the best of our knowledge, any “quantum property” can be simulated by a classical computer with at most exponential slowdown. Obviously, a classical computer is not going to instantiate those quantum properties.
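For what it’s worth, a minimal sketch of why the slowdown is exponential (the qubit count and gate are mine): a classical simulation of n qubits has to track all 2^n complex amplitudes.

```python
import numpy as np

n = 3                                   # 3 qubits -> 2**3 = 8 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
# Apply H to the first qubit by reshaping the state and contracting.
state = np.tensordot(H, state.reshape(2, -1), axes=1).reshape(-1)

print(state)  # equal superposition of |000> and |100>
# Each added qubit doubles the state vector: exponential cost, yet
# nothing non-classical about the transistors doing the arithmetic.
```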
Is that obvious?
If you think that there’s something to being an X besides having the causal structure of an X, then yes.
It should be. We can definitely build classical computers where quantum effects are negligible.
(For all we know, the individual transistors of these computers might have some subjective experience; but the computer as a whole won’t.)
If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from ‘actual’ physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.
The same formal structure will exist, but it will be wholly unrelated to what we mean by “subjective experience”. What’s dualistic about this claim?
I don’t know about consciousness, but the position that subjective experience has some magic property is common sense. Materialism is just a reasonable attempt to ground that magic property in the physical world.
You could fully simulate the person’s consciousness. The simulation won’t have any subjective experience, and it might also be very inefficient from a computational perspective. Compare running an executable program on a computer vs. running the same program in an interpreted VM.
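A toy version of that comparison (the one-instruction “VM” is my own contrivance): the interpreted run yields exactly the same outputs as the native one, just through an extra layer of machinery.

```python
def add_native(a: int, b: int) -> int:
    return a + b                        # direct execution

def run_vm(program: list[str], a: int, b: int) -> int:
    # A deliberately tiny stack-based interpreter for the same task.
    stack = [a, b]
    for op in program:
        if op == "ADD":
            y, x = stack.pop(), stack.pop()
            stack.append(x + y)
    return stack.pop()

# Identical input-output behavior; only substrate and cost differ.
assert add_native(2, 3) == run_vm(["ADD"], 2, 3)
```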
Agreed. I think the appropriate term is “computationalism”, so I added a mention of it in the follow-up.
Ick. If the universe can be adequately explained by thinking of it as arising from graph operations, then I desire to believe that the universe arises from graph operations.
In other words, being a “materialist” does not commit me to thinking of matter as fundamental. Being a materialist commits me to believing that all of my experiences can be adequately explained in the same terms that explain what the ordinary stuff around me consists of—whatever the bottom levels of the explanation turn out to be.
I’d further specify that the bottom levels should not be fundamentally mental (or living). In other words, the bottom levels should resemble bowling balls or water more than they resemble fish or human beings; to look at it another way, we should end up explaining how human-like things are made out of water-like things since all is water, rather than how water-like things are made out of human-like things since all is mind.
This ‘specification’ seems quite vague and unhelpful. It should be noted that the bottom level could have some mind-like quality without actually being fundamentally mental itself—for instance, a panprotoexperiential reality is one where all entities share some precursors of qualia, but need not have any subjective experience or cognition.
Surely something like Occam’s razor comes in here. If we can explain consciousness in terms of our current science, then why would we try to change our current science to include a mind-like quality as a fundamental property of matter? Makes no sense to me.
First of all, panexperientialism and its variations seek to explain subjective experience, not consciousness. Moreover, we in fact can’t explain consciousness. “Consciousness is an emergent property” is hardly a satisfactory explanation.
Do you have a specific comment or series of comments in mind, here?
That’s why I prefer the term “philosophical naturalist”.
“Physicalist” is the term used in philosophy now for precisely this reason. It just means that you believe the world is composed of whatever our best theory of physics says it is composed of.
Me too. I wanted to address “fear of matter” head on with the term “materialism”.
Don’t we call whatever is at the bottom matter? It all adds up to normality...
Not in everyday language, for instance we don’t think of vacuum as being matter; so the fact that “matter turns out to be vacuum fluctuations” strikes us as surprising.
If we refine our definitions of “materialism” and “matter” appropriately, then sure. But that seems like turning a blind eye to the connotations of the word “matter”, and perhaps these connotations will be lurking in the background of our thinking about materialism, and give us a nasty mistake at some inopportune moment.
(And at the everyday scale, we get useful cognitive work out of the matter-vacuum distinction.)
Fair enough. I suppose it’d be more accurate to say that whatever matter is fundamentally, so is everything, which is not at all the same thing as matter is fundamental.
As I suggested in the post, I’m with you. The rest of the sentence you truncated was
Reformulations of the phenomenon “matter” are fine by me.
To the extent that “self” is a relational concept, the above strikes me as a fallacy of reification. “Self” is a fact about where some particles stand in relationship with each other, it isn’t a fact about any given particle.
Agreed. In fact, this is even stated in the post:
ETA: Perhaps you are even saying that the first item should be struck from that list. I’d agree with you.
Academician, what you are explicitly not saying is that the aspects of reality that give rise to consciousness can be described mathematically. Well, parts of your post seem to imply that the mathematically describable functions are what matter, but other parts deny it. So it’s confusing, rather than enlightening. But I’ll take you at your word that you are not just a reductionist.
So you are a “monist” but, as David Chalmers has described such positions, in the spirit of dualism. As far as I am concerned, you are a dualist, because the only interesting distinction I see is between mathematically describable reality vs. non-MD reality—and your “monism” has aspects of both.
Your argument seems to be that monism is simpler than dualism, so Occam’s Razor prefers it, so we should believe it. Hence, you define the stuff the world is made of as “whatever I am” and call it one kind of stuff.
I don’t see that as a useful approach, because what I want to know is whether MD stuff is enough, or whether we need something more, where ‘something more’ is explicitly mental-related. Remember, we want the simplest explanation that fits the evidence. So the question reduces to “Does an MD-only world fit the evidence from subjective experience?” That’s a hard question.
I am planning to write a post on the hard problem at some point, which I’ll post on my blog and here.
Correct. I just wrote a follow up to acknowledge this. In short, I can only defend so much at one time :)
Good post, but I think what people are often seeking in the non-material is not so much an explanation of what they are as a further connection with other people, deities, spirits, etc. In a crude sense, the Judeo-Christian God gives people an ever-present friend who understands everything about them and always loves them. Materialism would tell them, ‘There is no God. You have found that talking to yourself makes you feel that you are unconditionally loved, but it’s all in your head.’
On a non-religious note, two lovers may feel that they have bonded such that they are communicating on another level. Which explanation seems more aesthetically pleasing: 1) Your ‘souls’ are entwined, your ‘minds’ are one, he/she really does deeply understand you such that words are no longer necessary, you are sharing the same experience. 2) You have found a trigger to an evolutionarily developed emotion that makes you feel as if you are communing. Your lover may or may not have found the same switch. You are each experiencing this in your own way in your own head. You will need to discuss to compare.
And yes, I do think that verbal and physical communication is still pretty great (I mean, that’s what we’ve got), but there is a strong attraction to believing that one’s transcendent feelings really do, well, transcend, and that we are not as alone in our minds as we really are.
It depends. To those wise enough to take joy in the merely real, the materialistic explanation could be a challenge to actually become more empathetic and communicative towards their lovers. An alief of communion and transcendence can also enhance trustworthiness and cooperation, which are generally sought in any love relationship.
By contrast, if the ‘spiritual’ explanation were real, it would probably lose its charm and even be resented by some as a loss of autonomy, just as fire-breathing dragons and lightning spells might become boring and unexciting in a world where magic actually worked.
Voted down for preemptive use of Let Me Google That For You. I would actually like to vote this down first for signaling that you are providing a resource explaining a technical term you used by providing a link, but instead providing a much less helpful Google search, where the reader is not sure which, if any, of the search results will be helpful, and vote it down again for using LMGTFY instead of Google directly, which includes obnoxious animations and requires javascript.
I would have left it alone if you had just used the word “alief” without any link at all.
Sure, one can always look at the positive aspects of reality, and many materialists have even tried to put a positive spin on the inevitability of death without an afterlife. But it should not be surprising that what is real is not always what is most beautiful. There is a panoply of reasons not to believe things that are not true, but greater aesthetic value does not seem to be one of them. There is an aesthetic value in the idea of ‘The Truth’, but I would not say that this outweighs all of the ways in which fantasy can be appealing for most people. And the ‘fantasies’ of which I am speaking are not completely random untruths, like ‘Hey, I’m gonna believe in Hobbits, because that would be cool!’, but rather ideas that spring from the natural emotional experiences of humanity. They feel correct. Even if they are not.
Regarding the ‘cogito-ergo-sum style knowledge we all enjoy’: I think you have to speak for yourself. I do not find cogito-ergo-sum convincing, and I hope I am not alone. That is a very slippery slope to dualism. ‘I am, therefore I think’ is more in keeping with how brains evolved. Animals move, therefore they have to know where they are going, therefore they must model reality, therefore they become conscious.
I would like to disagree, but I’m so confused by this part of the comment that I don’t know how to write a reply. Except where social mores and legal precedent are concerned, slippery-slope arguments are fallacious. After that I lose track of what you’re saying.
It seems to me true that one cannot be mistaken about one’s existence because things that are mistaken are things that exist. The concepts deployed here aren’t necessarily concepts I’m prepared to let Descartes use after he decides to disbelieve everything he is uncertain of, so I don’t think the argument does what Descartes wants it to do. But I’m not obligated to give up these concepts so I can make the argument without qualms. You cannot be mistaken about your existence.
Jack, it seems to me that ‘slippery slope’ may have been sloppy usage on my part. What I meant was that ‘I think, therefore I am’ so strongly implies dualism that it would be difficult to avoid it once you accepted the statement. It is a statement that starts with ideas and goes on from there. On the other hand, ‘I exist, therefore I think’ starts with materialism. The question is not whether we exist, but whether we know of our existence because of mental thoughts or because of physical reality. I agree that we cannot be mistaken about our existence. Descartes’ method also implies that in introspection we gain direct knowledge of something. I believe that this is an untenable idea in light of neuroscience. When we see a tree, there is no actual tree inside our skulls; there is a model of a tree. When we experience our thoughts, we are likewise experiencing a model of our thoughts. Consciousness is highly processed, and in no sense that I know of is it direct knowledge.
I’m confused: do you intend your category of ‘mental thoughts’ to encompass the whole of subjective experience or just introspection?
If the former, then yes, subjective experience is what any theory of physical reality ultimately has to explain. There’s no reason why your theory could not include a lot of distortion, but you still have to be parsimonious and justify that distortion in some way.
You think that subjective experience doesn’t exist, thus there’s no need to explain why consciousness would ever feel like anything? That’s a respectable position, but it definitely needed to be clarified.
Academian, I have re-read your epistemology a couple of times and finally have taken a chunk of it and posted it on my blog. Thank you. You can find it at “thoughts on thoughts”. I do not seem to be able to create a link here, but the site is http://charbonniers.org
Typo in the first header, “An epistemic dificulty”.