Consciousness of simulations & uploads: a reductio
Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. The mistake was in working out what ontology functionalism would imply, and then deciding that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my ‘sore teeth.’ I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
Simulating a person
The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer’s bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)
Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it’s probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.
Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions—questions like “Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?”—and get answers.
I’m almost certain she’ll say “Yes.” (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)
The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said “Of course!” because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.
A different kind of simulation
There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I’d say).
(NB: The next few paragraphs are the crucial part of this argument.)
Observe that, ultimately, the computer simulation of Simone above would produce nothing but a huge sequence of zeroes and ones, which the machine would then process into visual and audio outputs and spit out of a monitor and speakers (or whatever).
So what’s to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you’re kind to me, a calculator. Atom by tedious atom, I’ll simulate inputs to Simone’s auditory system asking her if she’s conscious, then compute her (physically determined) answer to that question.
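To be concrete about what ‘crunching the numbers’ would amount to: the entire simulation is just a deterministic state-update rule applied over and over. Here is a minimal, purely schematic sketch (the three-variable state and the update rule are toy stand-ins I have made up; a real atom-level model would have something like 10^26 variables), just to show that the same function can be evaluated by a CPU or by a clerk with a pencil:

```python
# Purely schematic: a toy stand-in for the atom-by-atom update rule.
# Nothing here is a real physics model; the point is only that the whole
# simulation reduces to arithmetic that could, in principle, be done by hand.

def step(state):
    """One deterministic tick of the simulated 'physics'."""
    a, b, c = state
    return (0.9 * a + 0.1 * b,
            0.9 * b + 0.1 * c,
            0.9 * c + 0.1 * a)

def run(state, ticks):
    for _ in range(ticks):
        state = step(state)   # a CPU does this in nanoseconds; a clerk, in hours
    return state

print(run((1.0, 0.0, 0.0), 1000))
```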
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
Once again, Simone will claim she’s conscious.
...Yeah, I’m sorry, but I just don’t believe her.
I don’t claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
Pigliucci is going to enjoy watching me eat my hat.
What was our mistake?
I’ve thought about this a lot in the last ~10 hours since I came up with the above.
I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...
...only it’s not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn’t mean much.
In retrospect, it should have given us pause that the physical process happening in the computer—zeroes and ones propagating along wires & through transistors—can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn’t exist.
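For a toy illustration of that interpretation-dependence (the byte values below are arbitrary, chosen only to make the point), notice that the very same bit string reads as text under one decoding and as an unrelated number under another:

```python
# The same raw bytes, read under three different interpretations.
# None of the decodings is privileged by the bytes themselves.
import struct

raw = bytes([72, 105, 33, 0, 200, 66])

as_text  = raw[:3].decode("ascii")        # 'Hi!'
as_ints  = list(raw)                      # [72, 105, 33, 0, 200, 66]
as_float = struct.unpack("<f", raw[2:6])  # a float with nothing to do with the text

print(as_text, as_ints, as_float)
```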
Another way of putting it is that, if consciousness is “how the algorithm feels from the inside,” a simulated consciousness is just not following the same algorithm.
But what about the Fading Qualia argument?
The fading qualia argument is another thought experiment, this one by David Chalmers.
Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don’t worry, it still outputs the same electrical signals along the axons; your behaviour won’t be affected.
Then we do this for a second neuron.
Then a third, then a kth… until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
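(Schematically, and with every name and update rule below being a hypothetical stand-in: the thought experiment stipulates that each replacement computes the same input-output function, so behaviour is preserved at every stage by construction.)

```python
# Schematic only: the premise is that each artificial neuron computes
# exactly the same input-output function as the one it replaces.

def bio_neuron(inputs):
    return sum(inputs) > 1.0          # toy threshold rule

def silicon_neuron(inputs):
    return sum(inputs) > 1.0          # functionally identical by stipulation

N = 10                                # stand-in for ~10^11
brain = [bio_neuron] * N

for k in range(N):
    brain[k] = silicon_neuron         # replace the k-th neuron
    # behaviour is unchanged at every intermediate stage:
    assert all(n([0.6, 0.7]) == bio_neuron([0.6, 0.7]) for n in brain)
```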
Now, what happens to your conscious experience in this process? A few possibilities arise:
Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.
Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does “fading” consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...
Conscious experience is unaffected by the transition.
I don’t see how this differs at all from Searle’s Chinese room.
The “puzzle” is created by the mental picture we form in our heads when hearing the description. For Searle’s room, it’s a clerk in a room full of tiles, shuffling them between boxes; for yours, it’s a person sitting at a desk scratching on paper. Since the consciousness isn’t that of the human in the room, where is it? Surely not in a few scraps of paper.
But plug in the reality of how complex such simulations would actually have to be if they were to simulate a human brain. Picture what the scenarios would look like running on sufficient fast-forward that we could converse with the simulated person.
You (the clerk inside) would be utterly invisible; you’d live billions of subjective years for every simulated nanosecond. And, since you’re just running a deterministic program, you would appear no more conscious to us than an electron appears conscious as it “runs” the laws of physics.
What we might see instead is a billion streams of paper, flowing too fast for the eye to follow, constantly splitting and connecting and shifting. Cataracts of fresh paper and pencils would be flowing in, somehow turning into marks on the pages. Reach in and grab a couple of pages, and we could see how the marks on one seemed to have some influence on those nearby, but when we try to follow any actual stimulus through to a response we get lost in a thousand divergent flows that somehow recombine somewhere else moments later to produce an answer.
It’s not so obvious to me that this system isn’t conscious.
It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone herself chooses to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?
In all honesty, I don’t think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly “insisted” on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMWare virtual machine can, if properly configured, use the optical drive in its host computer.)
What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet “fairness” and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks’ concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be “Do not simulate without sufficient resources to maintain that simulation indefinitely.”
Personally, I would be more surprised if you could imagine a character who was correct in claiming not to be conscious.
There have been some opinions expressed on another thread that disagree with that.
The key question is whether terminating a simulation actually does harm to the simulated entity. Some thought experiments may improve our moral intuitions here.
Does slowing down a simulation do harm?
Does halting, saving, and then restarting a simulation do harm?
Is harm done when we stop a simulation, restore an earlier save file, and then restart?
If we halt and save a simulation, then never get around to restarting it, the save disk physically deteriorates and is eventually placed in a landfill, exactly at which stage of this tragedy did the harm take place? Did the harm take place at some point in our timeline, or at a point in simulated time, or both?
I tend to agree with your invocation of xenia, but I’m not sure it applies to simulations. At what point do simulated entities become my guests? When I buy the shrink-wrap software? When I install the package? When I hit start?
I really remain unconvinced that the metaphor applies.
Applying the notion of information-theoretic death to simulated beings results in the following answers:
Does slowing down a simulation do harm? If/when time for computation becomes exhausted, those beings who lost the opportunity to be simulated are harmed, relative to the counterfactual world in which the simulation was not slowed.
Does halting, saving, and then restarting a simulation do harm? No.
Is harm done when we stop a simulation, restore an earlier save file, and then restart? If the restore made the stopped simulation unrecoverable, yes.
If we halt and save a simulation, then never get around to restarting it, the save disk physically deteriorates and is eventually placed in a landfill, exactly at which stage of this tragedy did the harm take place? When the information became unrecoverable. Did the harm take place at some point in our timeline, or at a point in simulated time, or both? Both.
Slowing down a simulation also does harm if there are interactions which the simulated beings would prefer to maintain that are made more difficult or impossible.
The same would apply to halting a simulation.
Request for clarification:
Do I understand this properly to say that if the stopped simulation had been derived from the save file state using non-deterministic or control-console inputs, inputs that are not duplicated in the restarted simulation, then harm is done?
Hmmm. I am imagining a programmer busy typing messages to his simulated “creations”:
Looks at what was entered …
Thinks about what just happened. … “Aw Sh.t!”
As I understand it, yes. But the harm might not be as bad as what we currently think of as death, depending on how far back the restore went. Backing one’s self up is a relatively common trope in a certain brand of Singularity fic (e.g. Glasshouse).
All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.
As for when the harm occurs, that’s a nebulous concept hanging on the meaning of ‘harm’ and ‘occurs’. In Dan Simmons’ Hyperion Cantos, there is a method of execution called the ‘Schrodinger cat box’. The convict is placed inside this box, which is then sealed. It’s a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict’s death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.
I would argue that the ‘harm’ of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I’d say ‘potential harm’ is created, which will be actualized at an unknown time. If the convict’s friends somehow rescue him from the box, this potential harm is averted, but I don’t think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.
If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.
In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.
Your second link was discussing simulating the same personality repeatedly, which I don’t think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.
So it seems that you simply don’t take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated.
I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.
I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.
What does consciousness have to do with it? It doesn’t matter whether I am simulating minds or simulating bacteria. A simulation is not a reality.
There isn’t a clear way in which you can say that something is a “simulation”, and I don’t think the distinction is obvious when we draw the line simplistically, based on our everyday experience of using computers to “simulate things”.
Real things are arrangements of matter, but what we call “simulations” of things are also arrangements of matter. Two things or processes of the same type (such as two real cats, or two processes of digestion) will have physical arrangements of matter with some property in common, but we could say the same about a brain and some arrangement of matter in a computer: they may look different, but they may still have more subtle properties in common, and there is no respect in which you can draw a line and say “They are not the same kind of system”—or at least any line so drawn will be arbitrary.
I refer you to:
Almond, P., 2008. Searle’s Argument Against AI and Emergent Properties. Available at: http://www.paul-almond.com/SearleEmergentProperties.pdf or http://www.paul-almond.com/SearleEmergentProperties.doc [Accessed 27 August 2010].
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.
I’m not sure that the ‘line’ between simulation and reality is always well-defined. Whenever you have a system whose behaviour is usefully predicted and explained by a set of laws L other than the laws of physics, you can describe this state of affairs as a simulation of a universe whose laws of physics are L. This leaves a whole bunch of questions open: Whether an agent deliberately set up the ‘simulation’ or whether it came about naturally, how accurate the simulation is, whether and how the laws L can be violated without violating the laws of physics, whether and how an agent is able to violate the laws of L in a controlled way etc.
You give me pause, sir.
All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability and predictability into them.
The fact that we can easily time reverse some simulations means little: You haven’t shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn’t be much of a market for those computers—and, importantly, it wouldn’t persuade you any more.
It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: You are changing it into a different system that runs slower. We could make you run slower too if we were allowed to change your physical system. Also, once more—you are just claiming that that even matters—that the capability to do something to a system detracts from other features.
The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.
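(For concreteness, here is a schematic sketch, with a deliberately tiny made-up state space, of what “converting to a lookup table” involves: precomputing the deterministic transition function, which changes the physical configuration and causal structure of the system without changing the function it computes.)

```python
# Schematic: precompute a deterministic transition function into a table.
# The state space here is tiny and hypothetical; for a brain-scale system
# it would be astronomically large.

def transition(state):
    return (3 * state + 1) % 16       # stand-in deterministic update rule

lookup_table = {s: transition(s) for s in range(16)}

# Same function, very different physical configuration and causal structure.
assert all(lookup_table[s] == transition(s) for s in range(16))
```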
The “unplug” one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is a proof that being able to unplug something makes it not real?
All I see here is a lot of claims that being able to do something with a certain type of system—which has been deliberately set up to make it easy to do things with it—makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.
Well, on your viewpoint, “pulling the plug” would mean depriving the simulated entities of a past, rather than depriving them of a future. I would have thought that would leave you at least a little confused.
Odd. I thought you were the one arguing that substrate doesn’t matter. I must have misunderstood or oversimplified.
I don’t think so. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occur in an unplugged simulation. If you did somehow cause them to occur in a simulation still plugged in, well, then I might worry a little about your ethics.
The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare.
I, on the other hand, imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from Netflix. The computer is not providing a substrate for new life; it is merely decoding and rendering something that already exists as a narrative.
But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n, e, s, w, u, and d in Zork, than as actually interacting with entities I have brought to life.
If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.
Your point being?
In advance of your answer, I point out that you have no moral rights to do anything to that “computer”, and that no one, even myself, currently has the ability to interfere with that simulation in any constructive way—for example, an intervention to keep me from abandoning this conversation in frustration.
I could turn the simulation off. Why is your computational substrate specialer than an AI’s computational substrate?
Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI’s substrate—who pays the electrical bills. If the owner of the AI’s computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn’t this apply to an AI? Do you support slavery?
Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
Well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity who will have a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don’t want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.
As a further comment, regarding the idea that you can “unplug” a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts—the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there—the computer itself—but the higher-level organization is gone—just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a “simulated” one are different. Both are emergent properties of some underlying system, and both can be removed by altering the underlying system in such a way that they no longer emerge from it (by using nuclear devices or turning off the power).
It would have to be a weapon that somehow destroyed the universe in order for me to see the parallel. Hmmm. A “big crunch” in which all the matter in the universe disappears into a black hole would do the job.
If you can somehow pull that off, I might have to consider you immoral if you went ahead and did it. From outside this universe, of course.
Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to ‘exist’? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That’s of little comfort to me, though, if I am informed that I’m living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of ethics.
No, of course not. No more than do simulated entities on your hard-drive exist as sentient agents in this universe. As sentient agents, they exist in a simulable universe. A universe which does not require actually running as a simulation in this or any other universe to have its own autonomous existence.
Now I’m pretty sure that is an example of mind projection. Information exists only with reference to some agent being informed.
Which is exactly my point. If you terminate a simulation, you lose access to the simulated entities, but that doesn’t mean they have been destroyed. In fact, they simply cannot be destroyed by any action you can take, since they exist in a different space-time.
But you are not living in that upuniverse computer. You are living here. All that exists in that computer is a simulation of you. In effect, you were being watched. They intend to stop watching. Big deal!
Do you also argue that the books on my bookshelves don’t really exist in this universe, since they can be found in the Library of Babel?
Gee, what do you think?
I don’t really wish to play word games here. Obviously there is some physical thing made of paper and ink on your bookshelf. Equally obviously, Borges was writing fiction when he told us about Babel. But in your thought experiment, something containing the same information as the book on your shelf exists in Babel.
Do you have some point in asking this?
What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?
Suppose I am hiking in the woods, and I come across an injured person, who is unconscious (and thus unable to feel pain), and leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm that happens to the man I left to die? I feel this is the same sort of question as many-worlds. I can’t wave away my moral responsibility by claiming that in some other universe, I will act differently.
I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach?
Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up.
Or rather should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down.
The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, i.e. “Simulations should be historically accurate.” This would imply that simulation of past immorality and tragedy is not unethical if it is accurate.
No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in “return”. A host’s duty to his guests doesn’t go away just because that host had a poor experience when he himself was a guest at some other person’s house.
If our simulators don’t care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.
If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.
If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.
Of course, there’s always the possibility that simulations may be much more similar than we think.
But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way “in return” by those simulating you—using a rather strange meaning of “in return”?
Some people interpret the Newcomb’s boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior—even if there is no obvious causal relationship, and even if the other entities already decided back in the past.
The Newcomb’s boxes paradox is essentially about reference class—it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you—and it also tells you something, even if it may not be much in some situations, about entities with some similarity to you, because you are part of this reference class.
Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical (of course, this is how it looks from your perspective—in reality, it is more that your behavior was dictated by your being part of the reference class—but you don’t experience the making of decisions from that perspective). The same goes for being unethical.
You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos—such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality—even of increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): If you act ethically with regard to the parts of reality you can observe, predict and control, your “effect” on the reference class means that you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction or control, are also behaving ethically within their local environment.
I want to be clear here that I am under no illusion that there is some kind of “magical causal link”. We might say that this is about how our decisions are really determined anyway. Deciding as if “the decision” influences the distant past, another galaxy, another world in some expansive cosmology or a higher level in a computer simulated reality is no different, qualitatively, from deciding as if “your decision” affects anything else in everyday life—when in fact, your decision is determined by outside things.
This may be a bit uncomfortably like certain Buddhist ideas really, though a Buddhist might have more to say on that if one comes along, and I promise that any such similarity wasn’t deliberate.
One weird idea relating to this: The greater the number of beings, civilizations, etc that you know about, the more the behavior of these people will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of these aliens will be very important, and your own behavior will be only a small part of it: You will reduce the amount of “non-causal influence” that you attribute to your decisions. On the other hand, if you don’t know of any aliens, etc, your own behaviour might be telling you much more about the behavior of other civilizations.
P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right—only a bit, I admit.)
That idea used to make me afraid to die before I wrote up all the stories I thought up. Sadly, that is not even possible any more.
One big difference between an upload and a person simulated in your mind is that the upload can interact with its environment.
On a related note, is anyone familiar with the following variation on the fading qualia argument? It’s inspired by (and very similar to) a response to Chalmers given in the paper “Counterfactuals Cannot Count” by M. Bishop. (Unfortunately, I couldn’t find an ungated version.) Chalmers’s reply to Bishop is here.
The idea is as follows. Let’s imagine a thought experiment under the standard computationalist assumptions. Suppose you start with an electronic brain B1 consisting of a huge number of artificial neurons, and you let it run for a while from some time T1 to T2 with an input X, so that during this interval the brain goes through a vivid conscious experience full of colors, sounds, etc. Suppose further that we’re keeping a detailed log of each neuron’s changes of state during the entire period. Now, if we reset the brain to the initial state it had at T1 and start it again, giving it the same input X, it should go through the exact same conscious experience.
But now imagine that we take the entire execution log and assemble a new brain B2 precisely isomorphic to B1, whose neurons are however not sensitive to their inputs. Instead, each neuron in B2 is programmed to recreate the sequence of states through which its corresponding neuron from B1 passed during the interval (T1, T2) and generate the corresponding outputs. This will result in what Chalmers calls a “wind-up” system, which the standard computationalist view (at least to my knowledge) would not consider conscious, since it completely lacks the causal structure of the original computation, and merely replays it like a video recording.
You can probably see where this is going now. Suppose we restart B1 with the same initial state from T1 and the same input X, and while it’s running, we gradually replace the neurons from B1 with their “wind-up” versions from B2. At the start at T1, we have the presumably conscious B1, and at the end at T2, the presumably unconscious B2 -- but the transition between the two is gradual just like in the original fading qualia argument. Thus, there must be some sort of “fading qualia” process going on after all, unless either B1 is not conscious to begin with, or B2 is conscious after all. (The latter however gets us into the problem that every physical system implements a “wind-up” version of every computation if only some numbers from arbitrary physical measurements are interpreted suitably.)
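(To make the wind-up construction concrete, here is a minimal schematic sketch in which every name and update rule is a hypothetical stand-in: a B1-style neuron is sensitive to its inputs, while its B2-style replacement simply replays the logged state sequence and ignores its inputs entirely.)

```python
# Schematic contrast between an input-sensitive neuron (B1) and its
# "wind-up" replacement (B2) that replays a recorded log.

class SensitiveNeuron:
    """B1-style: the next state depends on the inputs received."""
    def __init__(self):
        self.state = 0.0
    def tick(self, inputs):
        self.state = 0.5 * self.state + sum(inputs)   # toy update rule
        return self.state

class WindUpNeuron:
    """B2-style: replays a recorded log; counterfactually insensitive."""
    def __init__(self, log):
        self._log = iter(log)
    def tick(self, inputs):
        return next(self._log)        # the inputs are simply ignored

# Record B1's run from T1 to T2 under some input X...
b1 = SensitiveNeuron()
log = [b1.tick([0.1]) for _ in range(5)]

# ...then B2 reproduces the identical output sequence with no causal
# dependence on its inputs at all.
b2 = WindUpNeuron(log)
assert [b2.tick([0.1]) for _ in range(5)] == log
```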
I don’t find Chalmers’s reply satisfactory. In particular, it seems to me that the above argument is damaging for significant parts of his original fading qualia thought experiment where he explains why he finds the possibility of fading qualia implausible. It is however possible that I’ve misunderstood either the original paper or his brief reply to Bishop, so I’d definitely like to see him address this point in more detail.
Well, this bit seems wrong on Bishop’s part:
This is a false distinction if (as I believe) counterfactual sensitivity is part of what happens. For example, if what happens is that Y causes Z, then part of that is the counterfactual fact that if Y hadn’t happened then Z wouldn’t have happened. (Maybe this particular example can be nitpicked, but I hope that the fundamental point is made.)
If counterfactual sensitivity matters—and I think it does—then some sort of fading (I hesitate to call it “fading qualia” specifically—the whole brain is fading, in that its counterfactual sensitivity is gradually going kaput) is going on. And since the self is (by hypothesis) unable to witness what’s happening, then this demonstrates how extreme our corrigibility with regard to our own subjective experiences is. Not at all a surprising outcome.
I think that something like this must be the case. Especially considering the hypothesis that the brain is a dynamical system that requires rapid feedback among a wide variety of counterfactual channels, even the type of calculation in Simplicio’s simulation model wouldn’t work. Note that this is not just because you don’t have enough time to simulate all the moves of the computer algorithm that produces the behavior. You have to be ready to mimic all the possible behaviors that could arise from a different set of inputs, in the same temporal order. I’m sure that somewhere along the way, linear methods of calculation, such as your simulation attempts, must break down.
In other words, from a dynamical-systems point of view, your simulation is just a dressed-up version of the wind-up system. The analogy runs like this: the simulation model is to real consciousness what the wind-up model is to a simulation, in that it supports far fewer degrees of freedom. It seems that you have to have the right kind of hardware to support such processes, hardware that is probably much closer to our biological, multilateral processing channels than to a linear binary-logic computer. Note that even though Turing machines supposedly can represent any kind of algorithm, they cannot support the type of counterfactual channels, and especially the feedback loops, necessary for consciousness. The number of calculations necessary to recreate the physical process is probably beyond what is linearly possible with such apparatuses.
Allenwang voted up—I don’t understand why there was a negative reaction to this.
This puts the computed human in a curious position, inasmuch as she must consider, if philosophising about her existence, whether she is up against a reductionist version of the ‘deceptive demon’ that (even more) mystically oriented philosophers have been wont to consider. Are her neurons processing stimuli, or being driven by their own prerecorded pattern?
On the other hand, she does have some advantages. Because her neurons’ responses are initially determined by the stimulus X and her own cognitive architecture, she is free to do whatever experiments are possible within the artificial world X. X will then either present her with a coherent world of the sort humans would be able to comprehend, or present her with something that more or less befuddles her mind. After doing experiments to determine how her brain seems to work, she knows that either things are what they appear, or the deceptively demonic computationalist overlords are messing with her electronic brain (or potentially any other form of processing), either by giving her a bogus X or by making her entire state totally arbitrary. Throw in Boltzmann brains as equivalent to ‘computationalist overlords’ too, as far as she is concerned.
I don’t know what points Chalmers or Bishop were trying to make about ‘qualia’, because such arguments often make little to no sense to me. This scenario (like most others) looks like just another curious setup in a reductionist universe.
I once took this reductio in the opposite direction and ended up becoming convinced that consciousness is what it feels like inside a logically consistent description of a mind-state, whether or not it is instantiated anywhere. I’m still confused about some of the implications of this, but somewhat less confused about consciousness itself.
“If I can summon forth a subjective consciousness ex nihilo by making the right blobs of protein throw around the right patterns of electrical impulses and neurotransmitters (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.”
Remember that it doesn’t count as a reductio ad absurdum unless the conclusion is logically impossible (or, for the Bayesian analogue, very improbable according to some actual calculation) rather than merely implausible-sounding. I’d rather take Simone’s word for it than believe my intuitions about plausibility.
Doesn’t this imply that an infinity of different subjective consciousnesses are being simulated right now, if only we knew how to assign inputs and outputs correctly?
I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, if an ontology like this applies, it does not mean that all computations are equal: There would be issues of measure associated with the number (I’m simplifying here) of interpretations that can find any particular computation. I expect to be posting Part 4 of this series, which has been delayed for a long time and which will answer many objections, in a while, but the previous articles are as follows:
Minds, Substrate, Measure and Value, Part 1: Substrate Dependence. http://www.paul-almond.com/Substrate1.pdf.
Minds, Substrate, Measure and Value, Part 2: Extra Information About Substrate Dependence. http://www.paul-almond.com/Substrate2.pdf.
Minds, Substrate, Measure and Value, Part 3: The Problem of Arbitrariness of Interpretation. http://www.paul-almond.com/Substrate3.pdf.
This won’t resolve everything, but should show that the kind of ontology you are talking about is not a “random free for all”.
This relates to the notion of “joke interpretations” under which a rock can be said to be implementing a given algorithm. There’s some discussion of it in Good and Real.
Yes, it does. And if the universe is spatially infinite, then that implies an infinity of different subjective consciousnesses, too. Neither of these seems like a problem to me.
Not necessarily. See Chalmers’s reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the “internal” structure of the computation be the same in the isomorphism, and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn’t come close to being large enough to simulate a (human) consciousness.
Do you think the world outside your body is still there when you’re asleep? That objects are still there when you close your eyes?
This.
One of the problems here is that of using our intuition on consciousness as a guide to processes well outside our experience. Why should we believe our common-sense intuition on whether a computer has consciousness, or whether a pencil and paper simulation has consciousness when both are so far beyond our actual experience? It’s like applying our common sense understanding of physics to the study of atoms, or black holes. There’s no reason to assume we can extrapolate that far intuitively with any real chance of success.
After that, there’s a straight choice. Consciousness may be something that arises purely out of a rationally modellable process, or not. If the former, then the biological, computer-program, and pencil-and-paper Simones will all be conscious, genuinely. If not, then there is something to consciousness that lies outside the reach of rational description—this is not inherently impossible in my opinion, but it does suggest that some entities which claim to be conscious actually won’t be, and that there will be no rational means to show whether they are or not.
Upvoted. I’m stealing this for use in future off-LW discussions of consciousness. ;-)
Another topic that might be discussed is whether consciousness as self-awareness is at all related to moral status as in “Don’t you dare pull the plug. That would be murder!”. Personally, I don’t see any reason why the two should be related. Perhaps we conflate them because both are mysteries and we think that Occam’s razor can be used to economize on mysteries.
Lack of a better alternative?
Indeed.
“It’s the same thing. Just slower.”
Your hand-calculated simulation is still conscious, and it is the logical relations of cause and effect within that calculation, not “real” geometry, that make it so, the same as in the computer simulation and in biological brains.
What makes me balk at this is that, if it’s true, nobody actually has to bother doing the calculation at all. There doesn’t even have to be a physical process that, if construed right, does the simulation.
It seems to follow that all of the infinity of different potential subjective consciousnesses are running right now. Nobody told me I was signing up for that ontology!
You mean you haven’t signed up yet for the “Tegmark Mathematical Universe”? Shame on you. :)
Permutation City by Greg Egan has a very similar idea at its heart. According to Wikipedia, Tegmark has cited the novel, so apparently he agrees about the similarity.
http://lesswrong.com/lw/1jm/getting_over_dust_theory/
I see several problems with Tegmark’s MU theory:
What’s the utility? What does this actually differentiate? How would we even know if other universes exist or how many exist if there is no causal relationship between them? The multiverse in QM is quite different: there is a causal connection, but the QM multiverse we inhabit is a strict subset of the TMU, from what I understand.
In Permutation City, the beings end up encoding themselves into a new universe simply by finding a suitable place in the TMU. The problem of course is why would they even need to do that? Whatever universe they thought they encoded themselves into should still exist in the TMU regardless.
Also, I don’t see the point of the above in any case: even if such a metamathematical mystical trick were possible, it would just amount to a copy, to which your current version would have no causal connection.
One answer is that in a Tegmark multiverse, all possible universes exist, but not to the same degree; that is, each universe or universe-snapshot has a weight, and that weight is higher if it’s causally descended from or simulated inside of other universes with large weight.
Oh, I think I see what’s confusing you. In the xkcd comic, the pebbles by themselves aren’t a universe, it’s the pebbles being interpreted by the right interpreter that are the universe. The right interpreter is simply a mechanism that (at its simplest) is caused to do action X by a pebble, and action Y by there not being a pebble, where X never equals Y.
So yes, somebody does actually have to bother doing the calculation, because the calculation is the universe (or consciousness, or whatever).
This might be a case where flawed intuition is correct.
The chain of causality leading to the ‘yes’ is MUCH weaker in the pencil and paper version. You imagine squiggles as mere squiggles, not as signals that inexorably cause you to carry them through a zillion steps of calculation. No human as we know them would be so driven, so it looks like that Simone can’t exist as a coherent, caused thing.
But it’s very easy and correct to see a high voltage on a wire as a signal which will reliably cause a set of logic gates to carry it through a zillion steps. So that Simone can get to yes without her universe locking up first.
Right. Our basic human intuitions do not grok the power of algorithms.
Disagree. If we allow humans to be deterministic then a “human as we know them” is driven solely by the physical laws of our universe; there is no sense in talking about our emotional motivations until we have decided that we have free will.
I think your argument does assume we have free will.
I’m suggesting that the part of our minds that deals with hypotheticals silently rejects the premise that ‘self’ is a reliable squiggle controlled component in a deterministic machine.
I’m also saying this is a pretty accurate hardwired assumption about humans, because we do few things with very high reliability.
I don’t think I’m assuming anything about free will. I don’t think about it much, and I forgot how to dissolve it. I think that’s a good thing.
I think your argument assumes “emotional motivations” cannot be reduced to (explained by) the “physical laws of our universe”.
On the contrary, he is assuming we do not; he assumes that it is quite impossible that a human being would actually do the necessary work. That’s why he said that “Simone can’t exist” in this situation.
So his argument is that “a human is not an appropriate tool to do this deterministic thing”. So what? Neither is a log flume—but the fact that log flumes can’t be used to simulate consciousness doesn’t tell us anything about consciousness.
Great quote.
What difference do you see between this argument and the Chinese Room? I see none.
Maybe you’re right. The difference I see is that the Chinese room didn’t convince me, whereas this did.
If functionalism is true then dualism is true. You have the same experience E hovering over the different physical situations A, B, and C, even when they are as materially diverse as neurons, transistors, and someone in a Chinese room.
It should already be obvious that an arrangement of atoms in space is not identical to any particular experience you may claim to somehow be inhabiting it, and so it should already be obvious that the standard materialistic approach to consciousness is actually property dualism. But perhaps the observation that the experience is supposed to be exactly the same, even when the arrangement of atoms is really different, will help a few people to grasp this.
Perhaps one can construe functionalism as a form of dualism, but if so then it’s a curious state of affairs because then one can be a ‘dualist’ while still giving typically materialist verdicts on all the familiar questions and thought experiments in the philosophy of mind:
Artificial intelligence is possible and the ‘systems reply’ to the Chinese Room thought experiment is substantially correct.
“Zombies” are impossible (even a priori).
Libertarian free will is incoherent, or at any rate false.
There is no ‘hard problem of consciousness’ qualitatively distinct from the ‘easy problems’ of figuring out how the brain’s structure and functional organization are able to support the various cognitive competences we observe in human behaviour.
[This isn’t part of what “functionalism” is usually taken to mean, but it’s hard to see how a thoroughgoing functionalist could avoid it:] There aren’t always ‘facts of the matter’ about persisting subjective identity. For instance, in “cloning and teleportation” thought experiments the question of whether my mind ceases to exist or is ‘transferred’ to another body, and if so, which body, turns out to be meaningless.
[As above:] There isn’t always a ‘fact of the matter’ as to whether a being (e.g. a developing foetus) is conscious.
If you guys are prepared to concede all of these and similar bones of contention, I don’t think we’d have anything further to argue about—we can all proudly proclaim ourselves dualists, lament the sterile emptiness of the reductionist vision of the world into whose thrall so many otherwise great thinkers have fallen, and sing life-affirming hymns to the richness and mystery of the mind.
How do you get that from functionalism?
Continuity: The idea that if you look at what’s going on in a developing brain (or, for that matter, a deteriorating brain) there are no—or at least there may not be any—sudden step changes in the patterns of neural activity on which the supposed mental state supervenes.
Or again, one can make the same point about the evolutionary tree. If you consider all of the animal brains there are and ever have been, there won’t be any single criterion, even at the level of ‘functional organisation’, which distinguishes conscious brains from unconscious ones.
This is partly an empirical thesis, insofar as we can actually look and see whether there are such ‘step changes’ in ontogeny and phylogeny. It’s only partly empirical because even if there were, we couldn’t verify that those changes were precisely the ones that signified consciousness.
But surely, if we take functionalism seriously then the lack of any plausible candidates for a discrete “on-off” functional property to coincide with consciousness suggests that consciousness itself is not a discrete “on-off” property.
Doesn’t this argument apply to everything else about consciousness as well—whether a particular brain is thinking something, planning something, experiencing something? According to functionalism, being in any specific conscious state should be a matter of your brain possessing some specific causal/functional property. Are you saying that no such properties are ever definitely and absolutely possessed? Because that would seem to imply that no-one is ever definitely in any specific conscious state—i.e. that there are no facts about consciousness at all.
I think ciphergoth is correct to mention the Sorites paradox.
It always surprises me when people refuse to swallow this idea that “sometimes there’s no fact of the matter as to whether something is conscious”.
However difficult it is to imagine how it can be true, it’s just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when ‘the lights switch on’.
If you take the view that, in addition to physical reality, there is a “bank of screens” somewhere (like in the film Aliens) showing everyone’s points of view then you’ll forever be stuck with the discrete fact that either there is a screen allocated to this particular animal or there isn’t. But surely the correct response here is simply to dispense with the idea of a “bank of screens”.
We need to understand that consciousness behaves as it does irrespectively of our naive preconceptions, rather than trying to make it analytically true that consciousness conforms to our naive preconceptions and using that to refute materialism.
I’ll stick with the principle
So the only way I can countenance the idea
is if this arises because of vagueness in our description of consciousness from within. Some things not only exist but “have an inside” (for example, us); some things, one usually supposes, “just exist” (for example, a rock); and perhaps there are intermediate states between having an inside and not having an inside that we don’t understand well, or don’t understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
But I don’t see any sensitivity to that issue in what you write. Your arguments are coming entirely from the third-person, physical description, the view from outside. You think there’s a continuum of states between some that are definitely conscious, and some that are definitely not conscious, and so you conclude that there’s no sharp boundary between conscious and non-conscious. The first-person description features solely as an idea of a “screen” that we can just “dispense with”. Dude, the first-person description describes the life you actually live, and the only reality you ever directly experience!
What would happen if you were to personally pass from a conscious to a non-conscious state? To deny that there’s a boundary is to say that there’s no fact about what happens to you in that scenario, except that at the start you’re conscious, and at the end you’re not, and we can’t or won’t think or say anything very precise about what happens in between—unless it’s expressed in terms of neurons and atoms and other safely non-subjective entities, which is missing the point. The loss of consciousness, whether in sleep or in death, is a phenomenon on the first-person side of this divide, which explores and crosses the boundary between conscious and non-conscious. It’s a thing that happens to you, to the subject of your experience, and not just to certain not-you objects contemplated by that subject in the third-person, objectifying mode of its experience.
You know, there’s not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
Couldn’t you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Correct—the impression that it is an instantaneous, discontinuous process is an illusion caused by the speed of the transition compared to the speed of our perceptions.
Yeah, but I think “mental discretists” can tolerate that kind of very-rapid-but-still-continuous physical change—they just have to say that a mental moment corresponds to (its properties correlate with those of) a smallish patch of spacetime.
I mean, if you believe in unified “mental moments” at all then you’ve got to believe something like that, just because the brain occupies a macroscopic region of space, and because of the finite speed of light.
But this defense becomes manifestly absurd if we can draw out the grey area sufficiently far (e.g. over the entire lifetime of some not-quite-conscious animal.)
That, and the stability of the states on either side.
Well then I’m not sure that we disagree substantively on this issue.
Basically, I’ve said: “Naive discrete view of consciousness --> Not always determinate whether something is conscious”. (Or rather that’s what I’ve meant to say but tended to omit the premise.)
Whereas I think you’re saying something like: “At the level of metaphysical reality, there is no such thing as indeterminacy (apparent indeterminacy only arises through vague or otherwise inadequate language) --> Whatever the true nature of subjective experience, the facts about it must be determinate”
Clearly these two views are compatible with one another (as long as I state my premise). (However, there’s room to agree with the letter but not the spirit of your view, by taking ‘the true nature of subjective experience’ to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.)
I’m not sure exactly what you’re thinking of here, but I seem to recall that you’re sympathetic to the idea that physics is important in the philosophy of mind. Anyway, I think the idea that a tiny ‘quantum leap’ could make the difference between a person being (determinately) conscious and (determinately) unconscious is an obvious non-starter.
Well, this is where we actually need to look at the empirical data and see whether a foetus seems to ‘switch on’ like a light at any point. I’ve assumed there is no such point, but what I know about embryology could be written on the back of a postage stamp. (But come on, the idea is ridiculous and I see no reason to disingenuously pretend to be agnostic about it.)
Maybe you’re familiar with the phenomenon of “waking up”. Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
I agree with that.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
A little group of neurons in the brain stem starts sending a train of signals to the base of the thalamus. The thalamus ‘wakes up’ and then sends signals to the cortex, and the cortex ‘wakes up’. Consciousness is now ‘on’. Later, the brain stem stops sending the train of signals, the thalamus ‘goes to sleep’ and the cortex slowly winds down and ‘goes to sleep’. Consciousness is now ‘off’. Neither on nor off was instantaneous or sharply defined. (Dreaming activates the cortex differently at times during sleep, but ignore that for now.) Descriptions like this (hopefully more detailed and accurate) are the ‘facts of the matter’, not semantic arguments. Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Science in some broad sense “is OK… for understanding consciousness”, but unless you’re a behaviorist, you need to be explaining (and first, you need to be describing) the subjective side of consciousness, not just the physiology of it. It’s the facts about subjectivity which make consciousness a different sort of topic from anything in the natural sciences.
Yes, we will have to describe the subjective side of consciousness, but the physiology has to come first. As an illustration: if you didn’t know the function of the heart or much about its physiology, it would be useless to try to understand it by how it felt. Hence we would have ideas like ‘loving with all my heart’, ‘my heart is not in it’ etc., which come from the pre-biology world. Once we know how and why the heart works the way it does, those feelings are seen differently.
I am certainly not a behaviorist and I do think that consciousness is an extremely important function of the brain/mind. We probably can’t understand how cognition works without understanding how consciousness works. I just do not think introspection gets us closer to understanding, nor do I think that introspection gives us any direct knowledge of our own minds - ‘direct’ being the important word.
Right, people wake up and go to sleep. Waking can be relatively quicker or slower depending on the manner of awakening, but… I’m not sure what you think this establishes.
In any case, a sleeping person is not straightforwardly ‘unconscious’ - their mind hasn’t “disappeared”; it’s just doing something very different from what it’s doing when it’s awake. A better example would be someone ‘coming round’ from a spell of unconsciousness, and here I think you’ll find that people remember it being a gradual process.
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
There are plenty of instances where language is used in a way where its vagueness cannot possibly be eliminated, and yet manages to be meaningful. E.g. “The Battle Of Britain was won primarily because the Luftwaffe switched the focus of their efforts from knocking out the RAF to bombing major cities.” (N.B. I’m not claiming this is true (though it may be) simply that it “bears some relationship to reality”.)
I am objecting, first of all, to your assertion that the idea that a fetus might “‘switch on’ like a light” at some point in its development is “ridiculous”. Waking up was supposed to be an example of a rapid change, as well as something real and distinctive which must happen for a first time in the life of an organism. But I can make this counterargument even just from the physiological perspective. Sharp transitions do occur in embryonic development, e.g. when the morphogenetic motion of tissues and cavities produces a topological change in the organism. If we are going to associate the presence of a mind, or the presence of a capacity for consciousness, with the existence of a particular functional organization in the brain, how can there not be a first moment when that organization exists? It could consist in something as simple as the first synaptic coupling of two previously separate neural systems. Before the first synapses joining them, certain computations were not possible; after the synapses had formed, they were possible.
As for the significance of “smooth, gradual” transitions between consciousness and unconsciousness, I will revert to that principle which you expressed thus:
“Whatever the true nature of subjective experience, the facts about it must be determinate”
Among the facts about subjective experience are its relationship to “non-subjective” states or forms of existence. Those facts must also be determinate. The transition from consciousness to non-consciousness, if it is a continuum, cannot only be a continuum on the physical/physiological side. It must also be a continuum on the subjective side, even though one end of the continuum is absence of subjectivity. When you say there can be material systems for which there is no fact about its being conscious—it’s not conscious, it’s not not-conscious—you are being just as illogical as the people who believe in “the particle without a definite position”.
I ask myself why you would even think like this. Why wouldn’t you suppose instead that folk psychology can be conceptually refined to the point of being exactly correct? Why the willingness to throw it away, in favor of nothing?
Sorites error: in your last sentence you leap from there being no discontinuities to there being no facts at all.
Neil is the one who says that sometimes, there are no facts. How do you get from no facts to facts without a discontinuity?
Maybe I’m missing something, but I can’t see in what way this argument is specifically about consciousness, rather than just being a re-hash of the Sorites Paradox—could you spell it out for me?
If we were just talking about names this wouldn’t matter, but we are talking about explanations. Vagueness in a name just means that the applicability of the name is a little undetermined. But there is no such thing as objective vagueness. The objective properties of things are “exact”, even when we can only specify them vaguely.
This is what we all object to in the Copenhagen interpretation of quantum mechanics, right? It makes no sense to say that a particle has a position, if it doesn’t have a definite position. Either it has a definite position, or the concept of position just doesn’t apply. There’s no problem in saying that the position is uncertain, or in specifying it only approximately; it’s the reification of uncertainty—the particle is somewhere, but not anywhere in particular—which is nonsense. Either it’s somewhere particular (or even everywhere, if you’re a many-worlder), or it’s nowhere.
Neil flirts with reifying vagueness about consciousness in a similarly untenable fashion. We can be vague about how we describe a subjective state of consciousness, we can be vague about how we describe the physical brain. But we cannot identify an exact property of a conscious state with an inherently vague physical predicate. The possibility of exact description of states on both sides, and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the “particle without a definite position”.
By the way, if you haven’t read Dennett’s “Real Patterns” then I can recommend it as an excellent explanation of how fuzzily defined, ‘not-always-a-fact-of-the-matter-whether-they’re-present’ patterns, of which folk-psychological states like beliefs and desires are just a special case, can meaningfully find a place in a physicalist universe.
There’s an aspect of this which I haven’t yet mentioned, which is the following:
We can imagine different strains of functionalism. The weakest would just be: “A person’s mental state supervenes on their (multiply realizable) ‘functional state’.” This leaves the nature of the relation between functional state and mental state utterly mysterious, and thereby leaves the ‘hard problem’ looking as ‘hard’ as it ever did.
But I think a ‘thoroughgoing functionalist’ wants to go further, and say that a person’s mental state is somehow constituted by (or reduces to) the functional state of their brain. It’s not a trivial project to flesh out this idea—not simply to clarify what it means, but to begin to sketch out the functional properties that constitute consciousness—but it’s one that various thinkers (like Minsky and Dennett) have actually taken up.
And if one ends up hypothesising that what’s important for whether a system is ‘conscious’ is (say) whether it represents information a certain way, has a certain kind of ‘higher-order’ access to its own state, or whatever—functional properties which can be scaled up and down in scope and complexity without any obvious ‘thresholds’ being encountered that might correspond to the appearance of consciousness—then one has grounds for saying that there isn’t always a ‘fact of the matter’ as to whether a being is conscious.
Then it’s time to return to the rest of your comment—the whole discussion so far has just been about that one claim, that something can be neither conscious nor not-conscious. So now I’ll quote myself:
By “coarse-grained states” do you mean that, say, “pain” stands to the many particular neuronal ensembles that could embody pain, in something like the way “human being” stands to all the actual individual human beings? How would that restore a dualism, and what kind of dualism is that?
The thought experiments proposed in the post and the comments hint at a strictly simpler problem that we need to solve before tackling consciousness anyway: what is “algorithmicness”? What constitutes a “causal implementation” of an algorithm and distinguishes it from a video feed replay? How can we remove the need for vague “bridging laws” between algorithmicness and physical reality?
I think UDT manages to sidestep this question. Would you agree? (To be more explicit, UDT manages to make decisions without having to explicitly determine whether something in the world is a “causal implementation” of itself. It just makes logical deductions about the world from statements like “S outputs X” where S is a code string that is its own source code, and that seems to be enough.)
But unfortunately I can’t see how to similarly sidestep the problem of consciousness, if we humans are to make use of UDT in a formal way. The problem is that we don’t have access to our own source code, so we can’t write down S directly. All we have is access to subjective sensations and memories, and it seems like we need a theory of consciousness to tell us how to write down the description of an object (or a class of objects), given its subjective sensations and memories.
The situation with UDT is mysterious.
A UDT agent is a sort of ethereal thing, a class of logically-equivalent algorithms (up to rewriting and such) that can never believe it “sees” one universe—only the equivalence class of universes that gave it equivalent sensory inputs up to now. Okay, I can agree that it’s meaningless to ask “where” you are in the universe. But it doesn’t seem meaningless to ask you for your beliefs about your future sensory input #11, given sensory inputs #1-#10. Unfortunately, it’s hard to see how you can define such credences—the naive idea is to count different instantiations of the algorithm within the world program, but we just threw away our concept of what counts as an “instance”.
The equivalence class of algorithms is wider than one might think. For example, if (by way of some tricky mathematical fact) the algorithm’s output is in fact independent from the value of one of the inputs, say input #11, then the algorithm cannot “perceive” that input. In other words, you cannot register any sensation that doesn’t end up affecting your actions in the future. Weird, huh.
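A toy sketch of that last point (my own illustration, not UDT machinery; the decision rule and values are made up): if the output is mathematically independent of one input, behaviour is identical however that input is set, so nothing downstream of the agent’s actions can ever reflect it.

```python
# Toy illustration (not UDT itself): an output that is independent of one
# input means no consequence of the agent's actions can reflect that input.

def agent(inputs):
    # Hypothetical decision rule: only the first ten inputs influence the
    # output; input #11 is mathematically irrelevant.
    return sum(inputs[:10]) % 3

history = list(range(10))          # sensory inputs #1-#10
print(agent(history + [0]))        # same action...
print(agent(history + [999]))      # ...whatever value input #11 takes
```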
You all may be interested in some recent (since 1990 or so) work in theoretical computer science dealing roughly with “what is observationally equivalent with what”. Google for strings including the keywords “bisimulation”, “process algebra”, and “observational equivalence”. Or maybe not—it is unclear to me what you think the problem really is.
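For a very rough flavour of “observational equivalence” (a toy of my own, not process algebra proper): two systems that differ internally but that no sequence of observations can tell apart.

```python
# Two counters implemented differently; no sequence of tick/read
# observations distinguishes them (example invented for illustration).

class IntCounter:
    def __init__(self):
        self.n = 0
    def tick(self):
        self.n += 1
    def read(self):
        return self.n

class ListCounter:
    def __init__(self):
        self.items = []
    def tick(self):
        self.items.append(None)
    def read(self):
        return len(self.items)

a, b = IntCounter(), ListCounter()
for _ in range(5):
    a.tick()
    b.tick()
print(a.read() == b.read())  # True: same observable behaviour, different innards
```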
UDT sidesteps that question as well, because while it makes decisions, it never needs to compute things like “beliefs about your future sensory input #11, given sensory inputs #1-#10”. I would say that a UDT agent doesn’t have such beliefs.
Not quite sure what this part has to do with what I wrote. If you still think it’s relevant, can you explain how?
Yes, it seems most of my comment was irrelevant, and even the original question was so weird that I can no longer make sense of it. Sorry.
Your answers have showed me that my original comment was wrong: the question of “algorithmicness” is uninteresting unless we imagine that algorithms can have “subjective experience”, which brings us back to consciousness again. Oh well, another line of attack goes dead.
A UDT agent is a program (axioms), not an algorithm (a theory). The way in which something is specified matters to the way it decides how to behave. If you are only talking about behavior, and not the underlying decision-making, then you can abstract from the detail of how it’s generated, but then you presuppose that condition.
A real algorithm keeps doing interesting things when presented with input its creator didn’t expect, while a lookup table can only return bland errors.
A well-designed algorithm takes that a step further, actually doing something useful.
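A minimal sketch of that contrast (functions invented purely for illustration): the table covers only the inputs its creator anticipated; the algorithm keeps producing useful answers on novel ones.

```python
# Toy contrast between a lookup table and an algorithm.

square_table = {0: 0, 1: 1, 2: 4, 3: 9}   # everything the creator expected

def square_lookup(n):
    return square_table[n]                 # fails on anything unanticipated

def square_algorithm(n):
    return n * n                           # still useful at n = 1000

print(square_algorithm(1000))              # 1000000
try:
    print(square_lookup(1000))
except KeyError:
    print("lookup table: no entry for 1000 (the 'bland error')")
```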
I’m close to your conclusion, but I don’t accept your Searle-esque argument. I accept Chalmers’s reasoning, roughly, on the fading qualia argument, and agree with you that it doesn’t justify the usual conception of the joys of uploading.
And I think that’s the whole core of what needs to be said on the topic. That is, we have a good argument for attributing consciousness-as-we-know-it to a fine-grained functional duplicate of ourselves. And that’s all. We don’t have any reason to believe that a coarse-grained functional duplicate—a being that gives similar behavioral outputs for a given input, but uses different structures and processes—would have a subjectivity like ours. (“Fine-grained” is an apropos choice of terminology by Chalmers.)
Our terms for subjective experiences, like pain, joy, the sensation of sweetness, and so on, ultimately have ostensive definitions. They’re this, this, and this. And for concepts like that, it matters what the actual physical structures and processes are, that underlie the actual phenomena we were attending to when we introduced the terms. (I don’t think the generic term “consciousness” works this way, though. I’ll avoid that subject for now and stick to some classic examples of qualia.)
This has great significance for uploading if, as I expect, human-like computer intelligence is developed not by directly simulating the human brain at a detailed level, but by taking advantage of the distinctive features of silicon and successor technologies. In that case, “uploading” looks to be the prudential equivalent of suicide.
That seems quite close to Searle to me, in that you are both imposing specific requirements for the substrate—which is all that Searle does, really. There is the possible difference that you might be more generous than Searle about what constitutes a valid substrate (though Searle isn’t really too clear on that issue anyway).
Unlike Searle, and like Sharvy, I believe it ain’t the meat, it’s the motion (see the Sharvy reference at the bottom). Sharvy presents a fading qualia argument much like the one Chalmers offers in the link simplicio provides, only, to my recollection, without Chalmers’s wise caveat that the functional isomorphism should be fine-grained.
Why do you cite Chalmers for fading qualia, but not Searle for the rephrased Chinese Room?
I was under the impression the Chinese room was an argument against intelligence simulation, not consciousness. I think you’re right actually. Will edit.
This seems like pretty much Professor John Searle’s argument, to me. Your argument about the algorithm being subject to interpretation and observer dependent has been made by Searle who refers to it as “universal realizability”.
See;
Searle, J. R. (1997). The Mystery of Consciousness. London: Granta Books. Chapter 1, pp. 14–17. (Originally published 1997, New York: The New York Review of Books; also published by Granta Books in 1997.)
Searle, J. R. (2002). The Rediscovery of the Mind. 9th edition. Cambridge, MA: The MIT Press. Chapter 9, pp. 207–212. (Originally published 1992, Cambridge, MA: The MIT Press.)
Here’s a thought experiment that helps me think about uploading (which I perceive as the real, observable-consequences-having issue here):
Suppose that you believed in souls (it is not that hard to get into that mindset—lots of people can do it). Also suppose that you believed in transmigration or reincarnation of souls. Finally, suppose that you believe that souls move around between bodies during the night, when people are asleep. Despite your belief in souls, you know that memories, skills, personality, goals are all located in the brain, not the soul.
Why do you go to sleep? Your consciousness will go out like a light! However, your soul will continue to exist, it will just go on to a different body. Your body has various goals and plans, that it worked on during the day, but it will get another soul tomorrow, and it’s pretty experienced at this kind of juggling, guarding its goals and plans from harm while it is unconscious, and picking up the threads when it becomes conscious again.
Now consider (destructive) teleportation. Why allow your body to be destructively scanned and reconstructed? Well, if you (your body) has the same degree of trust in the equipment that you (your body) has in the process of going to sleep at night, then the two are exactly parallel. The new body will become conscious, and pick up its threads of memory, personality, skills, goals, probably with a different soul, but bodies are used to that.
Now consider (destructive) transmutation. If the reconstructed body used silicon instead of carbon, is anything different?
As far as I can tell, Tegmark’s mathematical universe is “true” but hard to think with. You overwhelm yourself with images of bigness and variety and parallel, nearly-identical copies, but it has to add up to normality at the end. If you’re trying to do something (think about something) difficult, maintaining the imagery can be a drain on your attention.
I invite you to evaluate the procedural integrity of your reasoning.
Do you really expect that “a certain physical pattern of energy flow” causes consciousness? Why? Can you even begin to articulate what that pattern might consist of? What is it about a computer model that fails to adequately account for the physical energy flow? Didn’t you stipulate earlier that our model will “stick to an atom-by-atom (or complex amplitudes) approach”? Is there a difference between complex amplitudes and patterns of energy flow?
The third question seems to call for advances in neuropsychology. And if that’s correct, the first two questions probably face a similar need.
We know redness or sweetness when we see it, but we are in no position to define the processes that regularly explain these experiences. If we can find a property of neural processes that always leads to sweet sensations, and underlies all sweet sensations, then we’ll know what (or whether) patterns of energy flow matter for that sensation.
This is not intended as a criticism. But it sometimes seems to me the philosophical practice of choosing simple examples of a concept often strips away all hope of learning something about the concept from the example.
For example, if the above had been written “We know puceness or umaminess when we see it”, we might have some hope of connecting the concept of perceiving the qualia with the concept of learning the name of the qualia.
I guess my concern is that you have not indicated your reason(s) for promoting the hypothesis that “energy flow” causes consciousness.
As for “advances in neuropsychology,” what do you mean by “neuropsychology” besides a field that includes the study of consciousness? I certainly agree with you that further advances in the study of consciousness would be useful in identifying the causes of consciousness, but why would you assert that consciousness is caused by energy flow? If I understand you correctly, you are confident that the study of consciousness will lead researchers to conclude that it is caused by energy flow. Why?
I’m not confident that consciousness is caused by energy flow; I regard it as one of several (families of) wide-open and highly plausible hypotheses. I’m promoting the hypothesis only to the extent of calling it premature to rule it out.
I think that the clarification you want is pointless. When I write a difficult program (or section of a program), the first thing I do is write the algorithm out on paper in words, a flow chart, or whatever makes sense at the time. Then I play around with it to make sure it can handle any possible input so it will not crash. The reason I do it that way is so I only have to worry about problems with the steps I am following, not issues like syntax. But whether I draw the data flow on paper, visualize it in my mind, run it on my computer, etc., it is ALWAYS the same algorithm: steps which take an input, interpret it, and then find the result. Consciousness generated by millions of ants carrying stones around in an infinite desert, or from you writing on scraps of paper, may not look like much, but it is still consciousness.
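To make the point concrete, a minimal sketch (the toy “classify” step is invented for illustration): the same three steps, whether written on paper, traced in my head, or run by a machine.

```python
# The same algorithm, whatever medium carries out the steps.

def classify(reading):
    value = float(reading)                          # 1) take an input
    above_threshold = value > 0.5                   # 2) interpret it
    return "fire" if above_threshold else "rest"    # 3) find the result

print(classify("0.7"))   # "fire" -- the outcome is the same however the steps are carried out
```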
The simulation is “all the information you contain (and then possibly some more)” running through an algorithm at least as complex as your own.
The shadow is “a very very small subset of the information about you, none of which is even particularly relevant to consciousness”, and isn’t being run at all.
So, I would disagree fundamentally with your claim that they are in any way similar.
This may be a point of divergence. You’re thinking of the brain as active, and the Graphite-Paper-Person simulator as passive.
If you talk about “patterns of energy flow in the brain” the analogous statement for the GPP is “patterns of marking creation/destruction on the paper”
If you talk about “patterns of graphite on paper” the analogous statement for brains is “patterns of electrochemical potential in cells”
Upvoted for changing your mind
It’s laudable that simplicio changed their mind and said so in plain terms, but I would encourage you to upvote only those articles which are the sort you’d like to see more of on LW.
Yes, I would like to see more posts of people that are wrong and realize it. Additional bad posts on Less Wrong is good compared to an atmosphere where people are afraid to be publicly wrong.
I also sometimes treat karma relatively. This post should be somewhere between −1 and 0, and it was at −2 when I upvoted.
There isn’t?
(Not a rhetorical question)
There is. You can look at the blueprints of a CPU or GPU, and it is quite clear that everything needs to be connected in a certain way to work.
:D
First:
...but then...
A strong claim in the headline—but then a feeble one in the supporting argument.
This is John Searle’s Chinese room argument. Searle, John (1980), “Minds, Brains and Programs”, Behavioral and Brain Sciences 3 (3): 417–457. Get the original article and the many refutations of it appended after the article. I don’t remember if 457 is the last page of Searle’s article, or of the entire collection.
Upvoted and disagreed.
There is no particular difference between a simulation that uses true physics[tm] (or at least the necessary abstraction) and the ‘real’ action.
The person that you are is also not implied by the matter or the actual hardware you happen to run on, but by the informational links between all the things that are currently implemented in your brain (memory and connections, to simplify). But there is no difference between a solution in hardware or software. One is easier to maintain and change, but it can easily behave the same from the outside. An upload could still be running the same things your brain does, giving the same results. It just does not seem right because there is no physical body lying around.
I actually have problems with accepting the concept of Qualia in the first place. But why they should go away just because you replace parts of your hardware with identical items is beyond me. Simone is real—all 3 of them. And while you do not perceive the paper version as really interacting with you, she surely does experience herself. If you stop calculating her, you basically freeze her in time.
My solution would be to do away with the term ‘consciousness’ altogether, or to find a way to actually test for it. An AI that claims to be un-conscious would be a weird experience, and I have no clue how to make sure she actually is not conscious. The term gets used so much in various media that it really seems like a magic marker, like ‘emergence’ or ‘complexity’.
Maybe the impression of human consciousness arises because we have memories, we can think internally, and because on average there is a tendency to behave consistently. But I also have enough experience that makes me doubt the consciousness of specific people.
We all just operate on piles and piles of environmental data. And that you can do in wetware, electronics or on paper.
I have no doubt in my mind that some time in the future nervous systems will be simulated with all their functions, including consciousness. Perhaps not a particular person’s nervous system at a particular time, but a somewhat close approximation, a very similar nervous system with consciousness but no magic. However, I definitely doubt that it will be done on a general-purpose computer running algorithms. I doubt that step-by-step calculations will be the way that the simulation will be done. Here is why:
1. The brain is massively parallel and complex feedback loops are difficult to calculate (not impossible but difficult). The easiest way to simulate a massively parallel system is to build it in hardware rather than use stepwise software.
2. There are effects of fields to consider – not just electrical and magnetic but also chemical. Like massive numbers of feedback loops, the fields would be difficult to calculate as the same elements that are reacting to the fields are also creating them.
3. There are many critical timing effects in the system and these would have to be duplicated or scaled, another difficulty of calculation.
I believe that it is far simpler to take advantage of the architecture of the brain, which appears to have a lot of repetition of small units of a few thousand cells, and build good models of these in hardware, including correct timing and ways to simulate fields etc. Then take advantage of the larger (sort of functional) divisions of the brain to construct larger modules. It gets very complicated fairly quickly, but not as complicated as stepwise calculations. In essence it resembles the replacement of neurons one at a time with chips, but the chips would have to be more than just fancy logic components, as they would have to sense their surroundings as well as communicate with other neurons or chips. The boundaries need to be at the natural joints to make it simpler, but the idea is the same. I can imagine this actually being built and having consciousness. The computer running algorithms, or the person with a pencil, creating consciousness is a lot harder to imagine (and needs a lot of ‘in principles’, too many for me).
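To be concrete about what I mean by stepwise calculation of feedback, here is a toy sketch (entirely my own invention, with arbitrary coupling and step size, and nothing like what Blue Brain actually does) of stepping a single two-unit feedback loop; now imagine doing this for billions of units, plus fields and timing.

```python
# Stepwise ("Euler") simulation of one tiny feedback loop: two units,
# each driving the other, advanced in small time steps.

dt = 0.001               # step size in seconds (arbitrary)
x, y = 1.0, 0.0          # activities of two mutually coupled units

for step in range(2000):             # two seconds of simulated time
    dx = (-x + 0.9 * y) * dt         # each unit decays, driven by the other
    dy = (-y + 0.9 * x) * dt
    x, y = x + dx, y + dy            # update both together to keep the loop consistent

print(round(x, 4), round(y, 4))      # after 2 s the two units have pulled into near-agreement
```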
Hardware might ultimately be more efficient than software for this kind of thing, but software is a lot easier to tune and debug. There are reasons neural network chips never took off.
I can plausibly imagine the first upload running in software, orders of magnitude slower than real time, on enough computers to cover a city block and require a dedicated power station, cooperating with a team of engineers and neuroscientists by answering one test question per day; 10 years later, the debugged version implemented in hardware, requiring only a roomful of equipment per upload, and running at a substantial fraction of real-time speed; and another 10 years later, new process technology specifically designed for that hardware, allowing a mass-market version that runs at full real-time speed, fits in desktop form factor and plugs into a standard power socket.
You may be right but my imagination has a problem with it. If there is a way to do analog computing using software in a non step-by-step procedure, then I could imagine a software solution. It is the algorithm that is my problem and not the physical form of the ‘ware’.
I may not be understanding your objection in that case. Are you saying that there’s no way software, being a digital phenomenon, can simulate continuous analog phenomena? If so, I will point to the many cases where we successfully use software to simulate analog phenomena to sufficient precision. If not, can you perhaps rephrase?
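One minimal example (toy component values chosen for illustration): an RC low-pass filter is a thoroughly analog system, yet a few lines of stepwise arithmetic reproduce its behaviour to a fraction of a percent.

```python
import math

# Digital, step-by-step simulation of an analog circuit: a discharging
# RC filter, compared against its exact analytic solution.

R, C = 1000.0, 1e-6        # ohms and farads (arbitrary example values)
tau = R * C                # time constant of the circuit
dt = tau / 1000            # step size much smaller than the time constant
v = 1.0                    # capacitor starts at 1 volt

for _ in range(1000):      # simulate one time constant's worth of time
    v += (-v / tau) * dt   # dv/dt = -v/(RC), advanced step by step

print(v, math.exp(-1.0))   # numerical vs exact: they agree to about 0.05%
```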
I may not be expressing myself well here. I am trying to express what I can and cannot imagine—I do not presume to say that because I cannot imagine something, it is impossible. In fact I believe that it would be possible to simulate the nervous system with digital algorithms in principle, just extremely difficult in practice. So difficult, I think, that I cannot imagine it happening. It is not the ‘software’ or the ‘digital’ that is my block, it is the ‘algorithm’, the stepwise processes that I am having trouble with. How do you imagine the enormous amount and varied nature of feedback in the brain can be simulated by step-by-step logic? I take it that you can imagine how it could be done—so how?
With a lot of steps.
I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.
Ah, I was about to reply with a proof of concept explanation in terms of molecular modeling (which of course would be hopelessly intractable in practice but should illustrate the principle), until I saw you say ‘only possible in principle’; are you saying then that your objection is that you think even the most efficient software-based techniques would take, say, a million years of supercomputer time to run a few seconds of consciousness?
Well, maybe not that long, but a long, long time to do the ‘lot of little steps’. It does not seem the appropriate tool to me. After all, the much slower component parts of a brain do a sort of unit of perception in about a third of a second. I believe that is because it is not done step-wise but something like this: the enormous number of overlapping feedback loops can only stabilize in a sort of ‘best fit scenario’, and it takes very little time for the whole network to home in on the final perception. (Vaguely that sort of thing.)
Right, fair enough, then it’s a quantitative question on which our intuitions differ, and the answer depends both on a lot of specific facts about the brain, and on what sort of progress Moore’s Law ends up making over the next few decades. Let’s give Blue Brain another decade or two and see what things look like then.
Personally I have great hopes for Blue Brain: if it figures out how a single cortex unit works (which they seem to be on the way to), if they can then figure out how to convert that into a chip and put oodles of those chips in the right environment of inputs and interactions with other parts of the brain (the thalamus and basal ganglia especially), and then.....
A lot of work but it has a good chance as long as it avoids the step-by-step algorithm trap.
I cannot agree at all; simSimone is plainly conscious if meatSimone is conscious; there is no magic pattern of electrical impulses in physical space which the universe will “notice” and imbue with consciousness.