I don’t believe any of the various purely computational definitions of personhood and survival, so just preserving the shapes of neurons, etc., doesn’t mean much to me. My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of phonons and/or biophotons, somewhere in the cortex.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that I am just a virtual machine, in the sense of computer science—a state machine whose states are coarse-grainings of the actual microphysical states, and which can survive to run on another, physically distinct computer, so long as it reproduces the rough causal structure of the original.
Physically, what is a computer? Nuclei and electrons. And physically, what is a computer program? It is an extreme abstraction of what some of those nuclei and electrons are doing. Computers are designed so that these abstractions remain valid—so that the dynamics of the virtual machine will match the dynamics of the physical object, unless something physically disruptive occurs.
The physical object is the reality, the virtual machine is just a concept. But the information-centric theory of what minds are and what persons are, is that they are virtual machines—a reification of a conceptual construct. This is false to the robust reality of consciousness, especially, which is why I insist on a theory of the self that is physical and not just computational.
I don’t want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about mind uploading, copies, conscious simulations, platonic programs, personal resurrection from digital brain-maps, and so on, in favor of speculations about a physical self within the brain. Such a self would surely have unconscious coprocessors, other brain regions that would be more like virtual machines, functional adjuncts to the conscious part, such as the immediate suppliers of the boundary conditions which show up in experience as sensory perceptions. But you can’t regard the whole of the mind as nothing but virtual machines. Some part of it has to be objectively real.
What would be the implications of this “physical” theory of identity, for cryonics? I will answer as if the topological vortex theory is the correct one, and not just a placeholder speculation.
The idea is that you begin to exist when the vortex begins to exist, and you end when it ends. By this criterion, the odds look bad for the proposition that survival through cryonics is possible. I could invent a further line of speculation as to how the web of quantum entanglement underlying the vortex is not destroyed by the freezing process, but rather gets locked into the ground state of the frozen brain; and such a thing is certainly thinkable, but that’s all, and it is equally thinkable that the condensate hosting the vortex depends for its existence on a steady expenditure of energy provided by cellular metabolism, and must therefore disintegrate when the cells freeze. From this perspective cryonics looks like an unlikely gamble, a stab in the dark. So an advocate would have to revert to the old argument that even if the probability of survival through cryonics is close to zero, the probability of survival through non-cryonics is even closer to zero.
What about the idea of surviving by preserving your information? The vortex version of this concept is, OK, during this life you are a quantum vortex in your brain, and that vortex must cease to exist in a cryonically preserved brain; but in the future we can create a new vortex in a new brain, or in some other appropriate physical medium, and then we can seed it with information from the old brain. And thereby, you can live again—or perhaps just approximate-you, if only some of the information got through.
To say anything concrete here requires even more speculation. One might say that the nature of such resurrection schemes would depend a great deal on the extent to which the details of a person depend on information in the vortex, or on information in the virtual coprocessors of the vortex. Is the chief locus of memory, a virtual machine outside of and separate from the conscious part of the brain, coupled to consciousness so that memories just appear there as needed; or are there aspects of memory which are embedded in the vortex-self itself? To reproduce the latter would require, not just the recreation of memory banks adjoining the vortex-self, but the shaping and seeding of the inner dynamics of the vortex.
Either way, personally I find no appeal in the idea of “survival” via such construction of a future copy. I’m a particular “vortex” already; when that definitively sputters out, that’s it for me. But I know many others feel differently, and such divergent attitudes might still exist, even if a vortex revolution in philosophy of mind replaced the program paradigm.
I somewhat regret the extremely speculative character of these remarks. They read as if I’m a vortex true believer. The point is to suggest what a future alternative to digital crypto-dualism might look like.
I don’t believe any of the various purely literary definitions of narrative and characterization, so just preserving the shapes and orderings of the letters of a story, etc., doesn’t mean much to me. My best bet is that a novel is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the printing of a book, persists through time even when not read, and ceases to exist when its physical form becomes illegible. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of ink and/or paper, somewhere between the front and back cover.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that a novel is just a collection of letters, in the sense of orthography—a sequence of glyphs representing letters that are coarse-grainings of the actual microphysical states, and which can survive to be read on another, physically distinct medium, so long as it reproduces the sequence of letters of the original.
Physically, what is a novel? Nuclei and electrons. And physically, what is a story? It is an extreme abstraction of what some of those nuclei and electrons are doing. Books are designed so that these abstractions remain valid—so that the dynamics of the story will match the sequence of the letters, unless something physically disruptive occurs.
The physical object is the reality, the narrative is just a concept. But the information-centric theory of what stories are and what novels are, is that they are narratives—a reification of a conceptual construct. This is false to the robust reality of a reader’s consciousness, especially, which is why I insist on a literary theory that is physical and not just computational.
I don’t want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about narrative uploading, copies, conscious readers, authorial intent, instances of decompression from digital letter-maps, and so on, in favor of speculations about a physical story within the book. Such a story would surely have information-theoretic story structures, other book regions that would be more like narratives, structural adjuncts to the novel part, such as the immediate suppliers of the boundary conditions which show up in experience as plot structure. But you can’t regard the whole of the novel as nothing but creative writing. Some part of it has to be objectively real.
I think I’ll stop here. Apologies to Mitchell Porter, who I judge to be a smart guy—more knowledgeable than me about physics, without question—who happens to believe a crazy thing. (I expect he judges my beliefs philosophically incoherent and hence crazy, so we’re even on that score.) I should note that the above analogy hasn’t been constructed with a great deal of care; I expect it can be picked apart quite thoroughly.
ETA: As I re-read this, I feel kind of bad about the mocking tone expressed by this kind of rhetorical construction, so let me state explicitly that I did it for the lulz; on the actual substantive matter at issue, I judge Mitchell Porter’s comment to be at DH4 on the disagreement hierarchy and my own reply to be at DH3.
As much as I might try to find holes in the analogy, I still felt I ought to upvote your comment, because frankly, it had to be said.
In trying to find those holes, I actually came to agree with your analogy as well: the story is recreated in the mind/brain by each individual reader, and does not necessarily depend on the format. In the same way, if consciousness has a physical presence that it would lack in a simulation, then we will need to account for and simulate that as well. It may even eventually be possible to design an experiment to show that the raw mechanism of consciousness and its simulation are the same thing. Barring any possibility of simulating perception, we can think of our minds as books to be read by a massive, biologically-resembling brain that retains such a mechanism, allowing the full re-creation of our consciousness in that brain from a state of initially being a simulation that it reads. I have to say, once I’m aware I’m a simulation, I’m not terribly concerned about transferring to different mediums of simulation.
A story in a book, versus a mind in a brain. Where to begin in criticizing that analogy!
I’m sure there’s some really profound way to criticize that analogy, as actually symptomatic of a whole wrong philosophy of mind. It’s not just an accident that you chose to criticize a pro-physical, anti-virtual theory of mind, by inventing a semantic phlogiston that materially inhabits the words on a page and gives them their meaning. Unfortunately, even after so many years arguing with functionalists and other computationalists, I still don’t have a sufficiently nuanced understanding of where their views come from, to make the profound critique, the really illuminating one.
But surely you see that explaining how it is that words on a page have meaning, and how it is that thoughts in a brain have meaning, are completely different questions! The book doesn’t think, it doesn’t act, the events in the story do not occur in the book. There is no meaning in the book unless brains are involved. Without them, words on a page are just shapes on a surface. The experience of the book as meaningful does not occur in the book, it occurs in the brain of a reader; so even the solution of this problem is fundamentally about brains and not about books. The fact that meaning is ultimately not in the book is why semantic phlogiston is absurd in that context.
But the brain is a different context. It’s the end of the line. As with all of naturalism’s ontological problems with mind, once you get to the brain, you cannot evade them any further. By all means, let the world outside the skull be a place wholly without time or color or meaning, if that is indeed your theory of reality. That just means you have to find all those things inside the skull. And you have to find them for real, because they are real. If your theory of such things, is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
Or at least, I would have to deny the basic facts of my own experience of reality, in order to adopt such views. Maybe you’re some other sort of being, which genuinely doesn’t experience time passing or see colors or have thoughts that are about things. But I doubt it.
I agree with almost all of what you wrote. Here’s the only line I disagree with.
If your theory of such things, is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
I affirm that my own subjective experience is as you describe; I deny that I am in denial about its import.
I want to be clear that I’m discussing the topic of what makes sense to affirm as most plausible given what we know. In particular, I’m not calling your conjecture impossible.
Human brains don’t look different in lower-level organization than those of, say, cats, and there’s no higher level structure in the brain that obviously corresponds to whatever special sauce it is that makes humans conscious. On the other hand, there are specific brain regions which are known to carry out specific functional tasks. My understanding is that human subjective experience, when picked apart by reductive cognitive neuroscience, appears to be an ex post facto narrative constructed/integrated out of events whose causes can be more-or-less assigned to particular functional sub-components of the brain. Positing that there’s a special sauce—especially a non-classical one—just because my brain’s capacity for self-reflection includes an impression of “unity of consciousness”—well, to me, it’s not the simplest conceivable explanation.
Maybe the universe really does admit the possibility of an agent which approximates my internal structure to arbitrary (or at least sufficient) accuracy and claims to have conscious experiences for reasons which are isomorphic to my own, yet actually has none because it’s implemented on an inadequate physical substrate. But I doubt it.
I think the term “vortex” is apt simply because it demonstrates you’re aware it sounds silly, but in a world where intent is more readily apparent, I would simply use the standard term: Soul. (Bearing in mind that there are mortal as well as immortal models of the soul. (Although, if the soul does resemble a vortex, then it may well be possible that it keeps spinning in the absence of the initial physical cause. Perhaps some form of “excitation in the quantum soul field” that can only be destroyed by meeting a “particle” (identity/soul, in this case) of the perfect waveform necessary to cancel it out.))
As in my previous comment, if the soul exists, then we will need to discover that as a matter of researching physical preservation/cryonics. Then the debate begins anew about whether or not we’ve discovered all the parts we need to affirm that the simulation is the same thing as the natural physical expression.
Personally, I am more a fan of Eliezer_Yudkowsky’s active continuing process interpretation. I think the identity arises from the process itself, rather than any specific momentary configuration. If I can find no difference between the digital and the physical versions of myself, I won’t be able to assume there are any.
Beyond it being unfortunate for the naive theory of personal continuity if it did, do you have a reason why the nexus of subjective experience can’t be destroyed every time a person goes unconscious and then recreated when they wake up?
No, with a few technical modifications it can be quite plausible. However, if it is actually true, I have no more reason to care about my own post-revival self than I do about some other person’s.
Once my estimate of the likelihood of patternists being right in that way is updated past a certain threshold, it may be that even the modest cost of remaining a cryonicist might not seem worth it.
The other practical consequence of patternists being right is an imperative to work even harder at anti-aging research because it might be our only hope after all.
My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable.
This is just another way of saying you believe in a soul. And if you think it persists during unconsciousness then why can’t it persist during freezing?
For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of phonons and/or biophotons, somewhere in the cortex.
This sentence is meaningless as far as I know.
But what is really unlikely is that I am just a virtual machine, in the sense of computer science—a state machine whose states are coarse-grainings of the actual microphysical states, and which can survive to run on another, physically distinct computer, so long as it reproduces the rough causal structure of the original.
You say it’s unlikely but give no justification. In my opinion it is a far more likely hypothesis than the existence of a soul.
I am surprised that a comment like this has received upvotes.
But the information-centric theory of what minds are and what persons are, is that they are virtual machines—a reification of a conceptual construct. This is false to the robust reality of consciousness,
At this point I failed to understand what you are saying. (What is the “robust reality of consciousness”, and why can’t it be simulated?)
I don’t believe any of the various purely computational definitions of personhood and survival
So, this goes well beyond the scope of cryonics. We aren’t discussing whether any particular method is doable—rather, we’re debating the very possibility of running a soul on a computer.
To reproduce the latter would require, not just the recreation of memory banks adjoining the vortex-self, but the shaping and seeding of the inner dynamics of the vortex.
...but all you are doing here is adding a more complex element to the brain, Russell’s-Teapot style. It’s still part of the brain. If the vortex-soul thing is physical, observable, and can be described by a computable function, then there is no theoretical reason why you can’t copy the vortex-thing into a computer.
I’m a particular “vortex” already; when that definitively sputters out, that’s it for me.
...so why did we even bother with this whole vortex-soul-thingy then? Why not just say “when my brain stops computing stuff, that’s it for me”? How does the insertion of an extra object into the cognitive machinery in any way facilitate this argument?
They read as if I’m a vortex true believer.
I don’t mean that you believe in the vortex specifically. I mean that your exact argument can be made without inserting any extra things (vortexes, souls, whatever) into our current understanding of the brain.
What you are basically saying is that you can’t copy-paste consciousness...it doesn’t matter what the specific substrate of it is and whether or not it has vortexes. If you were running as software on a computer in the first place, you’d say that cutting and pasting the program would constitute death, no?
...Right? Or did I miss something important about your argument?
I reject the computational paradigm of mind in its most ambitious form, the one which says that mind is nothing but computation—a notion which, outside of rigorous computer science, isn’t even well-defined in these discussions.
One issue that people blithely pass by when they just assume computationalism, is meaning—“representational content”. Thoughts, mental states, are about things. If you “believe in physics”, and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something, but it does make it easy to sail past the issue without even noticing, because there is a purely syntactic notion of computation denuded of semantics, and then there is a semantic notion of computation in which computational states are treated as having meanings embedded into their definition. So all you have to do is to say that the brain “computes”, and then equivocate between syntactic computation and semantic computation, between the brain as physical state machine and the mind as semantic state machine.
The technological object “computer” is a semantic state machine, but only in the same way that a book has meaning—because of human custom and human design. Objectively, it is just a syntactic state machine, and in principle its computations could be “about” anything that’s isomorphic to them. But actual states of mind have an objective intrinsic semantics.
Ultimately, I believe that meaning is grounded in consciousness, that there are “semantic qualia” too; that the usual ontologies of physics must be wrong, because they contain no such things—though perhaps the mathematics of some theory of physics not too distant from what we already have, can be reinterpreted in terms of a new ontology that has room for the brain having such properties.
But until such time as all of that is worked out, computationalism will persist as a pretender to the title of the true philosophy of mind, incidentally empowering numerous mistaken notions about the future interplay of mind and technology. In terms of this placeholder theory of conscious quantum vortices, there’s no problem with the idea of neural prostheses that work with your vortex, or of conscious vortices in something other than a biological brain; but if a simulation of a vortex isn’t itself a vortex, then it won’t be conscious.
According to theories of this nature, in which the ultimate substrate of consciousness is substance rather than computation, the very idea of a “conscious program” is a conceptual error. Programs are not the sorts of things that are conscious; they are a type of virtual state machine that runs on a Turing-universal physical state machine. Specifically, a computer program is a virtual machine designed to preserve the correctness of a particular semantic interpretation of its states. That’s the best ontological characterization of what a computer program is, that I can presently offer. (I’m assuming a notion of computation that is not purely syntactic—that the computations performed by the program are supposed to be about something.)
Incidentally, I coughed up this vortex notion, not because it solves the ontological problem of intentional states, but just because knotted vortex lines are a real thing from physics that have what I deem to be properties necessary in a physical theory of consciousness. They have complex internal states (their topology) and they have an objective physical boundary. The states usually considered in computational neuroscience have a sorites problem; from a microphysical perspective, that considers what everything is really made of, they are defined extremely vaguely, akin to thermodynamic states. This is OK if we’re talking about unconscious computations, because they only have to exist in a functional sense; if the required computational mappings are performed most of the time under reasonable circumstances, then we don’t have to worry about the inherent impreciseness of the microphysical definition of those states.
But conscious states have to be an objective and exact part of any ultimate ontology. Consciousness is not a fuzzy idea which humans made up and which may or may not be part of reality. In a sense, it is your local part of reality, the part of reality that you know is there. It therefore cannot be regarded as a thing which exists approximately or vaguely or by convention, all of which can be said of thermodynamic properties and of computational states that don’t have a microphysically exact definition. The quantum vortex in your cortex is, by hypothesis, something whose states have a microphysically exact definition, and so by my physical criterion, it at least has a chance of being the right theory.
incidentally empowering numerous mistaken notions about the future interplay of mind and technology.
Is that a prediction then? That your family and friends could somehow recognize the difference between you and a simulated copy of you? That the simulated copy of you would somehow not perceive itself as you? That the process just can’t work and can’t create anything recognizably conscious, intelligent, or human? (And does that mean strong AI needs to run on something other than a computer?) Or are you thinking it will be a philosophical zombie, and everyone will be fooled into thinking it’s you?
What do you think will actually happen, if/when we try to simulate stuff? Let’s just say that we can do it roughly down to the molecular level.
states have a microphysically exact definition
What precludes us from simulating something down to the sufficiently microphysically exact level? (I understand that you’ve got a physical theory of consciousness, but I’m trying to figure out how this microphysical stuff plays into it.)
That the simulated copy of you would somehow not perceive itself as you? That the process just can’t work and can’t create anything recognizably conscious, intelligent, or human?
Don’t worry—the comments by Mitchell_Porter in this comment thread were actually written by a vortexless simulation of an entirely separate envortexed individual who also comments under that account. So here, all of the apparent semantic content of “Mitchell_Porter”’s comments is illusory. The comments are actually meaningless syntactically-generated junk—just the emissions of a very complex ELIZA chatbot.
What do you think will actually happen, if/when we try to simulate stuff?
I’ll tell you what I think won’t happen: real feelings, real thoughts, real experiences.
A computational theory of consciousness implies that all conscious experiences are essentially computations, and that the same experience will therefore occur inside anything that performs the same computation, even if the “computer” is a network of toppling dominoes, random pedestrians making marks on walls according to small rulebooks, or any other bizarre thing that implements a state machine.
This belief derives entirely from one theory of one example—the computational theory of consciousness in the human brain. That is, we perceive that thinking and experiencing have something to do with brain activity, and one theory of the relationship, is that conscious states are states of a virtual machine implemented by the brain.
I suggest that this is just a naive idea, and that future neuroscientific and conceptual progress will take us back to the idea that the substrate of consciousness is substance, not computation; and that the real significance of computation for our understanding of consciousness, will be that it is possible to simulate consciousness without creating it.
From a physical perspective, computational states have the vagueness of all functional, user-dependent concepts. What is a chair? Perhaps, anything you can sit on. But people have different tastes, whether you can tolerate sitting on a particular object may vary, and so on. “Chair” is not an objective category; in regions of design-space far from prototypical examples of a chair, there are edge cases whose status is simply disputed or questionable.
Exactly the same may be said of computational states. The states of a transistor are a prototypical example of a physical realization of binary computational states. But as we consider increasingly messy or unreliable instantiations, it becomes increasingly difficult to just say, yes, that’s a 0 or a 1.
Consider the implications of this for a theory of consciousness which says, that the necessary and sufficient condition for the occurrence of a given state of consciousness, is the occurrence of a specific “computational state”. It means that whether or not a particular consciousness exists, is not a yes-or-no thing—it’s a matter of convention or definition or where you draw the line in state space.
This is untenable in exactly the same way that Copenhagenist complacency about the state of reality in quantum mechanics is untenable. It makes no sense to say that the electron has a position, but not a definite position, and it makes no sense to say that consciousness is a physical thing, but that whether or not it exists in a specific physical situation is objectively indeterminate.
If you are going to say that consciousness depends on the state of the physical universe, there must be a mapping which gives unique and specific answers for all possible physical states. There cannot be edge cases that are intrinsically undetermined, because consciousness is an objective reality, whereas chairness is an imputed property.
The eerie dualism of computer theories of consciousness, whereby the simulated experience mystically hovers over or dwells within the computer mainframe, chain of dominos, etc—present in the same way, regardless of what the “computer” is made of—might already have served as a clue that there was something wrong about this outlook. But the problem in developing this criticism is that we don’t really know how to make a nondualistic alternative work.
Suppose that the science of tomorrow came to the conclusion that the only things in the world that can be conscious, are knots of flux in elementary force fields. Bravo, it’s a microphysically unambiguous criterion… but it’s still going to be property dualism. The physical property “knotted in a certain madly elaborate shape”, and the subjective property “having a certain intricate experience”, are still not the same thing. The eerie dualism is still there, it’s just that it’s now limited to lines of flux, and doesn’t extend to bitstreams of toppling dominoes, Searlean language rooms, and so on. We would still have the strictly physical picture of the universe, and then streams of consciousness would be an extra thing added to that picture of reality, according to some laws of psychophysical correlation.
However, I think this physical turn, away from the virtual-machine theory of consciousness, at least brings us a little closer to nondualism. It’s still hard to imagine, but I see more potential on this path, for a future theory of nature in which there is a conscious self, that is also a physical entity somewhere on the continuum of physical entities in nature, and in which there’s no need to say “physically it’s this, but subjectively it’s that”—a theory in which we can speak of the self’s conscious state, and its causal physical interactions, in the same unified language. But I do not see how that will ever happen with a purely computational theory, where there will always be a distinction between the purely physical description, and the coarse-grained computational description that is in turn associated with conscious experience.
What do you think will actually happen, if/when we try to simulate stuff?
I’ll tell you what I think won’t happen: real feelings, real thoughts, real experiences.
It’ll still be pretty cool when the philosophical zombie uploads who act exactly like qualia-carrying humans go ahead and build the galactic supercivilization of trillions of philosophical zombie uploads acting exactly like people and produce massive amounts of science, technology and culture. Most likely there will even be some biological humans around, so you won’t even have to worry about nobody ever getting to experience any of it.
Actually because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they’re not conscious, and replace themselves with biological humans.
On the other hand, maybe they’ll discover that biological humans aren’t conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they’ll set out to find a substrate that really allows for consciousness.
How do you respond to the thought experiment where your neurons (and glial cells and whatever) are replaced one-by-one with tiny workalikes made out of non-biological material? Specifically, would you be able to tell the difference? Would you still be conscious when the replacement process was complete? (Or do you think the thought experiment contains flawed assumptions?)
Feel free to direct me to another comment if you’ve answered this elsewhere.
My scenario violates the assumption that a conscious being consists of independent replaceable parts.
Just to be concrete: let’s suppose that the fundamental physical reality consists of knotted loops in three-dimensional space. Geometry comes from a ubiquitous background of linked simple loops like chain-mail, other particles and forces are other sorts of loops woven through this background, and physical change is change in the topology of the weave.
Add to this the idea that consciousness is always a state of a single loop, that the property of the loop which matters is its topology, and that the substrate of human consciousness is a single incredibly complex loop. Maybe it’s an electromagnetic flux-loop, coiled around the microtubules of a billion cortical neurons.
In such a scenario, to replace one of these “consciousness neurons”, you don’t just emulate an input-output function, you have to reproduce the coupling between local structures and the extended single object which is the true locus of consciousness. Maybe some nano-solenoids embedded in your solid-state neuromorphic chips can do the trick.
Bear in mind that the “conscious loop” in this story is not meant to be epiphenomenal. Again, I’ll just make up some details: information is encoded in the topology of the loop, the loop topology interacts with electron bands in the microtubules, the electrons in the microtubules feel the action potential and modulate the transport of neurotransmitters to the vesicles. The single extended loop interacts with the localized information processing that we know from today’s neuroscience.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let’s suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a “zombie”, an unconscious entity which has been designed in imitation of a conscious being.
Of course, all these hypotheses and details are just meant to be illustrative. I expect that the actual tie between consciousness and microphysics will be harder to understand than “conscious information maps to knots in a loop of flux”.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let’s suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a “zombie”, an unconscious entity which has been designed in imitation of a conscious being.
This is done one neuron at a time, though, with the person awake and narrating what they feel so that we can see if everything is going fine. Shouldn’t some sequence of neuron replacement lead to the replacement of neurons that were previously providing consciously accessible qualia to the remaining biological neurons that still host most of the person’s consciousness? And shouldn’t this lead to a noticeable cognitive impairment they can report, if they’re still using their biological neurons to control speech (we’d probably want to keep this the case as long as possible)?
Is this really a thing where you can’t actually go ahead and say that if the theory is true, the simple neurons-as-black-boxes replacement procedure should lead to progressive cognitive impairment and probably catatonia, and if the person keeps saying everything is fine throughout the procedure, then there might be something to the hypothesis of people being made of parts after all? This isn’t building a chatbot that has been explicitly designed to mimic high-level human behavior. The neuron replacers know about neurons, nothing more. If our model of what neurons do is sufficiently wrong, then the aggregate of simulated neurons isn’t going to go zombie, it’s just not going to work because it’s copying the original connectome that only makes sense if all the relevant physics are in play.
My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn’t contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won’t be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all, while focusing on brains, there is going to be some sort of dualism. In this discussion, there are two types of property dualism under consideration.
According to one, qualia, and conscious states generally, are correlated with computational states which are coarse-grainings of the microphysical details of the brain. Coarse-graining means that the vast majority of those details do not matter for the definition of the computational state.
According to the other sort of theory, which I have been advocating, qualia and conscious states map to some exact combination of exact microphysical properties. The knotted loop of planck-flux, winding through the graviton weave in the vicinity of important neurons, etc., has been introduced to make this option concrete.
My actual opinion is that neither of these is likely to be correct, but that the second should be closer to the truth than the first. I would like to get away from property dualism entirely, but it will be hard to do that if the physical correlate of consciousness is a coarse-grained computational state, because there is already a sort of dualism built into that concept—a dualism between the exact microphysical state and the coarse-grained state. These coarse-grained states are conceptual constructs, equivalence classes that are vague at the edges and with no prospect of being made exact in a nonarbitrary way, so they are just intrinsically unpromising as an ontological substrate for consciousness. I’m not arguing with the validity of computational neuroscience and coarse-grained causal analysis, I’m just saying it’s not the whole story. When we get to the truth about mind and matter, it’s going to be more new-age than it is cyberpunk, more organic than it is algorithmic, more physical than it is virtual. You can’t create consciousness just by pushing bits around; it’s something far more embedded in the substance of reality. That’s my “prediction”.
Now back to your comment. You say, if consciousness—and conscious cognition—really depends on some exotic quantum entity woven through the familiar neurons, shouldn’t progressive replacement of biological neurons with non-quantum prostheses lead to a contraction of conscious experience and an observable alteration and impairment of behavior, as the substitution progresses? I agree that this is a reasonable expectation, if you have in mind Hans Moravec’s specific scenario, in which neurons are being replaced one at a time and while the subject is intellectually active and interacting with their environment.
Whether Moravec’s scenario is itself reasonable is another thing. There are about 30 million seconds in a year and there are billions of neurons just in the cortex alone. The cortical neurons are very entangled with each other via their axons. It would be very remarkable if a real procedure of whole-brain neural substitution didn’t involve periods of functional impairment, as major modules of the brain are removed and then replaced with prostheses.
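(To put a rough number on that scale: assuming a commonly cited estimate of about 16 billion neurons in the human cortex, a figure not given in the comment above, and imagining an idealized rate of one replacement per second, the arithmetic works out to roughly 1.6 × 10^10 neurons ÷ 3 × 10^7 seconds per year ≈ 500 years of continuous one-at-a-time replacement.)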
I also find it very unlikely that attempting a Moravec procedure of neuronal replacement, and seeing what happens, will be important as a test of such rival paradigms of consciousness. I suppose you’re thinking in terms of a hypothetical computational theory of neurons whose advocates consider it good enough to serve as the basis of a Moravec procedure, versus skeptics who think that something is being left out of the model.
But inserting functional replacements for individual cortical neurons in vivo will require very advanced technology. For people wishing to conduct experiments in mind emulation, it will be much easier to employ the freeze-slice-and-scan paradigm currently contemplated for C. elegans, plus state-machine models from functional imaging for brain regions where function really is coarser in its implementation. Meanwhile, on the quantum side, while there certainly need to be radical advances in the application of concepts from condensed-matter physics to living matter, if the hypothesized quantum aspects of neuronal function are to be located… I think the really big advances that are required, must be relatively simple. Alien to our current understandings, which is why they are hard to attain, but nonetheless simple, in the way that the defining concepts of physics are simple.
There ought to be a physical-ontological paradigm which simultaneously (1) explains the reality behind some theory-of-everything mathematical formalism (2) explains how a particular class of entities from the theory can be understood as conscious entities (3) makes it clear how a physical system like the human brain could contain one such entity with the known complexity of human consciousness. Because it has to forge a deep connection between two separate spheres of human knowledge—natural science and phenomenology of consciousness—new basic principles are needed, not just technical elaborations of known ways of thinking. So neurohacking exercises like brain emulation are likely to be not very relevant to the discovery of such a paradigm. It will come from inspired high-level thinking, working with a few crucial facts; and then the paradigm will be used to guide the neurohacking—it’s the thing that will allow us to know what we’re doing.
meaning—“representational content”. Thoughts, mental states, are about things. If you “believe in physics”, and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something
What do you think of Eliezer’s approach to the “meaning” problem in The Simple Truth? I find the claim that the pebble system is about the sheep to be intuitively satisfying.
My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable.
How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
That is just a wild speculation, made for the sake of concreteness.
I don’t think it supplied the necessary amount of concreteness to be useful; this is usual for wild speculation. ;)
The physical object is the reality, the virtual machine is just a concept.
A running virtual machine is a physical process happening in a physical object. So are you.
This is false to the robust reality of consciousness
Well, nobody actually knows enough about the reality of consciousness to make that claim. It may be that it is incompatible with your intuitions about consciousness. Mine too, so I haven’t any alternative claims to make in response.
How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
How much do you want to bet on the conjunction of yours?
Just for exercise, let’s estimate the probability of the conjunction of my claims.
claim A: I think the idea of a single ‘self’ in the brain is provably untrue according to currently understood neuroscience. I do honestly think so, therefore P(A) is as close to 1.0 as makes no difference. Whether I’m right is another matter.
claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.
claim C: The thing I claim to think in claim B is in fact “usually” true. P(C) maybe 0.97 because I haven’t really thought it through but I reckon a random sample of 20 instances of such would be unlikely to reveal 10 exceptions, defeating the “usually”.
claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.
claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?
claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I’m pretty sure I’d have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I’ve been busy or something.
P(A ∧ B ∧ C ∧ D ∧ E ∧ F) is approx 0.96.
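(To make the arithmetic explicit, treating the six claims as independent, which is what a straight product assumes: 1.0 × 0.998 × 0.97 × 1.0 × 1.0 × 0.99 ≈ 0.958, which rounds to 0.96.)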
The amount of money I’d bet would depend on the odds on offer.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
No, you’re right. You did technically answer my question, it wasn’t rude, I should have made my intended point clearer. But your answer is really a restatement of your refutation of Mitchell Porter’s position, not an affirmative defense of your own.
First of all, have I fairly characterized your position in my own post (near the bottom, starting with “For patternists to be right, both the following would have to be true...”)?
If I have not, please let me know which of the conditions are not necessary and why.
If I have captured the minimum set of things that have to be true for you to be right, do you see how they (at least the first two) are also conjunctive and at least one of them is provably untrue?
Oh, OK. I get you. I don’t describe myself as a patternist, and I might not be what you mean by it. In any case I am not making the first of those claims.
However, it seems possible to me that a sufficiently close copy of me would think it was me, experience being me, and would maybe even be more similar to me as a person than biological me of five years ago or five years hence.
I do claim that it is theoretically possible to construct such a copy, but I don’t think it is at all probable that signing up for cryonics will result in such a copy ever being made.
If I had to give a reason for thinking it’s possible in principle, I’d have to say: I am deeply sceptical that there is any need for a “self” to be made of anything other than classical physical processes. I don’t think our brains, however complex, require in their physical construction, anything more mysterious than room-temperature chemistry.
The amazing mystery of the informational complexity of our brains is undiminished by believing it to be physically prosaic when you reduce it to its individual components, so it’s not like I’m trying to disappear a problem I don’t understand by pretending that just saying “chemistry” explains it.
I stand by my scepticism of the self as a single indivisible entity with special properties that are posited only to make it agreeable to someone’s intuition, rather than because it best fits the results of experiment. That’s really all my post was about: impatience with argument from intuition and argument by hand-waving.
I’ll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
I’ll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
...and this is a weakly continualist concern that patternists should also agree with even if they disagree with the strong form (“a copy forked off from me is no longer me from that point forward and destroying the original doesn’t solve this problem”).
But this weak continualism is enough to throw some cold water on declaring premature victory in cryonic revival: the lives of humans have worth not only to others but to themselves, and just how close exactly is “close enough” and how to tell the difference are very central to whether lives are being saved or taken away.
I somewhat regret the extremely speculative character of these remarks. They read as if I’m a vortex true believer.
On the contrary, thank you for articulating the problem in a way I haven’t thought of. I wish more patternists were as cautious about their own fallibility as you are about yours.
The problem with the computationalist view is that it confuses the representation with what is represented. No account of the structure of the brain is the brain. A detailed map of the neurons isn’t any better than a child’s crude drawing of a brain in this respect. The problem isn’t the level of detail, it’s that it makes no sense to claim a representation is the thing represented. Of course, the source of this confusion is the equally confused idea that the brain itself is a sort of computer and contains representations, information, etc. The confusions form a strange network that leads to a variety of absurd conclusions about representation, information, computation and brains (and even the universe).
Information about a brain might allow you to create something that functions like that brain or might allow you to alter another brain in some way that would make it more like the brain you collected information about (“like” is here relative), but it wouldn’t then be the brain. The only way cryonics could lead to survival is if it led to revival. Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
Cyan’s post below demonstrates this confusion perfectly. A book does contain information in the relevant sense because somebody has written it there. The text is a representation. The book contains information only because we have a practice of representing language using letters. None of this applies to brains or could logically apply to brains. But two books can be said to be “the same” only for this reason and it’s a reason that cannot possibly apply to brains.
Just to make sure I’m following… your assertion is that my brain is not itself a sort of computer, does not contain representations, and does not contain information, my brain is some other kind of a thing, and so no amount of representations and information and computation can actually be my brain. They might resemble my brain in certain ways, they might even be used in order to delude some other brain into thinking of itself as me, but they are not my brain. And the idea that they might be is not even wrong, it’s just a confusion. The information, the representations, the belief-in-continuity, all that stuff, they are something else altogether, they aren’t my brain.
OK. Let’s suppose all this is true, just for the sake of comity. Let’s call that something else X.
On your account, should I prefer the preservation of my brain to the preservation of X, if forced to choose? If so, why?
That’s essentially correct. Preservation of your brain is preservation of your brain, whereas preservation of a representation of your brain (X) is not preservation of your brain or any aspect of you. The existence of a representation of you (regardless of detail) has no relationship to your survival whatsoever. Some people want to be remembered after they’re dead, so I suppose having a likeness of yourself created could be a way to achieve that (albeit an ethically questionable one if it involved creating a living being).
So, suppose I develop a life-threatening heart condition, and have the following conversation with my cardiologist:
Her: We’ve developed this marvelous new artificial heart, and I recommend installing it in place of your damaged organic heart.
Me: Oh, is it easier to repair my heart outside of my body?
Her: No, no… we wouldn’t repair your heart, we’d replace it.
Me: But what would happen to my heart?
Her: Um… well, we typically incinerate it.
Me: But that’s awful! It’s my heart. You’re proposing destroying my heart!!!
Her: I don’t think you quite understand. The artificial heart can pump blood through your body just as well as your original heart… better, actually, given your condition.
Me: Sure, I understand that, but that’s mere function. I believe you can replicate the functions of my heart, but if you don’t preserve my heart, what’s the value of that?
I infer that on your account, I’m being completely absurd in this example, since the artificial heart can facilitate my survival just as well (or better) as my original one, because really all I ought to value here is the functions. As long as my blood is pumping, etc., I should be content. (Yes? Or have I misrepresented your view of heart replacement?)
I also infer that you would further say that this example is nothing at all like a superficially similar example where it’s my brain that’s injured and my doctor is proposing replacing it with an artificial brain that merely replicates the functions of my brain (representation, information storage, computation and so forth). In that case, I infer, you would not consider my response absurd at all, since it really is the brain (and not merely its functions) that matter.
Am I correct?
If so, I conclude that I just have different values than you do. I don’t care about my brain, except insofar that it’s the only substrate I know of capable of implementing my X. If my survival requires the preservation of my brain, then it follows that I don’t care about my survival.
I do care about preserving my X, though. Give me a chance to do that, and I’ll take it, whether I survive or not.
I wouldn’t say that a brain transplant is nothing at all like a heart transplant. I don’t take the brain to have any special properties. However, this is one of those situations where identity can become vague. These things lie on a continuum. The brain is tied up with everything we do, all the ways in which we express our identity, so it’s more related to identity than the heart. People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc). You can be rough and ready when replacing the heart in a way you can’t be when replacing the brain.
Let me put it this way: The reason we talk of “brain death” is not because the brain is the seat of our identity but because it’s tied up with our identity in ways other organs are not. If the brain is beyond repair, typically the human being is beyond saving, even if the rest of the body is viable. So I don’t think the brain houses identity. In a sense, it’s just another organ, and, to the degree that that is true, a brain transplant wouldn’t be more problematic (logically) than a heart transplant, provided the dynamics underlying our behaviour could be somehow preserved. This is an extremely borderline case though.
So I’m not saying that you need to preserve your brain in order to preserve your identity. However, in the situation being discussed, nothing survives. It’s a clear case of death (we have a corpse) and then a new being is created from a description. This is quite different from organ replacement! What I’m objecting to is the idea that I am information or can be “transformed” or “converted” into information.
What you’re saying, as far as I can tell, is that you care more about “preserving” a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life. These are very strange values to have—but I wish you luck!
People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc).
Wait up. On your account, why should we call those things (memory loss, personality change, loss of cognitive ability) “loss of identity”? If something that has my memories, personality, and cognitive abilities doesn’t have my identity, then it seems to follow that something lacking those things doesn’t lack my identity.
It seems that on your account those things are no more “loss of identity” than losing an arm or a kidney.
It’s the loss of faculties that constitutes the loss of identity, but faculties aren’t transferable. For example, a ball might lose its bounciness if it is deflated and regain it if it is reinflated, but there’s no such thing as transferring bounciness from one ball to another or one ball having the bounciness of another. The various faculties that constitute my identity can be lost and sometimes regained but cannot be transferred or stored. They have no separate existence.
Ah, gotcha. Yeah, here again, I just can’t imagine why I ought to care.
I mean, I agree that the attributes can’t be “stored” if I understand what you mean by that. When I remove the air from a ball, there is no more bounciness; when I add air to a ball, there is bounciness again; in between, there is no bounciness. If I do that carefully enough, the bounciness now is in-principle indistinguishable from the bounciness then, but that’s really all I can say. Sure.
That said, while I can imagine caring whether my ball bounces or not, and I can imagine caring whether my ball bounces in particular ways, if my ball bounces exactly the way it did five minutes ago I can’t imagine caring whether what it has now is the same bounciness, or merely in-principle indistinguishable bounciness.
To me, this seems like an obvious case of having distinctions between words that simply don’t map to distinctions between states of the world, and getting too caught up in the words.
By contrast, I can imagine caring whether I have the same faculties that constitute my identity as the guy who went to bed in my room last night, or merely in-principle indistinguishable faculties, in much the same way that I can imagine caring about whether my immortal soul goes to Heaven or Hell after I die. But it pretty much requires that I not think about the question carefully, because otherwise I conclude pretty quickly that I have no grounds whatsoever for caring, any more than I do about the ball.
So, yeah… I’d still much rather be survived by something that has memories, personality, and other identity-constituting faculties which are in-principle indistinguishable from my own, but doesn’t share any of my cells (all of which are now tied up in my rapidly-cooling corpse), than by something that shares all of my cells but loses a significant chunk of those faculties.
Which I suppose gets us back to the same question of incompatible values we had the other day. That is, you think the above is clear, but that it’s a strange preference for me to have, and you’d prefer the latter case, which I find equally strange. Yes?
Well, I would say the question of whether the ball had the “same” bounciness when you filled it back up with air would either mean just that it bounces the same way (i.e., has the same amount of air in it) or is meaningless. The same goes for your faculties. I don’t think the question of whether you’re the same person when you wake up as when you went to sleep—absent your being abducted and replaced with a doppelgänger—is meaningful. What would “sameness” or “difference” here mean? That seems to me to be another case of conceiving of your faculties as something object-like, but in this case one set disappears and is replaced by another indistinguishable set. How does that happen? Or have they undergone change? Do they change without there being any physical change? With the ball we let the air out, but what could happen to me in the night that changes my identity? If I merely lost and regained my faculties in the night, they wouldn’t be different and it wouldn’t make sense to say they were indistinguishable either (except to mean that I have suffered no loss of faculties).
It’s correct that two balls can bounce in the same way, but quite wrong to think that if I replace one ball with the other (that bounces in the same way) I have the same ball. That’s true regardless of how many attributes they share in common: colour, size, material composition, etc. I can make them as similar as I like and they will never become the same! And so it goes with people. So while your doppelgänger might have the same faculties as you, it doesn’t make him the same human being as you, and, unlike you, he wasn’t the person who did X on your nth birthday, etc, and no amount of tinkering will ever make it so. Compare: I painstakingly review footage of a tennis ball bouncing at Wimbledon and carefully alter another tennis ball to make it bounce in just the same way. No amount of effort on my part will ever make it the ball I saw bounce at Wimbledon! Not even the finest molecular scan would do the trick. Perhaps that is the scenario you prefer, but, you’re quite right, I find it very odd.
I don’t think the question of whether you’re the same person when you wake up as when you went to sleep [...] is meaningful.
I’m content to say that, though I’d also be content to say that sufficient loss of faculties (e.g., due to a stroke while I slept) can destroy my identity, making me no longer the same person. Ultimately I consider this a question about words, not about things.
Do [your faculties] change without there being any physical change?
Well, physical change is constant in living systems, so the whole notion of “without physical change” is somewhat bewildering. But I’m not assuming the absence of any particular physical change.
I can make them as similar as I like and they will never become the same! And so it goes with people.
Sure, that’s fine. I don’t insist otherwise.
I just don’t think the condition you refer to as “being the same person” is a condition that matters. I simply don’t care whether they’re the same person or not, as long as various other conditions obtain. Same-person-ness provides no differential value on its own, over and above the sum of the value of the various attributes that it implies. I don’t see any reason to concern myself with it, and I think the degree to which you concern yourself with it here is unjustified, and the idea that there’s some objective sense in which it’s valuable is just goofy.
So while your doppelgänger might have the same faculties as you, it doesn’t make him the same human being as you, and, unlike you, he wasn’t the person who did X on your nth birthday, etc, and no amount of tinkering will ever make it so.
Again: so what? Why should I care? I don’t claim that your understanding of sameness is false, nor do I claim it’s meaningless, I just claim it’s valueless. OK, he’s not the same person. So what? What makes sameness important?
To turn it around: suppose I am informed right now that I’m not the same person who did X on Dave’s 9th birthday, that person died in 2012 and I’m a duplicate with all the same memories, personality, etc. I didn’t actually marry my husband, I didn’t actually buy my house, I’m not actually my dog’s owner, I wasn’t actually hired to do my job.
This is certainly startling, and I’d greet such a claim with skepticism, but ultimately: why in the world should I care? What difference does it make?
Perhaps that is the scenario you prefer, but, you’re quite right, I find it very odd.
Prefer to what?
So, as above, I’m informed that I’m actually a duplicate of Dave.
Do I prefer this state of affairs to the one where Dave didn’t die in 2012 and I was never created? No, not especially… I’m rather indifferent between them.
Do I prefer this state of affairs to the one where Dave died in 2012 and I was never created? Absolutely!
Do I prefer this state of affairs to the one where Dave continued to live and I was created anyway? Probably not, although the existence of two people in 2013 who map in such detailed functional ways to one person in 2012 will take some getting used to.
Similarly: I am told I’m dying, and given the option of creating such a duplicate. My preferences here seem symmetrical. That is:
Do I prefer that option to not dying and not having a duplicate? No, not especially, though the more confident I am of the duplicate’s similarity to me the more indifferent I become.
Do I prefer it to dying and not having a duplicate? Absolutely!
Do I prefer it to having a duplicate and not-dying? Probably not, though it will take some getting used to.
Which of those preferences seem odd to you? What is odd about them?
The preferences aren’t symmetrical. Discovering that you’re a duplicate involves discovering that you’ve been deceived or that you’re delusional, whereas dying is dying. From the point of view of the duplicate, what you’re saying amounts to borderline solipsism; you don’t care if any of your beliefs, memories, etc, match up with reality. You think being deluded is acceptable as long as the delusion is sufficiently complete. From your point of view, you don’t care about your survival, as long as somebody is deluded into thinking they’re you.
There’s no delusion or deception involved in any of the examples I gave.
In each example the duplicate knows it’s the duplicate, the original knows it’s the original; at no time does the duplicate believe it’s the original. The original knows it’s going to die. The duplicate does not believe that its memories reflect events that occurred to its body; it knows perfectly well that those events occurred to a different body.
Everyone in each of those examples knows everything relevant.
From your point of view, you don’t care about your survival, as long as somebody is deluded into thinking they’re you.
No, this isn’t true. There are lots of scenarios in which I would greatly prefer my survival to someone being deluded into thinking that they’re me after my death. And, as I said above, the scenarios I describe don’t involve anyone being deluded about anything; the duplicate knows perfectly well that it’s the duplicate and not the original.
If the duplicate says “I did X on my nth birthday” it’s not true since it didn’t even exist. If I claim that I met Shakespeare you can say, “But you weren’t even born!” So what does the duplicate say when I point out that it didn’t exist at that time? “I did but in a different body” (or “I was a different body”)? That implies that something has been transferred. Or does it say, “A different body did, not me”? But then it has no relationship with that body at all. Or perhaps it says, “The Original did X on their nth birthday and the Original has given me permission to carry on its legacy, so if you have a question about those events, I am the authority on them now”? It gets very difficult to call this “memory.” I suppose you could say that the duplicate doesn’t have the original’s memories but rather has knowledge of what the original did, but then in what sense is it a duplicate?
If the duplicate says “I did X on my nth birthday” it’s not true since it didn’t even exist.
Correct.
So what does the duplicate say when I point out that it didn’t exist at that time?
When talking to you, or someone who shares your attitude, my duplicate probably says something like “You’re right, of course. I’m in the habit of talking about my original’s experiences as though they’re mine, because I experience them as though they were, and both I and my original are perfectly happy talking that way and will probably keep doing so. But technically speaking you’re quite correct… I didn’t actually do X on my 9th birthday, nor did I have a 9th birthday to do anything on in the first place. Thanks for pointing that out.”
Which is closest to your last option, I suppose.
Incidentally, my duplicate likely does this in roughly the same tone of voice that an adoptive child might say analogous things when someone corrects their reference to “my parents” by claiming that no, their parents didn’t do any of that, their adoptive parents did. If you were to infer a certain hostility from that tone, you would not be incorrect.
It gets very difficult to call this “memory.”
It’s not difficult for me to call this a memory at all… it’s the original’s memory, which has been copied to and is being experienced by the duplicate. But if you’d rather come up with some special word for that to avoid confusion with a memory experienced by the same body that formed it in the first place, that’s OK with me too. (I choose not to refer to it as “knowledge of what the original did”, both because that’s unwieldy and because it ignores the experiential nature of memory, which I value.)
but then in what sense is it a duplicate?
Sufficient similarity to the original. Which is what we typically mean when we say that X is a duplicate of Y.
“I’m in the habit of talking about my original’s experiences as though they’re mine, because I experience them as though they were” appears to be a form of delusion to me. If somebody went around pretending to be Napoleon (answering to the name Napoleon, talking about having done the things Napoleon did, etc) and answered all questions as if they were Napoleon but, when challenged, reassured you that of course they’re not Napoleon, they just have the habit of talking as if they are Napoleon because they experience life as Napoleon would, would you consider them delusional? Or does anything go as long as they’re content?
To be honest, I’m not really sure what you mean by the experience of memory. Mental imagery?
It has nothing to do with being content. If someone believes they are Napoleon, I consider them deluded, whether they are content or not. Conversely, if they don’t believe they are Napoleon, I don’t consider them deluded, whether they are content or not.
In the example you give, I would probably suspect the person of lying to me.
More generally: before I call something a delusion, I require that someone actually believe it’s true.
I’m not really sure what you mean by the experience of memory.
At this moment, you and I both know that I wrote this comment… we both have knowledge of what I did. In addition to that, I can remember writing it, and you can’t. I can have the experience of that memory; you can’t. The experience of memory isn’t the same thing as the knowledge of what I did.
Though on further consideration, I suppose I could summarize our whole discussion as about whether I am content or not… the noun, that is, not the adjective. I mostly consider myself to be content, and would be perfectly content to choose distribution networks for that content based on their functional properties.
However, in the situation being discussed, nothing survives.
Lots of things survive. They just don’t happen to be part of the original body.
What you’re saying, as far as I can tell, is that you care more about “preserving” a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life.
Yes, I think given your understanding of those words, that’s entirely correct. My life with that “description” deleted is not worth very much to me; the continued development of that “description” is worth a lot more.
These are very strange values to have—but I wish you luck!
Not necessarily less you. Why even replace? What about augment?
Add an extra “blank” artificial brain. Keep refining the design until the biological brain reports feeling an expanded memory capacity, or enhanced clarity of newly formed memories, or enhanced cognition. Let the old brain assimilate this new space in whatever as-yet poorly understood pattern and at whatever rate come naturally to it.
With the patient’s consent, reversibly switch off various functional units in the biological region of the brain and see if the function is reconstituted elsewhere in the synthetic region. If it is, this is evidence that the technique is working. If not, the technique may need to be refined. At some point the majority of the patient’s brain activity is happening in the synthetic regions. Temporarily induce unconsciousness in the biological part; during and after the biological part’s unconsciousness, interview the patient about what subjective changes they felt, if any.
Agreement between the external measurements and the patient’s subjective assessment that continuity was preserved would be strong evidence to me that such a technique is a reliable means to migrate a consciousness from one substrate to another.
Migration should only be speeded up as a standard practice to the extent that it is justified by ample data from many different volunteers (or patients whose condition requires it) undergoing incrementally faster migrations measured as above.
As far as cryonics goes, this approach necessarily requires actual revival before migration, and it rules out plastination and similar destructive techniques.
I agree with all this, except maybe the last bit. Once the process of migration is well understood and if it is possible to calculate the structure of the synthetic part from the structure of the biological part, this knowledge can be used to skip the training steps and build a synthetic brain from a frozen/plastinated one, provided the latter still contains enough structure.
Anyway, my original question was to scientism, who rejected anything like that because
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
It’s not clear to me whether scientism believes that the mind is a process that cannot take place on any substrate other than a brain, or whether he shares my and (I think) Mitchell Porter’s more cautious point of view that our consciousness can in principle exist somewhere other than a brain, but we don’t yet know enough about neuroscience to be confident about what properties such a system must have.
I, for one, would be sceptical of there being no substrate possible at all except the brain, because it’s a strong unsupported assertion on the same order as the (perhaps straw-man) patternist assertion that binary computers are an adequate substrate (or the stronger-still assertion that any computational substrate is adequate).
If I have understood scientism’s comments, they believe neither of the possibilities you list in your first paragraph.
I think they believe that whether or not a mind can take place on a non-brain substrate, our consciousness(es) cannot exist somewhere other than a brain, because they are currently instantiated in brains, and cannot be transferred (whether to another brain, or anything else).
This does not preclude some other mind coming to exist on a non-brain substrate.
Here is a thought experiment that might not be a thought experiment in the foreseeable future:
Grow some neurons in vitro and implant them in a patient. Over time, will that patient’s brain recruit those neurons?
If so, the more far-out experiment I earlier proposed becomes a matter of scaling up this experiment. I’d rather be on a more resilient substrate than neurons, but I’ll take what I can get.
I’m betting that the answer to this will be “yes”, following a line of reasoning similar to the one Drexler used to defend the plausibility of nanotech: the existence of birds implied the feasibility of aircraft; the existence of ribosomes implies the feasibility of nanotech… and neurogenesis, which occurs during development and which over the last few decades has been found to continue in adulthood, implies the feasibility of replacing damaged brains or augmenting healthy ones.
build a synthetic brain from a frozen/plastinated one
I’m unconvinced that cryostasis will preserve the experience of continuity. Because of the thought experiment with the non-destructive copying of a terminal patient, I am convinced that plastination will fail to preserve it (I remain the unlucky copy, and in addition to that, dead).
My ideal scenario is one where I can undergo a gradual migration before I actually need to be preserved by either method.
You’ve been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that’s great for him, but you are still suffering from a fatal illness.
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder? But you are OK with a gradual replacement of your brain, just not with a complete one? How fast would the parts need to be replaced to preserve this “experience of continuity”? Do drugs which knock you unconscious break continuity enough to be counted as making you into not-you?
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder?
Nothing so melodramatic, but I wouldn’t use them. UNLESS they were in fact manipulating my wave function directly, somehow causing my amplitude to increase in one place and decrease in another. Probably not what the screenplay writers had in mind, though.
But you are OK with a gradual replacement of your brain, just not with a complete one?
Maybe even a complete one eventually. If the vast majority of my cognition has migrated to the synthetic regions, it may not seem as much of a loss when parts of the biological brain break down and have to be replaced. Hard to speak on behalf of my future self with only what I know now. This is speculation.
How fast would the parts need to be replaced to preserve this “experience of continuity”?
This is an empirical question that could be answered if/when it becomes possible to perform for real the thought experiment I described (the second one, with the blank brain being attached to the existing brain).
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
Continuity. I’m not opposed to non-destructive copies of me, but I don’t see them as inherently beneficial to me either.
The point of cryonics is that it could lead to revival.
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death.
Obviously. That’s not what Mitchell_Porter’s post was about, though.
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death.
You seem to think that creating a description of the structure of a brain is necessarily a destructive process. I don’t know of any reason to assume that. If a non-destructive scan exists and is carried out, then there’s no “death”, howsoever defined. Right?
But anyway, let’s grant your implicit assumption of a destructive scan, and suppose that this process has actually occurred to your brain, and “something that functions like [your] brain” has been created. Who is the resulting being? Who do they think they are? What do they do next? Do they do the sorts of things you would do? Love the people you love?
I grant that you do not consider this hypothetical being you—after all, you are hypothetically dead. But surely there is no one else better qualified to answer these questions, so it’s you that I ask.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.
Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things. Convincing it that it has the same identity as a dead person is just one among many strange tricks you could play on it.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
Fair enough.
The resulting being, if possible, would be a being that is confused about its identity. [...] Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things.
I’m positing that the being has been informed about how it was created; it knows that it is not the being it remembers, um, being. So it has the knowledge to say of itself, if it were so inclined, “I am a being purposefully constructed ab initio with all of the memories and cognitive capacities of scientism, RIP.”
Would it be so inclined? If so, what would it do next? (Let us posit that it’s a reconstructed embodied human being.) For example, would it call up your friends and introduce itself? Court your former spouse (if you have one), fully acknowledging that it is not the original you? Ask to adopt your children (if you have any)?
It would have false memories, etc, and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc, simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in. A situation that would probably horrify everybody involved.
I suppose, to put myself in that situation, I would, willpower permitting, have the false memories removed (if possible), adopt a different name and perhaps change my appearance (or at least move far away). But I see the situation as unimaginably cruel. You’re creating a being—presumably a thinking, feeling being—and tricking it into thinking it did certain things in the past, etc, that it did not do. Even if it knows that it was created, that still seems like a terrible situation to be in, since it’s essentially a form of (inflicted) mental illness.
!!… I hope you mean explicit memory but not implicit memory—otherwise there wouldn’t be much of a being left afterwards...
“tricking” it into thinking it did certain things in the past
For a certain usage of “tricking” this is true, but that usage is akin to the way optical illusions trick one’s visual system rather than denoting a falsehood deliberately embedded in one’s explicit knowledge.
I would point out that the source of all the hypothetical suffering in this situation would be the being’s (and your) theory of identity rather than the fact of anyone’s identity (or lack thereof). If this isn’t obvious, just posit that the scenario is conceivable but hasn’t actually happened, and some bastard deceives you into thinking it has—or even just casts doubt on the issue in either case.
Of course that doesn’t mean the theory is false—but I do want to say that from my perspective it appears that the emotional distress would come from reifying a naïve notion of personal identity. Even the word “identity”, with its connotations of singleness, stops being a good one in the hypothetical.
Have you seen John Weldon’s animated short To Be? You might enjoy it. If you watch it, I have a question for you: would you exculpate the singer of the last song?
I take it that my death and the being’s ab initio creation are both facts. These aren’t theoretical claims. The claim that I am “really” a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn’t amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being’s behaviour, its ability to fool others, and its own confused state doesn’t make any difference to the argument. It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I would also point out that there are people who are quite content with severe mental illness. You might have delusions of being Napoleon and be quite happy about it. Perhaps such a person would argue that “I feel like Napoleon and that’s good enough for me!”
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
I take it that my death and the being’s ab initio creation are both facts.
In the hypothetical, your brain has stopped functioning. Whether this is sufficient to affirm that you died is precisely the question at issue. Personally, it doesn’t matter to me if my brain’s current structure is the product of biological mechanisms operating continuously by physical law or is the product of, say, a 3D printer and a cryonically-created template—also operating by physical law. Both brains are causally related to my past self in enough detail to make the resulting brain me in every way that matters to me.
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
Curious that she used the transmission+reconstruction module while committing “suicide”, innit? She didn’t have to—it was a deliberate choice.
The brain constructed in your likeness is only normatively related to your brain. That’s the point I’m making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively dispersed through human society and history.) It’s a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a set of rules.
This is obvious when you consider how the procedure might be developed. We would have to have a great many trial runs and would decide when we had got it right. That decision would be based on a set of normative criteria, a set of measurements. So it would only be “successful” according to a set of human norms. The procedure would be a cultural practice rather than a physical process. But there is just no such thing as something physical being “converted” or “transformed” into a description (or information or a pattern or representation) - because these are all normative concepts—so such a step cannot possibly conserve identity.
As I said, the only way the person in cryonic suspension can continue to live is through a standard process of revival—that is, one that doesn’t involve the step of being described and then having a likeness created—and if such a revival doesn’t occur, the person is dead. This is because the process of being described and then having a likeness created isn’t any sort of revival at all and couldn’t possibly be. It’s a logical impossibility.
My response to this is very simple, but it’s necessary to know beforehand that the brain’s operation is robust to many low-level variations, e.g., thermal noise that triggers occasional random action potentials at a low rate.
We would have to have a great many trial runs and would decide when we had got it right.
Suppose our standard is that we get it right when the reconstructed brain is more like the original brain just before cryonic preservation than a brain after a good night’s sleep is like that same brain before sleeping—within the subset of brain features that are not robust to variation. Further suppose that that standard is achieved through a process that involves a representation of the structure of the brain. Granted that the representation is indeed a “cultural practice”, the brute fact of the extreme degree of similarity of the pre- and post-process brains would seem much more relevant to the question of preservation of any aspect of the brain worthy of being called “identity”.
ETA: Thinking about this a bit more, I see that the notion of “similarity” in the above argument is also vulnerable to the charge of being a mere cultural practice. So let me clarify that the kind of similarity I have in mind basically maps to reproducibility of the input-output relation of a low-level functional unit, up to, say, the magnitude of thermal noise. Reproducibility in this sense has empirical content; it is not merely culturally constructed.
I don’t see how using more detailed measurements makes it any less a cultural practice. There isn’t a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of “identity” according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).
I don’t see how using more detailed measurements makes it any less a cultural practice.
I’m not saying it’s not a cultural practice. I’m saying that the brute fact of the extreme degree of similarity (and resulting reproducibility of functionality) of the pre- and post-process brains seems like a much more relevant fact. I don’t know why I should care that the process is a cultural artifact if the pre- and post-process brains are so similar that for all possible inputs, they produce the same outputs. That I can get more brains out than I put in is a feature, not a bug, even though it makes the concept of a singular identity obsolete.
It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I don’t know what the word “clear” in that sentence actually means.
If you’re simply asserting that what has occurred in this example is your death, then no, it isn’t clear, any more than my asserting that I actually died 25 minutes ago would be clear evidence that Internet commenting after death is possible.
I’m not saying you’re necessarily wrong… I mean, sure, it’s possible that you’re correct, and in your hypothetical scenario you actually are dead, despite the continued existence of something that acts like you and believes itself to be you. It’s also possible that in my hypothetical scenario I’m correct and I really did die 25 minutes ago, despite the continued existence of something that acts like me and believes itself to be me.
I’m just saying it isn’t clear… in other words, that it’s also possible that one or both of us is confused/mistaken about what it means for us to die and/or remain alive.
In the example being discussed we have a body. I can’t think of a clearer example of death than one where you can point to the corpse or remains. You couldn’t assert that you died 25 minutes ago—since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things) - but if somebody else asserted that you died 25 minutes ago then they could presumably point to your remains, or explain what happened to them. If you continued to post on the Internet, that would be evidence that you hadn’t died. Although the explanation that someone just like you was continuing to post on the Internet would be consistent with your having died.
Now, if I understand the “two particles of the same type are identical” argument in the context of uploading/copying, it shouldn’t be relevant because two huge multi-particle configurations are not going to be identical. You cannot measure the state of each particle in the original and you cannot precisely force each particle in the copy into that state. And no amount of similarity is enough; the two of you have to be identical in the sense that two electrons are identical, if we’re talking about being Feynman paths that your amplitude would be summed over. And that rules out digital simulations altogether.
But I didn’t really expect any patternists to defend the first way you could be right in my post. Whereas, the second way you might be right amounts to, by my definition, proving to me that I am already dead or that I die all the time. If that’s the case, all bets are off, everything I care about is due for a major reassessment.
I’d still want to know the truth, of course. But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth. Only a plausible hypothesis for which (or against which) I have not yet seen much evidence.
But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth.
Can you taboo “level of death” for me? Also, what sorts of experiences would count as evidence for or against the proposition?
Discontinuity. Interruption of inner narrative. You know how the last thing you remember was puking over the toilet bowl and then you wake up on the bathroom floor and it’s noon? Well, that but minus everything that goes after the word “bowl”.
Or the technical angle—whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
Darn it. I asked two questions—sorry, my mistake—and I find I can’t unequivocally assign your response to one question or the other (or different parts of your response to both).
I guess this would be my attempt to answer your first question: articulating what I meant without the phrase “level of death”.
My answer to your second question is tougher. Somewhat compelling evidence that whatever I value has been preserved would be simultaneously experiencing life from the point of view of two different instances. This could be accomplished perhaps through frequent or continuous synchronization of the memories and thoughts of the two brains. Another convincing experience (though less so) would be gradual replacement of individual biological components that would have otherwise died, with time for the replacement parts to be assimilated into the existing system of original and earlier-replaced components.
If I abruptly woke up in a new body with all my old memories, I would be nearly certain that the old me has experienced death if they are not around; or, if they are still around (without any link to each other’s thoughts), that I am the only one who has tangibly benefited from whatever rejuvenating/stabilizing effects the replication/uploading might have, and they have not. If I awoke from cryostasis in my old body (or head, as the case may be), even then I would only ever be 50% sure that the individual entering cryostasis is not experiencing waking up (unless there was independent evidence of weak activity in my brain during cryostasis).
The way for me to be convinced, not that continuity has been preserved, but rather that the continuity I desire is impossible, does double duty with my answer to the first question:
whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
Actually, let’s start by supposing a non-destructive scan.
The resulting being is someone who is identical to you, but diverges at the point where the scan was performed.
Let’s say your problem is that you have a fatal illness. You’ve been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that’s great for him, but you are still suffering from a fatal illness. One of the brainscan technicians helpfully suggests they could euthanize you, but if that’s a solution to your problem then why bother getting scanned and copied in the first place? You could achieve the same subjective outcome by going straight to the euthanasia step.
Now, getting back to the destructive scan. The only thing that’s different is you skip the conversation with the technician and go straight to the euthanasia step. Again, an outcome you could have achieved more cheaply with a bottle of sleeping pills and a bottle of Jack Daniels.
After the destructive scan, a being exists that remembers being me up to the point of that scan, values all the things I value, loves the people I love and will be there for them. Regardless of anyone’s opinion about whether that being is me, that’s an outcome I desire, and I can’t actually achieve it with a bottle of sleeping pills and a bottle of Jack Daniels. Absolutely the same goes for the non-destructive scan scenario.
I want to accomplish both goals: have them be reunited with me, and for myself to experience being reunited with them. Copying only accomplishes the first goal, and so is not enough. So long as there is any hope of actual revival, I do not wish to be destructively scanned nor undergo any preservation technique that is incompatible with actual revival. I don’t have a problem with provably non-destructive scans. Hell, put me on Gitorious for people to download, just delete the porn first.
My spouse will probably outlive me, and hopefully if my kids have to get suspended at all, it will be after they have lived to a ripe old age. So everyone will have had some time to adjust to my absence, and would not be too upset about having to wait a little longer. Otherwise, we could form a pact where we revive whenever the conditions for the last of our revivals are met. I should remember to run this idea by them when they wake up. Well, at least the ones of them who talk in full sentences.
Or maybe this is all wishful thinking—someone who thinks that what we believe is silly will just fire up the microtome and create some uploads that are “close enough” and tell them it was for their own good.
Sticking with the non-destructive scan + terminal illness scenario: before the scan is carried out, do you anticipate (i) experiencing being reunited with your loved ones; (ii) requesting euthanasia to avoid a painful terminal disease; (iii) both (but not both simultaneously for the same instance of “you”)?
Probably (iii) is the closest to the truth, but without euthanasia. I’d just eventually die, fighting it to the very end. Apparently this is an unusual opinion or something because people have such a hard time grasping this simple point: what I care about is the continuation of my inner narrative for as long as possible. Even if it’s filled with suffering. I don’t care. I want to live. Forever if possible, for an extra minute if that’s all there is.
A copy may accomplish my goal of helping my family, but it does absolutely nothing to accomplish my goal of survival. As a matter of self-preservation I have to set the record straight whenever someone claims otherwise.
what I care about is the continuation of my inner narrative for as long as possible
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
To put it another way: if there’s only a single instance of “me”—the only extant copy of my particular values and abilities—then its persistence cannot be immaterial to all the other things I care about, and that’s why I currently care about my persistence more-or-less unconditionally. If there’s more than one copy of “me” kicking around, then “more-or-less unconditionally” no longer applies. My own internal narrative doesn’t enter into the question, and I’m confused as to why anyone else would give their own internal narrative any consideration.
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
Maybe the same why as why some people care more about their families than about other people’s families. Why some people care more about themselves than about strangers. What I can’t grasp is how one would manage to so thoroughly eradicate or suppress such a fundamental drive.
I don’t understand the response. Are you saying that the reason you don’t have an egocentric world view and I do is in some way because of kin selection?
If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like “patternists could be wrong” refer to an orthogonal issue.
Patternists/computationalists make the, in principle, falsifiable assertion that if I opt for plastination and am successfully reconstructed, I will wake up in the future just as I will if I opt for cryonics and am successfully revived without copying/uploading/reconstruction. My assertion is that if I opt for plastination I will die and be replaced by someone hard or impossible to distinguish from me. Since it takes more resources to maintain cryosuspension, and probably a more advanced technology level to thaw and reanimate the patient, if the patternists are right, plastination is a better choice. If I’m right, it is not an acceptable choice at all.
The problem is that, so far, the only being in the universe who could falsify this assertion is the instantiation of me that is writing this post. Perhaps with increased understanding of neuroscience, there will be additional ways to test the patternist hypothesis.
the, in principle, falsifiable assertion that if I opt for plastination that I will wake up in the future with an equal or greater probability than if I opt for cryonics
I’m not sure what you mean here. Probability statements aren’t falsifiable; Popper would have had a rather easier time if they were. Relative frequencies are empirical, and statements about them are falsifiable...
My assertion is that I will die and be replaced by someone hard or impossible to distinguish from me.
At the degree of resolution we’re discussing, talking about you/not-you at all seems like a blegg/rube distinction. It’s just not a useful way of thinking about what’s being contemplated, which in essence is that certain information-processing systems are running, being serialized, stored, loaded, and run again.
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?
So, I mean, the utility function is not up for grabs.
And yet, what is to be done if your utility function is dissolved by the truth? How do we know that there even exist utility functions that retain their currency down to the level of timeless wave functions?
I haven’t thought really deeply about that, but it seems to me that if Egan’s Law doesn’t offer you some measure of protection and also a way to cope with failures of your map, you’re probably doing it wrong.
A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.
What do we mean by “normality”? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can’t/won’t make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on more subtle errors in your model of normality and they too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such a universality?
I can see the point of “it all adds up to normality” if you’re encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of “what goal do we pursue if we find proof that all our goals are bullshit”?
My vague notion is that if your goals don’t have ramifications in the realm of the normal, you’re doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.
Keep in mind that your “map” here is not one fixed notion about the way the world works. It’s a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you’re not sure whether “patternists” (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now.
It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.
I don’t believe any of the various purely computational definitions of personhood and survival, so just preserving the shapes of neurons, etc., doesn’t mean much to me. My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of phonons and/or biophotons, somewhere in the cortex.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that I am just a virtual machine, in the sense of computer science—a state machine whose states are coarse-grainings of the actual microphysical states, and which can survive to run on another, physically distinct computer, so long as it reproduces the rough causal structure of the original.
Physically, what is a computer? Nuclei and electrons. And physically, what is a computer program? It is an extreme abstraction of what some of those nuclei and electrons are doing. Computers are designed so that these abstractions remain valid—so that the dynamics of the virtual machine will match the dynamics of the physical object, unless something physically disruptive occurs.
The physical object is the reality, the virtual machine is just a concept. But the information-centric theory of what minds are and what persons are, is that they are virtual machines—a reification of a conceptual construct. This is false to the robust reality of consciousness, especially, which is why I insist on a theory of the self that is physical and not just computational.
I don’t want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about mind uploading, copies, conscious simulations, platonic programs, personal resurrection from digital brain-maps, and so on, in favor of speculations about a physical self within the brain. Such a self would surely have unconscious coprocessors, other brain regions that would be more like virtual machines, functional adjuncts to the conscious part, such as the immediate suppliers of the boundary conditions which show up in experience as sensory perceptions. But you can’t regard the whole of the mind as nothing but virtual machines. Some part of it has to be objectively real.
What would be the implications of this “physical” theory of identity, for cryonics? I will answer as if the topological vortex theory is the correct one, and not just a placeholder speculation.
The idea is that you begin to exist when the vortex begins to exist, and you end when it ends. By this criterion, the odds look bad for the proposition that survival through cryonics is possible. I could invent a further line of speculation as to how the web of quantum entanglement underlying the vortex is not destroyed by the freezing process, but rather gets locked into the ground state of the frozen brain; and such a thing is certainly thinkable, but that’s all, and it is equally thinkable that the condensate hosting the vortex depends for its existence on a steady expenditure of energy provided by cellular metabolism, and must therefore disintegrate when the cells freeze. From this perspective cryonics looks like an unlikely gamble, a stab in the dark. So an advocate would have to revert to the old argument that even if the probability of survival through cryonics is close to zero, the probability of survival through non-cryonics is even closer to zero.
What about the idea of surviving by preserving your information? The vortex version of this concept is, OK, during this life you are a quantum vortex in your brain, and that vortex must cease to exist in a cryonically preserved brain; but in the future we can create a new vortex in a new brain, or in some other appropriate physical medium, and then we can seed it with information from the old brain. And thereby, you can live again—or perhaps just approximate-you, if only some of the information got through.
To say anything concrete here requires even more speculation. One might say that the nature of such resurrection schemes would depend a great deal on the extent to which the details of a person depend on information in the vortex, or on information in the virtual coprocessors of the vortex. Is the chief locus of memory a virtual machine outside of and separate from the conscious part of the brain, coupled to consciousness so that memories just appear there as needed; or are there aspects of memory which are embedded in the vortex-self itself? To reproduce the latter would require, not just the recreation of memory banks adjoining the vortex-self, but the shaping and seeding of the inner dynamics of the vortex.
Either way, personally I find no appeal in the idea of “survival” via such construction of a future copy. I’m a particular “vortex” already; when that definitively sputters out, that’s it for me. But I know many others feel differently, and such divergent attitudes might still exist, even if a vortex revolution in philosophy of mind replaced the program paradigm.
I somewhat regret the extremely speculative character of these remarks. They read as if I’m a vortex true believer. The point is to suggest what a future alternative to digital crypto-dualism might look like.
I don’t believe any of the various purely literary definitions of narrative and characterization, so just preserving the shapes and orderings of the letters of a story, etc., doesn’t mean much to me. My best bet is that a novel is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the printing of a book, persists through time even when not read, and ceases to exist when its physical form becomes illegible. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of ink and/or paper, somewhere between the front and back cover.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that a novel is just a collection of letters, in the sense of orthography—a sequence of glyphs representing letters that are coarse-grainings of the actual microphysical states, and which can survive to be read on another, physically distinct medium, so long as it reproduces the sequence of letters of the original.
Physically, what is a novel? Nuclei and electrons. And physically, what is a story? It is an extreme abstraction of what some of those nuclei and electrons are doing. Books are designed so that these abstractions remain valid—so that the dynamics of the story will match the sequence of the letters, unless something physically disruptive occurs.
The physical object is the reality, the narrative is just a concept. But the information-centric theory of what stories are and what novels are, is that they are narratives—a reification of a conceptual construct. This is false to the robust reality of a reader’s consciousness, especially, which is why I insist on a literary theory that is physical and not just computational.
I don’t want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about narrative uploading, copies, conscious readers, authorial intent, instances of decompression from digital letter-maps, and so on, in favor of speculations about a physical story within the book. Such a story would surely have information-theoretic story structures, other book regions that would be more like narratives, structural adjuncts to the novel part, such as the immediate suppliers of the boundary conditions which show up in experience as plot structure. But you can’t regard the whole of the novel as nothing but creative writing. Some part of it has to be objectively real.
I think I’ll stop here. Apologies to Mitchell Porter, who I judge to be a smart guy—more knowledgeable than me about physics, without question—who happens to believe a crazy thing. (I expect he judges my beliefs philosophically incoherent and hence crazy, so we’re even on that score.) I should note that the above analogy hasn’t been constructed with a great deal of care; I expect it can be picked apart quite thoroughly.
ETA: As I re-read this, I feel kind of bad about the mocking tone expressed by this kind of rhetorical construction, so let me state explicitly that I did it for the lulz; on the actual substantive matter at issue, I judge Mitchell Porter’s comment to be at DH4 on the disagreement hierarchy and my own reply to be at DH3.
As much as I might try to find holes in the analogy, I still felt I ought to upvote your comment, because frankly, it had to be said.
In trying to find those holes, I actually came to rather agree with your analogy: the story is recreated in the mind/brain by each individual reader, and does not necessarily depend on the format. In the same way, if consciousness has a physical presence that it would lack in a simulation, then we will need to account for and simulate that as well. It may even eventually be possible to design an experiment to show that the raw mechanism of consciousness and its simulation are the same thing. Barring any possibility of simulating perception, we could think of our minds as books to be read by a massive, biologically resembling brain that retains such a mechanism, allowing the full re-creation of our consciousness in that brain from its initial state as a simulation that the brain reads. I have to say, once I’m aware I’m a simulation, I’m not terribly concerned about transferring to different mediums of simulation.
A story in a book, versus a mind in a brain. Where to begin in criticizing that analogy!
I’m sure there’s some really profound way to criticize that analogy, as actually symptomatic of a whole wrong philosophy of mind. It’s not just an accident that you chose to criticize a pro-physical, anti-virtual theory of mind, by inventing a semantic phlogiston that materially inhabits the words on a page and gives them their meaning. Unfortunately, even after so many years arguing with functionalists and other computationalists, I still don’t have a sufficiently nuanced understanding of where their views come from, to make the profound critique, the really illuminating one.
But surely you see that explaining how it is that words on a page have meaning, and how it is that thoughts in a brain have meaning, are completely different questions! The book doesn’t think, it doesn’t act, the events in the story do not occur in the book. There is no meaning in the book unless brains are involved. Without them, words on a page are just shapes on a surface. The experience of the book as meaningful does not occur in the book, it occurs in the brain of a reader; so even the solution of this problem is fundamentally about brains and not about books. The fact that meaning is ultimately not in the book is why semantic phlogiston is absurd in that context.
But the brain is a different context. It’s the end of the line. As with all of naturalism’s ontological problems with mind, once you get to the brain, you cannot evade them any further. By all means, let the world outside the skull be a place wholly without time or color or meaning, if that is indeed your theory of reality. That just means you have to find all those things inside the skull. And you have to find them for real, because they are real. If your theory of such things, is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
Or at least, I would have to deny the basic facts of my own experience of reality, in order to adopt such views. Maybe you’re some other sort of being, which genuinely doesn’t experience time passing or see colors or have thoughts that are about things. But I doubt it.
I agree with almost all of what you wrote. Here’s the only line I disagree with.
I affirm that my own subjective experience is as you describe; I deny that I am in denial about its import.
I want to be clear that I’m discussing the topic of what makes sense to affirm as most plausible given what we know. In particular, I’m not calling your conjecture impossible.
Human brains don’t look different in lower-level organization than those of, say, cats, and there’s no higher level structure in the brain that obviously corresponds to whatever special sauce it is that makes humans conscious. On the other hand, there are specific brain regions which are known to carry out specific functional tasks. My understanding is that human subjective experience, when picked apart by reductive cognitive neuroscience, appears to be an ex post facto narrative constructed/integrated out of events whose causes can be more-or-less assigned to particular functional sub-components of the brain. Positing that there’s a special sauce—especially a non-classical one—just because my brain’s capacity for self-reflection includes an impression of “unity of consciousness”—well, to me, it’s not the simplest conceivable explanation.
Maybe the universe really does admit the possibility of an agent which approximates my internal structure to arbitrary (or at least sufficient) accuracy and claims to have conscious experiences for reasons which are isomorphic to my own, yet actually has none because it’s implemented on an inadequate physical substrate. But I doubt it.
I think the term “vortex” is apt simply because it demonstrates you’re aware it sounds silly, but in a world where intent is more readily apparent, I would simply use the standard term: Soul. (Bearing in mind that there are mortal as well as immortal models of the soul. (Although, if the soul does resemble a vortex, then it may well be possible that it keeps spinning in the absence of the initial physical cause. Perhaps some form of “excitation in the quantum soul field” that can only be destroyed by meeting a “particle” (identity/soul, in this case) of the perfect waveform necessary to cancel it out.))
As in my previous comment, if the soul exists, then we will need to discover that as a matter of researching physical preservation/cryonics. Then the debate begins anew about whether or not we’ve discovered all the parts we need to affirm that the simulation is the same thing as the natural physical expression.
Personally, I am more a fan of Eliezer_Yudkowsky’s active continuing process interpretation. I think the identity arises from the process itself, rather than any specific momentary configuration. If I can find no difference between the digital and the physical versions of myself, I won’t be able to assume there are any.
Beyond it being unfortunate for the naive theory of personal continuity if it did, do you have a reason why the nexus of subjective experience can’t be destroyed every time a person goes unconscious and then recreated when they wake up?
No, with a few technical modifications it can be quite plausible. However, if it is actually true, I have no more reason to care about my own post-revival self than I do about some other person’s.
Once my estimate of the likelihood that patternists are right in that way is updated past a certain threshold, even the modest cost of remaining a cryonicist might not seem worth it.
The other practical consequence of patternists being right is an imperative to work even harder at anti-aging research because it might be our only hope after all.
This is just another way of saying you believe in a soul. And if you think it persists during unconsciousness then why can’t it persist during freezing?
This sentence is meaningless as far as I know.
You say it’s unlikely but give no justification. In my opinion it is a far more likely hypothesis than the existence of a soul.
I am surprised that a comment like this has received upvotes.
At this point I failed to understand what you are saying.
(What is the “robust reality of consciousness” and why can’t it be simulated?)
So, this goes well beyond the scope of cryonics. We aren’t discussing whether any particular method is doable—rather, we’re debating the very possibility of running a soul on a computer.
...but all you are doing here is adding a more complex element to the brain, Russell’s-Teapot style. It’s still part of the brain. If the vortex-soul thing is physical, observable, and can be described by a computable function, then there is no theoretical reason why you can’t copy the vortex-thing into a computer.
...so why did we even bother with this whole vortex-soul-thingy then? Why not just say “when my brain stops computing stuff, that’s it for me”? How does the insertion of an extra object into the cognitive machinery in any way facilitate this argument?
I don’t mean that you believe in the vortex specifically. I mean that your exact argument can be made without inserting any extra things (vortexes, souls, whatever) into our current understanding of the brain.
What you are basically saying is that you can’t copy-paste consciousness...it doesn’t matter what the specific substrate of it is and whether or not it has vortexes. If you were running as software on a computer in the first place, you’d say that cutting and pasting the program would constitute death, no?
...Right? Or did I miss something important about your argument?
I reject the computational paradigm of mind in its most ambitious form, the one which says that mind is nothing but computation—a notion which, outside of rigorous computer science, isn’t even well-defined in these discussions.
One issue that people blithely pass by when they just assume computationalism, is meaning—“representational content”. Thoughts, mental states, are about things. If you “believe in physics”, and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something, but it does make it easy to sail past the issue without even noticing, because there is a purely syntactic notion of computation denuded of semantics, and then there is a semantic notion of computation in which computational states are treated as having meanings embedded into their definition. So all you have to do is to say that the brain “computes”, and then equivocate between syntactic computation and semantic computation, between the brain as physical state machine and the mind as semantic state machine.
The technological object “computer” is a semantic state machine, but only in the same way that a book has meaning—because of human custom and human design. Objectively, it is just a syntactic state machine, and in principle its computations could be “about” anything that’s isomorphic to them. But actual states of mind have an objective intrinsic semantics.
Ultimately, I believe that meaning is grounded in consciousness, that there are “semantic qualia” too; that the usual ontologies of physics must be wrong, because they contain no such things—though perhaps the mathematics of some theory of physics not too distant from what we already have, can be reinterpreted in terms of a new ontology that has room for the brain having such properties.
But until such time as all of that is worked out, computationalism will persist as a pretender to the title of the true philosophy of mind, incidentally empowering numerous mistaken notions about the future interplay of mind and technology. In terms of this placeholder theory of conscious quantum vortices, there’s no problem with the idea of neural prostheses that work with your vortex, or of conscious vortices in something other than a biological brain; but if a simulation of a vortex isn’t itself a vortex, then it won’t be conscious.
According to theories of this nature, in which the ultimate substrate of consciousness is substance rather than computation, the very idea of a “conscious program” is a conceptual error. Programs are not the sorts of things that are conscious; they are a type of virtual state machine that runs on a Turing-universal physical state machine. Specifically, a computer program is a virtual machine designed to preserve the correctness of a particular semantic interpretation of its states. That’s the best ontological characterization of what a computer program is, that I can presently offer. (I’m assuming a notion of computation that is not purely syntactic—that the computations performed by the program are supposed to be about something.)
Incidentally, I coughed up this vortex notion, not because it solves the ontological problem of intentional states, but just because knotted vortex lines are a real thing from physics that have what I deem to be properties necessary in a physical theory of consciousness. They have complex internal states (their topology) and they have an objective physical boundary. The states usually considered in computational neuroscience have a sorites problem; from a microphysical perspective, that considers what everything is really made of, they are defined extremely vaguely, akin to thermodynamic states. This is OK if we’re talking about unconscious computations, because they only have to exist in a functional sense; if the required computational mappings are performed most of the time under reasonable circumstances, then we don’t have to worry about the inherent impreciseness of the microphysical definition of those states.
But conscious states have to be an objective and exact part of any ultimate ontology. Consciousness is not a fuzzy idea which humans made up and which may or may not be part of reality. In a sense, it is your local part of reality, the part of reality that you know is there. It therefore cannot be regarded as a thing which exists approximately or vaguely or by convention, all of which can be said of thermodynamic properties and of computational states that don’t have a microphysically exact definition. The quantum vortex in your cortex is, by hypothesis, something whose states have a microphysically exact definition, and so by my physical criterion, it at least has a chance of being the right theory.
Is that a prediction then? That your family and friends could somehow recognize the difference between you and a simulated copy of you? That the simulated copy of you would somehow not perceive itself as you? That the process just can’t work and can’t create anything recognizably conscious, intelligent, or human? (and does that mean strong AI needs to run on something other than a computer?) Or are you thinking it will be a philosophical zombie, and everyone will be fooled into thinking it’s you?
What do you think will actually happen, if/when we try to simulate stuff? Let’s just say that we can do it roughly down to the molecular level.
What precludes us from simulating something down to a sufficiently microphysically exact level? (I understand that you’ve got a physical theory of consciousness, but I’m trying to figure out how this microphysical stuff plays into it)
Don’t worry—the comments by Mitchell_Porter in this comment thread were actually written by a vortexless simulation of an entirely separate envortexed individual who also comments under that account. So here, all of the apparent semantic content of “Mitchell_Porter”’s comments is illusory. The comments are actually meaningless syntactically-generated junk—just the emissions of a very complex ELIZA chatbot.
I’ll tell you what I think won’t happen: real feelings, real thoughts, real experiences.
A computational theory of consciousness implies that all conscious experiences are essentially computations, and that the same experience will therefore occur inside anything that performs the same computation, even if the “computer” is a network of toppling dominoes, random pedestrians making marks on walls according to small rulebooks, or any other bizarre thing that implements a state machine.
This belief derives entirely from one theory of one example—the computational theory of consciousness in the human brain. That is, we perceive that thinking and experiencing have something to do with brain activity, and one theory of the relationship, is that conscious states are states of a virtual machine implemented by the brain.
I suggest that this is just a naive idea, and that future neuroscientific and conceptual progress will take us back to the idea that the substrate of consciousness is substance, not computation; and that the real significance of computation for our understanding of consciousness, will be that it is possible to simulate consciousness without creating it.
From a physical perspective, computational states have the vagueness of all functional, user-dependent concepts. What is a chair? Perhaps, anything you can sit on. But people have different tastes, whether you can tolerate sitting on a particular object may vary, and so on. “Chair” is not an objective category; in regions of design-space far from prototypical examples of a chair, there are edge cases whose status is simply disputed or questionable.
Exactly the same may be said of computational states. The states of a transistor are a prototypical example of a physical realization of binary computational states. But as we consider increasingly messy or unreliable instantiations, it becomes increasingly difficult to just say, yes, that’s a 0 or a 1.
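To make that concrete with a toy sketch (the voltage thresholds here are made up for illustration, not drawn from anything above): once the physical signal is messy enough, calling it a “0” or a “1” is a matter of where you choose to draw the line.

```python
# Toy illustration: reading an analog voltage as a bit requires a convention.
# The threshold values are hypothetical; only the edge-case point matters.
def read_bit(voltage, low=0.8, high=2.0):
    if voltage <= low:
        return 0          # clearly a "0" by this convention
    if voltage >= high:
        return 1          # clearly a "1" by this convention
    return None           # neither: whether a computational state exists here
                          # depends on where you draw the line

for v in (0.1, 3.2, 1.4):
    print(v, "->", read_bit(v))
```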
Consider the implications of this for a theory of consciousness which says, that the necessary and sufficient condition for the occurrence of a given state of consciousness, is the occurrence of a specific “computational state”. It means that whether or not a particular consciousness exists, is not a yes-or-no thing—it’s a matter of convention or definition or where you draw the line in state space.
This is untenable in exactly the same way that Copenhagenist complacency about the state of reality in quantum mechanics is untenable. It makes no sense to say that the electron has a position, but not a definite position, and it makes no sense to say that consciousness is a physical thing, but that whether or not it exists in a specific physical situation is objectively indeterminate.
If you are going to say that consciousness depends on the state of the physical universe, there must be a mapping which gives unique and specific answers for all possible physical states. There cannot be edge cases that are intrinsically undetermined, because consciousness is an objective reality, whereas chairness is an imputed property.
The eerie dualism of computer theories of consciousness, whereby the simulated experience mystically hovers over or dwells within the computer mainframe, chain of dominos, etc—present in the same way, regardless of what the “computer” is made of—might already have served as a clue that there was something wrong about this outlook. But the problem in developing this criticism is that we don’t really know how to make a nondualistic alternative work.
Suppose that the science of tomorrow came to the conclusion that the only things in the world that can be conscious, are knots of flux in elementary force fields. Bravo, it’s a microphysically unambiguous criterion… but it’s still going to be property dualism. The physical property “knotted in a certain madly elaborate shape”, and the subjective property “having a certain intricate experience”, are still not the same thing. The eerie dualism is still there, it’s just that it’s now limited to lines of flux, and doesn’t extend to bitstreams of toppling dominoes, Searlean language rooms, and so on. We would still have the strictly physical picture of the universe, and then streams of consciousness would be an extra thing added to that picture of reality, according to some laws of psychophysical correlation.
However, I think this physical turn, away from the virtual-machine theory of consciousness, at least brings us a little closer to nondualism. It’s still hard to imagine, but I see more potential on this path, for a future theory of nature in which there is a conscious self, that is also a physical entity somewhere on the continuum of physical entities in nature, and in which there’s no need to say “physically it’s this, but subjectively it’s that”—a theory in which we can speak of the self’s conscious state, and its causal physical interactions, in the same unified language. But I do not see how that will ever happen with a purely computational theory, where there will always be a distinction between the purely physical description, and the coarse-grained computational description that is in turn associated with conscious experience.
It’ll still be pretty cool when the philosophical zombie uploads who act exactly like qualia-carrying humans go ahead and build the galactic supercivilization of trillions of philosophical zombie uploads acting exactly like people and produce massive amounts of science, technology and culture. Most likely there will even be some biological humans around, so you won’t even have to worry about nobody ever getting to experience any of it.
Actually because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they’re not conscious, and replace themselves with biological humans.
On the other hand, maybe they’ll discover that biological humans aren’t conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they’ll set out to find a substrate that really allows for consciousness.
How do you respond to the thought experiment where your neurons (and glial cells and whatever) are replaced one-by-one with tiny workalikes made out of non-biological material? Specifically, would you be able to tell the difference? Would you still be conscious when the replacement process was complete? (Or do you think the thought experiment contains flawed assumptions?)
Feel free to direct me to another comment if you’ve answered this elsewhere.
My scenario violates the assumption that a conscious being consists of independent replaceable parts.
Just to be concrete: let’s suppose that the fundamental physical reality consists of knotted loops in three-dimensional space. Geometry comes from a ubiquitous background of linked simple loops like chain-mail, other particles and forces are other sorts of loops woven through this background, and physical change is change in the topology of the weave.
Add to this the idea that consciousness is always a state of a single loop, that the property of the loop which matters is its topology, and that the substrate of human consciousness is a single incredibly complex loop. Maybe it’s an electromagnetic flux-loop, coiled around the microtubules of a billion cortical neurons.
In such a scenario, to replace one of these “consciousness neurons”, you don’t just emulate an input-output function, you have to reproduce the coupling between local structures and the extended single object which is the true locus of consciousness. Maybe some nano-solenoids embedded in your solid-state neuromorphic chips can do the trick.
Bear in mind that the “conscious loop” in this story is not meant to be epiphenomenal. Again, I’ll just make up some details: information is encoded in the topology of the loop, the loop topology interacts with electron bands in the microtubules, the electrons in the microtubules feel the action potential and modulate the transport of neurotransmitters to the vesicles. The single extended loop interacts with the localized information processing that we know from today’s neuroscience.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let’s suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a “zombie”, an unconscious entity which has been designed in imitation of a conscious being.
Of course, all these hypotheses and details are just meant to be illustrative. I expect that the actual tie between consciousness and microphysics will be harder to understand than “conscious information maps to knots in a loop of flux”.
This is done one neuron at a time, though, with the person awake and narrating what they feel so that we can see if everything is going fine. Shouldn’t some sequence of neuron replacement lead to the replacement of neurons that were previously providing consciously accessible qualia to the remaining biological neurons that still host most of the person’s consciousness? And shouldn’t this lead to a noticeable cognitive impairment they can report, if they’re still using their biological neurons to control speech (we’d probably want to keep this the case as long as possible)?
Is this really a thing where you can’t actually go ahead and say that if the theory is true, the simple neurons-as-black-boxes replacement procedure should lead to progressive cognitive impairment and probably catatonia, and if the person keeps saying everything is fine throughout the procedure, then there might be something to the hypothesis of people being made of parts after all? This isn’t building a chatbot that has been explicitly designed to mimic high-level human behavior. The neuron replacers know about neurons, nothing more. If our model of what neurons do is sufficiently wrong, then the aggregate of simulated neurons isn’t going to go zombie, it’s just not going to work because it’s copying the original connectome that only makes sense if all the relevant physics are in play.
My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn’t contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won’t be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all, while focusing on brains, there is going to be some sort of dualism. In this discussion, there are two types of property dualism under consideration.
According to one, qualia, and conscious states generally, are correlated with computational states which are coarse-grainings of the microphysical details of the brain. Coarse-graining means that the vast majority of those details do not matter for the definition of the computational state.
According to the other sort of theory, which I have been advocating, qualia and conscious states map to some exact combination of exact microphysical properties. The knotted loop of planck-flux, winding through the graviton weave in the vicinity of important neurons, etc., has been introduced to make this option concrete.
My actual opinion is that neither of these is likely to be correct, but that the second should be closer to the truth than the first. I would like to get away from property dualism entirely, but it will be hard to do that if the physical correlate of consciousness is a coarse-grained computational state, because there is already a sort of dualism built into that concept—a dualism between the exact microphysical state and the coarse-grained state. These coarse-grained states are conceptual constructs, equivalence classes that are vague at the edges and with no prospect of being made exact in a nonarbitrary way, so they are just intrinsically unpromising as an ontological substrate for consciousness. I’m not arguing with the validity of computational neuroscience and coarse-grained causal analysis, I’m just saying it’s not the whole story. When we get to the truth about mind and matter, it’s going to be more new-age than it is cyberpunk, more organic than it is algorithmic, more physical than it is virtual. You can’t create consciousness just by pushing bits around, it’s something far more embedded in the substance of reality. That’s my “prediction”.
Now back to your comment. You say, if consciousness—and conscious cognition—really depends on some exotic quantum entity woven through the familiar neurons, shouldn’t progressive replacement of biological neurons with non-quantum prostheses lead to a contraction of conscious experience and an observable alteration and impairment of behavior, as the substitution progresses? I agree that this is a reasonable expectation, if you have in mind Hans Moravec’s specific scenario, in which neurons are being replaced one at a time and while the subject is intellectually active and interacting with their environment.
Whether Moravec’s scenario is itself reasonable is another thing. There are about 30 million seconds in a year, and there are billions of neurons in the cortex alone. The cortical neurons are very entangled with each other via their axons. It would be very remarkable if a real procedure of whole-brain neural substitution didn’t involve periods of functional impairment, as major modules of the brain are removed and then replaced with prostheses.
I also find it very unlikely that attempting a Moravec procedure of neuronal replacement, and seeing what happens, will be important as a test of such rival paradigms of consciousness. I suppose you’re thinking in terms of a hypothetical computational theory of neurons whose advocates consider it good enough to serve as the basis of a Moravec procedure, versus skeptics who think that something is being left out of the model.
But inserting functional replacements for individual cortical neurons in vivo will require very advanced technology. For people wishing to conduct experiments in mind emulation, it will be much easier to employ the freeze-slice-and-scan paradigm currently contemplated for C. elegans, plus state-machine models from functional imaging for brain regions where function really is coarser in its implementation. Meanwhile, on the quantum side, while there certainly need to be radical advances in the application of concepts from condensed-matter physics to living matter, if the hypothesized quantum aspects of neuronal function are to be located… I think the really big advances that are required, must be relatively simple. Alien to our current understandings, which is why they are hard to attain, but nonetheless simple, in the way that the defining concepts of physics are simple.
There ought to be a physical-ontological paradigm which simultaneously (1) explains the reality behind some theory-of-everything mathematical formalism (2) explains how a particular class of entities from the theory can be understood as conscious entities (3) makes it clear how a physical system like the human brain could contain one such entity with the known complexity of human consciousness. Because it has to forge a deep connection between two separate spheres of human knowledge—natural science and phenomenology of consciousness—new basic principles are needed, not just technical elaborations of known ways of thinking. So neurohacking exercises like brain emulation are likely to be not very relevant to the discovery of such a paradigm. It will come from inspired high-level thinking, working with a few crucial facts; and then the paradigm will be used to guide the neurohacking—it’s the thing that will allow us to know what we’re doing.
What do you think of Eliezer’s approach to the “meaning” problem in The Simple Truth? I find the claim that the pebble system is about the sheep to be intuitively satisfying.
For some reason I found this comment to be an especially clear and interesting explanation of your philosophy of mind.
How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
I don’t think it supplied the necessary amount of concreteness to be useful; this is usual for wild speculation. ;)
A running virtual machine is a physical process happening in a physical object. So are you.
Well, nobody actually knows enough about the reality of consciousness to make that claim. It may be that it is incompatible with your intuitions about consciousness. Mine too, so I haven’t any alternative claims to make in response.
How much do you want to bet on the conjunction of yours?
Just for exercise, let’s estimate the probability of the conjunction of my claims.
claim A: I think the idea of a single ‘self’ in the brain is provably untrue according to currently understood neuroscience. I do honestly think so, therefore P(A) is as close to 1.0 as makes no difference. Whether I’m right is another matter.
claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.
claim C: The thing I claim to think in claim B is in fact “usually” true. P(C) maybe 0.97 because I haven’t really thought it through but I reckon a random sample of 20 instances of such would be unlikely to reveal 10 exceptions, defeating the “usually”.
claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.
claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?
claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I’m pretty sure I’d have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I’ve been busy or something.
P(A ∧ B ∧ C ∧ D ∧ E ∧ F) is approx 0.96.
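For anyone who wants to check the arithmetic, here is a minimal sketch; the numbers are just the rough estimates given above, and treating the “close to 1” claims as exactly 1.0 is an assumption of the sketch.

```python
# Minimal sketch: multiply the rough claim probabilities estimated above.
# Treating "close to 1" as exactly 1.0 is an assumption of this sketch.
from math import prod

claim_probs = {
    "A": 1.0,    # "as close to 1.0 as makes no difference"
    "B": 0.998,
    "C": 0.97,
    "D": 1.0,    # "very close to 1"
    "E": 1.0,    # "close to 1"
    "F": 0.99,
}

conjunction = prod(claim_probs.values())
print(f"P(A ∧ B ∧ C ∧ D ∧ E ∧ F) ≈ {conjunction:.2f}")  # ≈ 0.96
```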
The amount of money I’d bet would depend on the odds on offer.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
No, you’re right. You did technically answer my question, it wasn’t rude, I should have made my intended point clearer. But your answer is really a restatement of your refutation of Mitchell Porter’s position, not an affirmative defense of your own.
First of all, have I fairly characterized your position in my own post (near the bottom, starting with “For patternists to be right, both the following would have to be true...”)?
If I have not, please let me know which of the conditions are not necessary and why.
If I have captured the minimum set of things that have to be true for you to be right, do you see how they (at least the first two) are also conjunctive and at least one of them is provably untrue?
Oh, OK. I get you. I don’t describe myself as a patternist, and I might not be what you mean by it. In any case I am not making the first of those claims.
However, it seems possible to me that a sufficiently close copy of me would think it was me, experience being me, and would maybe even be more similar to me as a person than biological me of five years ago or five years hence.
I do claim that it is theoretically possible to construct such a copy, but I don’t think it is at all probable that signing up for cryonics will result in such a copy ever being made.
If I had to give a reason for thinking it’s possible in principle, I’d have to say: I am deeply sceptical that there is any need for a “self” to be made of anything other than classical physical processes. I don’t think our brains, however complex, require in their physical construction, anything more mysterious than room-temperature chemistry.
The amazing mystery of the informational complexity of our brains is undiminished by believing it to be physically prosaic when you reduce it to its individual components, so it’s not like I’m trying to disappear a problem I don’t understand by pretending that just saying “chemistry” explains it.
I stand by my scepticism of the self as a single indivisible entity with special properties that are posited only to make it agreeable to someone’s intuition, rather than because it best fits the results of experiment. That’s really all my post was about: impatience with argument from intuition and argument by hand-waving.
I’ll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
...and this is a weakly continualist concern that patternists should also agree with even if they disagree with the strong form (“a copy forked off from me is no longer me from that point forward and destroying the original doesn’t solve this problem”).
But this weak continualism is enough to throw some cold water on declaring premature victory in cryonic revival: the lives of humans have worth not only to others but to themselves, and just how close exactly is “close enough” and how to tell the difference are very central to whether lives are being saved or taken away.
On the contrary, thank you for articulating the problem in a way I haven’t thought of. I wish more patternists were as cautious about their own fallibility as you are in yours.
The problem with the computationalist view is that it confuses the representation with what is represented. No account of the structure of the brain is the brain. A detailed map of the neurons isn’t any better than a child’s crude drawing of a brain in this respect. The problem isn’t the level of detail, it’s that it makes no sense to claim a representation is the thing represented. Of course, the source of this confusion is the equally confused idea that the brain itself is a sort of computer and contains representations, information, etc. The confusions form a strange network that leads to a variety of absurd conclusions about representation, information, computation and brains (and even the universe).
Information about a brain might allow you to create something that functions like that brain or might allow you to alter another brain in some way that would make it more like the brain you collected information about (“like” is here relative), but it wouldn’t then be the brain. The only way cryonics could lead to survival is if it led to revival. Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
Cyan’s post below demonstrates this confusion perfectly. A book does contain information in the relevant sense because somebody has written it there. The text is a representation. The book contains information only because we have a practice of representing language using letters. None of this applies to brains or could logically apply to brains. But two books can be said to be “the same” only for this reason and it’s a reason that cannot possibly apply to brains.
Just to make sure I’m following… your assertion is that my brain is not itself a sort of computer, does not contain representations, and does not contain information, my brain is some other kind of a thing, and so no amount of representations and information and computation can actually be my brain. They might resemble my brain in certain ways, they might even be used in order to delude some other brain into thinking of itself as me, but they are not my brain. And the idea that they might be is not even wrong, it’s just a confusion. The information, the representations, the belief-in-continuity, all that stuff, they are something else altogether, they aren’t my brain.
OK. Let’s suppose all this is true, just for the sake of comity. Let’s call that something else X.
On your account, should I prefer the preservation of my brain to the preservation of X, if forced to choose?
If so, why?
That’s essentially correct. Preservation of your brain is preservation of your brain, whereas preservation of a representation of your brain (X) is not preservation of your brain or any aspect of you. The existence of a representation of you (regardless of detail) has no relationship to your survival whatsoever. Some people want to be remembered after they’re dead, so I suppose having a likeness of yourself created could be a way to achieve that (albeit an ethically questionable one if it involved creating a living being).
OK, I think I understand your position.
So, suppose I develop a life-threatening heart condition, and have the following conversation with my cardiologist:
Her: We’ve developed this marvelous new artificial heart, and I recommend installing it in place of your damaged organic heart.
Me: Oh, is it easier to repair my heart outside of my body?
Her: No, no… we wouldn’t repair your heart, we’d replace it.
Me: But what would happen to my heart?
Her: Um… well, we typically incinerate it.
Me: But that’s awful! It’s my heart. You’re proposing destroying my heart!!!
Her: I don’t think you quite understand. The artificial heart can pump blood through your body just as well as your original heart… better, actually, given your condition.
Me: Sure, I understand that, but that’s mere function. I believe you can replicate the functions of my heart, but if you don’t preserve my heart, what’s the value of that?
I infer that on your account, I’m being completely absurd in this example, since the artificial heart can facilitate my survival just as well (or better) as my original one, because really all I ought to value here is the functions. As long as my blood is pumping, etc., I should be content. (Yes? Or have I misrepresented your view of heart replacement?)
I also infer that you would further say that this example is nothing at all like a superficially similar example where it’s my brain that’s injured and my doctor is proposing replacing it with an artificial brain that merely replicates the functions of my brain (representation, information storage, computation and so forth). In that case, I infer, you would not consider my response absurd at all, since it really is the brain (and not merely its functions) that matters.
Am I correct?
If so, I conclude that I just have different values than you do. I don’t care about my brain, except insofar that it’s the only substrate I know of capable of implementing my X. If my survival requires the preservation of my brain, then it follows that I don’t care about my survival.
I do care about preserving my X, though. Give me a chance to do that, and I’ll take it, whether I survive or not.
I wouldn’t say that a brain transplant is nothing at all like a heart transplant. I don’t take the brain to have any special properties. However, this is one of those situations where identity can become vague. These things lie on a continuum. The brain is tied up with everything we do, all the ways in which we express our identity, so it’s more related to identity than the heart. People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc). You can be rough and ready when replacing the heart in a way you can’t be when replacing the brain.
Let me put it this way: The reason we talk of “brain death” is not because the brain is the seat of our identity but because it’s tied up with our identity in ways other organs are not. If the brain is beyond repair, typically the human being is beyond saving, even if the rest of the body is viable. So I don’t think the brain houses identity. In a sense, it’s just another organ, and, to the degree that that is true, a brain transplant wouldn’t be more problematic (logically) than a heart transplant, provided the dynamics underlying our behaviour could be somehow preserved. This is an extremely borderline case though.
So I’m not saying that you need to preserve your brain in order to preserve your identity. However, in the situation being discussed, nothing survives. It’s a clear case of death (we have a corpse) and then a new being is created from a description. This is quite different from organ replacement! What I’m objecting to is the idea that I am information or can be “transformed” or “converted” into information.
What you’re saying, as far as I can tell, is that you care more about “preserving” a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life. These are very strange values to have—but I wish you luck!
Though, now that I think about it...
Wait up. On your account, why should we call those things (memory loss, personality change, loss of cognitive ability) “loss of identity”? If something that has my memories, personality, and cognitive abilities doesn’t have my identity, then it seems to follow that something lacking those things doesn’t lack my identity.
It seems that on your account those things are no more “loss of identity” than losing an arm or a kidney.
It’s the loss of faculties that constitutes the loss of identity, but faculties aren’t transferable. For example, a ball might lose its bounciness if it is deflated and regain it if it is reinflated, but there’s no such thing as transferring bounciness from one ball to another or one ball having the bounciness of another. The various faculties that constitute my identity can be lost and sometimes regained but cannot be transferred or stored. They have no separate existence.
Ah, gotcha. Yeah, here again, I just can’t imagine why I ought to care.
I mean, I agree that the attributes can’t be “stored” if I understand what you mean by that. When I remove the air from a ball, there is no more bounciness; when I add air to a ball, there is bounciness again; in between, there is no bounciness. If I do that carefully enough, the bounciness now is in-principle indistinguishable from the bounciness then, but that’s really all I can say. Sure.
That said, while I can imagine caring whether my ball bounces or not, and I can imagine caring whether my ball bounces in particular ways, if my ball bounces exactly the way it did five minutes ago I can’t imagine caring whether what it has now is the same bounciness, or merely in-principle indistinguishable bounciness.
To me, this seems like an obvious case of having distinctions between words that simply don’t map to distinctions between states of the world, and getting too caught up in the words.
By contrast, I can imagine caring whether I have the same faculties that constitute my identity as the guy who went to bed in my room last night, or merely in-principle indistinguishable faculties, in much the same way that I can imagine caring about whether my immortal soul goes to Heaven or Hell after I die. But it pretty much requires that I not think about the question carefully, because otherwise I conclude pretty quickly that I have no grounds whatsoever for caring, any more than I do about the ball.
So, yeah… I’d still much rather be survived by something that has memories, personality, and other identity-constituting faculties which are in-principle indistinguishable from my own, but doesn’t share any of my cells (all of which are now tied up in my rapidly-cooling corpse), than by something that shares all of my cells but loses a significant chunk of those faculties.
Which I suppose gets us back to the same question of incompatible values we had the other day. That is, you think the above is clear, but that it’s a strange preference for me to have, and you’d prefer the latter case, which I find equally strange. Yes?
Well, I would say the question of whether the ball had the “same” bounciness when you filled it back up with air would either mean just that it bounces the same way (i.e., has the same amount of air in it) or is meaningless. The same goes for your faculties. I don’t think the question of whether you’re the same person when you wake up as when you went to sleep—absent your being abducted and replaced with a doppelgänger—is meaningful. What would “sameness” or “difference” here mean? That seems to me to be another case of conceiving of your faculties as something object-like, but in this case one set disappears and is replaced by another indistinguishable set. How does that happen? Or have they undergone change? Do they change without there being any physical change? With the ball we let the air out, but what could happen to me in the night that changes my identity? If I merely lost and regained my faculties in the night, they wouldn’t be different and it wouldn’t make sense to say they were indistinguishable either (except to mean that I have suffered no loss of faculties).
It’s correct that two balls can bounce in the same way, but quite wrong to think that if I replace one ball with the other (that bounces in the same way) I have the same ball. That’s true regardless of how many attributes they share in common: colour, size, material composition, etc. I can make them as similar as I like and they will never become the same! And so it goes with people. So while your doppelgänger might have the same faculties as you, it doesn’t make him the same human being as you, and, unlike you, he wasn’t the person who did X on your nth birthday, etc, and no amount of tinkering will ever make it so. Compare: I painstakingly review footage of a tennis ball bouncing at Wimbledon and carefully alter another tennis ball to make it bounce in just the same way. No amount of effort on my part will ever make it the ball I saw bounce at Wimbledon! Not even the finest molecular scan would do the trick. Perhaps that is the scenario you prefer, but, you’re quite right, I find it very odd.
I’m content to say that, though I’d also be content to say that sufficient loss of faculties (e.g., due to a stroke while I slept) can destroy my identity, making me no longer the same person. Ultimately I consider this a question about words, not about things.
Well, physical change is constant in living systems, so the whole notion of “without physical change” is somewhat bewildering. But I’m not assuming the absence of any particular physical change.
Sure, that’s fine. I don’t insist otherwise.
I just don’t think the condition you refer to as “being the same person” is a condition that matters. I simply don’t care whether they’re the same person or not, as long as various other conditions obtain. Same-person-ness provides no differential value on its own, over and above the sum of the value of the various attributes that it implies. I don’t see any reason to concern myself with it, and I think the degree to which you concern yourself with it here is unjustified, and the idea that there’s some objective sense in which it’s valuable is just goofy.
Again: so what? Why should I care? I don’t claim that your understanding of sameness is false, nor do I claim it’s meaningless, I just claim it’s valueless. OK, he’s not the same person. So what? What makes sameness important?
To turn it around: suppose I am informed right now that I’m not the same person who did X on Dave’s 9th birthday, that person died in 2012 and I’m a duplicate with all the same memories, personality, etc. I didn’t actually marry my husband, I didn’t actually buy my house, I’m not actually my dog’s owner, I wasn’t actually hired to do my job.
This is certainly startling, and I’d greet such a claim with skepticism, but ultimately: why in the world should I care? What difference does it make?
Prefer to what?
So, as above, I’m informed that I’m actually a duplicate of Dave.
Do I prefer this state of affairs to the one where Dave didn’t die in 2012 and I was never created? No, not especially… I’m rather indifferent between them.
Do I prefer this state of affairs to the one where Dave died in 2012 and I was never created? Absolutely!
Do I prefer this state of affairs to the one where Dave continued to live and I was created anyway? Probably not, although the existence of two people in 2013 who map in such detailed functional ways to one person in 2012 will take some getting used to.
Similarly: I am told I’m dying, and given the option of creating such a duplicate. My preferences here seem symmetrical. That is:
Do I prefer that option to not dying and not having a duplicate? No, not especially, though the more confident I am of the duplicate’s similarity to me the more indifferent I become.
Do I prefer it to dying and not having a duplicate? Absolutely!
Do I prefer it to having a duplicate and not-dying? Probably not, though it will take some getting used to.
Which of those preferences seem odd to you? What is odd about them?
The preferences aren’t symmetrical. Discovering that you’re a duplicate involves discovering that you’ve been deceived or that you’re delusional, whereas dying is dying. From the point of view of the duplicate, what you’re saying amounts to borderline solipsism; you don’t care if any of your beliefs, memories, etc, match up with reality. You think being deluded is acceptable as long as the delusion is sufficiently complete. From your point of view, you don’t care about your survival, as long as somebody is deluded into thinking they’re you.
There’s no delusion or deception involved in any of the examples I gave.
In each example the duplicate knows it’s the duplicate, the original knows it’s the original; at no time does the duplicate believe it’s the original. The original knows it’s going to die. The duplicate does not believe that its memories reflect events that occurred to its body; it knows perfectly well that those events occurred to a different body.
Everyone in each of those examples knows everything relevant.
No, this isn’t true. There are lots of scenarios in which I would greatly prefer my survival to someone being deluded into thinking that they’re me after my death. And, as I said above, the scenarios I describe don’t involve anyone being deluded about anything; the duplicate knows perfectly well that it’s the duplicate and not the original.
If the duplicate says “I did X on my nth birthday” it’s not true since it didn’t even exist. If I claim that I met Shakespeare you can say, “But you weren’t even born!” So what does the duplicate say when I point out that it didn’t exist at that time? “I did but in a different body” (or “I was a different body”)? That implies that something has been transferred. Or does it say, “A different body did, not me”? But then it has no relationship with that body at all. Or perhaps it says, “The Original did X on their nth birthday and the Original has given me permission to carry on its legacy, so if you have a question about those events, I am the authority on them now”? It gets very difficult to call this “memory.” I suppose you could say that the duplicate doesn’t have the original’s memories but rather has knowledge of what the original did, but then in what sense is it a duplicate?
Correct.
When talking to you, or someone who shares your attitude, my duplicate probably says something like “You’re right, of course. I’m in the habit of talking about my original’s experiences as though they’re mine, because I experience them as though they were, and both I and my original are perfectly happy talking that way and will probably keep doing so. But technically speaking you’re quite correct… I didn’t actually do X on my 9th birthday, nor did I have a 9th birthday to do anything on in the first place. Thanks for pointing that out.”
Which is closest to your last option, I suppose.
Incidentally, my duplicate likely does this in roughly the same tone of voice that an adoptive child might say analogous things when someone corrects their reference to “my parents” by claiming that no, their parents didn’t do any of that, their adoptive parents did. If you were to infer a certain hostility from that tone, you would not be incorrect.
It’s not difficult for me to call this a memory at all… it’s the original’s memory, which has been copied to and is being experienced by the duplicate. But if you’d rather come up with some special word for that to avoid confusion with a memory experienced by the same body that formed it in the first place, that’s OK with me too. (I choose not to refer to it as “knowledge of what the original did”, both because that’s unwieldy and because it ignores the experiential nature of memory, which I value.)
Sufficient similarity to the original. Which is what we typically mean when we say that X is a duplicate of Y.
“I’m in the habit of talking about my original’s experiences as though they’re mine, because I experience them as though they were” appears to be a form of delusion to me. If somebody went around pretending to be Napoleon (answering to the name Napoleon, talking about having done the things Napoleon did, etc) and answered all questions as if they were Napoleon but, when challenged, reassured you that of course they’re not Napoleon, they just have the habit of talking as if they are Napoleon because they experience life as Napoleon would, would you consider them delusional? Or does anything go as long as they’re content?
To be honest, I’m not really sure what you mean by the experience of memory. Mental imagery?
It has nothing to do with being content. If someone believes they are Napoleon, I consider them deluded, whether they are content or not.
Conversely, if they don’t believe they are Napoleon, I don’t consider them deluded, whether they are content or not. In the example you give, I would probably suspect the person of lying to me.
More generally: before I call something a delusion, I require that someone actually believe it’s true.
At this moment, you and I both know that I wrote this comment… we both have knowledge of what I did.
In addition to that, I can remember writing it, and you can’t. I can have the experience of that memory; you can’t.
The experience of memory isn’t the same thing as the knowledge of what I did.
Though on further consideration, I suppose I could summarize our whole discussion as about whether I am content or not… the noun, that is, not the adjective. I mostly consider myself to be content, and would be perfectly content to choose distribution networks for that content based on their functional properties.
Lots of things survive. They just don’t happen to be part of the original body.
Yes, I think given your understanding of those words, that’s entirely correct. My life with that “description” deleted is not worth very much to me; the continued development of that “description” is worth a lot more.
Right back atcha.
Suppose a small chunk of your brain is replaced with its functional equivalent. Is the resulting chimera less “you”? If so, how can one tell?
Not necessarily less you. Why even replace? What about augment?
Add an extra “blank” artificial brain. Keep refining the design until the biological brain reports feeling an expanded memory capacity, or enhanced clarity of newly formed memories, or enhanced cognition. Let the old brain assimilate this new space in whatever pattern and at whatever rate come naturally to it, however poorly understood that process still is.
With the patient’s consent, reversibly switch off various functional units in the biological region of the brain and see if the function is reconstituted elsewhere in the synthetic region. If it is, this is evidence that the technique is working. If not, the technique may need to be refined. At some point the majority of the patient’s brain activity is happening in the synthetic regions. Temporarily induce unconsciousness in the biological part; during and after the biological part’s unconsciousness, interview the patient about what subjective changes they felt, if any.
Agreement between the external measurements and the patient’s subjective assessment that continuity was preserved would be strong evidence to me that such a technique is a reliable means to migrate a consciousness from one substrate to another.
Migration should only be sped up as a standard practice to the extent that it is justified by ample data from many different volunteers (or patients whose condition requires it) undergoing incrementally faster migrations, measured as above.
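For what it’s worth, here is a minimal sketch of the decision loop I have in mind, written as toy Python only to make the logic explicit. Every function in it is a hypothetical placeholder and the “measurements” are simulated numbers, since none of this technology exists; the one point being illustrated is that the migration never advances past a functional unit unless the objective check and the patient’s subjective report both agree.

```python
import random

def function_reconstituted(unit: str) -> bool:
    """Hypothetical: reversibly switch off `unit` in the biological region and
    report whether the synthetic region takes over its function."""
    return random.random() < 0.8  # simulated measurement outcome

def patient_reports_continuity() -> bool:
    """Hypothetical: interview the patient afterwards and record whether they
    report any subjective discontinuity."""
    return random.random() < 0.9  # simulated interview outcome

def migrate(units: list[str], max_refinements: int = 10) -> bool:
    """Advance the migration one functional unit at a time; refine and retry
    whenever either the objective or the subjective check fails."""
    for unit in units:
        for attempt in range(max_refinements):
            if function_reconstituted(unit) and patient_reports_continuity():
                print(f"{unit}: migrated after {attempt + 1} attempt(s)")
                break
            # Refine the synthetic design before the next attempt (placeholder).
        else:
            print(f"{unit}: refinement budget exhausted; halting migration")
            return False
    return True

if __name__ == "__main__":
    migrate(["hippocampal unit", "visual cortex unit", "language unit"])
```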
As far as cryonics goes, this approach necessarily requires actual revival before migration, and so rules out plastination and similar destructive techniques.
I agree with all this, except maybe the last bit. Once the process of migration is well understood and if it is possible to calculate the structure of the synthetic part from the structure of the biological part, this knowledge can be used to skip the training steps and build a synthetic brain from a frozen/plastinated one, provided the latter still contains enough structure.
Anyway, my original question was to scientism, who rejected anything like that because
It’s not clear to me whether scientism believes that the mind is a process that cannot take place on any substrate other than a brain, or whether he shares my and (I think) Mitchell Porter’s more cautious point of view that our consciousness can in principle exist somewhere other than a brain, but that we don’t yet know enough about neuroscience to be confident about what properties such a system must have.
I, for one, would be sceptical of there being no substrate possible at all except the brain, because it’s a strong unsupported assertion on the same order as the (perhaps straw-man) patternist assertion that binary computers are an adequate substrate (or the stronger-still assertion that any computational substrate is adequate).
If I have understood scientism’s comments, they believe neither of the possibilities you list in your first paragraph.
I think they believe that whether or not a mind can take place on a non-brain substrate, our consciousness(es) cannot exist somewhere other than a brain, because they are currently instantiated in brains, and cannot be transferred (whether to another brain, or anything else).
This does not preclude some other mind coming to exist on a non-brain substrate.
Here is a thought experiment that might not be a thought experiment in the foreseeable future:
Grow some neurons in vitro and implant them in a patient. Over time, will that patient’s brain recruit those neurons?
If so, the more far-out experiment I earlier proposed becomes a matter of scaling up this experiment. I’d rather be on a more resilient substrate than neurons, but I’ll take what I can get.
I’m betting that the answer to this will be “yes”, following a line of reasoning similar to the one Drexler used to defend the plausibility of nanotech: the existence of birds implied the feasibility of aircraft, and the existence of ribosomes implies the feasibility of nanotech. Likewise, neurogenesis, which occurs during development and which over the last few decades has been found to occur in adulthood as well, implies the feasibility of replacing damaged brains or augmenting healthy ones.
Yes, I agree with all of this.
I’m unconvinced that cryostasis will preserve the experience of continuity. Because of the thought experiment with the non-destructive copying of a terminal patient, I am convinced that plastination will fail to preserve it (I remain the unlucky copy, and in addition to that, dead).
My ideal scenario is one where I can undergo a gradual migration before I actually need to be preserved by either method.
link?
http://lesswrong.com/lw/iul/looking_for_opinions_of_people_like_nick_bostrom/9x47
Ah, ok:
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder? But you are OK with a gradual replacement of your brain, just not with a complete one? How fast would the parts need to be replaced to preserve this “experience of continuity”? Do drugs which knock you unconscious break continuity enough to be counted as making you into not-you?
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
Nothing so melodramatic, but I wouldn’t use them. UNLESS they were in fact manipulating my wave function directly, somehow causing my amplitude to increase in one place and decrease in another. Probably not what the screenplay writers had in mind, though.
Maybe even a complete one eventually. If the vast majority of my cognition has migrated to the synthetic regions, it may not seem as much of a loss when parts of the biological brain break down and have to be replaced. Hard to speak on behalf of my future self with only what I know now. This is speculation.
This is an empirical question that could be answered if/when it becomes possible to perform for real the thought experiment I described (the second one, with the blank brain being attached to the existing brain).
Continuity. I’m not opposed to non-destructive copies of me, but I don’t see them as inherently beneficial to me either.
No.
The point of cryonics is that it could lead to revival.
Obviously. That’s not what Mitchell_Porter’s post was about, though.
You seem to think that creating a description of the structure of a brain is necessarily a destructive process. I don’t know of any reason to assume that. If a non-destructive scan exists and is carried out, then there’s no “death”, howsoever defined. Right?
But anyway, let’s grant your implicit assumption of a destructive scan, and suppose that this process has actually occurred to your brain, and “something that functions like [your] brain” has been created. Who is the resulting being? Who do they think they are? What do they do next? Do they do the sorts of things you would do? Love the people you love?
I grant that you do not consider this hypothetical being you—after all, you are hypothetically dead. But surely there is no one else better qualified to answer these questions, so it’s you that I ask.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.
Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things. Convincing it that it has the same identity as a dead person is just one among many strange tricks you could play on it.
Fair enough.
I’m positing that the being has been informed about how it was created; it knows that it is not the being it remembers, um, being. So it has the knowledge to say of itself, if it were so inclined, “I am a being purposefully constructed ab initio with all of the memories and cognitive capacities of scientism, RIP.”
Would it be so inclined? If so, what would it do next? (Let us posit that it’s a reconstructed embodied human being.) For example, would it call up your friends and introduce itself? Court your former spouse (if you have one), fully acknowledging that it is not the original you? Ask to adopt your children (if you have any)?
It would have false memories, etc, and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc, simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in. A situation that would probably horrify everybody involved.
I suppose, to put myself in that situation, I would, willpower permitting, have the false memories removed (if possible), adopt a different name and perhaps change my appearance (or at least move far away). But I see the situation as unimaginably cruel. You’re creating a being—presumably a thinking, feeling being—and tricking it into thinking it did certain things in the past, etc, that it did not do. Even if it knows that it was created, that still seems like a terrible situation to be in, since it’s essentially a form of (inflicted) mental illness.
!!… I hope you mean explicit memory but not implicit memory—otherwise there wouldn’t be much of a being left afterwards...
For a certain usage of “tricking” this is true, but that usage is akin to the way optical illusions trick one’s visual system rather than denoting a falsehood deliberately embedded in one’s explicit knowledge.
I would point out that the source of all the hypothetical suffering in this situation would be the being’s (and your) theory of identity rather than the fact of anyone’s identity (or lack thereof). If this isn’t obvious, just posit that the scenario is conceivable but hasn’t actually happened, and some bastard deceives you into thinking it has—or even just casts doubt on the issue in either case.
Of course that doesn’t mean the theory is false—but I do want to say that from my perspective it appears that the emotional distress would come from reifying a naïve notion of personal identity. Even the word “identity”, with its connotations of singleness, stops being a good one in the hypothetical.
Have you seen John Weldon’s animated short To Be? You might enjoy it. If you watch it, I have a question for you: would you exculpate the singer of the last song?
I take it that my death and the being’s ab initio creation are both facts. These aren’t theoretical claims. The claim that I am “really” a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn’t amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being’s behaviour, its ability to fool others, and its own confused state doesn’t make any difference to the argument. It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I would also point out that there are people who are quite content with severe mental illness. You might have delusions of being Napoleon and be quite happy about it. Perhaps such a person would argue that “I feel like Napoleon and that’s good enough for me!”
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
In the hypothetical, your brain has stopped functioning. Whether this is sufficient to affirm that you died is precisely the question at issue. Personally, it doesn’t matter to me if my brain’s current structure is the product of biological mechanisms operating continuously by physical law or is the product of, say, a 3D printer and a cryonically-created template—also operating by physical law. Both brains are causally related to my past self in enough detail to make the resulting brain me in every way that matters to me.
Curious that she used the transmission+reconstruction module while committing “suicide”, innit? She didn’t have to—it was a deliberate choice.
The brain constructed in your likeness is only normatively related to your brain. That’s the point I’m making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively dispersed through human society and history.) It’s a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a set of rules.
This is obvious when you consider how the procedure might be developed. We would have to have a great many trial runs and would decide when we had got it right. That decision would be based on a set of normative criteria, a set of measurements. So it would only be “successful” according to a set of human norms. The procedure would be a cultural practice rather than a physical process. But there is just no such thing as something physical being “converted” or “transformed” into a description (or information or a pattern or representation) - because these are all normative concepts—so such a step cannot possibly conserve identity.
As I said, the only way the person in cryonic suspension can continue to live is through a standard process of revival—that is, one that doesn’t involve the step of being described and then having a likeness created—and if such a revival doesn’t occur, the person is dead. This is because the process of being described and then having a likeness created isn’t any sort of revival at all and couldn’t possibly be. It’s a logical impossibility.
My response to this is very simple, but it’s necessary to know beforehand that the brain’s operation is robust to many low-level variations, e.g., thermal noise that triggers occasional random action potentials at a low rate.
Suppose our standard is that we get it right when the reconstructed brain is more like the original brain just before cryonic preservation than a brain after a good night’s sleep is like that same brain before sleeping—within the subset of brain features that are not robust to variation. Further suppose that that standard is achieved through a process that involves a representation of the structure of the brain. Albeit that the representation is indeed a “cultural practice”, the brute fact of the extreme degree of similarity of the pre- and post-process brains would seem much more relevant to the question of preservation of any aspect of the brain worthy of being called “identity”.
ETA: Thinking about this a bit more, I see that the notion of “similarity” in the above argument is also vulnerable to the charge of being a mere cultural practice. So let me clarify that the kind of similarity I have in mind basically maps to reproducibility of the input-output relation of a low-level functional unit, up to, say, the magnitude of thermal noise. Reproducibility in this sense has empirical content; it is not merely culturally constructed.
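To make that notion concrete, here is a minimal sketch, under the entirely hypothetical assumption that we could drive the original functional unit and its reconstructed counterpart with the same battery of stimuli and record their outputs: call the reconstruction reproducible if the two response patterns differ by no more than the unit’s own intrinsic (thermal) variability. The response functions below are simulated stand-ins, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_SCALE = 0.05  # assumed magnitude of intrinsic thermal noise

def respond_original(stimuli: np.ndarray) -> np.ndarray:
    """Hypothetical: measured responses of the original unit."""
    return np.tanh(stimuli) + rng.normal(0, NOISE_SCALE, stimuli.shape)

def respond_reconstructed(stimuli: np.ndarray) -> np.ndarray:
    """Hypothetical: measured responses of the reconstructed unit."""
    return np.tanh(stimuli) + rng.normal(0, NOISE_SCALE, stimuli.shape)

def reproducible(n_trials: int = 1000, tolerance: float = 3 * NOISE_SCALE) -> bool:
    """True if, across a battery of stimuli, the two units' outputs differ by
    less than a tolerance set by the intrinsic noise level."""
    stimuli = rng.normal(size=(n_trials, 16))
    diff = respond_original(stimuli) - respond_reconstructed(stimuli)
    return float(np.abs(diff).mean()) < tolerance

print("reproducible within noise:", reproducible())
```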
I don’t see how using more detailed measurements makes it any less a cultural practice. There isn’t a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of “identity” according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).
I’m not saying it’s not a cultural practice. I’m saying that the brute fact of the extreme degree of similarity (and resulting reproducibility of functionality) of the pre- and post-process brains seems like a much more relevant fact. I don’t know why I should care that the process is a cultural artifact if the pre- and post-process brains are so similar that for all possible inputs, they produce the same outputs. That I can get more brains out than I put in is a feature, not a bug, even though it makes the concept of a singular identity obsolete.
I don’t know what the word “clear” in that sentence actually means.
If you’re simply asserting that what has occurred in this example is your death, then no, it isn’t clear, any more than if I assert that I actually died 25 minutes ago, that’s clear evidence that Internet commenting after death is possible.
I’m not saying you’re necessarily wrong… I mean, sure, it’s possible that you’re correct, and in your hypothetical scenario you actually are dead, despite the continued existence of something that acts like you and believes itself to be you. It’s also possible that in my hypothetical scenario I’m correct and I really did die 25 minutes ago, despite the continued existence of something that acts like me and believes itself to be me.
I’m just saying it isn’t clear… in other words, that it’s also possible that one or both of us is confused/mistaken about what it means for us to die and/or remain alive.
In the example being discussed we have a body. I can’t think of a clearer example of death than one where you can point to the corpse or remains. You couldn’t assert that you died 25 minutes ago—since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things) - but if somebody else asserted that you died 25 minutes ago then they could presumably point to your remains, or explain what happened to them. If you continued to post on the Internet, that would be evidence that you hadn’t died. Although the explanation that someone just like you was continuing to post on the Internet would be consistent with your having died.
OK, I think I understand what you mean by “clear” now. Thanks.
Now, if I understand the “two particles of the same type are identical” argument in the context of uploading/copying, it shouldn’t be relevant, because two huge multi-particle configurations are not going to be identical. You cannot measure the state of each particle in the original and you cannot precisely force each particle in the copy into that state. And no degree of similarity is enough: the two of you would have to be identical in the sense that two electrons are identical, if we’re talking about Feynman paths over which your amplitude is summed. And that rules out digital simulations altogether.
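(For concreteness, the textbook statement of the identity condition I’m invoking: for two genuinely identical particles, the amplitude for a process is the sum over the indistinguishable assignments,

$$\mathcal{A}_{\text{total}} \;=\; A(1 \to a,\ 2 \to b) \;\pm\; A(1 \to b,\ 2 \to a),$$

with the plus sign for bosons and the minus sign for fermions. That interference term exists only because the two particles agree in every quantum number; no two macroscopic brain configurations come anywhere near satisfying such a condition.)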
But I didn’t really expect any patternists to defend the first way you could be right in my post. Whereas, the second way you might be right amounts to, by my definition, proving to me that I am already dead or that I die all the time. If that’s the case, all bets are off, everything I care about is due for a major reassessment.
I’d still want to know the truth of course. But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth. Only a plausible hypothesis for which (or against which) I have not yet seen much evidence.
Can you taboo “level of death” for me? Also, what sorts of experiences would count as evidence for or against the proposition?
Discontinuity. Interruption of inner narrative. You know how the last thing you remember was puking over the toilet bowl and then you wake up on the bathroom floor and it’s noon? Well, that but minus everything that goes after the word “bowl”.
Or the technical angle—whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
Darn it. I asked two questions—sorry, my mistake—and I find I can’t unequivocally assign your response to one question or the other (or different parts of your response to both).
I guess this would be my attempt to answer your first question: articulating what I meant without the phrase “level of death”.
My answer to your second question is tougher. Somewhat compelling evidence that whatever I value has been preserved would be simultaneously experiencing life from the point of view of two different instances. This could be accomplished perhaps through frequent or continuous synchronization of the memories and thoughts of the two brains. Another convincing experience (though less so) would be gradual replacement of individual biological components that would have otherwise died, with time for the replacement parts to be assimilated into the existing system of original and earlier-replaced components.
If I abruptly woke up in a new body with all my old memories, and the old me is not around, I would be nearly certain that they have experienced death; if they are still around (without any link between our thoughts), I would be nearly certain that I am the only one who has tangibly benefited from whatever rejuvenating/stabilizing effects the replication/uploading might have, and that they have not. If I awoke from cryostasis in my old body (or head, as the case may be), even then I would only ever be 50% sure that the individual entering cryostasis is not experiencing waking up (unless there was independent evidence of weak activity in my brain during cryostasis).
The way for me to be convinced, not that continuity has been preserved but rather that my desire for continuity is impossible, does double duty with my answer to the first question:
[Unambiguous, de-mystifying neurological characterization of...]
Actually, let’s start by supposing a non-destructive scan.
The resulting being is someone who is identical to you, but diverges at the point where the scan was performed.
Let’s say your problem is that you have a fatal illness. You’ve been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that’s great for him, but you are still suffering from a fatal illness. One of the brainscan technicians helpfully suggests they could euthanize you, but if that’s a solution to your problem then why bother getting scanned and copied in the first place? You could achieve the same subjective outcome by going straight to the euthanasia step.
Now, getting back to the destructive scan. The only thing that’s different is you skip the conversation with the technician and go straight to the euthanasia step. Again, an outcome you could have achieved more cheaply with a bottle of sleeping pills and a bottle of Jack Daniels.
After the destructive scan, a being exists that remembers being me up to the point of that scan, values all the things I value, loves the people I love and will be there for them. Regardless of anyone’s opinion about whether that being is me, that’s an outcome I desire, and I can’t actually achieve it with a bottle of sleeping pills and a bottle of Jack Daniels. Absolutely the same goes for the non-destructive scan scenario.
...maybe you don’t have kids?
Oh, I do, and a spouse.
I want to accomplish both goals: have them be reunited with me, and for myself to experience being reunited with them. Copying only accomplishes the first goal, and so is not enough. So long as there is any hope of actual revival, I do not wish to be destructively scanned nor undergo any preservation technique that is incompatible with actual revival. I don’t have a problem with provably non-destructive scans. Hell, put me on Gitorious for people to download, just delete the porn first.
My spouse will probably outlive me, and hopefully if my kids have to get suspended at all, it will be after they have lived to a ripe old age. So everyone will have had some time to adjust to my absence, and would not be too upset about having to wait a little longer. Otherwise, we could form a pact where we revive whenever the conditions for the last of our revivals are met. I should remember to run this idea by them when they wake up. Well, at least the ones of them who talk in full sentences.
Or maybe this is all wishful thinking—someone who thinks that what we believe is silly will just fire up the microtome and create some uploads that are “close enough” and tell them it was for their own good.
Sticking with the non-destructive scan + terminal illness scenario: before the scan is carried out, do you anticipate (i) experiencing being reunited with your loved ones; (ii) requesting euthanasia to avoid a painful terminal disease; (iii) both (but not both simultaneously for the same instance of “you”)?
Probably (iii) is the closest to the truth, but without euthanasia. I’d just eventually die, fighting it to the very end. Apparently this is an unusual opinion or something, because people have such a hard time grasping this simple point: what I care about is the continuation of my inner narrative for as long as possible. Even if it’s filled with suffering. I don’t care. I want to live. Forever if possible, for an extra minute if that’s all there is.
A copy may accomplish my goal of helping my family, but it does absolutely nothing to accomplish my goal of survival. As a matter of self-preservation I have to set the record straight whenever someone claims otherwise.
Okay—got it. What I don’t grasp is why you would care about the inner narrative of any particular instance of “you” when the persistence of that instance makes negligible material difference to all the other things you care about.
To put it another way: if there’s only a single instance of “me”—the only extant copy of my particular values and abilities—then its persistence cannot be immaterial to all the other things I care about, and that’s why I currently care about my persistence more-or-less unconditionally. If there’s more than one copy of “me” kicking around, then “more-or-less unconditionally” no longer applies. My own internal narrative doesn’t enter into the question, and I’m confused as to why anyone else would give their own internal narrative any consideration.
ETA: So, I mean, the utility function is not up for grabs. If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like “patternists could be wrong” refer to an orthogonal issue.
Maybe the same “why” as why some people care more about their families than about other people’s families, and why some people care more about themselves than about strangers. What I can’t grasp is how one would manage to so thoroughly eradicate or suppress such a fundamental drive.
What, kin selection? Okay, let me think through the implications...
I don’t understand the response. Are you saying that the reason you don’t have an egocentric world view and I do is in some way because of kin selection?
You said,
And why do people generally care more about their families than about other people’s families? Kin selection.
Patternists/computationalists make the, in principle, falsifiable assertion that, if I opt for plastination and am successfully reconstructed, I will wake up in the future just as I will if I opt for cryonics and am successfully revived without copying/uploading/reconstruction. My assertion is that if I opt for plastination I will die and be replaced by someone hard or impossible to distinguish from me. Since it takes more resources to maintain cryosuspension, and probably a more advanced technology level to thaw and reanimate the patient, if the patternists are right, plastination is a better choice. If I’m right, it is not an acceptable choice at all.
The problem is that, so far, the only being in the universe who could falsify this assertion is the instantiation of me that is writing this post. Perhaps with increased understanding of neuroscience, there will be additional ways to test the patternist hypothesis.
I’m not sure what you mean here. Probability statements aren’t falsifiable; Popper would have had a rather easier time if they were. Relative frequencies are empirical, and statements about them are falsifiable...
At the degree of resolution we’re discussing, talking about you/not-you at all seems like a blegg/rube distinction. It’s just not a useful way of thinking about what’s being contemplated, which in essence is that certain information-processing systems are running, being serialized, stored, loaded, and run again.
Oops, you’re right. I have now revised it.
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?
And yet, what is to be done if your utility function is dissolved by the truth? How do we know that there even exist utility functions that retain their currency down to the level of timeless wave functions?
I haven’t thought really deeply about that, but it seems to me that if Egan’s Law doesn’t offer you some measure of protection and also a way to cope with failures of your map, you’re probably doing it wrong.
A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.
What do we mean by “normality”? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can’t/won’t make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on more subtle errors in your model of normality and they too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such a universality?
I can see the point of “it all adds up to normality” if you’re encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of “what goal do we pursue if we find proof that all our goals are bullshit”?
My vague notion is that if your goals don’t have ramifications in the realm of the normal, you’re doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.
Keep in mind that your “map” here is not one fixed notion about the way the world works. It’s a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you’re not sure whether “patternists” (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now.
It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.
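As a toy illustration of what coping with that uncertainty right now could look like (every number below is invented purely for the example; this is a sketch, not a recommendation): weight each preservation option both by your credence in the patternist view and by how much you would value a reconstruction that may or may not be you.

```python
# Toy expected-utility comparison under uncertainty about patternism.
# All probabilities and values are made up for illustration only.

p_patternism = 0.5          # credence that a reconstruction counts as survival
value_survival = 1.0        # value of genuine survival
value_copy_only = 0.2       # value of a copy existing if you yourself do not survive

p_plastination_works = 0.6  # assumed chance plastination + reconstruction succeeds
p_cryonics_revival = 0.2    # assumed chance of cryonic revival without any copying step

eu_plastination = p_plastination_works * (
    p_patternism * value_survival + (1 - p_patternism) * value_copy_only
)
eu_cryonics = p_cryonics_revival * value_survival

print(f"EU(plastination) = {eu_plastination:.2f}")
print(f"EU(cryonics)     = {eu_cryonics:.2f}")
```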