I wrote up about a page-long reply, then realized it probably deserves its own posting. I’ll see if I can get to that in the next day or so. There’s a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified) to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It’s not a minor point, though: whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...
What relevance does personal identity have to TDT? TDT doesn’t depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.
It has relevance for the basilisk scenario, which I’m not sure I should say any more about.
When you write up the post, you might want to say a few words about what it means for one of these views to be “correct” or “incorrect.”
Ok I will, but that part is easy enough to state here: I mean correct in the reductionist sense. The simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.
Mm. I’m not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.
Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.
If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it’s me, knows what I know, etc. This is, as you say, an issue of great moral concern… I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we’ve just screwed up big time.
Conversely, if it turns out that pattern or causal continuity is the correct answer, then there’s no problem.
Therefore it’s important to discover which of those facts is true of the world.
Yes? This follows from your view? (If not, I apologize; I don’t mean to put up strawmen, I’m genuinely misunderstanding.)
If so, your view is also that if we want to know whether that’s the case or not, we should look for the simplest answer to the question “what does my personal identity comprise?” that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)
Yes?
EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.
Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.
I don’t know what “computation” or “computational continuity” means if it’s considered to be separate from causal continuity, and I’m not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow ‘computations’ right now, it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it’s distinct from causal continuity.
(shrug) It’s Mark’s term and I’m usually willing to make good-faith efforts to use other people’s language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn’t. I have no idea why he considers that distinction important to personal identity, though… as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven’t confirmed that, though.
Hypothesis: consciousness is what a physical interaction feels like from the inside.
Importantly, it is a property of the interacting system, which can have various degrees of coherence—a concept distinct from quantum coherence, and one I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimally conscious as you can get (lots of random interactions, but all short-lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain, which has the high degree of coherent experience that we call “consciousness” or “self.”
I know this sounds like making thinking an ontologically basic concept. It’s rather the reverse—I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I’m not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity has its own problems, which make me suspect it is not the final, ultimate answer...
Hypothesis: consciousness is what a physical interaction feels like from the inside. ... consciousness is the experience of organized physical interactions.
How do you explain the existence of the phenomenon of “feeling like” and of “experience”?
I agree that the grandparent has circumvented addressing the crux of the matter; however, I feel (heh) that the notion of “explain” often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible; that description then is the “explanation”.
While we would, e.g., expect or hope for some non-contradictory set of descriptions applying to both gravity and quantum phenomena (and we’d eat a large complexity penalty for it, since complex but accurate descriptions always beat simple but inaccurate ones; Occam’s Razor applies only to choosing among fitting, not-yet-falsified descriptions), as soon as we’ve found some pinned-down description in some precise language, there’s no guarantee—or, strictly speaking, need—of an even simpler explanation.
A world running according to currently en-vogue physics, plus a box which cannot be described as an extension of said physics, but only in some other way, could in fact be fully explained, with no further explanans for the explanandum.
It seems pretty straightforward to note that there’s no way to “derive” phenomena such as “feeling like” in the current physics framework, except of course to describe which states of matter/energy correspond to which qualia.
Such a description could be the explanation, with nothing further to be explained:
If it empirically turned out that a specific kind of matter needs to be arranged in the specific pattern of a vertebrate brain to correlate to qualia, that would “explain” consciousness. If it turned out (as we all expect) that the pattern alone suffices, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Regardless, either description (if it turned out to be empirically sound) would be the explanation.
I also wonder, what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum of “values n1 … nk for parameters p1 … pk correlate with qualia x”?
How do you explain “feeling like” and “experience” in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? “Information patterns” alone is an inadequate answer—that’s at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3lb information processing network that is my brain. Somewhere along the line, the subjective experience of “consciousness” arises. Where, exactly, would you propose that happens?
We know (from our own subjective experience) that something we call “consciousness” exists at the scale of the entire brain. If you assume that the workings of the brain are fully explained by its parts and their connections, and those parts explained by their sub-components and designs, etc., you eventually reach the ontologically basic level of quarks and leptons. Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons. So what precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before mere organic matter becomes a conscious mind?
It sounds ridiculous, but if you assume that quarks and leptons are “conscious,” or rather that consciousness is the interaction of these various ontologically primitive, fundamental particles, a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g. a teleporter or halted simulation) whether loss of personal identity occurs, which has implications for the morality of real situations encountered in the construction of an AI.
The apparent organization of the brain is in the map, not the territory.
What do you mean by this? Are fMRIs a big conspiracy?
Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons.
This description applies equally to all objects. When you describe the brain this way, you leave out all its interesting characteristics, everything that makes it different from other blobs of interacting quarks and leptons.
What I’m saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we’re talking about very high level abstractions. Yes, they are useful abstractions, primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is a soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale there is very little signal relative to noise; however, at the micrometer and millisecond scale general trends start to emerge, phenomena which form the substrate of our computation. But these high level abstractions don’t actually exist—they are just average approximations over time of lower level, noisy interactions.
I assume you would agree that a normal adult human brain experiences a subjective feeling of consciousness that persists from moment to moment. I also think it’s a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?
Speaking of gradations, certain animals can’t recognize themselves in a mirror. If you use self-awareness as a metric, as was argued elsewhere, does that mean they’re not conscious? What about insects, which operate with a more distributed neural system? Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?
Yes, this objection applies equally to all objects. That’s precisely my point. Brains are not made of any kind of “mind stuff”—that’s substance dualism, which I reject. Furthermore, minds don’t have a subjective experience separate from what is physically explainable—that’s epiphenomenalism, similarly rejected. “Minds exist in information patterns” is a mysterious answer—information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.
I see only two reductionist paths forward: (1) posit a new, fundamental law by which, at some aggregate level of complexity or organization, a computational substrate becomes conscious. How and why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or, (2) accept that, like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal, quantum Planck-level of conscious experience. Yes, that means that even a rock and a stapler have some level of conscious experience—barely distinguishable from thermal noise, but nonzero—but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.
I also think it’s a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?
This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits a microsoft word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such microsoft word nature. It is silly to zoom in to quarks and leptons looking for the true essence of microsoft word. This is the way computations work—a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious, it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.
“Minds exist in information patterns” is a mysterious answer—information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.
Minds exist in some algorithms (“information pattern” sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand microsoft word at the level of quarks in a CPU executing assembly language, but it’s much more useful to understand it in terms of functions and algorithms.
You’ve completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.
just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed they are one and the same thing, just viewed from different perspectives.
The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.
I thought dualism meant you thought that there was ontologically basic consciousness stuff separate from ordinary matter?
I think the mind should be reduced to algorithms, and biochemistry is an implementation detail. This may make me a dualist by your usage of the word.
I think that it’s equally silly to ask, “where is the microsoft-word-ness” about a subset of transistors in your CPU as it is to ask “where is the consciousness” about a subset of neurons in your brain. I see this as describing how non-ontologically-basic consciousness can be produced by non-conscious stuff.
You’ve completely missed / ignored my point, again.
Apologies; does the above address your point? If not I’m confused about your point.
I’m arguing that if you think the mind can be reduced to algorithms implemented on a computational substrate, then it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts. After all, the algorithms themselves are also reducible to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.
The only way to preserve the common intuition that “it takes (simulation of) a brain or equivalent to produce a mind” is to posit some form of dualism. I don’t think it is silly to ask “where is the microsoft-word-ness” about a subset of a computer—you can, for example, point to the regions of memory and disk where the spellchecker is located, and say “this is the part that matches user input against tables of linguistic data,” just as we point to regions of the brain and say “these are your language processing centers.”
The experience of having a single, unified me directing my conscious experience is an illusion—it’s what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually, though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact, as I’ve tried to point out, the divisions between higher level abstractions and their messy implementations are in the map, not the territory.
To assert that “this level of algorithmic complexity is a mind, and below that are mere machines” is a retreat to dualism, though you may not yet see it that way. What you are asserting is that there is this ontologically basic mind-ness which spontaneously emerges when an algorithm has reached a certain level of complexity, but which is not the aggregation of smaller phenomena.
I think we have really different models of how algorithms and their sub-components work.
it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts.
Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm, they do a discrete chunk of its work. It’s just madness to say that, e.g., your language processing center is 57% conscious.
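To make that concrete, here is a minimal Python sketch of the same point (my own illustration, not code from anyone in this thread; the divide helper is an extra subroutine I added alongside add and multiply to keep the series simple):

```python
# Illustrative sketch only: a pi-approximating program built from tiny
# arithmetic subroutines. None of the subroutines has any "computes-pi-ness";
# each just does a discrete chunk of the containing algorithm's work.

def add(a: float, b: float) -> float:
    return a + b

def multiply(a: float, b: float) -> float:
    return a * b

def divide(a: float, b: float) -> float:
    return a / b

def approximate_pi(terms: int = 100_000) -> float:
    """Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        term = divide(sign, add(multiply(2.0, k), 1.0))  # sign / (2k + 1)
        total = add(total, term)
        sign = multiply(sign, -1.0)
    return multiply(4.0, total)

print(approximate_pi())  # ~3.14158; asking which subroutine is "57% pi" is a category error
```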
The experience of having a single, unified me directing my conscious experience is an illusion...
I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you’ll get to an algorithm that is conscious while none of its subroutines are.
If this makes me a dualist then I’m a dualist, but that doesn’t feel right. I mean, the only way you can really explain a thing is to show how it arises from something that’s not like it in the first place, right?
I think we have different models of what consciousness is. In your pi example, the multiplier has multiply-ness, and the adder has add-ness properties, and when combined together in a certain way you get computes-pi-ness. Likewise our minds have many, many, many different components, each of which—somehow, someway—has some small experiential quale, and summing them together yields the human condition.
Through brain damage studies, for example, we have descriptions of what it feels like to live without certain mental capabilities. I think you would agree with this, but for others reading, take this thought experiment: imagine that I were to systematically shut down portions of your brain, or, in simulation, delete regions of your memory space. For the purpose of the argument I do it slowly over time, in relatively small amounts, cleaning up dangling references so the whole system doesn’t shut down. Certainly as time goes by your mental functionality is reduced, and you stop being capable of having experiences you once took for granted. But at what point, precisely, do you stop experiencing qualia of any form at all? When you’re down to just a billion neurons? A million? A thousand? When you’re down to just one processing region? Is one tiny algorithm on a single circuit enough?
Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you’ll get to an algorithm that is conscious while none of its subroutines are.
What is the minimal conscious system? It’s easy and perhaps accurate to say “I don’t know.” After all, neither one of us knows enough neural and cognitive science to make this call, I assume. But we should be able to answer this question: “if presented with criteria for a minimally conscious system, what would convince me of their validity?”
If this makes me a dualist then I’m a dualist, but that doesn’t feel right. I mean, the only way you can really explain a thing is to show how it arises from something that’s not like it in the first place, right?
Eliezer’s post on reductionism is relevant here. In a reductionist universe, anything and everything is fully defined by its constituent elements—no more, no less. There’s a popular phrase that has no place in reductionist theories: “the whole is greater than the sum of its parts.” Typically what this actually means is that you failed to count the “parts” correctly: a parts list should also include spatial configurations and initial conditions, which together imply the dynamic behaviors as well. For example, a pulley is more than a hunk of metal and some rope, but it is fully defined if you specify how the metal is shaped, how the rope is threaded through it and fixed to objects with knots, how the whole contraption is oriented with respect to gravity, and the procedure for applying rope-pulling force. Combined with the fundamental laws of physics, this is a fully reductive explanation of a rope-pulley system which is the sum of its fully-defined parts.
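As a toy sketch of that parts-list idea (my own illustration under idealized assumptions of frictionless pulleys and a massless rope, not part of the original comment), note how the configuration, counted as one of the parts, is what pins down the behavior:

```python
# A toy "fully-defined parts" illustration: once the parts list includes the
# configuration (how the rope is threaded), the system's behavior follows
# from physical law with nothing left over. Idealized: no friction, no rope mass.

from dataclasses import dataclass

G = 9.81  # m/s^2, standard gravity

@dataclass
class PulleySystem:
    load_mass_kg: float            # the load being lifted
    supporting_rope_segments: int  # the threading: configuration, not extra "stuff"

    def mechanical_advantage(self) -> int:
        # For an ideal block and tackle, the advantage equals the number of
        # rope segments supporting the load.
        return self.supporting_rope_segments

    def required_pull_newtons(self) -> float:
        return self.load_mass_kg * G / self.mechanical_advantage()

# Same material parts, different threading -> different behavior.
single = PulleySystem(load_mass_kg=100.0, supporting_rope_segments=1)
double = PulleySystem(load_mass_kg=100.0, supporting_rope_segments=2)
print(single.required_pull_newtons())  # 981.0 N
print(double.required_pull_newtons())  # 490.5 N
```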
And so it goes with consciousness. Unless we are comfortable with the mysterious answers provided by dualism—or empirical evidence, like confirmation of psychic phenomena, compels us to go there—we must demand an explanation of consciousness fully as the aggregation of smaller processes.
When I look at explanations of the workings of the brain, starting with the highest level psychological theories and neural structure, and work my way all the way down the abstraction hierarchy to individual neural synapses and biochemical pathways, nowhere along the way do I see an obvious place to stop and say “here is where consciousness begins!” Likewise, I can start from the level of mere atoms and work my way up to the full neural architecture, without finding any step that adds something which could be consciousness, but which isn’t fundamentally like the levels below it. But when you get to the highest level, you’ve described the full brain without finding consciousness anywhere along the way.
I can see how this leads otherwise intelligent philosophers like David Chalmers to epiphenomenalism. But I’m not going to go down that path, because the whole situation is the result of mental confusion.
The Standard Rationalist Answer is that mental processes are information patterns, nothing more, and that consciousness is an illusion, end of story. But that still leaves me confused! It’s not like free will, for example, where because of the mind projection fallacy I think I have free will due to how a deterministic decision theory algorithm feels from the inside. I get that. No, the answer of “that subjective experience of consciousness isn’t real, get over it” is unsatisfactory, because if I’m not conscious, how am I experiencing thinking in the first place? Cogito ergo sum.
However, there is a way out. I went looking for a source of consciousness because I, like nearly every other philosopher, assumed that there was something special and unique which sets brains aside as having minds, which other more mundane objects—like rocks and staplers—do not possess. That seems so obviously true, but honestly I have no real justification for that belief. So let’s try negating it. What is possible if we don’t exclude mundane things from having minds too?
Well, what does it feel like to be a quark and a lepton exchanging a photon? I’m not really sure, but let’s call that approximately the minimum possible “experience”, and say that for the duration of their continuous interaction over time, the two particles share a “mind”. Arrange a number of these objects together and you get an atom, which itself also has a shared/merged experience so long as the particles remain in bonded interaction. Arrange a lot of atoms together and you get an electrical transistor. Now we’re finally starting to get to a level where I have some idea of what the “shared experience of being a transistor” would be (rather boring, by my standards), and more importantly, it’s clear how that experience is aggregated together from its constituent parts. From here, computing theory takes over as more complex interdependent systems are constructed, each merging experiences together into a shared hive mind, until you reach the level of the human being or AI.
Are you at least following what I’m saying, even if you don’t agree?
That was a very long comment (thank you for your effort) and I don’t think I have the energy to exhaustively go through it.
I believe I follow what you’re saying. It doesn’t make much sense to me, so maybe that belief is false.
I think the fact that you can start with a brain, which is presumably conscious, and zoom in all the way without finding the consciousness boundary, and then start with a quark, which is presumably not conscious, and zoom all the way out to the entire brain, also without finding a consciousness barrier, means that the best we can do at the moment is set upper and lower bounds.
A minimally conscious system—say, something that can convince me that it thinks it is conscious. “echo ‘I’m conscious!’” doesn’t quite cut it, things that recognize themselves in mirrors probably do, and I could go either way on the stuff in between.
I think your reductionism is a little misapplied. My pi-calculating program develops a new property of pi-computation when you put the adders and multipliers together right, but is completely described in terms of adders and multipliers. I expect consciousness to be exactly the same; it’ll be completely described in terms of qualia generating algorithms (or some such), which won’t themselves have the consciousness property.
This is hard to see because the algorithms are written in spaghetti code, in the wiring between neurons. In computer terms, we have access to the I/O system and all the gates in the CPU, but we don’t currently know how they’re connected. Looking at more or fewer of the gates doesn’t help, because the critical piece of information is how they’re connected and what algorithm they implement.
My guess (P=.65) is that qualia are going to turn out to be something like vectors in a feature space. Under this model, clearly systems incapable of representing such a vector can’t have any qualia at all. Rocks and single molecules, for example.
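For what it’s worth, here is one way to picture that guess (a purely hypothetical sketch; the feature axes, the quale helper, and the encoding are invented just to show the shape of the claim, not an established model):

```python
# Highly speculative illustration of "qualia as feature-space vectors".
# All names here are my own invention, purely to show what the claim would
# mean mechanically.

import numpy as np

# Suppose a system's possible experiences live in an N-dimensional feature
# space; a particular quale is just a vector in that space.
FEATURES = ["redness", "warmth", "loudness", "pain"]  # hypothetical axes

def quale(**intensities: float) -> np.ndarray:
    """Encode an experience as a vector over the hypothetical feature axes."""
    return np.array([intensities.get(f, 0.0) for f in FEATURES])

seeing_a_sunset = quale(redness=0.9, warmth=0.6)
stubbing_a_toe = quale(pain=0.8, loudness=0.3)

# Similar experiences are nearby vectors; dissimilar ones are far apart.
print(np.linalg.norm(seeing_a_sunset - stubbing_a_toe))

# On this model, a rock has no machinery that represents *any* such vector,
# so it gets no qualia at all, rather than a tiny scaled-down amount.
```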
How do you explain “feeling like” and “experience” in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc.
I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:
Somewhere along the line, the subjective experience of “consciousness” arises.
However much you assert that there must be an explanation, I see here no advance towards actually having one. What does it mean to attribute consciousness to subatomic particles and rocks? Does it predict anything, or does it only predict that we could make predictions about teleporters and simulations if we had a physical explanation of consciousness?
Hypothesis: consciousness is what a physical interaction feels like from the inside.
I would imagine that consciousness (in the sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following the inexplicable urges and instincts, the more conscious you are.
Yes, that is not only 100% accurate, but describes where I’m headed.
I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.
What is there to predict here?
If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.
Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy to understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside), one which predicts identity transfer or murder + birth. That would be enough for me, at least as long as there are no competing, equally simple theories.
What is there to predict here? If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.
Well, you certainly won’t experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.
But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it’s about the presence or absence of anything differentially observable by anyone) that Mark Friendebach has, walking on Mars.
So, let me rephrase the question: what observation is there to predict here?
So, let me rephrase the question: what observation is there to predict here?
That’s not the direction I was going with this. It isn’t about empirical observation, but rather about aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.
Is it moral to use a teleporter? From what I can tell, that depends on whether the person’s subjective experience is terminated in the process. From the utility point of view the outcomes are very nearly the same—you’ve murdered one person, but given “birth” to an identical copy in the process. However if the original, now destroyed person didn’t want to die, or wouldn’t have wanted his clone to die, then it’s a net negative.
As I said elsewhere, the teleporter is the easiest way to think of this, but the result has many other implications from general anesthesia, to cryonics, to Pascal’s mugging and the basilisk.
I wrote up about a page-long reply, then realized it probably deserves its own posting. I’ll see if I can get to that in the next day or so. There’s a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified) to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It’s not a minor point though, whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...
What relevance does personal identity have to TDT? TDT doesn’t depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.
It has relevance for the basilisk scenario, which I’m not sure I should say any more about.
When you write up the post, you might want to say a few words about what it means for one of these views to be “correct” or “incorrect.”
Ok I will, but that part is easy enough to state here: I mean correct in the reductionist sense. The simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.
Mm. I’m not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.
Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.
If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it’s me, knows what I know, etc. This is, as you say, an issue of great moral concern… I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we’ve just screwed up big time.
Conversely, if it turns out that pattern or causal continuity is the correct answer, then there’s no problem.
Therefore it’s important to discover which of those facts is true of the world.
Yes? This follows from your view? (If not, I apologize; I don’t mean to put up strawmen, I’m genuinely misunderstanding.)
If so, your view is also that if we want to know whether that’s the case or not, we should look for the simplest answer to the question “what does my personal identity comprise?” that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)
Yes?
EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.
I don’t know what “computation” or “computational continuity” means if it’s considered to be separate from causal continuity, and I’m not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow ‘computations’ right now, it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it’s distinct from causal continuity.
(shrug) It’s Mark’s term and I’m usually willing to make good-faith efforts to use other people’s language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn’t. I have no idea why he considers that distinction important to personal identity, though… as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven’t confirmed that, though.
Hypothesis: consciousness is what a physical interaction feels like from the inside.
Importantly, it is a property of the interacting system, which can have various degrees of coherence—a different concept than quantum coherence, which I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimum-conscious as you can get (lots of random interactions, but all short lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain which experiences a high degree of coherent experience that we call “consciousness” or “self.”
I know this sounds like making thinking an ontologically basic concept. It’s rather the reverse—I am building the experience of thinking up from physical phenomenon: consciousness is the experience of organized physical interactions. But I’m not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computation continuity), then it does reduce to causal continuity. But causal continuity does have it’s problems which make me suspect it as not being the final, ultimate answer...
How do you explain the existence of the phenomenon of “feeling like” and of “experience”?
I agree that the grandparent has circumvented addressing the crux of the matter, however I feel (heh) that the notion of “explain” often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible, then that description is the “explanation”.
While we would e.g. expect/hope for there to be some non-contradictory set of descriptions applying to both gravity and quantum phenomena (for which we’d eat a large complexity penalty, since complex but accurate descriptions always beat out simple but inaccurate descriptions; Occam’s Razor applies only to choosing among fitting/not yet falsified descriptions), as soon as we’ve found some pinned-down description in some precise language, there’s no guarantee—or strictly speaking, need—of an even simpler explanation.
A world running according to currently en-vogue physics, plus a box which cannot be described as an extension of said physics, but only in some other way, could in fact be fully explained, with no further explanans for the explanandum.
It seems pretty straightforward to note that there’s no way to “derive” phenomena such as “feeling like” in the current physics framework, except of course to describe which states of matters/energy correspond to which qualia.
Such a description could be the explanation, with nothing further to be explained:
If it empirically turned out that a specific kind matter needs to be arranged in the specific pattern of a vertebrate brain to correlate to qualia, that would “explain” consciousness. If it turned out (as we all expect) that the pattern alone sufficies, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Regardless, either description (if it turned out to be empirically sound) would be the explanation.
I also wonder, what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum of “values n1 … nk for parameters p1 … pk correlate with qualia x”?
How do you explain “feeling like” and “experience” in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? “Information patterns” alone is an inadequate answer—that’s at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3lb information processing network that is my brain. Somewhere along the line, the subjective experience of “consciousness” arises. Where, exactly, would you propose that happens?
We know (from our own subjective experience) that something we call “consciousness” exists at the scale of the entire brain. If you assume that the workings of the brain is fully explained by its parts and their connections, and those parts explained by their sub-components and designs, etc. you eventually reach the ontologically basic level of quarks and leptons. Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons. So what is the precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before simply organic matter becomes a conscious mind?
It sounds ridiculous, but if you assume that quarks and leptons are “conscious,” or rather that consciousness is the interaction of these various ontologically primitive, fundamental particles, a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g. a teleporter or halted simulation) whether loss of personal identity occurs, which has implications for morality of real situations encountered in the construction of an AI.
What do you mean by this? Are fMRIs a big conspiracy?
This description applies equally to all objects. When you describe the brain this way, you leave out all its interesting characteristics, everything that makes it different from other blobs of interacting quarks and leptons.
What I’m saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we’re taking about very high level abstractions. Yes, they are useful abstractions primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale, there is very little signal to noise, however at the micrometer and millisecond scale general trends start to emerge, phenomenon which form the substrate of our computation. But these high level abstractions don’t actually exist—they are just average approximations over time of lower level, noisy interactions.
I assume you would agree that a normal adult brain in a human experiences a subjective feeling of consciousness that persists from moment-to-moment. I also think it’s a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or construction of an AI does it become conscious? At what point does it mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?
Speaking of gradations, certain animals can’t recognize themselves in a mirror. If you use self-awareness as a metric as was argued elsewhere, does that mean they’re not conscious? What about insects, which operate with a more distributed neural system. Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?
Yes, this objection applies equally to all objects. That’s precisely my point. Brains are not made of any kind of “mind stuff”—that’s substance dualism which I reject. Furthermore, minds don’t have a subjective experience separate from what is physically explainable—that’s epiphenomenalism, similarly rejected. “Minds exist in information patterns” is a mysterious answer—information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.
I see only two reductionist paths forward to take: (1) posit a new, fundamental law by which at some aggregate level of complexity or organization, a computational substrate becomes conscious. How & why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or, (2) accept that like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal quantum Plank-level of conscious experience. Yes, that means that even a rock and a stapler experience some level of conscious experience—barely distinguishable from thermal noise, but nonzero—but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.
This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits a microsoft word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such microsoft word nature. It is silly to zoom in to quarks and leptons looking for the true essence of microsoft word. This is the way computations work—a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious, it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.
Minds exist in some algorithms (“information pattern” sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand microsoft word at the level of quarks in a CPU executing assembly language, but it’s much more useful to understand it in terms of functions and algorithms.
You’ve completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.
just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed they are one and the same thing, just viewed from different perspectives.
I thought dualism meant you thought that there was ontologically basic conciousness stuff separate from ordinary matter?
I think the mind should be reduced to algorithms, and biochemistry is an implementation detail. This may make me a dualist by your usage of the word.
I think that it’s equally silly to ask, “where is the microsoft-word-ness” about a subset of transistors in your CPU as it is to ask “where is the consciousness” about a subset of neurons in your brain. I see this as describing how non-ontologically-basic consciousness can be produced by non-conscious stuff.
Apologies; does the above address your point? If not I’m confused about your point.
I’m arguing that if you think the mind can be reduced to algorithms implemented on computational substrate, then it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts. After all, the algorithms themselves too also reducible down to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.
The only way to preserve the common intuition that “it takes (simulation of) a brain or equivalent to produce a mind” is to posit some form of dualism. I don’t think it is silly to ask “where is the microsoft-word-ness” about a subset of a computer—you can for example point to the regions of memory and disk where the spellchecker is located, and say “this is the part that matches user input against tables of linguistic data,” just like we point to regions of the brain and say “this is your language processing centers.”
The experience of having a single, unified me directing my conscious experience is an illusion—it’s what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact as I’ve tried to point out the divisions between higher level abstractions and their messy implementations is in the map, not the territory.
To assert that “this level of algorithmic complexity is a mind, and below that is mere machines” is a retreat to dualism, though you may not yet see it in that way. What you are asserting is that there is this ontologically basic mind-ness which spontaneously emerges when an algorithm has reached a certain level of complexity, but which is not the aggregation of smaller phenomenon.
I think we have really different models of how algorithms and their sub-components work.
Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm, they do a discrete chunk of its work. It’s just madness to say that, e.g., your language processing center is 57% conscious.
I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of conciousness. But yes, I maintain that eventually, you’ll get to an algorithm that is conscious while none of its subroutines are.
If this makes me a dualist then I’m a dualist, but that doesn’t feel right. I mean, the only way you can really explain a thing is to show how it arises from something that’s not like it in the first place, right?
I think we have different models of what consciousness is. In your pi example, the multiplier has multiply-ness, and the adder has add-ness properties, and when combined together in a certain way you get computes-pi-ness. Likewise our minds have many, many, many different components which—somehow, someway—each have a small experiential qualia which when you sum together yield the human condition.
Through brain damage studies, for example, we have descriptions of what it feels like to live without certain mental capabilities. I think you would agree with this, but for others reading take this thought experiment: imagine that I were to systematically shut down portions of your brain, or in simulation, delete regions of your memory space. For the purpose of the argument I do it slowly over time in relatively small amounts, and cleaning up dangling references so the whole system doesn’t shut down. Certainly as time goes by your mental functionality is reduced, and you stop being capable of having experiences you once took for granted. But at what point, precisely, do you stop experiencing at all qualia of any form? When you’re down to just a billion neurons? A million? A thousand? When you’re down to just one processing region? Is one tiny algorithm on a single circuit enough?
What is the minimal conscious system? It’s easy and perhaps accurate to say “I don’t know.” After all, neither one of us know enough neural and cognitive science to make this call, I assume. But we should be able to answer this question: “if presented criteria for a minimally-conscious-system, what would convince me of its validity?”
Eliezer’s post on reductionism is relevant here. In a reductionist universe, anything and everything is fully defined by its constituent elements—no more, no less. There’s a popular phrase that has no place is reductionist theories: “the whole is greater than the sum of its parts.” Typically what this actually means is that you failed to count the “parts” correctly: a part list should also include spatial configurations and initial conditions, which together imply the dynamic behaviors as well. For example, a pulley is more than a hunk of metal and some rope, but it is fully defined if you specify how the metal is shaped, how the rope is threaded through it and fixed to objects with knots, how the whole contraption is oriented with respect to gravity, and the procedure for applying rope-pulling-force. Combined with the fundamental laws of physics, this is a fully reductive explanation of a rope-pulley system which is the sum of its fully-defined parts.
And so it goes with consciousness. Unless we are comfortable with the mysterious answers provided by dualism, or empirical evidence (say, confirmation of psychic phenomena) compels us to go there, we must demand an explanation that accounts for consciousness fully as the aggregation of smaller processes.
When I look at explanations of the workings of the brain, starting with the highest-level psychological theories and neural structures and working my way all the way down the abstraction hierarchy to individual synapses and biochemical pathways, nowhere along the way do I see an obvious place to stop and say “here is where consciousness begins!” Likewise, I can start from the level of mere atoms and work my way up to the full neural architecture without finding any step that adds something which could be consciousness but which isn’t fundamentally like the levels below it. By the time you reach the highest level, you’ve described the full brain without finding consciousness anywhere along the way.
I can see how this leads otherwise intelligent philosophers like David Chalmers to epiphenomenalism. But I’m not going to go down that path, because the whole situation is the result of mental confusion.
The Standard Rationalist Answer is that mental processes are information patterns, nothing more, and that consciousness is an illusion, end of story. But that still leaves me confused! It’s not like free will, for example, where because of the mind projection fallacy I think I have free will due to how a deterministic decision-theory algorithm feels from the inside. I get that. No, the answer of “that subjective experience of consciousness isn’t real, get over it” is unsatisfactory, because if I am not conscious, how am I experiencing thinking in the first place? Cogito ergo sum.
However, there is a way out. I went looking for a source of consciousness because I, like nearly every other philosopher, assumed that there was something special and unique which sets brains apart as having minds, which other, more mundane objects (like rocks and staplers) do not possess. That seems so obviously true, but honestly I have no real justification for the belief. So let’s try negating it. What becomes possible if we don’t exclude mundane things from having minds too?
Well, what does it feel like to be a quark and a lepton exchanging a photon? I’m not really sure, but let’s call that approximately the minimum possible “experience”, and say that for the duration of their continuous interaction the two particles share a “mind”. Arrange a number of these objects together and you get an atom, which also has a shared, merged experience so long as its particles remain in bonded interaction. Arrange a lot of atoms together and you get an electrical transistor. Now we’re finally starting to get to a level where I have some idea of what the “shared experience of being a transistor” would be (rather boring, by my standards), and more importantly, it’s clear how that experience is aggregated from its constituent parts. From here, computing theory takes over as more complex interdependent systems are constructed, each merging experiences into a shared hive mind, until you reach the level of the human being or AI.
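Purely as an illustration of the structure of this picture (my own framing, not an established theory), the aggregation step could be sketched like this: “experiences” merge whenever their carriers are in sustained interaction, so a system’s experience is just the merged experiences of its interacting parts.

from dataclasses import dataclass, field

@dataclass
class Experience:
    label: str
    parts: list = field(default_factory=list)  # sub-experiences merged into this one

    def describe(self, depth=0):
        # Render the hierarchy of merged experiences, indented by level.
        lines = ["  " * depth + self.label]
        for part in self.parts:
            lines.extend(part.describe(depth + 1))
        return lines

def merge(label, *parts):
    # Interacting systems share a merged experience for as long as they interact.
    return Experience(label, list(parts))

particles = merge("photon exchange", Experience("quark"), Experience("lepton"))
atom = merge("bonded atom", particles, Experience("other particles"))
transistor = merge("transistor", atom, Experience("many more atoms"))
print("\n".join(transistor.describe()))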
Are you at least following what I’m saying, even if you don’t agree?
That was a very long comment (thank you for your effort) and I don’t think I have the energy to exhaustively go through it.
I believe I follow what you’re saying. It doesn’t make much sense to me, so maybe that belief is false.
I think the fact that you can start with a brain, which is presumably conscious, and zoom in all the way without finding the consciousness boundary, and then start with a quark, which is presumably not conscious, and zoom all the way out to the entire brain, also without finding a consciousness boundary, means that the best we can do at the moment is set upper and lower bounds.
A minimally conscious system: say, something that can convince me that it thinks it is conscious. “echo ‘I’m conscious!’” doesn’t quite cut it, things that recognize themselves in mirrors probably do, and I could go either way on the stuff in between.
I think your reductionism is a little misapplied. My pi-calculating program develops a new property of pi-computation when you put the adders and multipliers together right, but is completely described in terms of adders and multipliers. I expect consciousness to be exactly the same; it’ll be completely described in terms of qualia generating algorithms (or some such), which won’t themselves have the consciousness property.
This is hard to see because the algorithms are written in spaghetti code, in the wiring between neurons. In computer terms, we have access to the I/O system and all the gates in the CPU, but we don’t currently know how they’re connected. Looking at more or fewer of the gates doesn’t help, because the critical piece of information is how they’re connected and what algorithm they implement.
My guess (P=.65) is that qualia are going to turn out to be something like vectors in a feature space. Under this model, systems incapable of representing such a vector clearly can’t have any qualia at all. Rocks and single molecules, for example.
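For what it’s worth, here is a rough sketch of how “qualia as vectors in a feature space” might be cashed out; the feature axes and the similarity measure are mine and purely hypothetical. The point is only that a quale is a point in some feature space, so a system that cannot represent such a vector is not a candidate for having qualia at all.

import math

FEATURES = ["brightness", "redness", "warmth", "sharpness"]  # hypothetical axes

def quale(**values):
    # Represent a quale as a vector over the feature axes (missing axes default to 0).
    return [float(values.get(f, 0.0)) for f in FEATURES]

def similarity(a, b):
    # Cosine similarity: how alike two qualia are under this toy model.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

sunset = quale(brightness=0.7, redness=0.9, warmth=0.8)
ember = quale(brightness=0.3, redness=0.8, warmth=0.9)
print(similarity(sunset, ember))  # close to 1.0: similar experiences in this model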
I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:
However much you assert that there must be an explanation, I see here no advance towards actually having one. What does it mean to attribute consciousness to subatomic particles and rocks? Does it predict anything, or does it only predict that we could make predictions about teleporters and simulations if we had a physical explanation of consciousness?
I would imagine that consciousness (in the sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following inexplicable urges and instincts, the more conscious you are.
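A very loose sketch of what “introspecting into your own algorithm” could mean in code (my toy example, not a claim about real self-awareness): a system that can read and report on its own decision procedure rather than merely executing it.

import inspect

class Agent:
    def decide(self, hunger):
        # The "inexplicable urge": eat when hunger crosses a threshold.
        return "eat" if hunger > 0.5 else "wait"

    def introspect(self):
        # The agent examines the source of its own decision rule.
        return inspect.getsource(self.decide)

agent = Agent()
print(agent.decide(0.7))    # follows the urge
print(agent.introspect())   # reports what makes it "tick"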
Yes, that is not only 100% accurate, but describes where I’m headed.
I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.
That is, whether I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.
Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy-to-understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside), one which predicts identity transfer or murder + birth. That would be enough for me, at least as long as there are no competing, equally simple theories.
Well, you certainly won’t experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.
But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it’s about the presence or absence of anything differentially observable by anyone) that Mark Friendebach has, walking on Mars.
So, let me rephrase the question: what observation is there to predict here?
That’s not the direction I was going with this. It isn’t about empirical observation, but rather aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.
Is it moral to use a teleporter? From what I can tell, that depends on whether the person’s subjective experience is terminated in the process. From the utility point of view the outcomes are very nearly the same: you’ve murdered one person, but given “birth” to an identical copy in the process. However, if the original, now-destroyed person didn’t want to die, or wouldn’t have wanted his clone to die, then it’s a net negative.
As I said elsewhere, the teleporter is the easiest way to think of this, but the result has many other implications from general anesthesia, to cryonics, to Pascal’s mugging and the basilisk.
OK. I’m tapping out here. Thanks for your time.