Edit: This comment misinterpreted the intended meaning of the post.
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I… don’t think this is necessarily what @EuanMcLean meant? At the risk of conflating his own perspective and ambivalence on this issue with my own, this is a question of personal identity and whether the computationalist perspective, generally considered a “reasonable enough” assumption to almost never be argued for explicitly on LW, is correct. As I wrote a while ago on Rob’s post:
As TAG has written a number of times, the computationalist thesis seems not to have been convincingly (or even concretely) argued for in any LessWrong post or sequence (including Eliezer’s Sequences). What has been argued for, over and over again, is physicalism, and then more and more rejections of dualist conceptions of souls.
That’s perfectly fine, but “souls don’t exist and thus consciousness and identity must function on top of a physical substrate” is very different from “the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain’s electronic circuit,” and the latter has never been supported by compelling explanations or evidence. This is despite the fact that the particular conclusions that have become part of the ethos of LW about stuff like brain emulation, cryonics, etc. are necessarily reliant on the latter, not the former.
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of… what is physically there (which, to the best of our current understanding, can be represented through Quantum Mechanical probability amplitudes), not what conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections. As I have mentioned, that is a mere model that represents a very lossy compression of what is going on; it is not the same as the real thing, and conflating the two is an error that has been going on here for far too long. Of course, it very well might be the case that Rob and the computationalists are right about these issues, but the explanation up to now should make it clear why it is on them to provide evidence for their conclusion.
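To make the “lossy compression” point in that excerpt a bit more concrete, here is a toy sketch (entirely hypothetical: the coarse_grain and micro_step functions and the numbers are made up for illustration, not a claim about real neural or quantum dynamics). The idea is just that a coarse-graining which identifies two distinct fine-grained states can assign different descriptions to their successors, so the coarse-grained level is not automatically causally closed.

```python
# Toy illustration only (hypothetical functions and numbers, not a model of real neurons):
# a lossy coarse-graining maps continuous "membrane potentials" to binary firing states.

def coarse_grain(potentials, threshold=0.5):
    """Lossy abstraction: keep only whether each unit is at or above threshold."""
    return tuple(p >= threshold for p in potentials)

def micro_step(potentials, coupling=0.6):
    """Made-up fine-grained dynamics: each unit drifts toward its neighbor's value."""
    n = len(potentials)
    return tuple(
        (1 - coupling) * potentials[i] + coupling * potentials[(i + 1) % n]
        for i in range(n)
    )

# Two distinct micro-states that the abstraction cannot tell apart...
state_a = (0.51, 0.10, 0.90)
state_b = (0.99, 0.40, 0.55)
assert coarse_grain(state_a) == coarse_grain(state_b)  # both (True, False, True)

# ...whose successors the abstraction *does* tell apart, so the coarse level
# alone does not determine its own future.
print(coarse_grain(micro_step(state_a)))  # (False, True, True)
print(coarse_grain(micro_step(state_b)))  # (True, False, True)
```

Of course this says nothing about whether the brain’s relevant dynamics actually behave this way; it only illustrates why “the model runs” and “the model is causally closed at that level” are separate claims.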
I recognize you wrote in response to me a while ago that you “find these kinds of conversations to be very time-consuming and often not go anywhere.” I understand this, and I sympathize to a large extent: I also find these discussions very tiresome, which became part of why I ultimately did not engage too much with some of the thought-provoking responses to the question I posed a few months back. So it’s totally ok for us not to get into the weeds of this now (or at any point, really). Nevertheless, for the sake of it, I think the “everyday experience” thermostat example does not seem like an argument in favor of computationalism over physicalism-without-computationalism, since the primary generator of my intuition that my identity would be the same in that case is the literal physical continuity of my body throughout that process. I just don’t think there is a “prosaic” (i.e., bodily-continuity-preserving) analogue or intuition pump to the case of WBE or similar stuff in this respect.
Anyway, in light of footnote 10 in the post (“The question of whether such a simulation contains consciousness at all, of any kind, is a broader discussion that pertains to a weaker version of CF that I will address later on in this sequence”), which to me draws an important distinction between a brain-simulation having some consciousness/identity versus having the same consciousness/identity as that of whatever (physically-instantiated) brain it draws from, I did want to say that this particular post seems focused on the latter and not the former, which seems quite decision-relevant to me:
jbash: These various ideas about identity don’t seem to me to be things you can “prove” or “argue for”. They’re mostly just definitions that you adopt or don’t adopt. Arguing about them is kind of pointless.
sunwillrise: I absolutely disagree. The basic question of “if I die but my brain gets scanned beforehand and emulated, do I nonetheless continue living (in the sense of, say, anticipating the same kinds of experiences)?” seems the complete opposite of pointless, and the kind of conundrum in which agreeing or disagreeing with computationalism leads to completely different answers.
Perhaps there is a meaningful linguistic/semantic component to this, but in the example above, it seems understanding the nature of identity is decision-theoretically relevant for how one should think about whether WBE would be good or bad (in this particular respect, at least).
I should probably let EuanMcLean speak for themselves but I do think “literally the exact same sequence of thoughts in the exact same order” is what OP is talking about. See the part about “causal closure”, and “predict which neurons are firing at t1 given the neuron firings at t0…”. The latter is pretty unambiguous IMO: literally the exact same sequence of thoughts in the exact same order.
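A minimal sketch of what that claim amounts to (a toy example for illustration only, not code or notation from the post): if the neuron-level description is causally closed, then the firing pattern at t1 is a function of the firing pattern at t0 alone, so replaying the same deterministic update rule from the same initial state produces literally the same sequence of states in the same order.

```python
# Toy sketch (not from the post): a causally closed, deterministic neuron-level update.
# "Which neurons fire at t1" is a function of "which neurons fire at t0" alone.

def step(firing, weights, threshold=1.0):
    """One update: neuron i fires iff its summed input from currently firing neurons reaches threshold."""
    n = len(firing)
    return tuple(
        sum(weights[i][j] for j in range(n) if firing[j]) >= threshold
        for i in range(n)
    )

def run(initial, weights, steps):
    """Replay the dynamics from an initial firing pattern."""
    states = [initial]
    for _ in range(steps):
        states.append(step(states[-1], weights))
    return states

weights = [
    [0.0, 1.2, 0.0],
    [0.0, 0.0, 1.2],
    [1.2, 0.0, 0.0],
]
start = (True, False, False)

# Determinism: two replays from the same state give literally the same trajectory,
# in the same order.
assert run(start, weights, 10) == run(start, weights, 10)
```

Any stochasticity or unmodeled physical influence breaks that guarantee, which is exactly what the causal-closure assumption is doing work to rule out.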
I definitely didn’t write anything here that amounts to a general argument for (or against) computationalism. I was very specifically responding to this post. :)
I don’t think this is too related to the OP, but in regard to your exchange with jbash:
I think there’s a perspective where “personal identity” is a strong intuition, but a misleading one—it doesn’t really (“veridically”) correspond to anything at all in the real world. Instead it’s a bundle of connotations, many of which are real and important. Maybe I care that my projects and human relationships continue, that my body survives, that the narrative of my life is a continuous linear storyline, that my cherished memories persist, whatever. All those things veridically correspond to things in the real world, but (in this perspective) there isn’t some core fact of the matter about “personal identity” beyond that bundle of connotations.
I think jbash is saying (within this perspective) that you can take the phrase “personal identity”, pick whatever connotations you care, and define “personal identity” as that. And then your response (as I interpret it) is that no, you can’t do that, because there’s a core fact of the matter about personal identity, and that core fact of the matter is very very important, and it’s silly to define “personal identity” as pointing to anything else besides that core fact of the matter.
So I imagine jbash responding that “do I nonetheless continue living (in the sense of, say, anticipating the same kind of experiences)?” is a confused question, based on reifying misleading intuitions around “I”. It’s a bit like saying “in such-and-such a situation, will my ancestor spirits be happy or sad?”
I’m not really defending this perspective here, just trying to help explain it, hopefully.
I appreciate your response, and I understand that you are not arguing in favor of this perspective. Nevertheless, since you have posited it, I have decided to respond to it myself and expand upon why I ultimately disagree with it (or at the very least, why I remain uncomfortable with it because it doesn’t seem to resolve my confusions).
I think revealed preferences show I am a huge fan of explanations of confusing questions that claim the concepts we are reifying are ultimately inconsistent/incoherent, and that instead of hitting our heads against the wall over and over, we should take a step back and ponder the topic at a more fundamental level first. So I am certainly open to the idea that “do I nonetheless continue living (in the sense of, say, anticipating the same kind of experiences)?” is a confused question.
But, as I see it, there are a ton of problems with applying this general approach in this particular case. First of all, if anticipated experiences are an ultimately incoherent concept that we cannot analyze without first (unjustifiably) reifying a theory-laden framework, how precisely are we to proceed from an epistemological perspective? When the foundation of ‘truth’ (or at least, what I conceive of it to be) is based around comparing and contrasting what we expect to see with what we actually observe experimentally, doesn’t the entire edifice collapse once the essential constituent piece of ‘experiences’ breaks down? Recall the classic (and eternally underappreciated) paragraph from Eliezer:
I pause. “Well . . .” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’ ”
What exactly do we do once we give up on precisely pinpointing the phrases “I believe”, “my [...] hypotheses”, “surprised”, “my predictions”, etc.? Nihilism, attractive as it may be to some from a philosophical or ‘contrarian coolness’ perspective, is not decision-theoretically useful when you have problems to deal with and tasks to accomplish. Note that while Eliezer himself is not what he considers a logical positivist, I think I… might be?
I really don’t understand what “best explanation”, “true”, or “exist” mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.
This isn’t just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (what seems to me to be) free-floating sense, I don’t understand what it can mean to have evidence for or against such a proposition. So I don’t understand how I am even supposed to ever justifiably change my mind on this topic, even if I were to accept it as something worth discussing on the object-level.
Everything I believe, my whole theory of epistemology and everything else logically downstream of it (aka, virtually everything I believe), relies on the thesis (axiom, if you will) that there is a ‘me’ out there doing some sort of ‘prediction + observation + updating’ in response to stimuli from the outside world. I get that this might be like reifying ghosts in a Wentworthian sense when you drill down on it, but I still have desires about the world, dammit, even if they don’t make coherent sense as concepts! And I want them to be fulfilled regardless.
And, moreover, one of those preferences is maintaining a coherent flow of existence, avoiding changes that would be tantamount to death (even if they are not as literal as ‘someone blows my brains out’). As a human being, I have preferences over what I experience too, not just over what state the random excitations of quantum fields in the Universe are in at some point past my expiration date. As far as I can see, the hard problem of consciousness (i.e., the nature of qualia) has not come close to being solved; any answer to it would have to give me a practical handbook for answering the initial questions I posed to jbash.