In my last post, I defined a concrete claim that computational functionalists tend to make:
Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain.
From reading this comment, I understand that you mean the following:
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I agree that “practical CF” as thus defined is false—indeed I think it’s so obviously false that this post is massive overkill in justifying it.
But I also think that “practical CF” as thus defined is not in fact a claim that computational functionalists tend to make.
Let’s put aside simulation and talk about an everyday situation.
Suppose you’re the building manager of my apartment, and I’m in my apartment doing work. Unbeknownst to me, you flip a coin. If it’s heads, then you set the basement thermostat to 20°C. If it’s tails, then you set the basement thermostat to 20.1°C. As a result, the temperature in my room is slightly different in the two scenarios, and thus the temperature in my brain is slightly different, and this causes some tiny number of synaptic vesicles to release differently under heads versus tails, which gradually butterfly-effect into totally different trains of thought in the two scenarios, perhaps leading me to make a different decision on some question where I was really ambivalent and going back and forth, or maybe having some good idea in one scenario but not the other.
But in both scenarios, it’s still “me”, and it’s still “my mind” and “my consciousness”. Do you see what I mean?
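(For concreteness, here’s a toy numerical sketch of that butterfly effect: a chaotic logistic-map update standing in for brain dynamics, with the thermostat setting as a hypothetical tiny nudge to the initial state. It illustrates sensitive dependence on initial conditions and is obviously not a model of a brain.)

```python
# Toy sketch (hypothetical, not a brain model): the same chaotic update rule,
# run twice, differing only in a tiny temperature-dependent nudge to the
# initial state. The two trajectories end up completely different.

def run(thermostat_c: float, steps: int = 50) -> float:
    x = 0.5 + thermostat_c * 1e-4   # tiny, temperature-dependent perturbation
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)     # chaotic logistic-map update
    return x

print(run(20.0))  # the "heads" world
print(run(20.1))  # the "tails" world: same rule, wildly different end state
```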
So anyway, when you wrote “A simulation of a human brain on a classical computer…would cause the same conscious experience as that brain”, I initially interpreted that sentence as meaning something more like “the same kind of conscious experience”, just as I would have “the same kind of conscious experience” if the basement thermostat were unknowingly set to 20°C versus 20.1°C.
(And no I don’t just mean “there is a conscious experience either way”. I mean something much stronger than that—it’s my conscious experience either way, whether 20°C or 20.1°C.)
Do you see what I mean? And under that interpretation, I think that the statement would be not only plausible but also a better match to what real computational functionalists usually believe.
The term ‘functionalist’ is overloaded. A lot of philosophical terms are overloaded, but ‘functionalist’ is the most egregiously overloaded of all philosophical terms because it refers to two groups of people with two literally incompatible sets of beliefs:
(1) the people who are consciousness realists and think there’s this well-defined consciousness stuff exhibited by human brains, and also that the way this stuff emerges depends on what computational steps/functions/algorithms are executed (whatever that means exactly)
(2) the people who think consciousness is only an intuitive model, in which case functionalism is kinda trivial and not really a thing that can be proved or disproved, anyway
Unless I’m misinterpreting things here (and OP can correct me if I am), the post is arguing against (1), but you are in (2), which is why you’re talking past each other. (I don’t think this sequence in general is relevant to your personal views, which is what I also tried to say here.) In the definition you rephrased
would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
… consciousness realists will read the ‘thinking’ part as referring to thinking in the conscious mind, not to thinking in the physical brain. So this reads obviously false to you because you don’t think there is a conscious mind separate from the physical brain, and the thoughts in the physical brain aren’t ‘literally exactly the same’ in the biological brain vs. the simulation—obviously! But the (1) group does, in fact, believe in such a thing, and their position does more or less imply that it would be thinking the same thoughts.
I believe this is what OP is trying to gesture at as well with their reply here.
This is kinda helpful but I also think people in your (1) group would agree with all three of: (A) the sequence of thoughts that you think directly corresponds to something about the evolving state of activity in your brain, (B) random noise has nonzero influence on the evolving state of activity in your brain, (C) random noise cannot be faithfully reproduced in a practical simulation.
And I think that they would not see anything self-contradictory about believing all of those things. (And I also don’t see anything self-contradictory about that, even granting your (1).)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
So in that regard: my mental image of computational functionalists in your group (1) would also say things like (D) “If I start 5 executions of my brain algorithm, on 5 different computers, each with a different RNG seed, then they are all conscious (they are all exuding consciousness-stuff, or whatever), and they all have equal claim to being “me”, and of course they all will eventually start having different trains of thought. Over the months and years they might gradually diverge in beliefs, memories, goals, etc. Oh well, personal identity is a fuzzy thing anyway. Didn’t you read Parfit?”
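(Here is a minimal toy sketch of (D), purely hypothetical: the same noise-driven program launched with five different RNG seeds. Every run is an equally valid execution of the one algorithm, yet the exact trajectories drift apart.)

```python
import random

# Hypothetical toy version of (D): one and the same algorithm, started with
# five different RNG seeds. Each run is an equally valid execution, but the
# exact states gradually diverge.

def noisy_run(seed: int, steps: int = 1_000) -> float:
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.gauss(0.0, 1.0)   # noise-driven state update
    return state

for seed in range(5):
    print(seed, noisy_run(seed))   # five divergent end states, same algorithm
```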
But I haven’t read as much of the literature as you, so maybe I’m putting words in people’s mouths.
Hmm. I think that none of this refutes the point I was making, which is that practical CF as defined by OP is a position that many people actually hold,[1] hence OP’s argument isn’t just a strawman/missing the point. (Whether or not the argument succeeds is a different question.)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
I don’t think you have to bring identity into this. (And if you don’t have to, I’d strongly advise leaving it out because identity is another huge rabbit hole.) There are three claims with strictly increasing strength here: C1, digital simulations can be conscious; C2, a digital simulation of a brain exhibits similar consciousness to that brain; and C3, if a simulation of my brain is created, then that simulation is me. I think only C3 is about identity, and OP’s post is arguing against C2. (All three claims are talking about realist consciousness.)

This is also why I don’t think noise matters. Granting all of (A)-(D) doesn’t really affect C2; a practical simulation could work with similar noise and be pseudo-nondeterministic in the same way that the brain is. I think it’s pretty coherent to just ask how similar the consciousness is, under a realist framework (i.e., asking C2), without stepping into the identity hornets’ nest.
[1] A caveat here is that it’s actually quite hard to write down any philosophical position (except illusionism) such that a lot of people will give it a blanket endorsement (again because everyone has slightly different ideas of what different terms mean), but I think OP has done a pretty good job, definitely better than most, in formulating an opinion that at least a good number of people would probably endorse.

Yea, you might be hitting on at least a big generator of our disagreement. Well spotted.

One thing I worry about is that the same disagreement happens with a lot of other users who, unlike Steven, just downvote the post rather than writing a comment.

In general, when I’ve read through the entire LW catalogue of posts with the consciousness tag, I’ve noticed that almost all of the well-received ones take what I call the camp #1 perspective (i.e., they discuss consciousness from an illusionist lens, even if it’s not always stated explicitly). Iirc the only major exceptions are the posts from Eliezer, which, well, are from Eliezer. So it could be that posts which discuss consciousness from a realist PoV consistently receive a certain amount of downvotes from camp #1 people to whom the post just seems like gibberish/a waste of time. I don’t have any data to prove that this is the mechanism, it’s just a guess, but the pattern is pretty consistent. I also think you generally wouldn’t predict this if you just read the comment sections. (And idk if clarifying the perspective would help since no one does it.)
Edit: This comment misinterpreted the intended meaning of the post.
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I… don’t think this is necessarily what @EuanMcLean meant? At the risk of conflating his own perspective and ambivalence on this issue with my own, this is a question of personal identity and whether the computationalist perspective, generally considered a “reasonable enough” assumption to almost never be argued for explicitly on LW, is correct. As I wrote a while ago on Rob’s post:

As TAG has written a number of times, the computationalist thesis seems not to have been convincingly (or even concretely) argued for in any LessWrong post or sequence (including Eliezer’s Sequences). What has been argued for, over and over again, is physicalism, and then more and more rejections of dualist conceptions of souls.
That’s perfectly fine, but “souls don’t exist and thus consciousness and identity must function on top of a physical substrate” is very different from “the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain’s electronic circuit,” and the latter has never been given compelling explanations or evidence. This is despite the fact that the particular conclusions that have become part of the ethos of LW about stuff like brain emulation, cryonics etc are necessarily reliant on the latter, not the former.
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of… what is physically there (which, to the best of our current understanding, can be represented through Quantum Mechanical probability amplitudes), not what conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections. As I have mentioned, that is a mere model that represents a very lossy compression of what is going on; it is not the same as the real thing, and conflating the two is an error that has been going on here for far too long. Of course, it very well might be the case that Rob and the computationalists are right about these issues, but the explanation up to now should make it clear why it is on them to provide evidence for their conclusion.
I recognize you wrote in response to me a while ago that you “find these kinds of conversations to be very time-consuming and often not go anywhere.” I understand this, and I sympathize to a large extent: I also find these discussions very tiresome, which became part of why I ultimately did not engage too much with some of the thought-provoking responses to the question I posed a few months back. So it’s totally ok for us not to get into the weeds of this now (or at any point, really). Nevertheless, for the sake of it, I think the “everyday experience” thermostat example does not seem like an argument in favor of computationalism over physicalism-without-computationalism, since the primary generator of my intuition that my identity would be the same in that case is the literal physical continuity of my body throughout that process. I just don’t think there is a “prosaic” (i.e., bodily-continuity-preserving) analogue or intuition pump to the case of WBE or similar stuff in this respect.
Anyway, in light of footnote 10 in the post (“The question of whether such a simulation contains consciousness at all, of any kind, is a broader discussion that pertains to a weaker version of CF that I will address later on in this sequence”), which to me draws an important distinction between a brain-simulation having some consciousness/identity versus having the same consciousness/identity as that of whatever (physically-instantiated) brain it draws from, I did want to say that this particular post seems focused on the latter and not the former, which seems quite decision-relevant to me:
jbash: These various ideas about identity don’t seem to me to be things you can “prove” or “argue for”. They’re mostly just definitions that you adopt or don’t adopt. Arguing about them is kind of pointless.
sunwillrise: I absolutely disagree. The basic question of “if I die but my brain gets scanned beforehand and emulated, do I nonetheless continue living (in the sense of, say, anticipating the same kinds of experiences)?” seems the complete opposite of pointless, and the kind of conundrum in which agreeing or disagreeing with computationalism leads to completely different answers.
Perhaps there is a meaningful linguistic/semantic component to this, but in the example above, it seems understanding the nature of identity is decision-theoretically relevant for how one should think about whether WBE would be good or bad (in this particular respect, at least).
I should probably let EuanMcLean speak for themselves but I do think “literally the exact same sequence of thoughts in the exact same order” is what OP is talking about. See the part about “causal closure”, and “predict which neurons are firing at t1 given the neuron firings at t0…”. The latter is pretty unambiguous IMO: literally the exact same sequence of thoughts in the exact same order.
I definitely didn’t write anything here that amounts to a general argument for (or against) computationalism. I was very specifically responding to this post. :)
I don’t think this is too related to the OP, but in regard to your exchange with jbash:
I think there’s a perspective where “personal identity” is a strong intuition, but a misleading one—it doesn’t really (“veridically”) correspond to anything at all in the real world. Instead it’s a bundle of connotations, many of which are real and important. Maybe I care that my projects and human relationships continue, that my body survives, that the narrative of my life is a continuous linear storyline, that my cherished memories persist, whatever. All those things veridically correspond to things in the real world, but (in this perspective) there isn’t some core fact of the matter about “personal identity” beyond that bundle of connotations.
I think jbash is saying (within this perspective) that you can take the phrase “personal identity”, pick whatever connotations you care about, and define “personal identity” as that. And then your response (as I interpret it) is that no, you can’t do that, because there’s a core fact of the matter about personal identity, and that core fact of the matter is very very important, and it’s silly to define “personal identity” as pointing to anything else besides that core fact of the matter.
So I imagine jbash responding that “do I nonetheless continue living (in the sense of, say, anticipating the same kind of experiences)?” is a confused question, based on reifying misleading intuitions around “I”. It’s a bit like saying “in such-and-such a situation, will my ancestor spirits be happy or sad?”
I’m not really defending this perspective here, just trying to help explain it, hopefully.
I appreciate your response, and I understand that you are not arguing in favor of this perspective. Nevertheless, since you have posited it, I have decided to respond to it myself and expand upon why I ultimately disagree with it (or at the very least, why I remain uncomfortable with it because it doesn’t seem to resolve my confusions).
I think revealed preferences show I am a huge fan of explanations of confusing questions that claim the concepts we are reifying are ultimately inconsistent/incoherent, and that instead of hitting our heads against the wall over and over, we should take a step back and ponder the topic at a more fundamental level first. So I am certainly open to the idea that “do I nonetheless continue living (in the sense of, say, anticipating the same kind of experiences)?” is a confused question.
But, as I see it, there are a ton of problems with applying this general approach in this particular case. First of all, if anticipated experiences are an ultimately incoherent concept that we cannot analyze without first (unjustifiably) reifying a theory-laden framework, how precisely are we to proceed from an epistemological perspective? When the foundation of ‘truth’ (or at least, what I conceive of it to be) is based around comparing and contrasting what we expect to see with what we actually observe experimentally, doesn’t the entire edifice collapse once the essential constituent piece of ‘experiences’ breaks down? Recall the classic (and eternally underappreciated) paragraph from Eliezer:
I pause. “Well . . .” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’ ”
What exactly do we do once we give up on precisely pinpointing the phrases “I believe”, “my [...] hypotheses”, “surprised”, “my predictions”, etc.? Nihilism, attractive as it may be to some from a philosophical or ‘contrarian coolness’ perspective, is not decision-theoretically useful when you have problems to deal with and tasks to accomplish. Note that while Eliezer himself is not what he considers a logical positivist, I think I… might be?
I really don’t understand what “best explanation”, “true”, or “exist” mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.
This isn’t just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (what seems to me to be a) free-floating sense, I don’t understand what it can mean to have evidence for or against such a proposition. So I don’t understand how I am even supposed to ever justifiably change my mind on this topic, even if I were to accept it as something worth discussing on the object-level.
Everything I believe, my whole theory of epistemology and everything else logically downstream of it (aka, virtually everything I believe), relies on the thesis (axiom, if you will) that there is a ‘me’ out there doing some sort of ‘prediction + observation + updating’ in response to stimuli from the outside world. I get that this might be like reifying ghosts in a Wentworthian sense when you drill down on it, but I still have desires about the world, dammit, even if they don’t make coherent sense as concepts! And I want them to be fulfilled regardless.
And, moreover, one of those preferences is maintaining a coherent flow of existence, avoiding changes that would be tantamount to death (even if they are not as literal as ‘someone blows my brains out’). As a human being, I have preferences over what I experience too, not just over what state the random excitations of quantum fields in the Universe are at some point past my expiration date. As far as I see, the hard problem of consciousness (i.e., the nature of qualia) has not been close to solved; any answer to it would have to give me a practical handbook for answering the initial questions I posed to jbash.

Thanks for the comment, Steven.

Your alternative wording of practical CF is indeed basically what I’m arguing against (although we could interpret different degrees of the simulation having the “exact” same experience, and I think the arguments here tell not only against the strongest versions but also against weaker ones, depending on how strong those arguments are).
I’ll explain a bit more why I think practical CF is relevant to CF more generally.
Firstly, functionalists commonly say things like
Computational functionalism: the mind is the software of the brain. (Piccinini)
Which, when I take it at face value, is saying that there is actually a program being implemented by the brain that is meaningful to point to (i.e. it’s not just a program in the sense that any physical process could be a program if you simulate it (assuming digital physics etc)). That program lives on a level of abstraction above biophysics.
Secondly, computational functionalism, taken at face value again, says that all details of the conscious experience should be encoded in the program that creates it. If this isn’t true, then you can’t say that conscious experience is that program because the experience has properties that the program does not.
Putnam advances an opposing functionalist view, on which mental states are functional states. (SEP)
He proposes that mental activity implements a probabilistic automaton and that particular mental states are machine states of the automaton’s central processor. (SEP)
the mind is constituted by the programs stored and executed by the brain (Piccinini)
I can accept the charge that this still is a stronger version of CF that a number of functionalists subscribe to. Which is fine! My plan was to address quite narrow claims at the start of the sequence and move onto broader claims later on.
I’d be curious to hear which of the above steps you think miss the mark on capturing common CF views.
I guess I shouldn’t put words in other people’s mouths, but I think the fact that years-long trains-of-thought cannot be perfectly predicted in practice because of noise is obvious and uninteresting to everyone, I bet including to the computational functionalists you quoted, even if their wording on that was not crystal clear.
There are things that the brain does systematically and robustly by design, things which would be astronomically unlikely to happen by chance. E.g. the fact that I move my lips to emit grammatical English-language sentences rather than random gibberish. Or the fact that humans wanted to go to the moon, and actually did so. Or the fact that I systematically take actions that tend to lead to my children surviving and thriving, as opposed to suffering and dying.
That kind of stuff, which my brain does systematically and robustly, is what makes me me. My memories, goals, hopes and dreams, skills, etc. The fact that I happened to glance towards my scissors at time 582834.3 is not important, but the robust systematic patterns are.
And the reason that my brain does those things systematically and robustly is because the brain is designed to run an algorithm that does those things. And there’s a mathematical explanation of why this particular algorithm does those remarkable systematic things like invent quantum mechanics and reflect on the meaning of life, and separately, there’s a biophysical explanation of how it is that the brain is a machine that runs this algorithm.
I don’t think “software versus hardware” is the right frame. I prefer “the brain is a machine that runs a certain algorithm”. Like, what is software-versus-hardware for a mechanical calculator? I dunno. But there are definitely algorithms that the mechanical calculator is executing.
So we can talk about what is the algorithm that the brain is running, and why does it work? Well, it builds models, and stores them, and queries them, and combines them, and edits them, and there’s a reinforcement learning actor-critic thing, blah blah blah.
Those reasons can still be valid even if there’s some unpredictable noise in the system. Think of a grandfather clock—the second hand will robustly move 60× faster than the minute hand, by design, even if there’s some noise in the pendulum that affects the speed of both, or randomness in the surface friction that affects the exact micron-level location that the second hand comes to rest each tick. Or think of an algorithm that involves randomness (e.g. MCMC), and hence any given output is unpredictable, but the algorithm still robustly and systematically does stuff that is a priori specifiable and would be astronomically unlikely to happen by chance. Or think of the Super Mario 64 source code compiled to different chip architectures that use different size floats (for example). You can play both, and they will both be very recognizably Super Mario 64, but any given exact sequence of button presses will eventually lead to divergent trajectories on the two systems. (This kind of thing is known to happen in tool-assisted speedruns—they’ll get out of sync on different systems, even when it’s “the same game” to all appearances.)
But it’s still reasonable to say that the Super Mario 64 source code is specifying an algorithm, and all the important properties of Super Mario 64 are part of that algorithm, e.g. what does Mario look like, how does he move, what are the levels, etc. It’s just that the core algorithm is not specified at such a level of detail that we can pin down what any given infinite sequence of button presses will do. That depends on unimportant details like floating point rounding.
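(Here’s a minimal numerical sketch of that last point, assuming nothing about the actual Super Mario 64 code: the same chaotic update rule evaluated in float32 and in float64. Both runs implement the same algorithm, but their exact trajectories eventually part ways over unimportant rounding details.)

```python
import numpy as np

# Toy sketch of "same algorithm, different float sizes": one update rule,
# run at two precisions. Same algorithm, divergent fine-grained trajectories.

def trajectory(dtype, steps: int = 60) -> float:
    x = dtype(0.1)
    for _ in range(steps):
        x = dtype(3.9) * x * (dtype(1.0) - x)   # identical update rule
    return float(x)

print(trajectory(np.float32))   # one "port" of the algorithm
print(trajectory(np.float64))   # another port: same rule, different rounding
```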
I think this is compatible with how people use the word “algorithm” in practice. Like, CS people will casually talk about “two different implementations of the MCMC algorithm”, and not just “two different algorithms in the MCMC family of algorithms”.

That said, I guess it’s possible that Putnam and/or Piccinini were describing things in a careless or confused way vis-à-vis the role of noise impinging upon the brain. I am not them, and it’s probably not a good use of time to litigate their exact beliefs and wording. ¯\_(ツ)_/¯