In my last post, I defined a concrete claim that computational functionalists tend to make:
Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain.
From reading this comment, I understand that you mean the following:
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I agree that “practical CF” as thus defined is false—indeed I think it’s so obviously false that this post is massive overkill in justifying it.
But I also think that “practical CF” as thus defined is not in fact a claim that computational functionalists tend to make.
Let’s put aside simulation and talk about an everyday situation.
Suppose you’re the building manager of my apartment, and I’m in my apartment doing work. Unbeknownst to me, you flip a coin. If it’s heads, then you set the basement thermostat to 20°C. If it’s tails, then you set the basement thermostat to 20.1°C. As a result, the temperature in my room is slightly different in the two scenarios, and thus the temperature in my brain is slightly different, and this causes some tiny number of synaptic vesicles to release differently under heads versus tails, which gradually butterfly-effect into totally different trains of thought in the two scenarios, perhaps leading me to make a different decision on some question where I was really ambivalent and going back and forth, or maybe having some good idea in one scenario but not the other.
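This butterfly-effect dynamic is easy to see in a toy chaotic system. Here is a minimal Python sketch (the logistic map, a standard stand-in for sensitive dynamics, with the two starting values playing the role of the 20°C vs. 20.1°C scenarios; none of this is specific to brains):

```python
def trajectory(x0, steps=200, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy
    example of sensitive dependence on initial conditions."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

heads = trajectory(0.200)  # thermostat set to "20.0"
tails = trajectory(0.201)  # thermostat set to "20.1"

# At first the two runs are nearly indistinguishable...
assert abs(heads[1] - tails[1]) < 0.01
# ...but the tiny difference compounds until, somewhere in the later
# steps, the two trajectories bear no resemblance to each other.
assert max(abs(h - t) for h, t in zip(heads[100:], tails[100:])) > 0.5
```

The point of the sketch is only that a 0.1-degree-sized perturbation can snowball into totally different trajectories without changing what kind of system is running.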
But in both scenarios, it’s still “me”, and it’s still “my mind” and “my consciousness”. Do you see what I mean?
So anyway, when you wrote “A simulation of a human brain on a classical computer…would cause the same conscious experience as that brain”, I initially interpreted that sentence as meaning something more like “the same kind of conscious experience”, just as I would have “the same kind of conscious experience” if the basement thermostat were unknowingly set to 20°C versus 20.1°C.
(And no I don’t just mean “there is a conscious experience either way”. I mean something much stronger than that—it’s my conscious experience either way, whether 20°C or 20.1°C.)
Do you see what I mean? And under that interpretation, I think that the statement would be not only plausible but also a better match to what real computational functionalists usually believe.
The term ‘functionalist’ is overloaded. A lot of philosophical terms are overloaded, but ‘functionalist’ is the most egregiously overloaded of all philosophical terms because it refers to two groups of people with two literally incompatible sets of beliefs:
(1) the people who are consciousness realists and think there’s this well-defined consciousness stuff exhibited by human brains, and also that the way this stuff emerges depends on what computational steps/functions/algorithms are executed (whatever that means exactly)
(2) the people who think consciousness is only an intuitive model, in which case functionalism is kinda trivial and not really a thing that can be proved or disproved, anyway
Unless I’m misinterpreting things here (and OP can correct me if I am), the post is arguing against (1), but you are (2), which is why you’re talking past each other here. (I don’t think this sequence in general is relevant to your personal views, which is what I also tried to say here.) In the definition you rephrased
would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
… consciousness realists will read the ‘thinking’ part as referring to thinking in the conscious mind, not to thinking in the physical brain. So this reads obviously false to you because you don’t think there is a conscious mind separate from the physical brain, and the thoughts in the physical brain aren’t ‘literally exactly the same’ in the biological brain vs. the simulation—obviously! But the (1) group does, in fact, believe in such a thing, and their position does more or less imply that it would be thinking the same thoughts.
I believe this is what OP is trying to gesture at as well with their reply here.
Thanks for the comment, Steven.

Your alternative wording of practical CF is indeed basically what I’m arguing against (although we could interpret different degrees of the simulation having the “exact” same experience, and I think the arguments here argue not only against the strongest versions but also against weaker versions, depending on how strong those arguments are).
I’ll explain a bit more why I think practical CF is relevant to CF more generally.
Firstly, functionalists commonly say things like
Computational functionalism: the mind is the software of the brain. (Piccinini)
Which, when I take it at face value, is saying that there is actually a program being implemented by the brain that is meaningful to point to (i.e. it’s not just a program in the sense that any physical process could be a program if you simulate it (assuming digital physics etc.)). That program lives on a level of abstraction above biophysics.
Secondly, computational functionalism, taken at face value again, says that all details of the conscious experience should be encoded in the program that creates it. If this isn’t true, then you can’t say that conscious experience is that program, because the experience has properties that the program does not.
Putnam advances an opposing functionalist view, on which mental states are functional states. (SEP)
He proposes that mental activity implements a probabilistic automaton and that particular mental states are machine states of the automaton’s central processor. (SEP)
the mind is constituted by the programs stored and executed by the brain (Piccinini)
I can accept the charge that this is still a stronger version of CF that a number of functionalists subscribe to. Which is fine! My plan was to address quite narrow claims at the start of the sequence and move on to broader claims later on.
I’d be curious to hear which of the above steps you think miss the mark on capturing common CF views.
I guess I shouldn’t put words in other people’s mouths, but I think the fact that years-long trains-of-thought cannot be perfectly predicted in practice because of noise is obvious and uninteresting to everyone, including, I’d bet, the computational functionalists you quoted, even if their wording on that was not crystal clear.
There are things that the brain does systematically and robustly by design, things which would be astronomically unlikely to happen by chance. E.g. the fact that I move my lips to emit grammatical English-language sentences rather than random gibberish. Or the fact that humans wanted to go to the moon, and actually did so. Or the fact that I systematically take actions that tend to lead to my children surviving and thriving, as opposed to suffering and dying.
That kind of stuff, which my brain does systematically and robustly, is what makes me me. My memories, goals, hopes and dreams, skills, etc. The fact that I happened to glance towards my scissors at time 582834.3 is not important, but the robust systematic patterns are.
And the reason that my brain does those things systematically and robustly is because the brain is designed to run an algorithm that does those things, for reasons that can be explained by a mathematical analysis of that algorithm. Just as a sorting algorithm systematically sorts numbers for reasons that can be explained by a mathematical analysis of that algorithm.
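To make the sorting analogy concrete, here is a tiny illustrative routine (my sketch, not anything from the discussion): its systematic behavior follows from a loop invariant you can verify by analyzing the algorithm, rather than being something that happens by luck.

```python
def insertion_sort(xs):
    """Insertion sort. Loop invariant: `out` is sorted after every
    iteration, which is *why* the output is systematically sorted,
    provably so by analysis of the algorithm."""
    out = []
    for x in xs:
        i = len(out)
        # Walk left past every element greater than x.
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
```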
I don’t think “software versus hardware” is the right frame. I prefer “the brain is a machine that runs a certain algorithm”. Like, what is software-versus-hardware for a mechanical calculator? I dunno. But there are definitely algorithms that the mechanical calculator is implementing.
So we can talk about what is the algorithm that the brain is running, and why does it work? Well, it builds models, and stores them, and queries them, and combines them, and edits them, and there’s a reinforcement learning actor-critic thing, blah blah blah.
Those reasons can still be valid even if there’s some unpredictable noise in the system. Think of a grandfather clock—the second hand will robustly move 60× faster than the minute hand, by design, even if there’s some noise in the pendulum that affects the speed of both. Or think of an algorithm that involves randomness (e.g. MCMC), and hence any given output is unpredictable, but the algorithm still robustly and systematically does stuff that is a priori specifiable and would be astronomically unlikely to happen by chance. Or think of the Super Mario 64 source code compiled to different chip architectures that use different-sized floats (for example). You can play both, and they will both be very recognizably Super Mario 64, but any given exact sequence of button presses will eventually lead to divergent trajectories on the two systems. (This kind of thing is known to happen in tool-assisted speedruns—they’ll get out of sync on different systems, even when it’s “the same game” to all appearances.)
But it’s still reasonable to say that the Super Mario 64 source code is specifying an algorithm, and all the important properties of Super Mario 64 are part of that algorithm, e.g. what does Mario look like, how does he move, what are the levels, etc. It’s just that the core algorithm is not specified at such a level of detail that we can pin down what any given infinite sequence of button presses will do. That depends on unimportant details like floating point rounding.
I think this is compatible with how people use the word “algorithm” in practice. Like, CS people will casually talk about “two different implementations of the MCMC algorithm”, and not just “two different algorithms in the MCMC family of algorithms”.
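The randomized-algorithm point can be sketched in a few lines of Python (a toy Monte Carlo estimator of my own construction, not anything from the original discussion): different seeds give different “noise”, so no individual run is predictable, yet every run systematically lands near the same answer.

```python
import random

def mc_pi(seed, n=100_000):
    """Monte Carlo estimate of pi: each draw depends on the seed
    (the 'noise'), but the overall estimate is robust by design."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4 * hits / n

a = mc_pi(seed=1)
b = mc_pi(seed=2)
# The exact trajectories differ with the seed...
assert a != b
# ...but both runs robustly land near pi, which would be
# astronomically unlikely for a process not computing pi.
assert abs(a - 3.14159) < 0.05 and abs(b - 3.14159) < 0.05
```

Two runs with different seeds are “two different executions of the same algorithm” in exactly the sense above: the unpredictable details differ, the a-priori-specifiable behavior does not.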
That said, I guess it’s possible that Putnam and/or Piccinini were describing things in a careless or confused way vis-à-vis the role of noise impinging upon the brain. I am not them, and it’s probably not a good use of time to litigate their exact beliefs and wording. ¯\_(ツ)_/¯
Edit: This comment misinterpreted the intended meaning of the post.
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I… don’t think this is necessarily what @EuanMcLean meant? At the risk of conflating his own perspective and ambivalence on this issue with my own, this is a question of personal identity and whether the computationalist perspective, generally considered a “reasonable enough” assumption to almost never be argued for explicitly on LW, is correct. As I wrote a while ago on Rob’s post:

As TAG has written a number of times, the computationalist thesis seems not to have been convincingly (or even concretely) argued for in any LessWrong post or sequence (including Eliezer’s Sequences). What has been argued for, over and over again, is physicalism, and then more and more rejections of dualist conceptions of souls.
That’s perfectly fine, but “souls don’t exist and thus consciousness and identity must function on top of a physical substrate” is very different from “the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain’s electronic circuit,” and the latter has never been given compelling explanations or evidence. This is despite the fact that the particular conclusions that have become part of the ethos of LW about stuff like brain emulation, cryonics, etc. are necessarily reliant on the latter, not the former.
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of… what is physically there (which, to the best of our current understanding, can be represented through Quantum Mechanical probability amplitudes), not what conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections. As I have mentioned, that is a mere model that represents a very lossy compression of what is going on; it is not the same as the real thing, and conflating the two is an error that has been going on here for far too long. Of course, it very well might be the case that Rob and the computationalists are right about these issues, but the explanation up to now should make it clear why it is on them to provide evidence for their conclusion.
I recognize you wrote in response to me a while ago that you “find these kinds of conversations to be very time-consuming and often not go anywhere.” I understand this, and I sympathize to a large extent: I also find these discussions very tiresome, which became part of why I ultimately did not engage too much with some of the thought-provoking responses to the question I posed a few months back. So it’s totally ok for us not to get into the weeds of this now (or at any point, really). Nevertheless, for the sake of it, I think the “everyday experience” thermostat example does not seem like an argument in favor of computationalism over physicalism-without-computationalism, since the primary generator of my intuition that my identity would be the same in that case is the literal physical continuity of my body throughout that process. I just don’t think there is a “prosaic” (i.e., bodily-continuity-preserving) analogue or intuition pump to the case of WBE or similar stuff in this respect.
Anyway, in light of footnote 10 in the post (“The question of whether such a simulation contains consciousness at all, of any kind, is a broader discussion that pertains to a weaker version of CF that I will address later on in this sequence”), which to me draws an important distinction between a brain-simulation having some consciousness/identity versus having the same consciousness/identity as that of whatever (physically-instantiated) brain it draws from, I did want to say that this particular post seems focused on the latter and not the former, which seems quite decision-relevant to me:
jbash: These various ideas about identity don’t seem to me to be things you can “prove” or “argue for”. They’re mostly just definitions that you adopt or don’t adopt. Arguing about them is kind of pointless.
sunwillrise: I absolutely disagree. The basic question of “if I die but my brain gets scanned beforehand and emulated, do I nonetheless continue living (in the sense of, say, anticipating the same kinds of experiences)?” seems the complete opposite of pointless, and the kind of conundrum in which agreeing or disagreeing with computationalism leads to completely different answers.
Perhaps there is a meaningful linguistic/semantic component to this, but in the example above, it seems understanding the nature of identity is decision-theoretically relevant for how one should think about whether WBE would be good or bad (in this particular respect, at least).
I should probably let EuanMcLean speak for themselves but I do think “literally the exact same sequence of thoughts in the exact same order” is what OP is talking about. See the part about “causal closure”, and “predict which neurons are firing at t1 given the neuron firings at t0…”. The latter is pretty unambiguous IMO: literally the exact same sequence of thoughts in the exact same order.
I definitely didn’t write anything here that amounts to a general argument for (or against) computationalism. I was very specifically responding to this post. :)