I agree that “practical CF” as thus defined is false—indeed I think it’s so obviously false that this post is massive overkill in justifying it.
But I also think that “practical CF” as thus defined is not in fact a claim that computational functionalists tend to make.
The term ‘functionalist’ is overloaded. A lot of philosophical terms are overloaded, but ‘functionalist’ is the most egregiously overloaded of all philosophical terms because it refers to two groups of people with two literally incompatible sets of beliefs:
(1) the people who are consciousness realists and think there’s this well-defined consciousness stuff exhibited by human brains, and also that the way this stuff emerges depends on what computational steps/functions/algorithms are executed (whatever that means exactly)
(2) the people who think consciousness is only an intuitive model, in which case functionalism is kinda trivial and not really something that can be proved or disproved anyway
Unless I’m misinterpreting things here (and OP can correct me if I am), the post is arguing against (1), but you are in (2), which is why you’re talking past each other here. (I don’t think this sequence in general is relevant to your personal views, which is what I also tried to say here.) In the definition you rephrased
would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
… consciousness realists will read the ‘thinking’ part as referring to thinking in the conscious mind, not to thinking in the physical brain. So this reads as obviously false to you because you don’t think there is a conscious mind separate from the physical brain, and the thoughts in the physical brain obviously aren’t ‘literally exactly the same’ in the biological brain vs. the simulation! But the (1) group does, in fact, believe in such a thing, and their position does more or less imply that it would be thinking the same thoughts.
I believe this is what OP is trying to gesture at as well with their reply here.
This is kinda helpful, but I also think people in your (1) group would agree with all three of: (A) the sequence of thoughts that you think directly corresponds to something about the evolving state of activity in your brain, (B) random noise has nonzero influence on the evolving state of activity in your brain, (C) random noise cannot be faithfully reproduced in a practical simulation.
And I think that they would not see anything self-contradictory about believing all of those things. (And I also don’t see anything self-contradictory about that, even granting your (1).)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
So in that regard: my mental image of computational functionalists in your group (1) would also say things like (D) “If I start 5 executions of my brain algorithm, on 5 different computers, each with a different RNG seed, then they are all conscious (they are all exuding consciousness-stuff, or whatever), and they all have equal claim to being ‘me’, and of course they all will eventually start having different trains of thought. Over the months and years they might gradually diverge in beliefs, memories, goals, etc. Oh well, personal identity is a fuzzy thing anyway. Didn’t you read Parfit?”
But I haven’t read as much of the literature as you, so maybe I’m putting words in people’s mouths.
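(To make the seed-divergence picture in (D) concrete, here is a toy sketch of my own; the dynamics are made up and have nothing to do with real brains, it just illustrates “same algorithm, same initial state, different RNG seeds, eventually different trajectories”.)

```python
# Toy illustration only (my own sketch, not anything from OP's post): the same
# deterministic update rule, started from the same initial state, run five times
# with different RNG seeds supplying tiny per-step noise.
import random

def run_toy_brain(seed: int, steps: int = 1000) -> list[float]:
    """Chaotic-ish toy dynamics (a logistic map) plus a tiny injected noise term."""
    rng = random.Random(seed)
    state = 0.1                                  # identical initial condition for every copy
    trajectory = []
    for _ in range(steps):
        state = 3.9 * state * (1.0 - state)      # deterministic part of the update
        state = min(max(state + rng.gauss(0, 1e-6), 0.0), 1.0)  # seed-dependent noise
        trajectory.append(state)
    return trajectory

runs = [run_toy_brain(seed) for seed in range(5)]
for step in (10, 100, 1000):
    values = [r[step - 1] for r in runs]
    print(f"step {step:4d}: spread across the 5 copies = {max(values) - min(values):.6f}")
```

In this toy, the spread across the five runs is tiny at first and they are doing unrelated things well before step 1000, which is the sense in which the five copies would gradually drift into different trains of thought.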
Hmm. I think that none of this refutes the point I was making, which is that practical CF as defined by OP is a position that many people actually hold,[1] hence OP’s argument isn’t just a strawman/missing the point. (Whether or not the argument succeeds is a different question.)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
I don’t think you have to bring identity into this. (And if you don’t have to, I’d strongly advise leaving it out, because identity is another huge rabbit hole.) There are three claims of strictly increasing strength here: (C1) digital simulations can be conscious, (C2) a digital simulation of a brain exhibits similar consciousness to that brain, and (C3) if a simulation of my brain is created, then that simulation is me. I think only C3 is about identity, and OP’s post is arguing against C2. (All three claims are talking about realist consciousness.)
This is also why I don’t think noise matters. Granting all of (A)-(D) doesn’t really affect C2; a practical simulation could work with similar noise and be pseudo-nondeterministic in the same way that the brain is. I think it’s pretty coherent to just ask how similar the consciousness is, under a realist framework (i.e., asking C2), without stepping into the identity hornets’ nest.
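(Again a toy sketch of my own, just to illustrate what I mean by “similar but not identical”: two noise-driven runs of the same process never agree step by step, so “literally the exact same sequence of thoughts” fails, but their coarse statistics come out nearly the same, and that statistical kind of similarity is what C2 is asking about.)

```python
# Toy sketch (mine, purely illustrative): the same noisy process run twice with
# different noise realizations. Step-by-step the runs never agree, but their
# coarse statistics (mean, variance) are nearly identical.
import random

def noisy_run(seed: int, steps: int = 100_000) -> list[float]:
    """Same dynamics every time; only the noise realization differs with the seed."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(steps):
        x = 0.9 * x + rng.gauss(0, 1.0)
        out.append(x)
    return out

def stats(xs: list[float]) -> tuple[float, float]:
    m = sum(xs) / len(xs)
    return m, sum((v - m) ** 2 for v in xs) / len(xs)

a, b = noisy_run(seed=1), noisy_run(seed=2)
matches = sum(abs(u - v) < 1e-9 for u, v in zip(a, b))
print(f"step-by-step agreement: {matches} of {len(a)} steps")           # essentially 0
print(f"run A mean / variance: {stats(a)[0]:.2f} / {stats(a)[1]:.2f}")  # roughly 0.0 / 5.3
print(f"run B mean / variance: {stats(b)[0]:.2f} / {stats(b)[1]:.2f}")  # roughly 0.0 / 5.3
```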
[1] A caveat here is that it’s actually quite hard to write down any philosophical position (except illusionism) in such a way that a lot of people would give it a blanket endorsement (again because everyone has slightly different ideas of what different terms mean), but I think OP has done a pretty good job, definitely better than most, at formulating an opinion that at least a good number of people would probably endorse.
Yeah, you might be hitting on at least a big generator of our disagreement. Well spotted.
One thing I worry about is that the same disagreement happens with a lot of other users who, unlike Steven, just downvote the post rather than writing a comment.
In general, having read through the entire LW catalogue of posts with the consciousness tag, I’ve noticed that almost all the well-received ones take what I call the camp #1 perspective (i.e., they discuss consciousness from an illusionist lens, even if it’s not always stated explicitly; this corresponds to group (2) above, not group (1)). Iirc the only major exceptions are the posts from Eliezer, which, well, are from Eliezer. So it could be that posts which discuss consciousness from a realist PoV consistently receive a certain amount of downvotes from camp #1 people, to whom such posts just seem like gibberish/a waste of time. I don’t have any data to prove that this is the mechanism, it’s just a guess, but the pattern is pretty consistent. I also think you generally wouldn’t predict this if you just read the comment sections. (And idk if clarifying the perspective up front would help, since no one does it.)