Hmm. I think that none of this refutes the point I was making, which is that practical CF as defined by OP is a position that many people actually hold,[1] hence OP’s argument isn’t just a strawman/missing the point. (Whether or not the argument succeeds is a different question.)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
I don’t think you have to bring identity into this. (And if you don’t have to, I’d strongly advise leaving it out, because identity is another huge rabbit hole.) There are three claims of strictly increasing strength here: C1, digital simulations can be conscious; C2, a digital simulation of a brain exhibits similar consciousness to that brain; and C3, if a simulation of my brain is created, then that simulation is me. I think only C3 is about identity, and OP’s post is arguing against C2. (All three claims are talking about realist consciousness.)
This is also why I don’t think noise matters. Granting all of (A)–(D) doesn’t really affect C2; a practical simulation could work with similar noise and be pseudo-nondeterministic in the same way that the brain is. I think it’s perfectly coherent to just ask how similar the consciousness is, under a realist framework (i.e., to ask C2), without stepping into the identity hornets’ nest.
[1] A caveat here is that it’s actually quite hard to write down any philosophical position (except illusionism) such that a lot of people give it a blanket endorsement (again because everyone has slightly different ideas of what different terms mean), but I think OP has done a pretty good job, definitely better than most, in formulating a position that at least a good number of people would probably endorse.