I think causal closure of the kind that matters here just means that the abstract description (in this case, of the brain as performing an algorithm/computation) captures all relevant features of the physical description, not that it has no dependence on inputs. It should probably be renamed something like "abstraction adequacy" (I'm making this up right now; I don't have a term on the shelf for this property). Abstraction (in)adequacy is relevant for CF, I believe (I think it's straightforward why?). Randomness probably doesn't matter, since you can include it in the abstract description.
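To make "abstraction adequacy" concrete, here's a minimal sketch (my own toy formalization, not a standard definition, with made-up dynamics): an abstraction is adequate when abstracting the physical state and then stepping agrees with stepping the physical dynamics and then abstracting. The noise term is passed to both sides, which is why randomness per se doesn't break adequacy.

```python
import random

def phys_step(state, noise):
    # Toy "physical" dynamics: two coupled variables plus injected noise.
    x, y = state
    return (x + 0.1 * y + noise, y - 0.1 * x)

def abstract(state):
    # Toy abstraction: discard physical detail below the level the
    # algorithm is claimed to depend on.
    x, y = state
    return (round(x, 3), round(y, 3))

def is_adequate(state, noise, tol=1e-3):
    # Adequacy as a commuting-diagram check: step-then-abstract vs.
    # abstract-then-step. The noise is included in the abstract
    # description, so randomness alone can't cause a mismatch.
    lhs = abstract(phys_step(state, noise))
    rhs = abstract(phys_step(abstract(state), noise))
    return all(abs(a - b) < tol for a, b in zip(lhs, rhs))

print(is_adequate((1.0, 2.0), random.gauss(0.0, 0.01)))  # True here
```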
Right. What I actually think is that a future brain scan, with future understanding, could enable a WBE to run on a reasonable-sized supercomputer (e.g., <100 GPUs), and it would capture what makes me me, and would be conscious (to the extent that I am), and it would be my consciousness (to a similar extent that I am). But it wouldn't be able to reproduce my exact train of thought in perpetuity, because it would be able to reproduce neither the input data nor the random noise of my physical brain. I believe that OP's objection to "practical CF" is centered on the fact that you need an astronomically large supercomputer to reproduce the random noise, and I don't think that's relevant. I agree that "abstraction adequacy" would be a step in the right direction.
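A toy illustration of the "exact train of thought" point (everything here is made up): two runs of the identical emulation algorithm, differing only in unreproducible noise, diverge in trajectory while remaining the same algorithm.

```python
import random

def run_emulation(seed, steps=5):
    # Identical deterministic dynamics in every run...
    rng = random.Random(seed)
    thought, trace = 0.0, []
    for _ in range(steps):
        thought = 0.9 * thought + 1.0
        # ...plus noise that no scan could hope to reproduce bit-for-bit.
        thought += rng.gauss(0.0, 0.1)
        trace.append(round(thought, 2))
    return trace

print(run_emulation(seed=1))  # one train of thought
print(run_emulation(seed=2))  # same algorithm, different trajectory
```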
Causal closure is just way too strict, and not only because of random noise. For example, suppose that there's a tiny amount of crosstalk between my neurons that represent the concept "banana" and my neurons that represent the concept "Red Army", just by random chance. Once every 5 years or so, I'm thinking about bananas, and then a few seconds later the idea of the Red Army pops into my head; if not for this crosstalk, it counterfactually wouldn't have popped into my head. And suppose that I have no idea this is happening, and it has no impact on my life. The overlap just exists by random chance; it's not part of some systematic learning algorithm. If I got magical brain surgery tomorrow that eliminated that specific crosstalk, and didn't change anything else, I would obviously still be "me", even though some afternoon 3 years from now I might fail to think about the Red Army when I otherwise would have. This crosstalk is not randomness, and omitting it does undermine "causal closure" interpreted literally. But I would still say that "abstraction adequacy" would be achieved by an abstraction of my brain that captured everything except this particular instance of crosstalk.
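The crosstalk example in code, with everything made up: the emulation omits the accidental coupling, so its trajectory is not causally closed with respect to the physical trace, yet nothing systematic depends on the difference.

```python
def next_thought(thought):
    # Stand-in for the systematic, learned dynamics both systems share.
    return "groceries" if thought == "banana" else "banana"

def physical_brain(thought, t):
    # Rarely, an accidental wiring overlap makes "banana" trigger "Red Army".
    if thought == "banana" and t % 1_000_000 == 0:
        return "Red Army"
    return next_thought(thought)

def emulated_brain(thought, t):
    # The abstraction omits the accidental coupling entirely.
    return next_thought(thought)

for t in (1, 1_000_000):
    print(t, physical_brain("banana", t), emulated_brain("banana", t))
# t=1:         both say "groceries" (agreement is the norm)
# t=1_000_000: "Red Army" vs. "groceries" (literal closure fails;
#              "abstraction adequacy" arguably doesn't)
```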
Two thoughts here:

1. I feel like the actual crux between you and OP is with the claim in post #2 that the brain operates outside the neuron doctrine to a significant extent. This seems to be what your back-and-forth is heading toward: OP is fine with pseudo-randomness as long as it doesn't play a nontrivial computational function in the brain, so the important question is not anything about pseudo-randomness, but whether such computational functions exist. (But maybe I'm missing something; also, I kind of feel like this is what most people's objection to the sequence 'should' be, so I might have tunnel vision here.)
2. (Mostly unrelated to the debate; just trying to improve my theory of mind, so sorry in advance if this question is annoying.) I don't get what you mean when you say stuff like "would be conscious (to the extent that I am), and it would be my consciousness (to a similar extent that I am)," since afaik you don't actually believe that there is a fact of the matter as to the answers to these questions. Some possibilities for what I think you could mean:
I don’t actually think these questions are coherent, but I’m pretending as if I did for the sake of argument
I’m just using consciousness/identity as fuzzy categories here because I assume that the realist conclusions must align with the intuitive judgments (i.e., if it seems like the fuzzy category ‘consciousness’ applies similarly to both the brain and the simulation, then probably the realist will be forced to say that their consciousness is also the same)
Actually there is a question worth debating here even if consciousness is just a fuzzy category because ???
Actually I’m genuinely entertaining the realist view now
Actually I reject the strict realist/anti-realist distinction because ???
I feel like the actual crux between you and OP is with the claim in post #2 that the brain operates outside the neuron doctrine to a significant extent.
Thanks! I don't think that's quite right. The neuron doctrine is pretty specific, IIUC. I want to say: when the brain does systematic things, it's because the brain is running a legible algorithm that relates to those things. And then there's a legible explanation of how biochemistry is running that algorithm. But the latter doesn't need to be neuron-doctrine; it can involve dendritic spikes and gene expression and astrocytes, etc.
All the examples here are real and important, and would impact the algorithms of an “adequate” WBE, but are mostly not “neuron doctrine”, IIUC.
Basically, it’s the thing I wrote a long time ago here: “If some [part of] the brain is doing something useful, then it’s humanly feasible to understand what that thing is and why it’s useful, and to write our own CPU code that does the same useful thing.” And I think “doing something useful” includes as a special case everything that makes me me.
I don’t get what you mean when you say stuff like “would be conscious (to the extent that I am), and it would be my consciousness (to a similar extent that I am),” since afaik you don’t actually believe that there is a fact of the matter as to the answers to these questions…
Just, it’s a can of worms that I’m trying not to get into right here. I don’t have a super well-formed opinion, and I have a hunch that the question of whether consciousness is a coherent thing is itself a (meta-level) incoherent question (because of the (A) versus (B) thing here). Yeah, just didn’t want to get into it, and I haven’t thought too hard about it anyway. :)