I think we might be working with different definitions of the term “causal structure”? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency—if neuron A hadn’t fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn’t call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That’s what I think is wrong with your GIF example, btw—there’s no counterfactual dependency whatsoever. If I deleted a particular pixel from one frame of the animation, the next frame wouldn’t change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
Anyway, beyond that, we’re obviously working from very different intuitions, because I don’t see the China Brain or Turing machine examples as reductios at all—I’m perfectly willing to accept that those entities would be conscious.
It’s unclear why counterfactual dependencies would be necessary for machine functionalism, but OK, let’s include them in the GIF example. Take the first frame of the GIF as the initial condition and let the (binary) state of pixel i at time step t take the form X_i(t) = f(i, X_1(t−1), X_2(t−1), …, X_n(t−1)). Does this make it any more plausible that the animated GIF has human consciousness? If you think it does, then what is the significance of the fact that the system of equations is generally underdetermined? Personally, I don’t find it plausible that the GIF has human consciousness, but I would agree that since it’s an extreme example, my intuition could be wrong. Unfortunately, this appears to mean that we must agree to disagree on the question of the validity of machine functionalism, or is there another way forward?
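To make the contrast concrete, here’s a minimal sketch (my own toy construction, not anything either of us has specified): a tiny binary “animation” whose pixels evolve by X_i(t) = f(i, X_1(t−1), …, X_n(t−1)), with f chosen arbitrarily as XOR with the right neighbour. Flipping one pixel of the initial frame and recomputing changes the later frames, whereas editing a pixel in an already-stored frame list changes nothing downstream:

```python
# Toy update rule: X_i(t) = f(i, X_1(t-1), ..., X_n(t-1)).
# Here f is an arbitrary choice: XOR of pixel i with its right neighbour.

def step(frame):
    n = len(frame)
    return [frame[i] ^ frame[(i + 1) % n] for i in range(n)]

def run(initial, steps):
    frames = [initial]
    for _ in range(steps):
        frames.append(step(frames[-1]))
    return frames

original = run([1, 0, 0, 1, 0, 1, 1, 0], steps=4)

# Counterfactual: flip one pixel of the initial frame and recompute.
altered_init = original[0][:]
altered_init[2] ^= 1
altered = run(altered_init, steps=4)

# In the *computed* system the change propagates to later frames...
print(original[1] != altered[1])  # True

# ...whereas in a *stored* GIF, editing a pixel in frame 0 of the saved
# frame list leaves every later, already-recorded frame untouched:
stored = [f[:] for f in original]
stored[0][2] ^= 1
print(stored[1] == original[1])  # True
```

This is just the counterfactual-dependency point in miniature; whether satisfying it makes the system a plausible bearer of consciousness is of course the question at issue.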
I’m not sure I understand you. What do you mean by the system of equations being underdetermined? Are you saying to take the same animated GIF, not alter the actual physics in any way, and just refer to it differently? That obviously doesn’t change anything. You need to alter the causal structure.
My problem with non-machine functionalism is that any reason we have to say we’re conscious would apply equally to a simulation. If you one day found out that you were really a simulation, would you decide your consciousness is an illusion, or figure you must have gotten it backwards which one is conscious, and that it’s the simulations that are conscious and the real people that are p-zombies?
Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.