Worth noting that Eliezer uses this argument as well in The Generalized Anti-Zombie Principle, as the first line of his Socratic dialogue (I don’t know if he has it from Chalmers or thought of it independently):
Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”
He also acknowledges that this could be impossible, but considers only one reason why (a reason which I, at least, consider highly implausible):
Sir Roger Penrose: “The thought experiment you propose is impossible. You can’t duplicate the behavior of neurons without tapping into quantum gravity. That said, there’s not much point in me taking further part in this conversation.” (Wanders away.)
Also worth noting that another logical possibility (which you sort of get at in footnote 9) is that the thought experiment does go through, and a human with silicon chips instead of neurons would still be conscious, but CF is still false. Maybe it’s not the substrate but the spatial location of neurons that’s relevant. (“Substrate-independence” is not actually a super well-defined concept, either.)
If you do reject CF but do believe in realist consciousness, then it’s interesting to consider what other property is the key factor for human consciousness. If you’re also a physicalist, then whatever property that is probably has to play a significant computational role in the brain; otherwise you run into contradictions when you compare the brain with a system that lacks the property but is otherwise as similar as possible. Spatial location has at least some things going for it here (e.g., ephaptic coupling and neuron synchronization).
For example, even if consciousness isn’t a function under this view, it probably still plays a functional role in biology.[12] If that function is useful for future AI, then we can predict that consciousness will eventually appear in AI systems, since whatever property creates consciousness will be engineered into AI to improve its capabilities.
This is a decent argument for why AI consciousness will happen, but “AI consciousness is possible” is actually a weaker claim. And it’s pretty hard to see how that weaker claim could ever be false, especially if one is a physicalist (aka what you call “materialist” in the assumptions of this post). It would imply that consciousness in the brain depends on a physical property that is impossible to instantiate in any artificial system; that seems highly suspect.