The Church-Turing thesis gives us the “substrate independence principle”. In principle, AI could be conscious.
The C-T thesis gives you the substrate independence of computation. To get to the substrate independence of consciousness, you need the further premise that the performance of certain computations is sufficient for consciousness, including qualia. This is, of course, not known.
I don’t think this is correct, either (although it’s closer). You can’t build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.
What the Church-Turing thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., it can compute anything that is computable at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it’s possible to integrate a function using pebbles, but it’s not possible to do it using the same computation the ball-and-disk integrator uses: the pebble system will perform a very different computation to obtain the same result.
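To make the pebbles point concrete, here is a toy Python sketch (my own illustration, not from the thread): both routines compute the integral of f(x) = x over [0, 1] and both get roughly 0.5, but one does it by continuous-style accumulation, as the ball-and-disk integrator does, while the other does it by discrete counting, pebble-style.

```python
# Two "substrates" computing the same integral of f(x) = x on [0, 1]
# by very different computations. A toy illustration only.

def integrate_ball_and_disk(n_steps: int = 100_000) -> float:
    """Analog-style: continuously accumulate f(x) * dx, the way the
    rotating disk accumulates displacement."""
    dx = 1.0 / n_steps
    total, x = 0.0, 0.0
    for _ in range(n_steps):
        total += x * dx
        x += dx
    return total

def integrate_pebbles(grid: int = 1_000) -> float:
    """Discrete counting: tally grid cells whose centre lies under the
    curve, the way one might count pebbles on a board."""
    count = sum(
        1
        for i in range(grid)
        for j in range(grid)
        if (j + 0.5) / grid < (i + 0.5) / grid  # cell centre under y = x
    )
    return count / (grid * grid)

print(integrate_ball_and_disk())  # ~0.5
print(integrate_pebbles())        # ~0.5, via a completely different computation
```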
So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn’t follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.
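One toy way to see why that additional argument is needed (my own sketch, with made-up example functions): a lookup table reproduces a system’s input-output behavior exactly while running a completely different algorithm.

```python
# Toy illustration: identical input-output behaviour does not imply that
# the same algorithm is being run.

def fib_recursive(n: int) -> int:
    """The 'original' system: actually performs the recursive computation."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

# A "simulation" built by tabulating the original system's behaviour:
FIB_TABLE = {n: fib_recursive(n) for n in range(25)}

def fib_lookup(n: int) -> int:
    """Same input-output mapping, no recursion at all."""
    return FIB_TABLE[n]

assert all(fib_recursive(n) == fib_lookup(n) for n in range(25))
# Behaviourally indistinguishable, algorithmically nothing alike. Whether
# that difference matters for consciousness is exactly the open question.
```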
This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:

> Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”
Note that this isn’t “I upload a brain” (which doesn’t guarantee that the same algorithm is run) but rather “here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected”.
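A minimal Python sketch of Albert’s replacement (the class names and weights are my own invention): because each replacement neuron preserves the local input-output rule, nothing at the network level changes, which is exactly the sense in which the algorithm is left intact.

```python
import math

class BiologicalNeuron:
    """Stand-in for a neuron, reduced to its local input-output rule."""
    def __init__(self, weights, bias):
        self.weights, self.bias = list(weights), bias

    def fire(self, inputs):
        z = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

class RoboticNeuron(BiologicalNeuron):
    """A different 'substrate' with the same connections and the same
    local input-output behaviour (enforced trivially here by reusing
    fire(); in Albert's scenario it is enforced by construction)."""

def run_network(neuron_cls, inputs):
    # The same two-neuron "network", built from whichever substrate we pass in.
    layer = [neuron_cls([0.5, -0.3], 0.1), neuron_cls([0.2, 0.8], -0.4)]
    return [n.fire(inputs) for n in layer]

# Swapping the substrate leaves the algorithm, and hence the outputs, unchanged.
assert run_network(BiologicalNeuron, [1.0, 2.0]) == run_network(RoboticNeuron, [1.0, 2.0])
```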
> I don’t think this is correct, either (although it’s closer). You can’t build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.
Meaning that a strong version of computational substrate independence, where any substrate will do, is false? Maybe, but I was arguing against the hypothetical that “the substrate independence of computation implies the substrate independence of consciousness”, not *for* its antecedent, the substrate independence of computation.
> What the Church-Turing thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., it can compute anything that is computable at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it’s possible to integrate a function using pebbles, but it’s not possible to do it using the same computation the ball-and-disk integrator uses: the pebble system will perform a very different computation to obtain the same result.
I don’t see the relevance.
> So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn’t follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.
OK. A crappy computational emulation might not be conscious, because it’s crappy. It still doesn’t follow that a good emulation is necessarily conscious. You’re just pointing out another possible defeater.
> This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:
Which argument? Are you saying that a good enough emulation is necessarily conscious?
> Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”
> Note that this isn’t “I upload a brain” (which doesn’t guarantee that the same algorithm is run)
If it’s detailed enough, it’s guaranteed to. That’s what “enough” means.
> but rather “here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected”.
OK... that might prove the substrate independence of computation, which I wasn’t arguing against. Past that, I don’t see your point.
OK, I guess that was very poorly written. I’ll figure out how to phrase it better and then make a top-level post.
Yes, agreed (and I endorse the clarification), hence my question about dualism. (If consciousness is not a result of computation, then what is it?)
The result (at least partially) of a particular physical substrate. Physicalism and computationalism are both not-dualism, but they are not the same as each other.