Do those same people still oppose the in-principle feasibility of the Chinese Room? I can understand why such people might have problems with the idea of a conscious AI, but I was not aware of any faction, other than substance dualists, which thought that machines could never physically replicate a mind. I’m not well-read in the field, so I could certainly be wrong about the existence of such people, but that seems like a pretty basic logic fail.

Either a) minds are Turing-computable, meaning we can replicate them, b) minds are hypercomputers in a way which follows some normal physical law, meaning we can replicate them, or c) minds are hypercomputers in a way which cannot be replicated (substance dualism). I don’t see how there is a possible fourth view where minds are hypercomputers that follow only normal physical laws but cannot in principle be replicated. Maybe some sort of materialist anti-reductionist who holds that there is a particular law which governs exactly minds and nothing else? They would need to deny the in-principle feasibility of humans ever building a meat brain from scratch, which is a hard position to defend (and of course it immediately loses to Occam’s Razor, but then this is philosophy, eh?).

If you’re neither an anti-reductionist nor a dualist then there’s no way to make the claim, and there are better arguments against the people who are. I don’t really see much point in trying to convince anti-reductionists or dualists of anything, since their beliefs are uncorrelated with reality anyway.
Note: there are still interesting real-world feasibility questions to be explored, but those are technical questions. In any case, your paper would be much improved by adding a clear thesis near the start that states, in detail, what you’re proposing.
Oh, and before I forget: the question of whether machines we can currently build can implement a mind is purely a question of whether a mind is a hypercomputer or not. We don’t know how to build hypercomputers yet, but if minds somehow turned out to be hypercomputers, we’d presumably figure out how that part works.