It seems to me that this paper is overly long and filled with unnecessary references, even allowing for an audience of philosophers who don’t know anything about the field.
You may be right about this, though I also want to be cautious because of illusion of transparency issues.
It suffices to say that “bottom-up predictability” applied to the mind implies that we can build a machine to do the things which the mind does.
What I want to claim is somewhat stronger than that; notably there’s the question of whether [i]the general types of machines we already know how to build[/i] can do the things the human mind does. That might not be true if, e.g., you believe in physical hypercomputation (which I don’t, but it’s the kind of thing you want to address if you want to satisfy stubborn philosophers that you’ve dealt with as wide a range of possible objections as possible).
Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can’t do what the brain does is necessarily wrong (although you might need something that isn’t a digital computer).
Again, it would be nice if it were that simple, but there are people who insist they’ll have nothing to do with dualism but who advance the idea that computers can’t do what the brain does, and they don’t accept that argument.
The sections on Gödel’s theorem and hypercomputation could be summed up in a quick couple of paragraphs that reference each in turn as examples of objections claiming physical systems can’t do what minds do, followed by the reminder that if you accept the mind as a physical system then those objections clearly can’t apply.
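(For concreteness, and this is my own gloss rather than anything from your paper: the Gödelian objection is usually run roughly as follows. Gödel’s first incompleteness theorem says that for any consistent, recursively axiomatizable formal system F strong enough for arithmetic, there is a sentence G_F which F can neither prove nor refute, yet which is true if F is consistent. The Penrose/Lucas move is then to claim that a human mathematician can “see” that G_F is true, so human mathematical insight supposedly outruns any such F. Put that way, the objection targets formal systems rather than physical systems as such, which is exactly why the “but the mind is physical” reminder does the work you want it to.)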
Again, slightly more complicated than this. Penrose, Proudfoot, Copeland, and others who see AI as somehow philosophically or conceptually problematic often present themselves as accepting that the mind is physical.
Your comment makes me think I need to be clearer about who my opponents are—namely, people who say they accept the mind is physical but claim AI is philosophically or conceptually problematic. Does that sound right to you?
Do those same people still oppose the in-principle feasibility of the Chinese Room? I can understand why such people might have problems with the idea of a conscious AI, but I was not aware of any faction other than substance dualists which thought that machines could never physically replicate a mind. I’m not well-read in the field, so I could certainly be wrong about the existence of such people, but that seems like a super basic logic fail. Either:

a) minds are Turing-computable, meaning we can replicate them;
b) minds are hypercomputers in a way which follows some normal physical law, meaning we can replicate them; or
c) minds are hypercomputers in a way which cannot be replicated (substance dualism).

I don’t see how there is a possible fourth view on which minds are hypercomputers that cannot in principle be replicated and yet follow only normal physical laws. Maybe some sort of material anti-reductionist who holds that there is a particular law governing exactly those things that are minds and nothing else? They would need to deny the in-principle feasibility of humans ever building a meat brain from scratch, which is hard to do (and of course it immediately loses to Occam’s Razor, but then this is philosophy, eh?). If you’re neither an anti-reductionist nor a dualist then there’s no way to make the claim, and there are better arguments against the people who are. I don’t really see much point in trying to convince anti-reductionists or dualists of anything, since their beliefs are uncorrelated with reality anyway.
Note: there are still interesting real-world feasibility questions to be explored, but those are technical questions. In any case, your paper would be much improved by adding, near the start, a clear and detailed statement of the thesis you’re proposing.
Oh, and before I forget: the question of whether machines we can currently build can implement a mind is purely a question of whether a mind is a hypercomputer or not. We don’t know how to build hypercomputers yet, but if the mind somehow turned out to be one, we’d presumably figure out how that part works along the way.
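To pin down what “hypercomputer” is doing in that sentence: the standard definition is a machine that computes functions no Turing machine can, with the halting function as the canonical example. Here is a toy sketch (my own illustration, not anything from your paper) of the usual diagonal argument for why no ordinary program can serve as a halting oracle; claiming “the mind is a hypercomputer” amounts to claiming the brain somehow does something on the far side of that line:

[code]
# Toy sketch (mine, not the paper's): why a halting oracle is strictly
# beyond any ordinary program. `halts` is hypothetical; the point is
# precisely that nothing we currently know how to build can implement it.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts."""
    raise NotImplementedError("no Turing machine can compute this")

def diagonal(program):
    # If `halts` really worked, diagonal(p) would halt exactly when
    # p(p) does NOT halt.
    if halts(program, program):
        while True:
            pass  # loop forever
    return        # halt immediately

# Now ask whether diagonal(diagonal) halts. If the oracle answered True,
# diagonal(diagonal) would loop forever; if it answered False, it would
# halt. Either answer contradicts the oracle, so no implementable halts()
# exists. A hypercomputer is, by definition, a device that answers such
# questions anyway.
[/code]

And if the mind did turn out to be a hypercomputer operating under ordinary physical law, the same point applies: once we understood the relevant physics, we could presumably build one ourselves.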
Thank you for the detailed commentary.