Yvain, nicely put.
Another kind of argument, which I’m not sure Lanier was making but others certainly have, is that you can be a naturalist without being a reductionist, and you can be a reductionist without believing that computation is the right model for human brains. EY himself has pointed out that certain forms of symbolic AI are misleading, since naming your Lisp symbol UNDERSTAND does not mean you have implemented understanding. Lanier is making a similar but stronger case against computation in general.
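To make that point about suggestively named symbols concrete, here is a minimal, hypothetical sketch (not taken from any actual AI system, and in Python rather than Lisp) of a function whose name claims far more than its code delivers:

```python
# Toy illustration: a function named "understand" that only matches strings
# against a lookup table. The suggestive name does no semantic work; renaming
# it to "lookup" would change nothing about what the program actually does.

CANNED_RESPONSES = {
    "hello": "greeting detected",
    "why is the sky blue": "question about optics detected",
}

def understand(utterance: str) -> str:
    """Claims to 'understand' input, but merely checks a table of canned cases."""
    return CANNED_RESPONSES.get(utterance.lower().strip(), "no match")

if __name__ == "__main__":
    print(understand("Hello"))             # "greeting detected"
    print(understand("What is justice?"))  # "no match" -- nothing like understanding
```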
More reasoned critiques of computationalism from within the field have been produced by Rod Brooks, David Chapman, Phil Agre, Terry Winograd, Lucy Suchman, and others. I’d really recommend starting with them rather than revisiting the stale and ridiculous zombie argument and its relatives.