You keep treating possibilities as actualities. LaMDA might be simulating people without being programmed or prompted to, and the CR might have full semantics without a single symbol being grounded, but in both cases they might not.
There isn't any fundamental doubt about what computers are doing, because they are computers. Computers can't have strongly emergent properties; you can peek inside the box and see what's going on.
The weakness of the systems reply to the CR is that it forces you to accept that a system that is nothing but a lookup table has consciousness, or that a system without a single grounded symbol has semantics. (Searle can close the loophole about encoding images by stipulation.)
Likewise, there is no reason to suppose that LaMDA is simulating a person every time it answers a request: it's not designed to do that, it's not going to do so inexplicably because it's a computer, and you can examine what it's doing.