If we know every line of a robot’s source code, and that robot happens to be conscious, then it won’t do anything unexpected as a result of being conscious
What about splitting the concept “conscious” into several distinct ones? (As atucker also suggested.) I think what you mean is something like “qualia-laden”, while my version could be called the “strong version of consciousness” (= “starts talking about being self-aware”, etc.). So I think both of us are right, in a “tree falling in a forest” sense.
You’re also mostly right on the question of computers being conscious: of course, they mostly aren’t… but this is also what the theory predicts: although there definitely are some recursive elements, as you described, most of the meta-information is not processed at the same level as the one it is collected about, so no “thoughts” representing other, similar “thoughts” appear. (That does not exclude a self-model: I think antivirus software definitely passes the mirror test, by not looking for suspicious patterns in itself...)
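To make the antivirus joke concrete, here is a minimal, hypothetical sketch of a scanner with that rudimentary self-model; the signature bytes and the file handling are invented for illustration, not any real product’s logic:

```python
import os
import sys

SUSPICIOUS_PATTERN = b"\xde\xad\xbe\xef"  # made-up malware signature

def scan(path: str) -> bool:
    """Return True if the file at `path` contains the suspicious pattern."""
    # Rudimentary "self-model": the scanner recognizes its own executable
    # and skips it, since the signature database it carries around would
    # otherwise trigger a false positive on itself.
    if os.path.abspath(path) == os.path.abspath(sys.argv[0]):
        return False
    with open(path, "rb") as f:
        return SUSPICIOUS_PATTERN in f.read()
```

The point is only that this “mirror test” is a fixed rule about one hard-coded object (itself), not meta-information processed at the same level it is collected about.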
I guess the reason for this is that computers (and software systems) are so complicated that they usually don’t understand what they are doing. There might be higher-level systems that partially do (JIT compilers, for example), but this is still not a loop but rather a frozen, unrolled version of the consciousness loop the programmers ran while writing the code… (Maybe that’s why computers sometimes act quite intelligently, yet this suddenly breaks down when they encounter situations the programmers haven’t thought of.)
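A toy sketch of what I mean by “frozen, unrolled” (the situations and responses here are invented for the example):

```python
# The programmers' design-time deliberation, frozen into a fixed table:
# each case was "thought about" exactly once, while the code was written,
# and is never reconsidered at runtime.
FROZEN_RESPONSES = {
    "greeting": "Hello!",
    "question": "Let me look that up...",
}

def frozen_agent(situation: str) -> str:
    # Looks intelligent on anticipated inputs, but there is no live loop
    # left that could reflect on an unfamiliar input and adapt.
    return FROZEN_RESPONSES[situation]

print(frozen_agent("greeting"))  # works: the programmers anticipated this
print(frozen_agent("sarcasm"))   # KeyError: a situation nobody thought of
```

The intelligence is real, but it lives in the loop the programmers ran in their heads; the running program only replays its unrolled residue, which is why the failure is so abrupt.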
By the way, have we ever encountered any beings other than humans as close to being reflectively conscious as computers? (I usually have more intelligent conversations with them than with dogs, for example...)