Some currently existing robots also have some representation of themselves, but they aren’t conscious at all.
Not that I honestly think you’re wrong here, but it’s worth asking how you supposedly know this. If we know every line of a robot’s source code, and that robot happens to be conscious, then it won’t do anything unexpected as a result of being conscious: No “start talking about consciousness” behavior will acausally insert itself into the robot’s program.
It’s also not entirely clear how “representing their representation” is any different from what many modern computer tools do. My computer can tell me about how much memory it’s using and what it’s using it for, about how many instructions it’s processing per second, print a stack trace of what it was doing when something went wrong, etc. It’s even recursive: the diagnostic can inspect itself while it runs.
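For concreteness, here is a minimal Python sketch (my own illustration, not anything from the original comment) of that kind of self-inspection: the program reports its own memory use and its current call stack, and the diagnostic routine then inspects itself one level deeper while it runs.

```python
# Minimal sketch of a program inspecting itself (illustrative only).
import tracemalloc
import traceback


def inspect_self(depth: int = 0) -> None:
    """Report memory usage and the current call stack, then have the
    diagnostic inspect itself one level deeper."""
    current, peak = tracemalloc.get_traced_memory()
    print(f"[level {depth}] tracked memory: {current} B (peak {peak} B)")
    print(f"[level {depth}] current stack:")
    print("".join(traceback.format_stack(limit=4)))
    if depth < 1:
        # The "recursive" part: the diagnostic inspecting the diagnostic.
        inspect_self(depth + 1)


if __name__ == "__main__":
    tracemalloc.start()
    inspect_self()
```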
I don’t think that my computer is conscious, but if I were going by your explanation alone, I might be tempted to conclude that it is.
If we know every line of a robot’s source code, and that robot happens to be conscious, then it won’t do anything unexpected as a result of being conscious
What about splitting the concept “conscious” into several different ones? (As atucker also suggested.) I think what you mean is something like “qualia-laden”, while my version could be called the “strong version of consciousness” ( = “starts talking about being self-aware”, etc.). So I think both of us are right, in a “tree falling in the forest” sense.
You’re also mostly right on the question of whether computers are conscious: of course, they mostly aren’t… but that is also what the theory predicts. Although there definitely are some recursive elements, as you described, most of the meta-information is not processed at the same level as the one it is collected about, so no “thoughts” representing other, similar “thoughts” appear. (That does not exclude a self-model: I think antivirus software definitely passes the mirror test by not looking for suspicious patterns in itself...)
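To make the antivirus aside concrete, here is a toy sketch (my own, with a made-up signature string and no real detection logic) of a scanner that “passes the mirror test” simply by recognizing its own file and excluding it from the scan.

```python
# Toy "antivirus mirror test": skip the scanner's own file (illustrative only).
import os

SUSPICIOUS_PATTERN = b"malicious-looking-bytes"  # hypothetical signature
SELF_PATH = os.path.abspath(__file__)


def scan(path: str) -> bool:
    """Return True if the file matches the signature, except for the
    scanner's own source file, which is recognized as "me" and skipped."""
    if os.path.abspath(path) == SELF_PATH:
        return False  # don't flag the pattern embedded in our own code
    with open(path, "rb") as f:
        return SUSPICIOUS_PATTERN in f.read()


if __name__ == "__main__":
    for name in os.listdir("."):
        if os.path.isfile(name) and scan(name):
            print(f"flagged: {name}")
```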
I guess the reason for this is that computers (and software systems) are so complicated that they usually don’t understand what they are doing. There might be higher-level systems that partially do (JIT compilers, for example), but even that is not a loop, just a frozen, unrolled version of the consciousness loop the programmers had while writing the code… (Maybe that’s why computers sometimes act quite intelligently, but this suddenly breaks down when they encounter situations the programmers haven’t thought of.)
By the way, have we ever encountered any beings other than humans that are as close to being reflectively conscious as computers? (I usually have more intelligent conversations with them than with dogs, for example...)