Some currently existing robots also have some representation of themselves, but they aren’t conscious at all… I think it is true that the concept of a self-model has something to do with consciousness, but it is not the source of it. (By the way, there is not much that is recursive about the brain modeling the body.)
Animals represent things, but they don’t represent their representation.
For me, this seems to be the key point… that conscious entities have representations of their thoughts. That we can perceive them just like tables and apples in front of us, and reason about them, allowing thoughts like “I know that the thing I see is a table” (because “I see a thought in my brain saying <…>”).
Using this view, “conscious” just stands for “able to perceive its own thoughts as sensory input”. The statement “we experience qualia” is a reasonable output for a process that has to organize inputs like … This would also explain the fact that we tend to talk about qualia as something that physically exists but can never be sure that others also have them: they arrive through sensory pathways just like when we see an apple (so they look like parts of reality), but we get to see only our own...
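If it helps to make that concrete, here is a rough toy sketch in Python (entirely my own construction, with made-up names, not anything from the post) of a process whose own internal records arrive through the same input channel as external percepts:

```python
# Toy sketch: a process whose own internal states re-enter through the
# same channel as external percepts. All names are made up for illustration.

class Mind:
    def __init__(self):
        self.percepts = []          # everything that has arrived "as input"

    def perceive(self, item):
        """All input, external or internal, goes through the same channel."""
        self.percepts.append(item)

    def reflect(self):
        """Feed the most recent percept back in as a percept about a percept."""
        if self.percepts:
            self.perceive(("I notice the thought:", self.percepts[-1]))

m = Mind()
m.perceive("table")     # external input, like seeing a table
m.reflect()             # now it also has a percept about its own percept
print(m.percepts[-1])   # ("I notice the thought:", "table")
```

On this picture, that second entry is the kind of thing that gets reported as “I know that I see a table”.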
Does this sound reasonable, by the way? (Can’t wait for the second part, especially if it deals with similar topics.)
Some currently existing robots also have some representation of themselves, but they aren’t conscious at all.
Not that I honestly think you’re wrong here, but it’s worth asking how you supposedly know this. If we know every line of a robot’s source code, and that robot happens to be conscious, then it won’t do anything unexpected as a result of being conscious: No “start talking about consciousness” behavior will acausally insert itself into the robot’s program.
It’s also not entirely clear how “representing their representation” is any different from what many modern computer tools do. My computer can tell me about how much memory it’s using and what it’s using it for, about how many instructions it’s processing per second, print a stack trace of what it was doing when something went wrong, etc. It’s even recursive: the diagnostic can inspect itself while it runs.
I don’t think that my computer is conscious, but if I was going by your explanation alone I might be tempted to conclude that it is.
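For what it’s worth, that kind of self-inspection takes only a few lines; here is a minimal sketch using nothing but Python’s standard traceback module, in which the diagnostic reads the very call stack it is part of while it runs:

```python
# Minimal sketch of a program inspecting itself while it runs,
# using only the Python standard library.
import traceback

def diagnostic():
    # The diagnostic walks the call stack it currently belongs to,
    # including its own frame: inspection that includes the inspector.
    for line in traceback.format_stack():
        print(line.strip())

def do_work():
    diagnostic()

do_work()
```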
If we know every line of a robot’s source code, and that robot happens to be conscious, then it won’t do anything unexpected as a result of being conscious
What about splitting the concept “conscious” into several different ones? (As atucker also suggested.) I think what you mean is something like “qualia-laden”, while my version could be called a “strong version of consciousness” (= “starts talking about being self-aware”, etc.). So I think both of us are right in a “tree falling in a forest” sense.
You’re also mostly right on the question of whether computers are conscious: of course, they mostly aren’t… but this is also what the theory predicts: although there definitely are some recursive elements, as you described, most of the meta-information is not processed at the same level as the one it is collected about, so no “thoughts” representing other, similar “thoughts” appear. (That does not exclude a concept of a self-model: I think antivirus software definitely passes the mirror test by not looking for suspicious patterns in itself...)
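A toy version of that antivirus “mirror test”, just to illustrate the idea (the “suspicious” signature is made up and this is nothing like how real scanners work):

```python
# Toy "mirror test": a scanner with a model of itself, used to skip its own file.
# Entirely illustrative.
import os
import sys

SUSPICIOUS = b"recursively copy me"   # made-up signature

def scan(path):
    self_path = os.path.abspath(sys.argv[0])    # the scanner's model of itself
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.abspath(full) == self_path:
                continue                        # "that one is me", don't flag it
            try:
                with open(full, "rb") as f:
                    if SUSPICIOUS in f.read():
                        print("suspicious:", full)
            except OSError:
                pass

if __name__ == "__main__":
    scan(".")
```

The single line that skips self_path is the whole “self-model” here; everything else is ordinary scanning.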
I guess the reason for this is that computers (and software systems) are so complicated that they usually don’t understand what they are doing. There might be higher-level systems that partially do (JIT compilers, for example), but this is still not a loop but some frozen, unrolled version of the consciousness loop the programmers had when writing the code… (Maybe that’s why computers sometimes act quite intelligently, but this suddenly breaks down when they encounter situations the programmers haven’t thought of.)
By the way, have we ever encountered any beings other than humans as close to being reflectively conscious as computers? (I usually have more intelligent conversations with them than with dogs, for example...)
This is very similar to my current beliefs on the subject.
I was considering adding “Animals are conscious, but not self-aware” to that, but that would mostly be using the word consciousness in a not-agreed-upon way: namely, as the ability to feel or perceive, but not full-blown human-style consciousness.
That’s called sentience, isn’t it?
I think that’s the most commonly accepted word for it, but I think it means enough different things to enough people that at that point it’s better to just talk about things directly.