To be asked for “evidence” that cognition requires computational resources is something I find bizarre. It is not something I know how to respond to. If multiple people need to see evidence that cognition requires computational resources, this may be the wrong forum for me to discuss such things.
It strikes me as bizarre too, particularly here. So, you have to ask yourself whether you are misinterpreting. Maybe they are asking for evidence of something else.
perplexed, if detecting consciousness in someone else requires data and computation, why is our own consciousness special, such that it doesn’t require data and computation to be detected? No one has presented any evidence or argument that our own consciousness is special.
You are asking me to think about topics I usually try to avoid. I believe that most talk about cognition is confused, and doubt that I can do any better. But here goes.
During the evolutionary development of human cognition, we passed through these stages:
(1) recognition of others (i.e. animate objects) as volitional agents who act so as to maximize the achievement of their own preferences. The ability to make this discrimination between animate and inanimate is a survival skill, as is the ability to infer the preferences of others.
(2) recognition of others as epistemic agents who have beliefs about the world. The ability to infer others’ beliefs is also a survival skill.
(3) recognition that among the beliefs of others is the belief that we ourselves are volitional and epistemic agents. It is a very important survival skill to infer the beliefs of others about ourselves.
(4) roughly at the same time, we come to understand that the beliefs of others that we are volitional and epistemic agents appear to be true. This realization is certainly interesting, but has little survival value. However, some folks call this realization “consciousness” and believe it is a big deal.
(5) finally, we develop language so that we can both (a) discuss, and (b) introspect on all of the above. This turns out, by accident as it were, to have enormous survival value and is the thing that makes us human. And some other folks call this linguistic ability “consciousness”, rather than applying that label to the mere awareness of an equivalence in cognitive function between self and other.
So that is my off-the-cuff theory of consciousness. It certainly requires social cognition and it probably requires language. It obviously requires computation. It is relatively useless, but it is the inevitable byproduct of useful things. Ah, but now let us add
(6) we also come to understand that others also believe that they are volitional and epistemic agents. Once again, this understanding provides no survival value, but it is probably inevitable if we and they want our belief structures to remain consistent.
Was that an important addition? I don’t think so. It is important to recognize volitional agents, epistemic agents, and eventually moral agents, as well as the fact that others act as if we ourselves were also agents of all three kinds. I’m not quite sure why anyone much cares whether we ourselves, or any of the other agents, are also conscious.
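If it helps to make the regress concrete, here is a minimal sketch, in Python, of the nested belief structure that stages (1) through (6) describe. Everything in it (the class, the names, the toy beliefs) is hypothetical illustration, not a model of any actual cognitive machinery; the point is just that each added stage is more stored structure, which is part of why the whole stack needs data and computation.

```python
# Hypothetical sketch: each stage of the theory adds another layer of
# stored structure to one agent's model of another.
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """One agent's model of some agent (possibly itself)."""
    name: str
    preferences: dict = field(default_factory=dict)       # stage (1): volitional agent
    beliefs: dict = field(default_factory=dict)           # stage (2): epistemic agent
    models_of_others: dict = field(default_factory=dict)  # stages (3)-(6): nested models

# Stages (1)-(2): I model Bob as having preferences and beliefs.
bob = AgentModel("Bob", preferences={"food": 0.9}, beliefs={"raining": True})

# Stage (3): among Bob's beliefs is a model of me as an agent.
bob.models_of_others["me"] = AgentModel("me", beliefs={"is_an_agent": True})

# Stage (4): I notice that Bob's belief about me appears to be true --
# his model of me matches my own self-model.
my_self_model = AgentModel("me", beliefs={"is_an_agent": True})
print(bob.models_of_others["me"].beliefs == my_self_model.beliefs)  # True

# Stage (6): I also model Bob as believing that he himself is an agent.
bob.models_of_others["Bob"] = AgentModel("Bob", beliefs={"is_an_agent": True})
```

Stage (5), language, is the part the sketch cannot show: it is what lets the structure above be discussed and introspected on at all.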
To use EY terminology from the sequences, all the useful stuff above is purely about maps. The consciousness stuff is about thinking that maps really match up to territory. But as reductionists, we know that the real matchup between map and territory actually takes place several levels down. So consciousness, like free will, is a mostly harmless illusion to be dissolved, rather than an important phenomenon to be understood.
That probably didn’t help you very much, but it helped me to clarify my own thinking.
The self-model theory of subjectivity can also suggest: (7) the ability to explicitly represent the state of our own knowledge, intentions, focus of attention, etc.; the ability to analyse the performance of our own brain and find ways to circumvent its limitations; and the ability to control the brain’s resource allocation by learned (rather than evolved) procedures.
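To make (7) slightly more concrete, a toy sketch, again with every name and mechanism purely hypothetical: a system that keeps an explicit, queryable record of its own knowledge and focus of attention, and uses a learned (rather than hard-wired) policy to reallocate effort toward its weakest area.

```python
# Hypothetical sketch of stage (7): explicit self-representation plus
# learned control over resource allocation.
class SelfModelingAgent:
    def __init__(self):
        self.knowledge = {}    # topic -> confidence: explicit state of own knowledge
        self.attention = None  # explicit, reportable focus of attention

    def report_uncertainty(self):
        """Introspection: rank topics from least to most confident."""
        return sorted(self.knowledge, key=self.knowledge.get)

    def reallocate(self):
        """A learned (not evolved) policy: attend to the weakest topic."""
        ranked = self.report_uncertainty()
        if ranked:
            self.attention = ranked[0]

agent = SelfModelingAgent()
agent.knowledge = {"faces": 0.9, "calculus": 0.2, "navigation": 0.6}
agent.reallocate()
print(agent.attention)  # calculus
```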
“The consciousness stuff is about thinking that maps really match up to territory.”
An interesting thing about consciousness is that the map is part of the territory it describes, and since the map is presumably represented by neuronal connections and activity, it can influence that territory.
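A trivial sketch of that self-reference, purely illustrative: a map stored inside the very territory it describes, so that updating the map is itself an event in the territory.

```python
# The territory: a toy physical state.
territory = {"neurons_active": 1000}

# The map is itself a part of the territory it describes:
territory["map_of_self"] = {"neurons_active": 1000}

# Because the map is made of the same stuff, updating it is a change
# to the territory, and territory changes can feed back into the map.
territory["neurons_active"] += 1
territory["map_of_self"]["neurons_active"] = territory["neurons_active"]
print(territory)
```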
Yes, and (1) through (7) all require data and computational resources.
And to compare a map with a territory, one needs the map (i.e., data), a comparator (i.e., a pattern-recognition device), and the computational resources to run that comparison.
When one is thinking about internal states, the map, the territory, and the comparator are all internal. That they are internal does not obviate the need for them.
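As a minimal illustration of the comparator point, with all names illustrative: the map is data, the comparator is a function, and running the comparison costs computation in proportion to the map’s size, whether the territory being checked is external or internal.

```python
# Comparator: a pattern-recognition device reduced to its simplest form.
def compare(map_data, territory):
    """Return the fraction of map entries that match the territory."""
    if not map_data:
        return 0.0
    matches = sum(territory.get(k) == v for k, v in map_data.items())
    return matches / len(map_data)

# Even when the 'territory' is itself internal state (a record of our
# own beliefs), the comparison still consumes data and compute.
internal_territory = {"hungry": True, "attending_to_food": True}
internal_map = {"hungry": True, "attending_to_food": False}
print(f"map accuracy: {compare(internal_map, internal_territory):.2f}")  # 0.50
```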