My purpose in pointing this out was to say that yes, people today are making the same type of category error that Kelvin made: the mistaken belief that certain types of objects are fundamentally not comparable. In Kelvin’s case the objects were living things and machines; in my example they were computations by a sensory neural network and computations by a machine pattern-recognition system.
They are both doing computations, and they can both be compared as computing devices; they both need computational resources to accomplish their computations and data to do the computations on.
For either of them to detect something, both data and computational resources are needed, even when the thing being detected is consciousness. Why there is the need/desire to keep “consciousness” as a special category of things/objects, for which the normal rules of sensory detection do not apply, is not something I understand.
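To make the comparison concrete, here is a minimal sketch of that claim; it is my own illustration, not anything from the original post, and the names (Detector, detect, ops_used) are invented. It treats any detector, biological or mechanical, as the same abstract thing: a process that consumes data and computational resources to produce a detection.

```python
from typing import Callable, Sequence


class Detector:
    """Any detector, modeled abstractly: pattern recognition over data."""

    def __init__(self, recognize: Callable[[Sequence[float]], bool]):
        self.recognize = recognize  # the pattern-recognition computation
        self.ops_used = 0           # crude proxy for computational resources

    def detect(self, data: Sequence[float]) -> bool:
        if not data:
            raise ValueError("detection is impossible without data")
        self.ops_used += len(data)  # examining each datum costs computation
        return self.recognize(data)


# A "neural-network-like" detector and a "machine" detector fit one interface,
# so the two can be compared as computing devices:
neural_like = Detector(lambda xs: sum(xs) / len(xs) > 0.5)
machine_like = Detector(lambda xs: max(xs) > 0.9)

signal = [0.2, 0.8, 0.7]
print(neural_like.detect(signal), neural_like.ops_used)    # True 3
print(machine_like.detect(signal), machine_like.ops_used)  # False 3
```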
My experience has always been that if you look hard enough for errors you will find them. If someone wants to look for trivial errors and use them to discount whatever is present that is not trivial error, then discussion of difficult problems becomes impossible.
My motivation for my original post was not to do battle but to discuss the computational requirements of consciousness and consciousness detection. If that is such a hot-button topic that people feel the need to attack me, my arguments, and my ineptness at text formatting, to pile on, and to vote my karma down to oblivion, then perhaps LW is not ready to discuss such things and I should move the discussion to my blog.
then perhaps LW is not ready to discuss such things
Uh, what? The post is poorly written along a number of dimensions, and was downvoted because people don’t want to see poorly written posts on the front page. The comments are pointing out specific problems with it. To interpret that as a problem with the community is a fairly egregious example of cognitive dissonance.
For either of them to detect something, both data and computational resources are needed, even when the thing being detected is consciousness. Why there is the need/desire to keep “consciousness” as a special category of things/objects, for which the normal rules of sensory detection do not apply, is not something I understand.
So, if I understand you, detecting consciousness in someone else is something like detecting anger in someone else: of course we can’t do it perfectly, but we can still do it. Makes sense to me. Happy to have fed you the straight line.
My motivation for my original post was not to do battle but to discuss the computational requirements of consciousness and consciousness detection. If that is such a hot-button topic that people feel the need to attack me, my arguments, and my ineptness at text formatting, to pile on, and to vote my karma down to oblivion, then perhaps LW is not ready to discuss such things and I should move the discussion to my blog.
I understand your frustration. FWIW, I upvoted you some time ago, not because I liked your post, but rather because it wasn’t nearly bad enough to be downvoted that far. Maybe I felt a bit guilty.
I don’t really think there is “the need/desire to keep ‘consciousness’ as a special category of things/objects”, at least not in this community. However, there is a kind of exhaustion regarding the topic, and an intuition that the topic can quickly become quicksand. As I said, I found your title attractive because I thought it would be something like “here are the computations which we know/suspect a conscious entity must accomplish, and here is how big/difficult they are”. Maybe the posting started with that, but then it shifted from the computation needed to establish and maintain consciousness, to the computation needed to recognize consciousness, to who knows what else. My complaint was that your posting was disorganized. But down at the sentence/paragraph level, it struck me as competent and occasionally interesting. I hope you don’t let this bad experience drive you from LW.
perplexed, if detecting consciousness in someone else requires data and computation, why is our own consciousness special, such that it doesn’t require data and computation to be detected? No one has presented any evidence or any arguments that our own consciousness is special. Until I see a reasonable argument otherwise, my default will be that my own consciousness is not special and that everyone else’s consciousness is not special either.
I appreciate that some people do privilege their own consciousness. My interpretation of that self-privileging is that it is not based on any rational examination of the issue but merely on feelings. If there is a rational examination of the issue, I would like to see it.
If every other instance of detecting consciousness requires data and pattern recognition, then why doesn’t the self-detection of self-consciousness require data and pattern recognition?
If people are exhausted by a topic, they should not read posts on it. If people are afraid of getting caught in quicksand, they should stay away from it. If people find their intuition not useful, they should not rely on it.
When I asserted that self-detection of self-consciousness requires data and computation resources, I anticipated it being labeled a self-evident, obvious, or trivial statement. To have it labeled as “opinion” is completely perplexing to me. To have that labeling upvoted means that multiple people must share it.
How can any type of cognition happen without data and computation resources? Any type of information processing requires data and computation resources. Even a dualist treatment posits mythical immaterial data and mythical immaterial computation resources to do the necessary information processing. To be asked for “evidence” that cognition requires computation resources is something I find bizarre. It is not something I know how to respond to. When multiple people need to see evidence that cognition requires computation resources, this may be the wrong forum for me to discuss such things.
To be asked for “evidence” that cognition requires computation resources is something I find bizarre. It is not something I know how to respond to. When multiple people need to see evidence that cognition requires computation resources, this may be the wrong forum for me to discuss such things.
If smart people disagree this bizarrely, the smart money is on a misunderstanding, not a disagreement. For example, here cousin_it said:
However there must be certain computational functions that must be accomplished for consciousness to be experienced.
The same question applies: how on Earth do you know that? Where’s your evidence? Sharing opinions only gets us so far!
What might he have meant that’s not insane? Perhaps that he wants evidence that there must be *certain* computational functions (that these particular functions are required), rather than evidence that there *must be* certain computational functions (that computation is required at all).
GuySrinivasan, I really can’t figure out what is being meant.
In my next sentence I say that I am not trying to describe all the computations that are necessary, and in the sentence after that I start talking about entity-detection computation structures being necessary.
First an entity must have a “self detector”: a pattern-recognition computation structure which it uses to recognize its own state of being an entity and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can’t be conscious that it is an entity.
I think that is a pretty clear description of a particular cognitive structure, one that requires computational resources for an entity to recognize itself.
What is it that cousin_it is disputing and wants me to provide evidence for? That an entity doesn’t need a “self-detector” to recognize itself? That a “self-detector” doesn’t require pattern recognition? That pattern recognition doesn’t require computation?
I really don’t understand. But some other people must have understood it, because they upvoted the comment; maybe some of those people could explain it to me.
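For what it is worth, here is one way the “self detector” described above could be cashed out; this is my own toy sketch, with invented names (Entity, detects_self), not anything from the post. The point is only that self-recognition is a pattern match over data about one’s own state, which consumes computational resources like any other pattern match.

```python
from dataclasses import dataclass


@dataclass
class Entity:
    """A toy entity with a stored self-model and a self-detector."""

    identity_signature: tuple  # the self-model: data about what "I" look like
    checks_performed: int = 0  # computation expended on self-detection

    def observe_self(self) -> tuple:
        # Stand-in for introspective data gathering; in a real system this
        # would be sensory or internal-state measurement.
        return self.identity_signature

    def detects_self(self) -> bool:
        observation = self.observe_self()  # requires data...
        self.checks_performed += 1         # ...and requires computation
        return observation == self.identity_signature  # the pattern match


e = Entity(identity_signature=("entity-42", "same entity over time"))
print(e.detects_self(), e.checks_performed)  # True 1
```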
What is it that cousin_it is disputing and wants me to provide evidence for?
That consciousness requires a self detector thingy. This may or may not be true—you haven’t given enough evidence either way. Sure, humans are conscious and they can also self-detect; so what? At this stage it’s like claiming that flight requires flapping your wings.
It is your contention that an entity can be conscious without being aware that it is conscious?
There are entities that are not aware of being conscious. To me, if an entity is not aware of being conscious (i.e. is unconscious of being conscious), then it is unconscious.
By my understanding of the term, the one thing an entity must be aware of to be conscious is its own consciousness. I see that as an inherent part of the definition. I cannot conceive of a definition of “consciousness” that allows for a conscious entity to be unaware that it is conscious.
Could you give me a definition of “consciousness” that allows for being unaware of being conscious?
If all that consciousness entails is being aware of being conscious, it doesn’t mean anything at all, does it? We could just as well say:
“My machine is fepton! I know this because it’s aware of being fepton; just ask, and it will tell you that it’s fepton! What’s fepton, you ask? Well, it’s the property of being aware of being fepton!”
I’m not allowed, under your definition, to posit a conscious being that is aware of every fact about the universe except the fact of its own consciousness, only because a being with such a description would be unconscious, by definition. It seems to be a pretty useless thing to be aware of.
If a being is not aware of being conscious, then it is not conscious no matter what else it is aware of.
I am not saying that all consciousness entails is being aware of being conscious, but it does at a minimum entail that. If an entity does not have self-awareness, then it is not conscious, no matter what other properties that entity has.
You are free to make up any hypothetical entities and states that you want, but the term “consciousness” has a generally recognized meaning. If you want to deviate from that meaning you have to tell me what you mean by the term, otherwise my default is the generally recognized meaning.
Could you give me a definition of “consciousness” that allows for being unaware of being conscious?
Could you give me a definition of “consciousness” that allows for being unaware of being conscious?
Ten seconds ago I was unaware of being conscious: my attention was directed elsewhere. Does that mean I was unconscious? How about a creature who spends its whole life like that? Will you claim that it is conscious only because it has the potential to notice its own consciousness, or something?
Yes, if you are not aware of being conscious then you are unconscious. You may have the capacity to be conscious, but if you are not using that capacity, because you are asleep, under anesthesia, or sufficiently dissociated from being conscious, then you are not conscious at that moment.
There are states where people “black out”: they seemingly function appropriately but later have no memory of those periods. Such states can occur due to drug use; they can also arise through psychogenic processes, as in a fugue state.
There is also the term “semiconscious”. Maybe that would be the appropriate term for an entity that is capable of consciousness but is not using that capacity.
Do you consider flow states (being so fascinated by something that you forget yourself and the passage of time) as not being conscious?
Yes. I would consider those states to be “unconscious”. I am not using “conscious” or “unconscious” as pejorative terms or as terms carrying any kind of value judgment, but purely as descriptive terms for the state of an entity. If an entity is not self-aware in the moment, then it is not conscious.
People are not aware of the data processing their visual cortex is doing (at least I am not). When you are not aware of the data processing you are doing, the outcome of that processing is “transparent” to you; that is, the output is achieved without an understanding of the path by which it was achieved. Because you don’t have the ability to influence the data processing your visual cortex is doing, its output is susceptible to optical illusions.
Dissociation is not uncommon. Thinking about it, I suspect I dissociate quite a bit, and that it is fairly easy for me to dissociate. I do my best intellectual work in what I call a “dissociative focus”, where I am quite oblivious to extraneous things and even to my physical state: hunger, fatigue, those kinds of things.
I think that entering a dissociative state is not uncommon, particularly under conditions of very high stress, and I think there is a reason for that: under very high stress, all of the brain’s computational resources are needed to deal with whatever is causing the stress. Spending computational resources on being conscious or self-aware is a luxury an entity can’t afford while it is “running from a bear” (to use my favorite extreme-stress example).
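As a toy illustration of that resource-allocation point, here is a sketch with made-up task names, costs, and priorities (none of this is from the thread): with a fixed compute budget per tick, self-monitoring is the first task starved when a threat consumes the budget.

```python
BUDGET = 100  # arbitrary units of computation available per tick


def allocate(tasks, budget=BUDGET):
    """Greedily fund tasks in priority order; return the names funded."""
    funded = []
    for name, cost, priority in sorted(tasks, key=lambda t: -t[2]):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded


# (name, compute cost, priority); all numbers invented for the example
calm = [("perception", 30, 3), ("self-monitoring", 40, 2), ("planning", 30, 1)]
fleeing_bear = [("perception", 50, 3), ("escape-motor-control", 50, 4),
                ("self-monitoring", 40, 2)]

print(allocate(calm))          # ['perception', 'self-monitoring', 'planning']
print(allocate(fleeing_bear))  # ['escape-motor-control', 'perception']
```

Under load, self-monitoring simply does not get funded, which is one way to read “dissociation under extreme stress” computationally.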
I haven’t looked at the Living Luminously sequence carefully, but I think I mostly disagree with it as something to strive for. It is fine if that is what you want to do, but I don’t aspire to think that way; trying to would interfere with what I am trying to accomplish.
I see living while extremely conscious of self (what I understand the luminous state to be) and living while dissociated from being conscious as two extremes along a continuum: thinking with your “theory of mind” (the self-conscious, luminous state) versus thinking with your “theory of reality” (the dissociative state). I discuss this in great detail on my blog about autism.
If you are not in a mode where you are thinking about entities, then you are not using your “theory of mind”. If you are thinking about things in purely non-anthropomorphic terms, you are not using your “theory of mind”.
I think these two different states are useful for thinking about different kinds of problems. Interpersonal problems, interactions with other people, and communication are best dealt with by the “theory of mind”. All the examples in the Seven Shining Stories are what I would consider pretty much pure theory-of-mind problems. Theory-of-reality problems are things like the traveling salesman problem, multiplying numbers, and more algorithm-like tasks such as counting: problems with little or no interpersonal or communication component.
To be asked for “evidence” that cognition requires computation resources is something I find bizarre. It is not something I know how to respond to. When multiple people need to see evidence that cognition requires computation resources, this may be the wrong forum for me to discuss such things.
It strikes me as bizarre too, particularly here. So, you have to ask yourself whether you are misinterpreting. Maybe they are asking for evidence of something else.
perplexed, if detecting consciousness in someone else requires data and computation, why is our own consciousness special, such that it doesn’t require data and computation to be detected? No one has presented any evidence or any arguments that our own consciousness is special.
You are asking me to think about topics I usually try to avoid. I believe that most talk about cognition is confused, and doubt that I can do any better. But here goes.
During the evolutionary development of human cognition, we passed through these stages:
(1) recognition of others (i.e. animate objects) as volitional agents who act so as to maximize the achievement of their own preferences. The ability to make this discrimination between animate and inanimate is a survival skill, as is the ability to infer the preferences of others.
(2) recognition of others as epistemic agents who have beliefs about the world. The ability to infer others’ beliefs is also a survival skill.
(3) recognition that among the beliefs of others is the belief that we ourselves are volitional and epistemic agents. It is a very important survival skill to infer the beliefs of others about ourselves.
(4) roughly at the same time, we come to understand that the beliefs of others that we are volitional and epistemic agents appear to be true. This realization is certainly interesting, but has little survival value. However, some folks call this realization “consciousness” and believe it is a big deal.
(5) finally, we develop language so that we can both (a) discuss, and (b) introspect on all of the above. This turns out, by accident as it were, to have enormous survival value and is the thing that makes us human. And some other folk call this linguistic ability “consciousness”, rather than applying that label to the mere awareness of an equivalence in cognitive function between self and other.
So that is my off-the-cuff theory of consciousness. It certainly requires social cognition and it probably requires language. It obviously requires computation. It is relatively useless, but it is the inevitable byproduct of useful things. Ah, but now let us add:
(6) we also come to understand that others also believe that they are volitional and epistemic agents. Once again, this understanding provides no survival value, but it is probably inevitable if we and they want our belief structures to remain consistent.
Was that an important addition? I don’t think so. It is important to recognize volitional agents, epistemic agents, and eventually moral agents, as well as the fact that others act as if we ourselves were also agents of all three kinds. I’m not quite sure why anyone much cares whether either ourselves or any of the other agents are also conscious.
To use EY terminology from the sequences, all the useful stuff above is purely about maps. The consciousness stuff is about thinking that maps really match up to territory. But as reductionists, we know that the real matchup between map and territory actually takes place several levels down. So consciousness, like free will, is a mostly harmless illusion to be dissolved, rather than an important phenomenon to be understood.
That probably didn’t help you very much, but it helped me to clarify my own thinking.
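If it helps, the nested structure of several of these stages can be written down directly. A minimal sketch, with the representation and names (Belief, holder, about) invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Belief:
    holder: str                       # who holds the belief
    claim: str                        # what is believed
    about: Optional["Belief"] = None  # a belief may be about another belief


# (1)/(2): I recognize you as a volitional and epistemic agent.
b12 = Belief("me", "you are a volitional, epistemic agent")

# (3): I recognize that you believe that I am such an agent.
b3 = Belief("me", "you hold the belief below",
            about=Belief("you", "the other is an agent"))

# (4): I come to understand that your belief about me appears to be true.
b4 = Belief("me", "the belief below appears true", about=b3.about)

# (6): I understand that you also believe that you yourself are an agent.
b6 = Belief("me", "you hold the belief below",
            about=Belief("you", "I myself am an agent"))

for b in (b12, b3, b4, b6):
    print(b)
```

Each added stage is one more level of nesting, and each level is more data to store and more computation to keep consistent.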
The self-model theory of subjectivity can also suggest:
(7) ability to explicitly represent the state of our own knowledge, intentions, focus of attention, etc.; ability to analyze the performance of our own brain and find ways to circumvent its limitations; ability to control the brain’s resource allocation by learned (rather than evolved) procedures.
The consciousness stuff is about thinking that maps really match up to territory.
An interesting thing about consciousness is that the map is part of the territory it describes, and since the map is presumably represented by neuronal connections and activity, it can influence the territory.
Yes, and (1), (2), (3), (4), (5), (6), and (7) all require data and computation resources.
And to compare a map with a territory, one needs the map (i.e., data), a comparator (i.e., a pattern-recognition device), and computational resources to compare the data with the territory using the comparator.
When one is thinking about internal states, the map, the territory and the comparator are all internal. That they are internal does not obviate the need for them.
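A minimal sketch of that last point, with invented names and data: even when the map, the territory readings, and the comparator are all internal, the comparison still consumes data and computation.

```python
def compare(map_data, territory_readings):
    """Return (fraction of map entries confirmed, operations spent)."""
    ops = 0
    matches = 0
    for key, predicted in map_data.items():
        ops += 1  # each check costs computation
        if territory_readings.get(key) == predicted:
            matches += 1
    return matches / len(map_data), ops


# An internal map of one's own state, and "readings" of that same state:
internal_map = {"hungry": False, "attending": True, "angry": False}
internal_readings = {"hungry": False, "attending": True, "angry": True}

accuracy, ops_spent = compare(internal_map, internal_readings)
print(accuracy, ops_spent)  # 0.666..., 3: even self-comparison is not free
```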
To have it labeled as “opinion” is completely perplexing to me.
It’s perplexing to me that you would be perplexed by this. Is it not your opinion? I would assume it is your opinion, since you have asserted it. It is clearly not your opinion that its negation is true.
I confess, I am lost. It seems we are in an arguments-as-soldiers situation in which everyone is shooting at everyone else. To recap:
You said “we can never ‘know for sure’ that an entity is actually experiencing consciousness”. (Incidentally, I agree.)
Cousin_it criticised, comparing you to Kelvin.
You responded, pointing out that the Kelvin quote is odd, given what we suspect Kelvin knew (Why did you do this?)
I suggest the Kelvin quote was maybe not so odd, given his misconceptions (Why did I do this???)
You point out that people today (what people?) have misconceptions as severe as Kelvin’s.
This is either a rhetorical master stroke, or just random lashing out. I can’t tell. I am completely lost. WTF is going on?