What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)
What does this mean? What is the difference between saying “What we call consciousness/self-awareness is just a side-effect of brain processes”, which is pretty obviously true, and saying that they’re meaningless side-effects?
Sorry, I was letting my own uncertainty get in the way of clarity there. A stronger version of what I was trying to say would be that consciousness gives us the illusion of being in control of our actions when in fact we have no such control. Or to put it another way: we’re all P-zombies with delusions of grandeur (yes, this doesn’t actually make logical sense, but it works for me)
So I agree with the science you cite, right? But what you said really doesn’t follow. Just because our phonological loop doesn’t actually have the control it thinks it does, it doesn’t follow that sensory modalities are “meaningless.” You might want to re-read Joy in the Merely Real with this thought of yours in mind.
Well, sure, you can find meaning wherever you want. I’m currently listening to some music that I find beautiful and meaningful. But that beauty and meaning isn’t an inherent trait of the music, it’s just something that I read into it. Similarly when I say that consciousness is meaningless I don’t mean that we should all become nihilists, only that consciousness doesn’t pay rent and so any meaning or usefulness it has is what you invent for it.
I don’t know about you, but I’m not a P-zombie. :)
That emoticon isn’t fooling anyone.
Upvoted for ‘not even being wrong’.
I’m not sure whether “not even wrong” calls for an upvote, does it?
Could you expand a little on this?
Sure. Here’s a version of the analogy that first got me thinking about it:
If I turn on a lamp at night, it sheds both heat and light. But I wouldn’t say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn’t produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn’t do much for us. There’s a fair amount of cogsci literature that suggests that we have little if any conscious control over our actions and reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.
(I’m not sure why I pushed the button to reply, but here I am so I guess I’ll just make something up to cover my confusion.)
Do you also believe that we use language (speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc.) without using our consciousness?
Hah! I found it amusing at least.
I’m honestly not sure. I think that the vast majority of the time we don’t consciously choose whether to speak or what exact words to say when we do speak. Listening and reading are definitely unconscious processes; otherwise it would be possible to turn them off (also, the cocktail party effect is a strong indication that listening is largely unconscious). Arithmetic calculation is a matter of learning an algorithm, which usually involves mnemonics for the numbers.
On balance I have to go with yes: I don’t think those processes require consciousness.
Some autistic people, particularly those in the middle and middle-to-severe part of the spectrum, report that during overload, some kinds of processing—most often understanding or being able to produce speech, but also other sensory processing—turn off. Some report that turned-off processing skills can be consciously turned back on, often at the expense of a different skill, or that the relevant skill can be consciously emulated even when the normal mode of producing the intended result is offline. I’ve personally experienced this.
Also, in my experience, a fair portion (20-30%) of adults of average intelligence aren’t fluent in reading, and do have to consciously parse each word.
You pretty much have to go with “yes” if you want to claim that “consciousness/self-awareness is just a meaningless side-effect of brain processes.” I’ve got to disagree. What my introspection calls my “consciousness” is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud.
Not all of my speech works this way, but some does. And almost all of my writing, including this note. So I have to disagree that consciousness has no causal role in my behavior. Sometimes I act with “malice aforethought”. Or at least I sometimes speak that way.
For these reasons, I prefer “spotlight” theories of consciousness, like “global workspace” or “integrated information theory”: theories that capture the fact that we observe some things consciously and do some things consciously.
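A minimal sketch of the “spotlight” / global-workspace idea mentioned above (a hypothetical toy model, not any published formalism; all module names and salience numbers are invented): many specialist processes each propose content with a salience score, and only the single winner is broadcast, becoming the thing that is “in the spotlight” while everything else stays unconscious.

```python
# Toy global-workspace sketch: many unconscious specialists compete,
# and one winner is broadcast ("in the spotlight") to the whole system.

def spotlight(proposals):
    """proposals: dict mapping module name -> (content, salience).
    Returns the broadcast content: the single most salient proposal."""
    winner = max(proposals, key=lambda name: proposals[name][1])
    return proposals[winner][0]

proposals = {
    "vision":   ("red mug on desk", 0.4),
    "hearing":  ("someone said my name", 0.9),  # cocktail-party effect: very salient
    "planning": ("finish the report", 0.6),
}

broadcast = spotlight(proposals)
# Only the hearing module's content reaches the workspace;
# the other modules keep working, but unconsciously.
```

On this sketch, most processing never wins the competition, which is at least consistent with the thread’s claim that we rely far less on conscious thought than introspection suggests.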
Agreed, but that tells you consciousness requires language. That doesn’t tell you language requires consciousness. Drugs such as alcohol or Ambien can cause people to have conversations and engage in other activities while unconscious.
Thanks; +1 for the explanation.
No mod to the original comment; I would downmod the “consciousness was not a positive factor in the evolution of brains” part and upmod the “we do not actually rely much if at all on conscious thought” one.
Upvoted for underconfidence.
Having just stumbled across LW yesterday, I’ve been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this.
“Meaning” and “mysterious” don’t apply to reality; they only apply to maps of the territory that is reality. Self-awareness itself is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model who is trying to understand how it will operate. Being self-aware of the self-awareness (i.e. mapping the map, or recursively refining the super-model to understand itself better) can also impact our ability to preserve ourselves, making it “meaningful” to the agent/model itself. Being aware of others’ self-awareness (i.e. mapping a different agent/map and realizing that it will act to preserve itself) is probably one of the most critical developments in the evolution of humans.

“I am” a super-agent. It is a stack of component agents.
At each layer, a shared belief by a system of agents (that each agent is working towards the common utility of all the agents) results in a super-agent with more complex goals that does not believe it is composed of distinct sub-agents. Like the 7-layer network model or the transistor-gate-chip-computer model, each layer is just an emergent property of its components. But each layer has meaning because it provides us a predictive model of the system’s behavior, in a way that we can’t get by just looking at a complex version of the layer below it.

My super-agent has a super-model of reality, similarly composed. Some parts of that super-model are tagged, weakly or strongly, with an attribute. The collection of cells that makes up a fatty lump on my head is weakly marked with that attribute. The parts of reality where my super-agent/-model exists are very strongly tagged.

My super-agent survives because it has marked the area on its model corresponding to where it exists, and it has a goal of continually re-marking this area. If it has an accurate model but marks a different region of reality (or marks the correct region but doesn’t protect it), it will eventually be destroyed by entropy. If it has an inaccurate model, it won’t be able to interact with reality effectively enough to protect the region where it resides. And if it has an accurate model but marks only where it originally was, it won’t be able to adapt to environmental changes and challenges while still maintaining itself.
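The transistor-gate-chip analogy above can be sketched concretely (a toy model; the layering is the point, the specific functions are just illustrative): each layer is built only out of the layer below, yet the useful predictive description (“this adds numbers”) exists only at the top layer and is nowhere visible at the bottom.

```python
# Toy illustration of layered emergence: each layer is "just" its
# components, but the predictive model lives at the higher layer.

def nand(a: int, b: int) -> int:
    """Bottom layer: one primitive component (think: a transistor circuit)."""
    return 0 if (a and b) else 1

# Middle layer: logic gates, built only from the layer below.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Top layer: an adder. Calling this "addition" is the predictive model;
# nothing at the NAND level mentions arithmetic.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)   # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_(c1, c2)

def add_3bit(x, y):
    """Add two 3-bit numbers given as bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 + 5 = 8: [1,1,0] (3, LSB-first) + [1,0,1] (5) -> [0,0,0,1] (8)
```

Predicting the output of `add_3bit` via “it adds the two numbers” is enormously cheaper than tracing the NAND evaluations, which is the sense in which each layer “has meaning”: it buys predictive power the layer below doesn’t expose.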