This post was probably quite useful for you to write. But I feel like this also exemplifies some of the dangers of ordinary philosophy, and I wouldn’t recommend doing a scaled-up version of this as your entire mode of learning about consciousness—you’d have to spend a lot of time learning about garbage and trying to give it a fair shake, rather than focusing on what you think is actually useful to you and trying to synthesize it.
Some insights that I think maybe don’t get a fair shake if you just go through the content of this post:
There is no inner observer inside the brain. The brain is where our thoughts are (or representations of those thoughts), but there isn’t some much-more-specific-than-that place in the brain that’s “us” while the rest is merely a pile of tagalong grey matter. This isn’t just a dunk on Descartes (or on making students read Descartes as if it taught good thinking habits), it’s actually important. It means that to think about consciousness you have to train yourself to imagine how parallel information-processing can be used to make decisions, form memories, etc.
The word “consciousness” is used to refer to many different things, often at-the-same-time-all-bundled-together. People can use it to mean a soul, or an “inner observer” who reads your thoughts, or something it would be like for me to be in that state, or a good explanation for other people’s reports, or an information-processing mechanism, or a powerful piece of their favorite theory of decision-making, or a pattern of brain activity, or a way of talking about their own perceptions that also emphasizes how great it is to be alive. It can have fine-grained properties related to memory, stress responses, learning to avoid pain, fight or flight, love, happiness, laughter, attention, distraction, boredom, imagination, spatial awareness, spiritual awareness, vision, hearing, confusion, inspiration, planning, social cooperation, determination, intuition. This is key not just to understanding what people mean when they say the word, but also to understanding why people define it that way in the first place, and why they expect you to react certain ways to certain arguments.
Illusionists and non-illusionists are kinda just arguing over semantics, and what’s really important is avoiding essentialism. If there are ten different properties that people typically bundle together when they say the word “consciousness,” and we have good explanations of how we have six of them, does this mean consciousness is an illusion or not? Answer: this semantic distinction is not a super big deal.
There is no inner observer inside the brain. The brain is where our thoughts are (or representations of those thoughts), but there isn’t some much-more-specific-than-that place in the brain that’s “us” while the rest is merely a pile of tagalong grey matter.
I agree with the general thrust of the post, and with your comment. However, I’m not sure I buy this particular piece.
My position is that I am a submodule in my brain, and I communicate with the rest of the brain through a limited interface. Maybe I’m not physically distinct from the rest of the brain, off in my own little section, but I’m logically distinct.
At the very least, there is a visual processing layer in my brain that is not part of me. I know this because visual data sometimes gets modified before it gets from my eyes to me. (For example, when looking at an optical illusion or hallucinating on a drug.) I have no awareness of or control over this preprocessing.
On the output side, I have more control. If I send a command to a muscle, rarely will it be vetoed by some later process. I take it that’s because I’m the executive module, and my whole purpose is to decide muscle movements. Nothing else in the brain is qualified to override my choices on that front.
However, there are some exceptions where my muscles will move in a way that I didn’t choose, presumably at the behest of another part of my brain which is not me. An example is the hanger reflex, where I put a clothes hanger around my head, and my head turns automatically. Or dumb things like my heartbeat, my stomach, or my breathing while asleep. I am only needed to govern the muscle movements that require intelligence, the movements we call “voluntary.”
If I were my entire brain, then what would be the difference between a voluntary and an involuntary brain-induced action?
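Here’s a minimal sketch of the picture I mean, in code. (A caricature, not a claim about real neuroanatomy: every module name here is made up, and the point is only the interface boundaries.)

```python
# Toy model: a "self" that sees only preprocessed input and controls
# only voluntary output. All names are invented for illustration.

def visual_preprocessing(raw_input: str) -> str:
    """Runs before the 'self' ever sees the data, and may distort it
    (optical illusions, hallucinations). The self can't inspect it."""
    if raw_input == "two equal-length arrow figures":
        return "two arrow figures of different lengths"  # illusion inserted here
    return raw_input

def reflex_controller(raw_input: str) -> list[str]:
    """Involuntary outputs that bypass the executive entirely."""
    commands = ["heartbeat", "breathing"]
    if raw_input == "hanger pressed on head":
        commands.append("turn head")  # the hanger reflex
    return commands

def executive_self(percept: str) -> list[str]:
    """The 'self module': receives only preprocessed percepts and
    issues only the commands we'd call voluntary."""
    return [f"report seeing: {percept}"]

def brain(raw_input: str) -> list[str]:
    percept = visual_preprocessing(raw_input)   # outside the self
    voluntary = executive_self(percept)         # the self's limited domain
    involuntary = reflex_controller(raw_input)  # also outside the self
    return voluntary + involuntary

# The self reports the distorted percept and never sees the raw input:
print(brain("two equal-length arrow figures"))
```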
At the very least, there is a visual processing layer in my brain that is not part of me. I know this because visual data sometimes gets modified before it gets from my eyes to me. (For example, when looking at an optical illusion or hallucinating on a drug.) I have no awareness of or control over this preprocessing.
I think you did a good job making this claim strong enough that I both think it’s important and disagree with it :)
I totally agree that you have no conscious control over the processing that makes e.g. this illusion happen:
[image: the arrows illusion]
But everything is kinda like this. When I translate the abstract concepts in my head into these words that I’m typing, I just do the information processing. I can maybe focus on different aspects of it consciously, but I don’t know what my brain is doing, and I can’t make a conscious decision to use someone else’s word-generation method instead of my own. That doesn’t make “me” separate from my verbal abilities; it just means that my verbal abilities are made of unconscious components. The job of the brain is not to be able to consciously manipulate every single part of itself; it’s just to navigate the world, form memories, and have experiences.
Another way of putting this is that every process in the brain that can be thought of as conscious, can also be thought of as unconscious if you break it into small pieces. This is obviously necessary at some point if you want to make conscious me out of unconscious atoms. I’m saying that I think the visual judgment that goes wrong in the arrows illusion is like this—it’s a perfectly valid component of the thinking you do when you consciously see the world, and when you zoom in on it it doesn’t seem conscious or even particularly controllable by consciousness, and those aren’t incompatible.
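To make the decomposition concrete, here’s a toy sketch (every function name invented for the example): a pipeline we’d happily call conscious seeing, built entirely out of components that look unconscious when you zoom in on them.

```python
# Toy decomposition: the whole pipeline is what we'd call conscious
# seeing, even though each component, viewed alone, is as unconscious
# as the preprocessing behind the arrows illusion.

def detect_edges(image: str) -> dict:
    return {"edges": f"edges of {image}"}

def judge_lengths(features: dict) -> dict:
    # The step that goes wrong in the arrows illusion: a valid
    # component of seeing, yet not conscious or controllable itself.
    features["lengths"] = "biased by arrowhead direction"
    return features

def form_percept(features: dict) -> str:
    return f"a scene with lines whose lengths are {features['lengths']}"

def consciously_see(image: str) -> str:
    """No single step above is conscious; the seeing is in the whole."""
    return form_percept(judge_lengths(detect_edges(image)))

print(consciously_see("two equal lines with arrowheads"))
```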
On this topic, I might also recommend the great “Eliminate the Middletoad,” a commentary on a 1987 biology paper.
But everything is kinda like this. When I translate the abstract concepts in my head into these words that I’m typing, I just do the information processing. I can maybe focus on different aspects of it consciously, but I don’t know what my brain is doing, and I can’t make a conscious decision to use someone else’s word-generation method instead of my own.
I would say the process that maps concepts to words is outside of me, so the fact that it happens unconsciously is in harmony with my argument. If I’m seeking a word for a concept, it feels like I direct my attention to the concept, and then all of its associations are handed back to me, one of the strongest ones being the word I’m looking for. That is, the retrieval of the word requires hitting an external memory store to get the concept’s associations.
On the other hand, the choice of concept to convey is made by me. I also choose whether to use the first word I find, or to look for a better one. Plus I choose to sit down and write in the first place. Unlike looking up words from my memory, where the words I receive are out of my control, I could have made these choices differently if I wanted to. Thus, they are part of my limited domain within the brain. You could say, “those choices are making themselves,” but then what are people referring to when they say a person did something consciously? There must be a physical distinction between conscious and unconscious actions, and that’s where I suspect you’ll find a reasonable definition of a “self module.”
Another way of putting this is that every process in the brain that can be thought of as conscious, can also be thought of as unconscious if you break it into small pieces.
I agree completely with that. But the visual processing that occurs to produce optical illusions cannot be thought of as conscious, period. Anything I would call conscious excludes that visual processing layer. It is not a “perfectly valid component of the thinking I do,” because it happens before I get access to the information to think about it.
If you put on a pair of warped glasses that distort your vision, you would not call those glasses part of your thinking process. But when the visual information you are receiving is warped in exactly the same way due to an optical illusion, you say it’s your own reasoning that made it like that. As far as I’m concerned, the only real difference is that you can’t remove your visual processing system. It’s like a pair of warped glasses that is glued to your face.
To be fair, this might be just another semantic argument. Maybe if we both understood the brain in perfect detail, we would still disagree about whether to call some specific part of it “us.” Or maybe I would change my mind at that point. I get the feeling you’ve investigated the brain more than I have, and maybe you reach a point in your learning where you’re forced to discard the default model. Still, I think the position I’ve laid out has to be the default position in the absence of any specific knowledge about the brain, because this is the model clearly suggested by our day-to-day experience.
The word “consciousness” is used to refer to many different things
Yep.
If there are ten different properties that people typically bundle together when they say the word “consciousness,” and we have good explanations of how we have six of them, does this mean consciousness is an illusion or not?
You have to do something about the other four. You can leave them unexplained, with consciousness therefore only 60% explained. You can argue semantically that they were never part of the concept of consciousness. Or you can argue that they don’t really exist. You’re not forced into illusionism, but you might prefer it to the other options.
I skimmed this while listening to a somewhat related podcast that released today. It was an interesting experience.
The “interesting experience” is quite warranted: Rob and I had a chat before I finished this post that definitely affected parts of it.
I think your opinion is a very fair assessment. Most theories, despite their elegance or mathematical rigor, don’t end up delivering useful tools for further analysis. I also feel that a lot of time is wasted arguing over semantics, since it is impossible not to be biased about your own subjective experience, so I intentionally left the definitions of “qualia” and “consciousness” as vague as possible while still useful. Maybe “illusionism” deserves the same treatment.