Hrothgar: What’s your answer to the hard problem of consciousness?
Rob Bensinger: The hard problem makes sense, and seems to successfully do away with ‘consciousness is real and reducible’. But ‘consciousness is real and irreducible’ isn’t tenable: it either implies violations of physics as we know it (interactionism), or implies we can’t know we’re conscious (epiphenomenalism).
So we seem to be forced to accept that consciousness (of the sort cited in the hard problem) is somehow illusory. This is… very weird and hard to wrap one’s head around. But some version of this view (illusionism) seems incredibly hard to avoid.
(Note: This is a twitter-length statement of my view, so it leaves out a lot of details. E.g., I think panpsychist views must be interactionist or epiphenomenalist, in the sense that matters. But this isn’t trivial to establish.)
Hrothgar: What does “illusory” mean here? I think I’m interpreting it as gesturing toward denying that consciousness is happening, which is, like, the one thing that can’t even be doubted (since the experience of doubt requires a conscious experiencer in the first place)
Rob Bensinger: I think “the fact that I’m having an experience” seems undeniable. E.g., it seems to just be a fact that I’m experiencing this exact color of redness as I look at the chair next to me. There’s a long philosophical tradition of treating experience as ‘directly given’, the foundation on which all our other knowledge is built.
I find this super compelling and intuitive at a glance, even if I can’t explain how you’d actually build a brain/computer that has infallible ‘directly given’ knowledge about some of its inner workings.
But I think the arguments alluded to above ultimately force us to reject this picture, and endorse the crazy-sounding view ‘the character of my own experiences can be illusory, even though it seems obviously directly given’.
An attempt to clarify what this means: https://nothingismere.com/2017/02/23/phenomenal-consciousness-is-a-quasiperceptual-illusion-objections-and-replies/
I don’t want to endorse the obviously false claim ‘light isn’t bouncing off the chair, hitting my eyes, and getting processed as environmental information by my brain.’
My brain is tracking facts about the environment. And it can accurately model many, many things about itself!
But I think my brain’s native self-modeling gets two things wrong: (1) it models my subjective experience as a sort of concrete, ‘manifest’ inner world; (2) it represents this world as having properties that are too specific or arbitrary to logically follow from ‘mere physics’.
I think there is a genuine perception-like (not ‘hunch-like’) introspective illusion that makes those things appear to be true (to people who are decent introspectors and have thought through the implications) -- even though they’re not true. Like a metacognitive optical illusion.
And yes, this sounds totally incoherent from the traditional Descartes-inspired philosophical vantage point.
Optical illusions are fine; calling consciousness itself an illusion invites the question ‘what is conscious of this illusion?’.
I nonetheless think this weird view is right.
I want to say: There’s of course something going on here; and the things that seem present in my visual field must correspond to real things insofar as they have the potential to affect my actions. But my visual field as-it-appears-to-me isn’t a real movie screen playing for an inner Me.
And what’s more, the movie screen isn’t translatable into neural firings that encode all the ‘given’-seeming stuff. (!)
The movie screen is a lie the brain tells itself—tells itself at the sensory, raw-feel level, not just at the belief/hunch level. (Illusion, rather than delusion.)
And (somehow! this isn’t intuitive to me either!) since there’s no homunculus outside the brain to notice all this, there’s no ‘check’ on the brain forcing it to not trick itself in how it represents the most basic features of ‘experience’ to itself.
The way the brain models itself is entirely a product of the functioning of that very brain, with no law of physics or CS to guarantee the truth of anything! No matter how counter-intuitive that seems to the brain itself. (And yes, it’s still counter-intuitive to me. I wouldn’t endorse this view if I didn’t think the alternatives were even worse!)
Core argument:
1. a Bayesian view of cognition. ‘the exact redness of red’ has to cause brain changes, or our brains can’t know about it.
2. we know enough about physics to know these causes aren’t coming from outside of physics.
3. hard problem: ‘the exact redness of red’ isn’t reducible.
Thus, ‘the exact redness of red’ must somehow not be real. Secondarily, we can circle back and consider things that help make sense of this conclusion and help show it isn’t nonsense:
4. thinking in detail about what cognition goes on in p-zombies’ heads that makes them think there’s a hard problem.
5. questioning the claim that (e.g.) my visual field is ‘directly given’ to me in an infallible way. questioning how you could design a computer that genuinely has infallible access to its internal states.
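The last point in the list can be made concrete with a toy sketch (my own illustration, not anything from the thread; all names are hypothetical): a system whose introspective self-report is produced by the same machinery it describes, with nothing outside the system checking the report for accuracy, can be reliable about the world while being confidently wrong about itself.

```python
# Toy sketch: a system with accurate world-directed outputs but an
# unchecked, inaccurate self-model. Nothing in the design guarantees
# that introspection tracks the actual internal state.

class Agent:
    def __init__(self):
        # The actual internal state: a plain list of numbers.
        self._activations = [1, 2, 3]

    def behave(self):
        # World-directed output depends on the real state, and is accurate.
        return sum(self._activations)

    def introspect(self):
        # The self-model is just more computation by the same system.
        # Here it confidently misdescribes the state as something richer
        # ("directly given") than a list of integers.
        return "a vivid, directly given inner field"

agent = Agent()
print(agent.behave())      # 6: accurate about the world
print(agent.introspect())  # confident but inaccurate about itself
```

The point of the sketch is only that there is no law of computation forcing the `introspect` channel to be faithful: it is one more output of the machine, on the same footing as any other.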
Hrothgar: But even if I grant that experience is illusion, the fact of ‘experiencing illusion’ is itself then undeniable. I don’t consider it a philosophical tradition, just a description of reality 🤷
Whether this reconciles with physics etc seems like a downstream problem
Reading what you wrote again, I think it’s likely I’m misunderstanding you.
What you’re saying seems crazy or nonsensical to me, and/but I’m super appreciative that you wrote this all out, and I do intend to spend more time with your words (now or later) to see if i can catch more of your drift
(I don’t claim to have it all figured out)
Rob Bensinger: Good, if it sounds crazy/nonsensical then I suspect that (a) I’ve communicated well, and (b) we share key background context: ‘why does consciousness seem obviously real?’, ‘why does the hard problem seem so hard?’, etc.
If my claims seemed obviously true, I’d be worried.
Hrothgar: I haven’t read your blog post yet, but i suppose my main objection right now is something like, “Thinking is itself sensorial in nature, & that nature precedes its content. Effectively it seems like you’re using thinking to try to refute thinking, & we get into gödel problems”
Rob Bensinger: I agree that thinking has an (apparent) phenomenal character, like e.g. seeing.
I don’t think that per se raises a special problem. A calculator could introspect on its acts of calculating and wrongly perceive them as ‘fluffy’ or ‘flibulous’, while still getting 2+2=4 right.
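Rob’s calculator can be sketched in a few lines of toy Python (my own illustration; the class and method names are invented): the object-level arithmetic is reliable even though the “introspective” channel ascribes a property to the computations that they don’t have.

```python
# Toy version of the calculator that gets 2+2 right while misperceiving
# its own operations as 'fluffy'.

class Calculator:
    def add(self, a, b):
        # Object-level computation: reliable.
        return a + b

    def introspect(self):
        # Self-directed report: ascribes a property ('fluffy') to its
        # computations that they simply don't have.
        return "that addition was fluffy"

calc = Calculator()
print(calc.add(2, 2))     # 4: the arithmetic is correct
print(calc.introspect())  # the self-description is not
```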
Hrothgar: Why would fluffy or flibulous be wrong? I don’t see what correctness has to do with it (fluffiness is neither wrong nor right) -- where is there a logical basis to evaluate “correctness” of that which isn’t a proposition?
Rob Bensinger: If we take ‘fluffy’ literally, then the computations can’t be fluffy because they aren’t physical. It’s possible to think that some property holds of your thoughts, when it simply doesn’t.
But ‘consciousness is real and irreducible’ isn’t tenable: it either implies violations of physics as we know it (interactionism), or implies we can’t know we’re conscious (epiphenomenalism).
Edit: What it implies is violations of physicalism. You can accept that physics is a map that predicts observations, without accepting that it is the map, to which all other maps must be reduced.
The epiphenomenalist worry is that, if qualia are not denied entirely, they have no causal role to play, since physical causation already accounts for everything that needs to be accounted for.
But physics is a set of theories and descriptions...a map. Usually, the ability of one map to explain something is not exclusive of another map’s ability to do so. We can explain the death of Mr Smith as the result of a bullet entering his heart, or as the result of a finger squeezing a trigger, or as a result of the insurance policy recently taken out on his life, and so on.
So why can’t we resolve the epiphenomenal worry by saying that physical causation and mental causation are just different, non-rivalrous, maps? “I screamed because my pain fibres fired” alongside, not versus, “I screamed because I felt a sharp pain”. It is not the case that there is physical stuff that is doing all the causation, and mental stuff that is doing none of it: rather, there is a physical view of what is going on, and a mentalistic view.
Physicalists are reluctant to go down this route, because physicalism is based on the idea that there is something special about the physical map, which means it is not just another map. This special quality means that a physical explanation excludes others, unlike a typical map. But what is it?
It’s rooted in reductionism, the idea that every other map (that is, every theory of the special sciences) can or should reduce to the physical map.
But the reducibility of consciousness is precisely what the Hard Problem calls into question. If consciousness really is irreducible, and not just unreduced, then that is evidence against the reduction of everything to the physical, and, in turn, evidence against the special, exclusive nature of the physical map.
So, without the reducibility of consciousness, the epiphenomenal worry can be resolved by the two-view manoeuvre. (And without denying the very existence of qualia.)
If the physics map doesn’t imply the mind map (because of the zombie argument, the Mary’s room argument, etc.), then how do you come to know about the mind map? The causal process by which you come to know the physics map is easy to understand:
Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace.
What is the version of this story for the mind map, once we assume that the mind map has contents that have no causal effect on the physical world? (E.g., your mind map had absolutely no effect on the words you typed into the LW page.)
At some point you didn’t have a concept for “qualia”; how did you learn it, if your qualia have no causal effects?
At some point you heard about the zombie argument and concluded “ah yes, my mental map must be logically independent of my physical map”; how did you do that without your mental map having any effects?
I can imagine an interactionist video game, where my brain has more processing power than the game and therefore can’t be fully represented in the game itself. It would then make sense that I can talk about properties that don’t exist within the game’s engine: I myself exist outside the game universe, and I can use that fact to causally change the game’s outcomes in ways that a less computationally powerful agent could not.
Equally, I can imagine an epiphenomenal video game, where I’m strapped into a headset but forbidden from using the controls. I passively watch the events occurring in the game; but no event in the game ever reflects or takes note of the fact that I exist or have any ‘unphysical’ properties, and if there is an AI steering my avatar or camera’s behavior, the AI knows zilch about me. (You could imagine a programmer deliberately designing the game to have NPCs talk about entities outside the game world; but then the programmer’s game-transcending cognitive capacities are not epiphenomenal relative to the game.)
The thing that doesn’t make sense is to import intuitions from the interactionist game to the epiphenomenal game, while insisting it’s all still epiphenomenal.
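The contrast between the two games can be sketched as a toy simulation (entirely my own illustration; `run_game`, `controller`, and `observer` are invented names): in the epiphenomenal setup the observer is updated by the game but never read by it, so the game’s outcome is identical whether or not the observer exists.

```python
# Toy contrast between an interactionist and an epiphenomenal observer
# of a trivial "game". The game never reads the epiphenomenal observer,
# so no game event can carry information about it.

def run_game(steps, controller=None, observer=None):
    state = 0
    for t in range(steps):
        state += 1
        if controller is not None:
            # Interactionist: outside input causally changes the game.
            state += controller(t)
        if observer is not None:
            # Epiphenomenal: the observer is updated by the game, but
            # its return value is ignored -- there is no channel back in.
            observer(t, state)
    return state

# An interactionist observer changes the outcome:
print(run_game(3, controller=lambda t: 10))  # 33

# An epiphenomenal observer records everything, affects nothing:
log = []
print(run_game(3, observer=lambda t, s: log.append((t, s))))  # 3
print(run_game(3))  # 3: same outcome with or without the observer
```

This is the sense in which nothing inside the epiphenomenal game could ever come to talk about the headset-wearer: the causal story of `state` never mentions `log`.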
If the physics map doesn’t imply the mind map (because of the zombie argument, the Mary’s room argument, etc.), then how do you come to know about the mind map?
Direct evidence. That’s the starting point of the whole thing. People think that they have qualia because it seems to them that they do.
Edit: In fact, it’s the other way round: we are always using the mind map, but we remove the subjectivity, the “warm fuzzies”, from it to arrive at the physics map. How do we know that physics is the whole story, when we start with our experience and carve out a subset of it?
What is the version of this story for the mind map, once we assume that the mind map has contents that have no causal effect on the physical world?
I’m not assuming that. I’m arguing against epiphenomenalism.
So I am saying that the mental is causal, but I am not saying that it is a kind of physical causality, as per reductive physicalism. Reductive physicalism is false because consciousness is irreducible, as you agree. Since mental causation isn’t a kind of physical causation, I don’t have to give a physical account of it.
And I am further not saying that the physical and mental are two separate ontological domains, two separate territories. I am talking about maps, not territories.
Without ontological dualism, there are no issues of overdetermination or interaction.