EDIT: After thinking things through, I concluded that Eliezer was right, and that epiphenomenalism was indeed confused and incoherent. Leaving this comment here as a record of how I came to agree with that conclusion.
The closest theory to this which definitely does seem coherent—i.e., it’s imaginable that it has a pinpointed meaning—would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences—that there was, from its perspective, an upper tier of particles interacting with each other that it couldn’t affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, “I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before.” If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen—no lower-tier brains to watch and be aware of events.
Positing “another little brain” to act as the epiphenomenal component sounds unnecessarily complicated to me. You mentioned earlier the possibility of programming a simulation with balls that bounce off each other, and then some little shadows that followed the balls around. It is not obvious to me why the “balls that bounce off each other” couldn’t be the physical activity of the neurons, and the “little shadows that followed the balls around” couldn’t be the qualia produced as a side effect of the neurons’ physical activity.
I think—though I might have misunderstood—that you are trying to exclude this possibility with the “How could you possibly know about the lower tier, even if it existed?” question, and the suggestion that we could only know about the shadows because we can look at the simulation from the outside, and the notion that there is no flow of causality in our world from which we could infer the existence of the “shadow tier”.
I’m not convinced that this is right. We know that there are qualia, for the obvious reason that we are not p-zombies; and we know that consciousness is created by neurons in the brain, for we can map a correspondence between qualia and brain states. E.g. there’s a clear shift in both brain activity and subjective experience when we fall asleep, become agitated, or drink alcohol. So we know that there is an arrow from “changes in the brain” to “qualia/subjective experience”, because we correctly anticipate that if somebody changed our brain chemistry, our subjective experience would change.
But there’s nothing in our observed physical world that actually explains consciousness, in the sense of explaining why there couldn’t just be a physical world utterly devoid of consciousness. Yes, you can develop sophisticated theories of strange loops and of consciousness as self-representation, which explain why there could be symbols and dynamics within a cognitive system that would make an entity behave as if it had consciousness. A sufficiently sophisticated programmer can make a program behave as if it had anything at all.
But that’s still only an explanation of why it behaves as if it had consciousness. Well, that’s not quite right. It would actually have consciousness in the sense that it had all the right information-processing dynamics: it would have internal states serving the functional role of “sadness” or “loneliness” or “excitement” in influencing its information processing and behavior. And it would have a pattern-recognition engine that analyzed its own experience and noticed that those kinds of internal states repeated themselves and had predictable effects on its behavior and information processing.
So it would assign those states labels and introduce symbols corresponding to those labels into its reasoning system, so that it could ask itself questions like “every now and then I get put into a state that I have labeled ‘being sad’; when does that happen and what does it do?”. And as it collected more and more observations and created increasingly sophisticated symbols to represent ever-more-complex concepts, it seems entirely conceivable that something like the following would happen: it would notice that all the symbols it had collected for its own internal states shared the property of being symbols for its own internal states, and its explanation-seeking mechanism would do what it always did, namely ask “why do I have this set of states in the first place, instead of having nothing?” And then, because that was an ill-defined question in the first place, it would get stumped and fail to answer it, and it could very well write philosophical papers that attacked the question but made no progress.
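As a purely illustrative aside, that self-labeling dynamic can be sketched as a toy program. Everything in the sketch below is hypothetical and invented for illustration (the class `SelfMonitoringAgent`, its methods, the example states); it only shows, under those assumptions, how a system might coin symbols for its recurring internal states and then generate a question its own machinery cannot answer:

```python
from collections import Counter

class SelfMonitoringAgent:
    """Toy illustration: an agent that labels its own recurring internal states."""

    def __init__(self):
        self.state_log = []   # raw record of internal states and their effects over time
        self.symbols = {}     # labels the agent has coined for recurring states

    def experience(self, state, behaviour_effect):
        # Each state plays a functional role: it influences behaviour and gets recorded.
        self.state_log.append((state, behaviour_effect))

    def reflect(self):
        # Pattern recognition over its own history: which states recur?
        counts = Counter(state for state, _ in self.state_log)
        for state, n in counts.items():
            if n >= 2 and state not in self.symbols:
                self.symbols[state] = f"being {state}"   # coin a symbol, e.g. "being sad"

        # Well-posed questions about its labeled states...
        questions = [f"When do I get put into the state I call '{label}', and what does it do?"
                     for label in self.symbols.values()]

        # ...and one ill-posed question that nothing in its machinery can answer.
        questions.append("Why do I have this set of states at all, instead of having nothing?")
        return questions


agent = SelfMonitoringAgent()
agent.experience("sad", "withdraws")
agent.experience("excited", "explores")
agent.experience("sad", "withdraws")
print(agent.reflect())
```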
… and when I started writing this comment, I was originally going to end this by saying “and that explains why it would behave as if it was conscious, but it still wouldn’t explain why it couldn’t do all of that without having any subjective experience”. Except that, upon writing that out, I started feeling like I might just have dissolved the question and that there was nothing left to explain anymore. Um. I’ll have to think about this some more.
Some questions here. How do you know that other people are not p-zombies? Presumably you believe them when they say they have qualia! But then those speech acts are caused by brain states, and if qualia are epiphenomenal, the speech acts are not caused by qualia. Similarly, the correspondence you describe is between brain states and reported qualia by other people: I doubt you’ve ever managed to map your own brain states to your own qualia.
Relatedly, how do you know that you were not a p-zombie every day of your life up until yesterday? Or, if you had qualia yesterday, how do you know that you didn’t have a green quale when looking at red (stop) traffic lights? Well, because you remember having qualia, and you remember them being the same as the qualia you have today! But then, aren’t those memories encoded in brain states (neural connections and synaptic strengths)? How could qualia cause those memories to become encoded if they were epiphenomenal to brain states?
Stuff like this makes me pretty sure that epiphenomenalism is false.
You have it the wrong way around. In epiphenomenalism, brain states cause qualia, qualia don’t cause brain states. When my brain was in a particular past state, the computation of that state produced qualia and also recorded information about having been in that state; and recalling that memory, by emulating the past state, sensibly also produces qualia similar to those of the past state. I can’t know for sure that the memory of the experience I have now accurately matches the experience I actually had, of course… but then that problem is hardly unique to epiphenomenalist theories, or even particularly implied by the epiphenomenalist theory.
In general, most of the questions in your comment are valid, but they’re general arguments for solipsism or extreme skepticism, not arguments against epiphenomenalism in particular. (And the answer to them is that “consistency is a simpler explanation than some people being p-zombies and some not, or people being p-zombies at certain points of time and not at other points”)
The question was rhetorical of course… the point is that if your qualia truly are epiphenomenal, then there is no way you can remember having had them. So you’re left with an extremely weak inductive argument from just one data point, basically “my brain states are creating qualia right now, so I’ll infer that they always created the same qualia in the past, and that similar brain states in other people are creating similar qualia”. It doesn’t take extreme skepticism to suspect there is a problem with that argument.
It still seems like Occam’s Razor would rule against past versions of me, and all other people—all of whom seem to behave as I do, for the reasons I do—doing so without the qualia I have.
I don’t see how this follows. Or rather, I don’t see how “if qualia are epiphenomenal, there is no way you can remember having had them” is any more or less true than “there is no way you can remember having had qualia, period”.
So you reject this schema: “I can remember X only if X is a cause of my memories”? Interesting.
After pondering both Eliezer’s post and your comments for a while, I concluded that you were right, and that my previous belief in epiphenomenalism was incoherent and confused. I have now renounced it, for which I thank you both.
Hmm. I tried to write a response, but then I noticed that I was confused. Let me think about that for a while.
Lots of memories are constructed and modified post hoc, sometimes confabulating events that you cannot have witnessed, or from which you cannot have formed memories. (Two famous examples: the memory of seeing both twin towers collapse one after the other as it happened, when in fact the second collapse was shown only after a large gap; and the memory of being born or being in the womb.)
I’m not positing that you can have causeless memories, but there is a large swath of evidence indicating that the causal experience does not have to match your memory of it.
As a thought experiment, imagine implanted memories. They do have a cause, but certainly their content need not mirror the causal event.
Well, you really wouldn’t be able to remember qualia, but you’d be able to recall brain states that evoke the same qualia as the original events they recorded. In that sense, “to remember” means that your brain enters states that are in some way similar to those of the moments of experience (and, in a world where qualia exist, these remembering-brain-states evoke qualia accordingly). So, although I still agree with other arguments against epiphenomenalism, I don’t think this one refutes it.
I have, on occasion, read really good books. As I read the descriptions of certain scenes, I imagined them occurring. I remember some of those scenes.
The scene, as I remembered it, is not a cause of my memory because the scene as I remember it did not occur. The memory was, rather, caused by a pattern of ink on paper. But I remember the scene, not the pattern of ink.
Well, presumably the X here for you is “my imagining a scene from the book”, and that act of imagination was the cause of your memory. So I’m not sure it counts as a counter-example, though if you’d somehow forgotten it was a fictional scene and become convinced it really happened, then it could arguably count as one.
I said “Interesting” in response to Kaj, because I’d also started to think of scenarios based on mis-remembering or false memory syndrome, or even dream memories. I’m not sure these examples of false memory help the epiphenomenalist much...
If qualia don’t cause brain states, what caused the brain state that caused your hands to type this sentence? In order for the actual material brain to represent beliefs about qualia, there has to be an arrow from the qualia to the brain.
See my original comment. It’s relatively easy (well, at least it is if you accept that we could build conscious AIs in the first place) to construct an explanation of why an information-processing system would behave as if it had qualia and why it would even represent qualia internally. But that only explains why it behaves as if it had qualia, not why it actually has them.
I did read that before commenting, but I misinterpreted it, and I still find myself unable to understand it. The way I read it, it seems to equivocate between knowing something in the sense of representing it in your physical brain and knowing something in the sense of representing it in the ‘shadow brain’. You know which one is intended where, but I can’t figure it out.
Never mind.
Can you describe the qualia associated with going from epiphenomenalism to functionalism/physicalism/wherever you went?
Not entirely sure what you’re asking, but nothing too radical. I just thought about it and realized that my model was indeed incoherent about whether or not it presumed the existence of some causal arrows. My philosophy of mind was already functionalist, so I just dropped the epiphenomenalist component from it.
A bigger impact is that I’ll need to rethink some parts of my model of personal identity, but I haven’t gotten around to that yet.
You might enjoy one of Eliezer’s presuppositional arguments against epiphenomenalism.
Funny, thanks.
Even the mental sentence, “I am seeing the apple as red”, occurs shortly after the experience that warranted it. The fact that a qualitatively identical experience is happening while I affirm the mental sentence is a separate fact. So even knowing what I’m feeling right now requires non-epiphenomenal qualia.
But couldn’t the mental sentences also be part of the lower-tier shadow realm? Not my mental sentences. My thoughts are the ones I’m typing, and the ones that I act on.