A story in a book, versus a mind in a brain. Where to begin in criticizing that analogy!
I’m sure there’s some really profound way to criticize that analogy, as actually symptomatic of a whole wrong philosophy of mind. It’s not just an accident that you chose to criticize a pro-physical, anti-virtual theory of mind, by inventing a semantic phlogiston that materially inhabits the words on a page and gives them their meaning. Unfortunately, even after so many years arguing with functionalists and other computationalists, I still don’t have a sufficiently nuanced understanding of where their views come from, to make the profound critique, the really illuminating one.
But surely you see that explaining how it is that words on a page have meaning, and how it is that thoughts in a brain have meaning, are completely different questions! The book doesn’t think, it doesn’t act, the events in the story do not occur in the book. There is no meaning in the book unless brains are involved. Without them, words on a page are just shapes on a surface. The experience of the book as meaningful does not occur in the book, it occurs in the brain of a reader; so even the solution of this problem is fundamentally about brains and not about books. The fact that meaning is ultimately not in the book is why semantic phlogiston is absurd in that context.
But the brain is a different context. It’s the end of the line. As with all of naturalism’s ontological problems with mind, once you get to the brain, you cannot evade them any further. By all means, let the world outside the skull be a place wholly without time or color or meaning, if that is indeed your theory of reality. That just means you have to find all those things inside the skull. And you have to find them for real, because they are real. If your theory of such things is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
Or at least, I would have to deny the basic facts of my own experience of reality, in order to adopt such views. Maybe you’re some other sort of being, which genuinely doesn’t experience time passing or see colors or have thoughts that are about things. But I doubt it.
I agree with almost all of what you wrote. Here’s the only line I disagree with.
If your theory of such things is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
I affirm that my own subjective experience is as you describe; I deny that I am in denial about its import.
I want to be clear that I’m discussing the topic of what makes sense to affirm as most plausible given what we know. In particular, I’m not calling your conjecture impossible.
Human brains don’t look different in lower-level organization from those of, say, cats, and there’s no higher-level structure in the brain that obviously corresponds to whatever special sauce it is that makes humans conscious. On the other hand, there are specific brain regions which are known to carry out specific functional tasks. My understanding is that human subjective experience, when picked apart by reductive cognitive neuroscience, appears to be an ex post facto narrative constructed and integrated out of events whose causes can be more or less assigned to particular functional sub-components of the brain. Positing that there’s a special sauce—especially a non-classical one—just because my brain’s capacity for self-reflection includes an impression of “unity of consciousness”—well, to me, it’s not the simplest conceivable explanation.
Maybe the universe really does admit the possibility of an agent which approximates my internal structure to arbitrary (or at least sufficient) accuracy and claims to have conscious experiences for reasons which are isomorphic to my own, yet actually has none because it’s implemented on an inadequate physical substrate. But I doubt it.