I don’t believe any of the various purely literary definitions of narrative and characterization, so just preserving the shapes and orderings of the letters of a story, etc., doesn’t mean much to me. My best bet is that a novel is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the printing of a book, persists through time even when not read, and ceases to exist when its physical form becomes illegible. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of ink and/or paper, somewhere between the front and back cover.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that a novel is just a collection of letters, in the sense of orthography—a sequence of glyphs representing letters that are coarse-grainings of the actual microphysical states, and which can survive to be read on another, physically distinct medium, so long as it reproduces the sequence of letters of the original.
Physically, what is a novel? Nuclei and electrons. And physically, what is a story? It is an extreme abstraction of what some of those nuclei and electrons are doing. Books are designed so that these abstractions remain valid—so that the dynamics of the story will match the sequence of the letters, unless something physically disruptive occurs.
The physical object is the reality; the narrative is just a concept. But the information-centric theory of what stories and novels are is that they are narratives—a reification of a conceptual construct. This is especially false to the robust reality of a reader’s consciousness, which is why I insist on a literary theory that is physical and not just computational.
I don’t want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about narrative uploading, copies, conscious readers, authorial intent, instances of decompression from digital letter-maps, and so on, in favor of speculations about a physical story within the book. Such a story would surely have information-theoretic story structures, other book regions that would be more like narratives, structural adjuncts to the novel part, such as the immediate suppliers of the boundary conditions which show up in experience as plot structure. But you can’t regard the whole of the novel as nothing but creative writing. Some part of it has to be objectively real.
I think I’ll stop here. Apologies to Mitchell Porter, who I judge to be a smart guy—more knowledgeable than me about physics, without question—who happens to believe a crazy thing. (I expect he judges my beliefs philosophically incoherent and hence crazy, so we’re even on that score.) I should note that the above analogy hasn’t been constructed with a great deal of care; I expect it can be picked apart quite thoroughly.
ETA: As I re-read this, I feel kind of bad about the mocking tone expressed by this kind of rhetorical construction, so let me state explicitly that I did it for the lulz; on the actual substantive matter at issue, I judge Mitchell Porter’s comment to be at DH4 on the disagreement hierarchy and my own reply to be at DH3.
As much as I might try to find holes in the analogy, I still felt I ought to upvote your comment, because frankly, it had to be said.
In trying to find those holes, I actually came to agree with your analogy: the story is recreated in the mind/brain of each individual reader, and does not necessarily depend on the format. In the same way, if consciousness has a physical presence that it lacks in a simulation, then we will need to account for and simulate that as well. It may even eventually be possible to design an experiment showing that the raw mechanism of consciousness and its simulation are the same thing. Even barring any possibility of simulating perception, we can think of our minds as books to be read by a massive, biologically faithful brain that retains such a mechanism, allowing the full re-creation of our consciousness in that brain from its initial state as a simulation being read. I have to say, once I’m aware I’m a simulation, I’m not terribly concerned about transferring to different media of simulation.
A story in a book, versus a mind in a brain. Where to begin in criticizing that analogy!
I’m sure there’s some really profound way to criticize that analogy as actually symptomatic of a whole wrong philosophy of mind. It’s not just an accident that you chose to criticize a pro-physical, anti-virtual theory of mind by inventing a semantic phlogiston that materially inhabits the words on a page and gives them their meaning. Unfortunately, even after so many years of arguing with functionalists and other computationalists, I still don’t have a sufficiently nuanced understanding of where their views come from to make the profound critique, the really illuminating one.
But surely you see that explaining how words on a page have meaning and explaining how thoughts in a brain have meaning are completely different questions! The book doesn’t think, it doesn’t act; the events of the story do not occur in the book. There is no meaning in the book unless brains are involved. Without them, words on a page are just shapes on a surface. The experience of the book as meaningful does not occur in the book; it occurs in the brain of a reader. So even the solution to this problem is fundamentally about brains and not about books. The fact that meaning is ultimately not in the book is why semantic phlogiston is absurd in that context.
But the brain is a different context. It’s the end of the line. As with all of naturalism’s ontological problems with mind, once you get to the brain, you cannot evade them any further. By all means, let the world outside the skull be a place wholly without time or color or meaning, if that is indeed your theory of reality. That just means you have to find all those things inside the skull. And you have to find them for real, because they are real. If your theory of such things is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.
Or at least, I would have to deny the basic facts of my own experience of reality, in order to adopt such views. Maybe you’re some other sort of being, which genuinely doesn’t experience time passing or see colors or have thoughts that are about things. But I doubt it.
I agree with almost all of what you wrote. Here’s the only line I disagree with.
“If your theory of such things is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful—then you are in denial about your own experience.”
I affirm that my own subjective experience is as you describe; I deny that I am in denial about its import.
I want to be clear that I’m discussing what makes sense to affirm as most plausible given what we know. In particular, I’m not calling your conjecture impossible.
Human brains don’t look different in their lower-level organization from those of, say, cats, and there’s no higher-level structure in the brain that obviously corresponds to whatever special sauce it is that makes humans conscious. On the other hand, there are specific brain regions known to carry out specific functional tasks. My understanding is that human subjective experience, when picked apart by reductive cognitive neuroscience, appears to be an ex post facto narrative constructed and integrated out of events whose causes can be more or less assigned to particular functional sub-components of the brain. Positing a special sauce—especially a non-classical one—just because my brain’s capacity for self-reflection includes an impression of “unity of consciousness” is not, to me, the simplest conceivable explanation.
Maybe the universe really does admit the possibility of an agent which approximates my internal structure to arbitrary (or at least sufficient) accuracy and claims to have conscious experiences for reasons which are isomorphic to my own, yet actually has none because it’s implemented on an inadequate physical substrate. But I doubt it.