Hi Charlie,

Some sorts of knowledge about consciousness will necessarily be as messy as the brain is messy, but the core question is whether there’s any ‘clean substructure’ to be discovered about phenomenology itself. Here’s what I suggest in Principia Qualia:
--------
>Brains vs conscious systems:
>There are fundamentally two kinds of knowledge about valence: things that are true specifically in brains like ours, and general principles common to all conscious entities. Almost all of what we know about pain and pleasure is of the first type – essentially, affective neuroscience has been synonymous with making maps of the mammalian brain’s evolved, adaptive affective modules and contingent architectural quirks (“spandrels”).
>This paper attempts to chart a viable course for this second type of research: it’s an attempt toward a general theory of valence, a.k.a. universal, substrate-independent principles that apply equally to and are precisely true in all conscious entities, be they humans, non-human animals, aliens, or conscious artificial intelligence (AI).
>In order to generalize valence research in this way, we need to understand valence research as a subset of qualia research, and qualia research as a problem in information theory and/or physics, rather than neuroscience. Such a generalized approach avoids focusing on contingent facts and instead seeks general principles for how the causal organization of a physical system generates or corresponds to its phenomenology, or how it feels to subjectively be that system. David Chalmers has hypothesized about this in terms of “psychophysical laws” (Chalmers 1995), or translational principles which we could use to derive a system’s qualia, much like we can derive the electromagnetic field generated by some electronic gadget purely from knowledge of the gadget’s internal composition and circuitry.
How is “clean substructure” different in principle from a garden-variety high-level description? Crepes are thin pancakes made with approximately equal parts egg, milk, and flour, potentially with sugar, salt, oil, or small amounts of leavening, spread in a large pan and cooked quickly. This English sentence is radically simpler than a microscopic description of a crepe. As a law of crepeitude, it has many admirable practical qualities: it lets me make crepes, and lets me tell which recipes are for crepes and which are not, even when they differ slightly from my description.
A similar high-level description for consciousness might start with “Conscious beings are a lot like humans—they do a lot of information processing, have memories and imaginations and desires, think about the world and make plans, feel emotions like happiness or sadness, and often navigate the world using bodies that are in a complex feedback loop with their central information processor.” This English sentence is, again, a lot simpler than a microscopic description of a person. It is, all in all, a remarkable feat of compression.
Of course, I suspect this isn’t what you want—you hope that consciousness is obligingly simple in ways that cut the reliance on human interpretation out of the above description, while still being short enough to fit on a napkin. The main way this sort of thing has been true in physics and chemistry is when humans noticed some pattern in the world that turned out to have a simple explanation in terms of underlying essences. The broad absence of such essences in philosophy explains the historical failure of myriad simple, objective theories of humanity, life, the good, and so on.
To compress a lot of thoughts into a small remark: I think both possibilities (consciousness is like electromagnetism, in that it has some deep structure to be formalized, vs. consciousness is like élan vital, in that it lacks any such deep structure) are live. What’s most interesting to me is doing the work that will give us evidence about which of these worlds we live in. There are a lot of threads mentioned in my first comment that I think can generate value and clarity here; in general, I’d recommend brainstorming “what would I expect to see if I lived in a world where consciousness does, vs. does not, have a crisp substructure?”