Appreciate the crepe joke! My preference is sweet over savory.
On the topic of language, I strongly support Mike’s reply, which pushes in the direction of finding the ‘deep structure’ of consciousness. Johannes Kleiner has also written about ways to approach this problem in his paper “Mathematical Models of Consciousness” (https://arxiv.org/pdf/1907.03223.pdf).
To respond to your ask for us to rethink our philosophical commitments… if you were alive before the periodic table of elements was discovered, would you similarly urge Mendeleev to rethink his commitment to exploring the structure of matter / finding precise definitions for elements like ‘gold’ and ‘iron’? What reasons or evidence would you need to make research into the structure of matter seem worthwhile? What similar reasons or evidence would we need to decide the same for qualia? A priori, why should we expect that qualia do not have deep structure but matter does? Given the information that colors have certain structural relationships (leading to the CIELAB Color Space: https://en.wikipedia.org/wiki/CIELAB_color_space), does that make you more or less confident that there is something real and precise here to be studied?
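To make the CIELAB point concrete: the structure being gestured at is a specific nonlinear transform under which Euclidean distance approximates perceived color difference. Here is a minimal sketch (function names are mine; the matrix, gamma curve, and D65 white point are the standard published constants):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point). Illustrative sketch,
    not a substitute for a real color-science library."""
    # 1. Undo the sRGB gamma curve to get linear light in [0, 1].
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> CIE XYZ (standard sRGB/D65 matrix).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3. XYZ -> Lab: a cube-root compression normalized by the reference white.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.00000), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(c1, c2):
    """CIE 1976 color difference: plain Euclidean distance in Lab space."""
    return math.dist(srgb_to_lab(*c1), srgb_to_lab(*c2))
```

The interesting property is that pairs of colors separated by equal distances in raw RGB can sit at very unequal `delta_e76` distances, while roughly equal `delta_e76` values correspond to roughly equal perceived differences. That regularity, recovered from messy psychophysics experiments, is a small example of the kind of ‘deep structure’ at issue.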
I haven’t watched that talk by Ned Block. Thank you for sharing it and I’ll check it out!
The way I see it, the crux is not whether a deep structure is definable—functionalism is perfectly compatible with definitions of experience at the same level of precision and reality as chemical elements. Research into the physical structures that people associate with consciousness can certainly be worthwhile, and it can be used to resolve ethical disagreements in the sense that actual humans would express agreement afterwards. But QRI’s stance seems to be that the resulting precise definition would be literally objective, as in “new fundamental physics”—I think it should be explicitly clarified whether that’s the case.
Neuroscience and philosophy are not physics and chemistry. I don’t expect there to be an “atomic theory of color qualia” or anything like it because of a combination of factors like:
- Cultural and general interpersonal differences in color perception.
- The tendency of evolution to produce complicated, interlinked mechanisms, including in the brain, rather than modular ones.
- Examples of brain damage and people with unusual psychology or physiology who have dramatically different color qualia than me.
- Animals and artificial systems that use color perception to navigate the world but don’t seem to converge on similar ways of perceiving color.
- The evidence of absence of a soul or other homuncular center of perception, which necessitates understanding perception as an emergent phenomenon made of lots of little pieces.
- The causal efficacy of color perception (i.e. I don’t just see things, I actually do different things depending on what I see) tying colors into all the other complications of the human mind.
- Complications that we know about from neuroscience, such as asymmetric local centers of function, and certain individual clusters of neurons being causally related to individual memories, motions, and sensations.
- Our experience with artificial neural networks, and how challenging interpreting their weights is.
--
If we compare this with atoms: atoms do indeed have some local variation in mass, but only within a suspiciously small range. Rules like conservation of mass appear to hold among elements, rather than there being common exceptions. We didn’t already know that atoms were emergent phenomena from the interactions of bajillions of pieces. We did not already have a scientific field studying how many of those bajillions of pieces played idiosyncratic and evolutionarily contingent roles. Etc.
Some sorts of knowledge about consciousness will necessarily be as messy as the brain is messy, but the core question is whether there’s any ‘clean substructure’ to be discovered about phenomenology itself. Here’s what I suggest in Principia Qualia:
--------
>Brains vs conscious systems:
>There are fundamentally two kinds of knowledge about valence: things that are true specifically in brains like ours, and general principles common to all conscious entities. Almost all of what we know about pain and pleasure is of the first type – essentially, affective neuroscience has been synonymous with making maps of the mammalian brain’s evolved, adaptive affective modules and contingent architectural quirks (“spandrels”).
>This paper attempts to chart a viable course for this second type of research: it’s an attempt toward a general theory of valence, a.k.a. universal, substrate-independent principles that apply equally to and are precisely true in all conscious entities, be they humans, non-human animals, aliens, or conscious artificial intelligence (AI).
>In order to generalize valence research in this way, we need to understand valence research as a subset of qualia research, and qualia research as a problem in information theory and/or physics, rather than neuroscience. Such a generalized approach avoids focusing on contingent facts and instead seeks general principles for how the causal organization of a physical system generates or corresponds to its phenomenology, or how it feels to subjectively be that system. David Chalmers has hypothesized about this in terms of “psychophysical laws” (Chalmers 1995), or translational principles which we could use to derive a system’s qualia, much like we can derive the electromagnetic field generated by some electronic gadget purely from knowledge of the gadget’s internal composition and circuitry.
How is “clean substructure” different in principle from a garden-variety high-level description? A crepe is a thin pancake made with approximately equal parts egg, milk, and flour, potentially with sugar, salt, oil, or small amounts of leavening, spread in a large pan and cooked quickly. This English sentence is radically simpler than a microscopic description of a crepe. As a law of crepeitude, it has many admirable practical qualities, allowing me to make crepes, and to tell which recipes are for crepes and which are not, even if they’re slightly different from my description.
A similar high-level description for consciousness might start with “Conscious beings are a lot like humans—they do a lot of information processing, have memories and imaginations and desires, think about the world and make plans, feel emotions like happiness or sadness, and often navigate the world using bodies that are in a complex feedback loop with their central information processor.” This English sentence is, again, a lot simpler than a microscopic description of a person. It is, all in all, a remarkable feat of compression.
Of course, I suspect this isn’t what you want—you hope that consciousness is obligingly simple in ways that cut out the reliance on human interpretation from the above description, while still being short enough to fit on a napkin. The main way that this sort of thing has been true in physics and chemistry is when humans are noticing some pattern in the world with a simple explanation in terms of underlying essences. The broad lack of such essences in philosophy explains the historical failure of myriad simple and objective theories of humanity, life, the good, etc.
Hi Charlie,

To compress a lot of thoughts into a small remark, I think both possibilities (consciousness is like electromagnetism in that it has some deep structure to be formalized, vs consciousness is like élan vital in that it lacks any such deep structure) are live possibilities. What’s most interesting to me is doing the work that will give us evidence about which of these worlds we live in. There are a lot of threads mentioned in my first comment that I think can generate value/clarity here; in general I’d recommend brainstorming “what would I expect to see if I lived in a world where consciousness does, vs does not, have a crisp substructure?”