I’m confused about your use of the term “symmetry” (and even more confused what a “symmetry gradient” is). For example, if I put a front-to-back mirror into my brain, it would reflect the frontal lobe into the occipital lobe—that’s not going to be symmetric. The brain isn’t an undifferentiated blob. Different neurons are connected to different things, in an information-bearing way.
You don’t define “symmetry” here, but you mention three ideas:

(1) “The size of the mathematical object’s symmetry group”. Well, I am aware of zero nontrivial symmetry transformations of the brain, and zero nontrivial symmetry transformations of qualia. Can you name any? “If my mind is currently all-consumed by the thought of cucumber sandwiches, then my current qualia space is symmetric under the transformation that swaps the concepts of rain and snow”??? :-P

(2) “Compressibility”. In the brain context, I would call that “redundancy”, not “symmetry”. I absolutely believe that the brain stores information in ways that involve heavy redundancy; if one neuron dies, you don’t suddenly forget your name. I think brains, just like hard drives, can make tradeoffs between capacity and redundancy in their information encoding mechanisms. I don’t see any connection between that and valence. Or maybe you’re not thinking about neurons but instead imagining the compressibility of qualia? I dunno, if I can’t think about anything besides how much my toe hurts right now, that’s negative valence, but it’s also low information content / high compressibility, right? (See the toy sketch after this list.)

(3) “practical approximations for finding symmetry in graphs…adapted for the precise structure of Qualia space (a metric space?)”. If Qualia space isn’t a graph, I’m not sure why you’re bringing up graphs. Can you walk through an example, even an intuitive one? I really don’t understand where you’re coming from here.
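To pin down what I mean in (2), here’s a toy sketch (Python; zlib compression ratio as a crude stand-in metric, not anything from your writeups) of compressibility-as-redundancy, and why it seems to point the wrong way for valence:

```python
import zlib

def redundancy(s: str) -> float:
    """Crude compressibility score: 1 - compressed/raw size.
    Higher = more redundant. A stand-in for reading (2), not a
    metric defined anywhere in the STV writeups."""
    raw = s.encode()
    return 1.0 - len(zlib.compress(raw)) / len(raw)

# A fixated state ("nothing but my aching toe") is far more
# compressible than a varied one -- so by this metric the toe-pain
# state scores *higher*, despite having strongly negative valence.
fixated = "my toe hurts " * 100
varied = "cucumber sandwiches, rain, snow, purple trees, falling rocks"
assert redundancy(fixated) > redundancy(varied)
print(redundancy(fixated), redundancy(varied))
```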
I skimmed this thing by Smolensky and it struck me as quite unrelated to anything you’re talking about. I read it as saying that cortical inference involves certain types of low-level algorithms that have stable attractor states (as do energy-based models, PGMs, Hopfield networks, etc.). So if you try to imagine a “stationary falling rock” you can’t, because the different pieces are contradicting each other, but if you try to imagine a “purple tree” you can pretty quickly come up with a self-consistent mental image. Smolensky (poetically) uses the term “harmonious” for what I would call a “stable attractor” or “self-consistent configuration” in the model space. (Steve Grossberg would call them “resonant”.) Again, I don’t see any relation between that and CSHW or STV. Like, when I try to imagine a “stationary falling rock”, I can’t, but that doesn’t lead to me suffering—on the contrary, it’s kinda fun. The opposite of Smolensky’s “harmony” would be closer to confusion than suffering, in my book, and comes with no straightforward association to valence. Moreover, I believe that the attractor dynamics in question, the stuff Smolensky is (I think) talking about, are happening in the cortex and thalamus but not other parts of the brain—and those other parts of the brain are, I believe, clearly involved in suffering (e.g. lateral habenula, parabrachial nucleus, etc.).
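(To make the “stable attractor” reading concrete, here’s a minimal Hopfield-style sketch. It illustrates the generic attractor idea I’m gesturing at, not Smolensky’s harmony networks specifically:)

```python
import numpy as np

# Minimal Hopfield network: store one pattern, then show that a
# corrupted version settles back into the stored state -- the kind of
# "self-consistent configuration" (Smolensky's "harmony", Grossberg's
# "resonance") described above.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)   # the stored memory
W = np.outer(pattern, pattern) / 16      # Hebbian weights
np.fill_diagonal(W, 0)                   # no self-connections

state = pattern.copy()
state[:5] *= -1                          # corrupt 5 of the 16 units

for _ in range(10):                      # synchronous updates
    state = np.sign(W @ state)

assert np.array_equal(state, pattern)    # settled into the attractor
```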
(Also, not to gripe, but if you don’t yet have a precise definition of “symmetry”, then I might suggest that you not describe STV as a “crisp formalism”. I normally think “formalism” ≈ “formal” ≈ “the things you’re talking about have precise unambiguous definitions”. Just my opinion.)
>potential infinite regress: what ‘makes’ something a pleasure center?
I would start by just listing a bunch of properties of “pleasure”. For example, other things equal, if something is more pleasurable, then I’m more likely to make a decision that results in my doing that thing in the future, or my continuing to do that thing if I’m already doing it, or my doing it again if it was in the past. Then if I found a “center” that causes all those properties to happen (via comprehensible, causal mechanisms), I would feel pretty good calling it a “pleasure center”. (I’m not sure there is such a “center”.)
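As a toy illustration of that first property (hypothetical payoff numbers; the point is only the direction of the effect, not the mechanism):

```python
import random

pleasure = {"cake": 1.0, "kale": 0.2}   # assumed hedonic payoffs
value = {"cake": 0.0, "kale": 0.0}      # the agent's learned estimates

random.seed(0)
for _ in range(200):
    choice = random.choice(list(value))  # sample both options
    # nudge the estimate toward the experienced pleasure
    value[choice] += 0.1 * (pleasure[choice] - value[choice])

# A greedy decision now favors whatever was more pleasurable in the
# past -- one testable signature a candidate "pleasure center" would
# need to produce via comprehensible causal mechanisms.
assert max(value, key=value.get) == "cake"
```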
(FWIW, I think that “pleasure”, like “suffering” etc., is a learned concept with contextual and social associations, and therefore won’t necessarily exactly correspond to a natural category of processes in the brain.)
Unrelated, but your documents bring up IIT sometimes; I found this blog post helpful in coming to the conclusion that IIT is just a bunch of baloney. :)
Hi Steven,

This is a great comment and I hope I can do it justice (I took an overnight bus and am somewhat sleep-deprived).
First, I’d say that no one, ourselves included, has a full theory of consciousness; i.e., we’re not at the point where we can look at a brain and derive an exact mathematical representation of what it’s feeling. I would suggest thinking of STV as a piece of this future full theory of consciousness, one I’ve tried to optimize for compatibility by remaining agnostic about certain details.
One such detail is the state space: if we knew the mathematical space consciousness ‘lives in’, we could zero in on symmetry metrics optimized for that space. Tononi’s IIT, for instance, suggests it’s a vector space, but I think it would be a mistake to assume IIT is right about this. Graphs assume less structure than vector spaces, so it’s a little safer to speak about symmetry metrics on graphs.
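To make “symmetry metrics in graphs” concrete: one candidate is simply the size of a graph’s automorphism group. Here’s a toy sketch (brute force via networkx; an illustration of the kind of metric I mean, not the metric STV commits to):

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def automorphism_count(G: nx.Graph) -> int:
    """Size of G's automorphism group: a symmetry metric that needs
    only graph structure, no vector space. (Brute force; real work
    would use something like nauty.)"""
    return sum(1 for _ in GraphMatcher(G, G).isomorphisms_iter())

# A 6-cycle carries the full dihedral symmetry group (12 automorphisms);
# a 6-path keeps only the end-to-end flip (2).
print(automorphism_count(nx.cycle_graph(6)))  # 12
print(automorphism_count(nx.path_graph(6)))   # 2
```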
Another ‘move’ motivated by compatibility is STV’s focus on the mathematical representation of phenomenology, rather than on patterns in the brain. STV is not a neuroscience theory but a metaphysical one: assuming that in the future we can construct a full formalism for consciousness, and thus represent a given experience mathematically, the symmetry of this representation will hold an identity relationship with pleasure.
Appreciate the remarks about Smolensky! I think what you said is reasonable, and I’ll have to think about how it fits with e.g. CSHW. His emphasis is of course on language and neural representation, which are very different domains.
>(Also, not to gripe, but if you don’t yet have a precise definition of “symmetry”, then I might suggest that you not describe STV as a “crisp formalism”. I normally think “formalism” ≈ “formal” ≈ “the things you’re talking about have precise unambiguous definitions”. Just my opinion.)
I definitely understand this. On the other hand, STV should have essentially zero degrees of freedom once we do have a full formal theory of consciousness. I.e., once we know the state space, have example mathematical representations of phenomenology, have defined the parallels between qualia space and physics, etc., it should be obvious what symmetry metric to use. (My intuition is that we’ll import it directly from physics.) In this sense it is a crisp formalism. However, I take your objection; more precisely, it’s a dependent formalism, dependent upon something that doesn’t yet exist.
>(FWIW, I think that “pleasure”, like “suffering” etc., is a learned concept with contextual and social associations, and therefore won’t necessarily exactly correspond to a natural category of processes in the brain.)
I think one of the most interesting questions in the universe is whether you’re right, or whether I’m right! :) Definitely hope to figure out good ways of ‘making beliefs pay rent’ here. In general I find the question of “what are the universe’s natural kinds?” to be fascinating.