I’ll preface this by saying that I haven’t spent much time engaging with your material (it’s been on my to-do list for a very long time), and could well be misunderstanding things, and that I have great respect for what you’re trying to do. So you and everyone can feel free to ignore this, but here I go anyway.
OK, maybe the most basic reason that I’m skeptical of your STV stuff is that I’m going in expecting a, um, computational theory of valence, suffering, etc. As in, the brain has all those trillions of synapses and intricate circuitry in order to do evolutionary-fitness-improving calculations, and suffering is part of those calculations (e.g. other things equal, I’d rather not suffer, and I make decisions accordingly, and this presumably has helped my ancestors to survive and have more viable children).
So let’s say we’re sitting together at a computer, and we’re running a Super Mario executable on an emulator, and we’re watching the bits in the processor’s SRAM. You tell me: “Take the bits in the SRAM register, and take the Fourier transform, and look at the spectrum (≈ absolute value of the Fourier components). If most of the spectral weight is in long-wavelength components, e.g. the bits are “11111000111100000000...”, then Mario is doing really well in the game. If most of the spectral weight is in the short-wavelength components, e.g. the bits are “101010101101010”, then Mario is doing poorly in the game. That’s my theory!”
I would say “Ummm, I mean, I guess that’s possible. But if that’s true at all, it’s not an explanation, it’s a random coincidence.”
(This isn’t a perfect analogy, just trying to gesture at where I’m coming from right now.)
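For concreteness, the hypothetical ‘spectral’ test in that imagined theory could be sketched like this (a minimal illustration; the bit strings and the low-vs-high-frequency split are made up for the example):

```python
import numpy as np

def low_frequency_fraction(bits: str) -> float:
    """Fraction of (non-DC) spectral weight in the lower half of the spectrum."""
    x = np.array([int(b) for b in bits], dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))  # subtract the mean so DC doesn't dominate
    total = spectrum.sum()
    return float(spectrum[: len(spectrum) // 2].sum() / total) if total > 0 else 0.0

# The imagined claim: smooth runs of bits => "Mario is doing well",
# rapidly alternating bits => "Mario is doing poorly".
print(low_frequency_fraction("11111000111100000000"))  # weight mostly at long wavelengths
print(low_frequency_fraction("10101010110101010101"))  # weight mostly at short wavelengths
```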
So that’s the real reason I don’t believe in STV—it just looks wrong to me, in the same way that Mario’s progress should not look like certain types of large-scale structure in SRAM bits.
I want a better argument than that though. So here are a few more specific things:
(1) Waves and symmetries don’t carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn’t bother you; but I think it’s at least possible for people to know whether they’re suffering from arm pain or finger pain or air-hunger or guilt or whatever. I guess I raised this issue in an offhand comment a couple years ago, and lsusr responded, and then I apparently dropped out of the conversation; I guess I must have gotten busy or something. Hmm, I guess I should read that. :-/
(2) From the outside, it’s easy to look at an fMRI or whatever and talk about its harmonic decomposition and symmetries. But from the perspective of any one neuron, that information is awfully hard to access. It’s not impossible, but I think you’d need the neuron to have a bunch of inputs from across the brain hooked into complicated timing circuits etc. My starting point, as I mentioned, is that suffering causes behavioral changes (including self-reports, trying not to suffer, etc.), so there has to be a way for the “am I suffering” information to impact specific brain computations, and I don’t know what that mechanism is in STV. (In the Mario analogy, if you just look at one SRAM bit, or even a few bits, you get almost no information about the spectrum of the whole SRAM register.) If “suffering” was a particular signal carried by a particular neurotransmitter, for example, we wouldn’t have that problem; we’d just take that signal and wire it to whatever circuits need to be modulated by the presence/absence of suffering. So theories like that strike me as more plausible.
(3) Conversely, I’m confused at how you would tell a story where getting tortured (for example) leads to suffering. This is just the opposite of the previous one: Just as a brain-wide harmonic decomposition can’t have a straightforward and systematic impact on a specific neural signal, likewise a specific neural signal can’t have a straightforward and systematic impact on a brain-wide harmonic decomposition, as far as I can tell.
(4) I don’t have a particularly well-formed alternative theory to STV, but all the most intriguing ideas that I’ve played around with so far that seem to have something to do with the nature of valence and suffering (e.g. here, here, various other things I haven’t written up) look wildly different from STV. Instead they tend to involve certain signals in the insular cortex and reticular activating system and those signals have certain effects on decisionmaking circuits, blah blah blah.
Hi Steven, amazing comment, thank you. I’ll try to address your points in order.
0. I get your Mario example, and totally agree within that context; however, this conclusion may or may not transfer to brains, depending on e.g. how they implement utility functions. If the brain is a ‘harmonic computer’, then it may be doing e.g. gradient descent in such a way that the state of its utility function can be inferred from its large-scale structure.
1. On this question I’ll gracefully punt to lsusr’s comment :) I endorse both his comment and framing. I’d also offer that dissonance is in an important sense ‘directional’ — if you have a symmetrical network and something breaks its symmetry, the new network pattern is not symmetrical, and this break in symmetry allows you to infer where the ‘damage’ is. An analogy: a spider’s web starts out highly symmetrical, but its vibrations become asymmetrical when a fly bumbles along and gets stuck. The spider can infer where the fly is on the web based on the particular ‘flavor’ of the new vibrations. (A toy numerical version of this is sketched just after this comment.)
2. Complex question. First I’d say that STV as technically stated is a metaphysical claim, not a claim about brain dynamics. But I don’t want to hide behind this; I think your question deserves an answer. This perhaps touches on lsusr’s comment, but I’d add that if the brain does tend to follow a symmetry gradient (following e.g. Smolensky’s work on computational harmony), it likely does so in a fractal way. It will have tiny regions which follow a local symmetry gradient, it will have bigger regions which span many circuits where a larger symmetry gradient will form, and it will have brain-wide dynamics which follow a global symmetry gradient. How exactly these different scales of gradients interact is a very non-trivial thing, but I think it gives at least a hint as to how information might travel from large scales to small, and from small to large.
3. I think my answer to (2) also addresses this.
4. I think, essentially, that we can both be correct here. STV is intended to be an implementational account of valence; as we abstract away details of implementation, other frames may become relatively more useful. However, I do think that e.g. talk of “pleasure centers” involves a potential infinite regress: what ‘makes’ something a pleasure center? A strength of STV is that it fundamentally defines an identity relationship.
I hope that helps! Definitely would recommend lsusr’s comments, and just want to thank you again for your careful comment.
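Here is the toy version of the spiderweb point from item 1 above (my own illustration, not QRI’s model): a perfectly symmetric ring of coupled nodes, with the symmetry broken at one node, lets you read the location of the ‘fly’ straight off the shape of a global vibration mode:

```python
import numpy as np

N = 12                      # nodes on a symmetric ring: the undisturbed "web"
L = 2 * np.eye(N)           # ring-graph Laplacian; every node looks identical
for i in range(N):
    L[i, (i + 1) % N] = L[i, (i - 1) % N] = -1

fly_at = 7                  # break the symmetry at one node: the "fly"
L_broken = L.copy()
L_broken[fly_at, fly_at] += 5.0   # extra local stiffness where the fly sits

# The lowest vibration mode of the perturbed web dips at the broken-symmetry node,
# so the disturbance can be located from the global mode shape alone.
eigvals, eigvecs = np.linalg.eigh(L_broken)
lowest_mode = eigvecs[:, 0]
print("inferred fly location:", int(np.argmin(np.abs(lowest_mode))))   # prints 7
```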
I’m confused about your use of the term “symmetry” (and even more confused what a “symmetry gradient” is). For example, if I put a front-to-back mirror into my brain, it would reflect the frontal lobe into the occipital lobe—that’s not going to be symmetric. The brain isn’t an undifferentiated blob. Different neurons are connected to different things, in an information-bearing way.
You don’t define “symmetry” here but mention three ideas: (1) “The size of the mathematical object’s symmetry group”. Well, I am aware of zero nontrivial symmetry transformations of the brain, and zero nontrivial symmetry transformations of qualia. Can you name any? “If my mind is currently all-consumed by the thought of cucumber sandwiches, then my current qualia space is symmetric under the transformation that swaps the concepts of rain and snow”??? :-P (2) “Compressibility”. In the brain context, I would call that “redundancy” not “symmetry”. I absolutely believe that the brain stores information in ways that involve heavy redundancy; if one neuron dies you don’t suddenly forget your name. I think brains, just like hard drives, can make tradeoffs between capacity and redundancy in their information encoding mechanisms. I don’t see any connection between that and valence. Or maybe you’re not thinking about neurons but instead imagining the compressibility of qualia? I dunno, if I can’t think about anything besides how much my toe hurts right now, that’s negative valence, but it’s also low information content / high compressibility, right? (3) “practical approximations for finding symmetry in graphs…adapted for the precise structure of Qualia space (a metric space?)”. If Qualia space isn’t a graph, I’m not sure why you’re bringing up graphs. Can you walk through an example, even an intuitive one? I really don’t understand where you’re coming from here.
I skimmed this thing by Smolensky and it struck me as quite unrelated to anything you’re talking about. I read it as saying that cortical inference involves certain types of low-level algorithms that have stable attractor states (as do energy-based models, PGMs, Hopfield networks, etc.). So if you try to imagine a “stationary falling rock” you can’t, because the different pieces are contradicting each other, but if you try to imagine a “purple tree” you can pretty quickly come up with a self-consistent mental image. Smolensky (poetically) uses the term “harmonious” for what I would call a “stable attractor” or “self-consistent configuration” in the model space. (Steve Grossberg would call them “resonant”.) Again I don’t see any relation between that and CSHW or STV. Like, when I try to imagine a “stationary falling rock”, I can’t, but that doesn’t lead to me suffering—on the contrary, it’s kinda fun. The opposite of Smolensky’s “harmony” would be closer to confusion than suffering, in my book, and comes with no straightforward association to valence. Moreover, I believe that the attractor dynamics in question, the stuff Smolensky is (I think) talking about, are happening in the cortex and thalamus but not in other parts of the brain, yet those other parts are, I believe, clearly involved in suffering (e.g. lateral habenula, parabrachial nucleus, etc.).
(Also, not to gripe, but if you don’t yet have a precise definition of “symmetry”, then I might suggest that you not describe STV as a “crisp formalism”. I normally think “formalism” ≈ “formal” ≈ “the things you’re talking about have precise unambiguous definitions”. Just my opinion.)
> potential infinite regress: what ‘makes’ something a pleasure center?
I would start by just listing a bunch of properties of “pleasure”. For example, other things equal, if something is more pleasurable, then I’m more likely to make a decision that results in my doing that thing in the future, or my continuing to do that thing if I’m already doing it, or my doing it again if it was in the past. Then if I found a “center” that causes all those properties to happen (via comprehensible, causal mechanisms), I would feel pretty good calling it a “pleasure center”. (I’m not sure there is such a “center”.)
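As a toy rendering of those listed properties (purely illustrative, not anyone’s actual model of the brain), ‘pleasure’ can be treated as a quantity that shifts future choice probabilities toward whatever produced it:

```python
import numpy as np

rng = np.random.default_rng(0)
propensity = {"A": 0.0, "B": 0.0}   # learned tendency to choose each activity
pleasure = {"A": 1.0, "B": 0.2}     # assumed hedonic payoff of each activity (made up)

def choose(propensity):
    acts = list(propensity)
    logits = np.array([propensity[a] for a in acts])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(acts, p=p)

for _ in range(200):
    a = choose(propensity)
    # The listed property: the more pleasurable an outcome, the more likely
    # the same choice becomes in the future.
    propensity[a] += 0.1 * (pleasure[a] - propensity[a])

print(propensity)   # the propensity for "A" ends up well above "B"
```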
(FWIW, I think that “pleasure”, like “suffering” etc., is a learned concept with contextual and social associations, and therefore won’t necessarily exactly correspond to a natural category of processes in the brain.)
Unrelated, but your documents bring up IIT sometimes; I found this blog post helpful in coming to the conclusion that IIT is just a bunch of baloney. :)
Hi Steven,

This is a great comment and I hope I can do it justice (took an overnight bus and am somewhat sleep-deprived).
First I’d say that neither we nor anyone has a full theory of consciousness. I.e. we’re not at the point where we can look at a brain, and derive an exact mathematical representation of what it’s feeling. I would suggest thinking of STV as a piece of this future full theory of consciousness, which I’ve tried to optimize for compatibility by remaining agnostic about certain details.
One such detail is the state space: if we knew the mathematical space consciousness ‘lives in’, we could zero in on symmetry metrics optimized for this space. Tononi’s IIT for instance suggests it’s a vector space — but I think it would be a mistake to assume IIT is right about this. Graphs assume less structure than vector spaces, so it’s a little safer to speak about symmetry metrics in graphs.
Another ‘move’ motivated by compatibility is STV’s focus on the mathematical representation of phenomenology, rather than on patterns in the brain. STV is not a neuro theory, but a metaphysical one. I.e. assuming that in the future we can construct a full formalism for consciousness, and thus represent a given experience mathematically, the symmetry in this representation will hold an identity relationship with pleasure.
Appreciate the remarks about Smolensky! I think what you said is reasonable and I’ll have to think about how that fits with e.g. CSHW. His emphasis is of course on language and neural representation, which are very different domains.
>(Also, not to gripe, but if you don’t yet have a precise definition of “symmetry”, then I might suggest that you not describe STV as a “crisp formalism”. I normally think “formalism” ≈ “formal” ≈ “the things you’re talking about have precise unambiguous definitions”. Just my opinion.)
I definitely understand this. On the other hand, STV should basically have zero degrees of freedom once we do have a full formal theory of consciousness. I.e., once we know the state space, have example mathematical representations of phenomenology, have defined the parallels between qualia space and physics, etc., it should be obvious what symmetry metric to use. (My intuition is, we’ll import it directly from physics.) In this sense it is a crisp formalism. However, I get your objection; more precisely, it’s a dependent formalism, dependent upon something that doesn’t yet exist.
>(FWIW, I think that “pleasure”, like “suffering” etc., is a learned concept with contextual and social associations, and therefore won’t necessarily exactly correspond to a natural category of processes in the brain.)
I think one of the most interesting questions in the universe is whether you’re right, or whether I’m right! :) Definitely hope to figure out good ways of ‘making beliefs pay rent’ here. In general I find the question of “what are the universe’s natural kinds?” to be fascinating.
> I’m going in expecting a, um, computational theory of valence
Let’s contrast that with a physicalist theory of valence, such as STV.
> So that’s the real reason I don’t believe in STV—it just looks wrong to me, in the same way that Mario’s progress should not look like certain types of large-scale structure in SRAM bits.
Well, since STV is a physicalist theory, a better analogy might be: a property like the viscosity of a fluid can be read off from the overall structure of the fluid.
I’m going to start with your last point, because I think it’s the most important.
> Instead they tend to involve certain signals in the insular cortex and reticular activating system and those signals have certain effects on decisionmaking circuits, blah blah blah.
We’re not necessarily interested in asking the question “*which* part is causally associated with valence?” but rather the question “*how* does that part actually do it, how is it implemented?”. That is, how do the qualia of suffering/pleasure arise at all? How can the qualia themselves be causally relevant? If it’s mere computation, how do the qualia take on a particular texture, and what role does that texture play in the algorithm? At what point in the computation do they arise, how long do they persist, etc.? This leads to the so-called “Hard Problem of Consciousness”. If physicalism is true, and given that we’re not philosophical zombies, there must be some physical signature of consciousness.
> (1) Waves and symmetries don’t carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn’t bother you; but I think it’s at least possible for people to know whether they’re suffering from arm pain or finger pain or air-hunger or guilt or whatever.
Due to binding, a low-dimensional property can become mixed and imbued with other forms of qualia to form gestalts. Simple building blocks can create complex macro objects. You can still have a lot of information about the location, frequency, and phase of a textural pattern (1), even if it’s consonant and thus carries (relatively) less information. That said, at the peak, where you get to fully consonant experiences (2), you do in fact see a loss of information content.
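A rough way to see the ‘consonant patterns carry less information, yet remain structured’ point (made-up signals, not QRI’s actual metric): a waveform built from harmonically related components has a much more concentrated, lower-entropy spectrum than unstructured noise, while its fundamental frequency is still perfectly recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096, endpoint=False)

# "Consonant": a stack of harmonics of 220 Hz.  "Dissonant": unstructured noise.
consonant = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3, 4))
noise = rng.standard_normal(t.size)

def spectral_entropy(x):
    p = np.abs(np.fft.rfft(x)) ** 2
    p /= p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print("consonant entropy:", round(spectral_entropy(consonant), 2))  # low: a few sharp peaks
print("noise entropy:    ", round(spectral_entropy(noise), 2))      # high: weight spread everywhere
print("fundamental bin:  ", int(np.argmax(np.abs(np.fft.rfft(consonant)))))  # 220: structure still readable
```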
> But from the perspective of any one neuron, that information is awfully hard to access. It’s not impossible, but I think you’d need the neuron to have a bunch of inputs from across the brain hooked into complicated timing circuits etc.
If we go back to the viscosity analogy, this would be like asking how a single atom can access information about the structure of the liquid as a whole. Furthermore, top-down causality can emerge in the right conditions in a physical system.
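One standard toy example of that kind of two-way, top-down/bottom-up relationship in a physical system (a generic Kuramoto-style sketch, not a claim about neurons specifically): each oscillator contributes to a global mean field, and that global field in turn pulls on every individual oscillator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps, K = 200, 0.01, 2000, 1.5
omega = rng.normal(1.0, 0.1, n)        # each unit's own natural frequency
theta = rng.uniform(0, 2 * np.pi, n)   # initial phases

def coherence(phases):
    return float(np.abs(np.mean(np.exp(1j * phases))))  # 0 = incoherent, 1 = one global rhythm

print("coherence before:", round(coherence(theta), 3))
for _ in range(steps):
    # bottom-up: the units collectively define a global mean field
    z = np.mean(np.exp(1j * theta))
    R, psi = np.abs(z), np.angle(z)
    # top-down: that global field feeds back on every individual unit
    theta += dt * (omega + K * R * np.sin(psi - theta))
print("coherence after: ", round(coherence(theta), 3))
```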
> If “suffering” was a particular signal carried by a particular neurotransmitter, for example, we wouldn’t have that problem.
You have the worse problem, though, of showing how qualia can arise from a neurotransmitter, with what textures and with what causal influence beyond the signal itself: if the causality comes from the signal alone, why would qualia arise at all? What purpose would they serve in addition to the purpose of the signal?
> Conversely, I’m confused at how you would tell a story where getting tortured (for example) leads to suffering. This is just the opposite of the previous one: Just as a brain-wide harmonic decomposition can’t have a straightforward and systematic impact on a specific neural signal, likewise a specific neural signal can’t have a straightforward and systematic impact on a brain-wide harmonic decomposition, as far as I can tell.
Brain-wide harmonics exert a top-down influence on individual neurons (e.g. through something like EM field dynamics [3]) and individual neurons collectively create the overall brain-wide harmonics.

1: https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
2: https://qualiacomputing.com/2021/11/23/the-supreme-state-unconsciousness-classical-enlightenment-from-the-point-of-view-of-valence-structuralism/
3: https://psyarxiv.com/jtng9/
If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”—questions like “When humans emit self-reports about their own conscious experience, why do they often describe it as having properties A,B,C?” or “When humans move their mouths and say words on the topic of ‘qualia’, why do they often describe it as having properties X,Y,Z?”—then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here, and I’m open to further discussion as time permits.)
BUT, I feel like that’s probably not the game you want to play here. My guess is that, even if I perfectly nail every one of those “3rd-person” questions above, you would still say that I haven’t even begun to engage with the nature of qualia, that I’m missing the forest for the trees, whatever. (I notice that I’m putting words in your mouth; feel free to disagree.)
If I’m correct so far, then this is a more basic disagreement about the nature of consciousness and how to think about it and learn about it etc. You can see my “wristwatch” discussion here for basically where I’m coming from. But I’m not too interested in hashing out that disagreement, sorry. For me, it’s vaguely in the same category as arguing with a theology professor about whether God exists (I’m an atheist): My position is “Y’know, I really truly think I’m right about this, but there’s a gazillion pages of technical literature on this topic, and I’ve read practically none of it, and my experience strongly suggests that we’re not going to make any meaningful progress on this disagreement in the amount of time that I’m willing to spend talking about it.” :-P Sorry!
> Waves and symmetries don’t carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn’t bother you; but I think it’s at least possible for people to know whether they’re suffering from arm pain or finger pain or air-hunger or guilt or whatever.
“What is the problem?” should have a pretty high information content, but there might be a separate “how bad is it?” question that constitutes the actual unpleasant part of the experience, which wouldn’t have to be much more than a 1d scalar.
For the record, I do actually believe that. I was trying to state what seemed to be a problem in the STV framework as I was understanding it.
In my picture, the brainstem communicates valence to the neocortex via a midbrain dopamine signal (one particular signal of the many), and sometimes communicates the suggested cause / remediation via executing orienting reactions (saccading, moving your head, etc.—the brainstem can do this by itself), and sending acetylcholine to the corresponding parts of your cortex, which then override the normal top-down attention mechanism and force attention onto whatever your brainstem demands. For example, when your finger hurts a lot, it’s really hard to think about anything else, and my tentative theory is that the mechanism here involves the brainstem sending acetylcholine to the finger-pain-area of the insular cortex. (To be clear, this is casual speculation that I haven’t thought too hard about or looked into much.)