> I’m going in expecting a, um, computational theory of valence
Let’s contrast that with a physicalist theory of valence, such as the STV.
> So that’s the real reason I don’t believe in STV—it just looks wrong to me, in the same way that Mario’s progress should not look like certain types of large-scale structure in SRAM bits.
Well, since the STV is a physicalist theory, a better analogy might be: properties like the viscosity of a fluid can be found in the overall structure and dynamics of the fluid.
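To make that analogy concrete (this is standard statistical mechanics, nothing specific to the STV): the Green–Kubo relation expresses shear viscosity as a correlation integral taken over the fluid as a whole,

$$\eta = \frac{V}{k_B T}\int_0^\infty \big\langle P_{xy}(0)\,P_{xy}(t)\big\rangle \, dt,$$

where $P_{xy}$ is an off-diagonal component of the pressure tensor averaged over the whole volume $V$ at temperature $T$. No single atom “has” a viscosity; the property lives in the collective structure and dynamics.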
I’m going to start with your last point, because I think it’s the most important.
> Instead they tend to involve certain signals in the insular cortex and reticular activating system and those signals have certain effects on decisionmaking circuits, blah blah blah.
We’re not necessarily interested in asking “*which* part is causally associated with valence?” but rather “*how* does that part actually do it; how is it implemented?”. That is, how do the qualia of suffering/pleasure arise at all? How can the qualia themselves be causally relevant? If it’s mere computation, how do the qualia take on a particular texture, and what role does that texture play in the algorithm? At what point in the computation do they arise, and for how long? This leads to the so-called “Hard Problem of Consciousness”. If physicalism is true, and given that we’re not philosophical zombies, there must be some physical signature of consciousness.
> (1) waves and symmetries don’t carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn’t bother you; but I think it’s at least possible for people to know whether they’re suffering from arm pain or finger pain or air-hunger or guilt or whatever.
Due to binding, a low-dimensional property can become mixed and imbued with other forms of qualia to form gestalts. Simple building blocks can create complex macro objects. You can still have a lot of information about the location, frequency, and phase of a textural pattern (1), even if it’s consonant and thus carries (relatively) less information. That said, at the peak, where you get to fully consonant experiences (2), you do in fact see a loss of information content.
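As a crude toy illustration of that last point (my own proxy, using the spectral entropy of a 1-D signal as a stand-in for the information content of a pattern; none of this comes from the linked posts):

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy (in bits) of the normalized power spectrum of x."""
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / power.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096, endpoint=False)

# A "consonant"/ordered pattern: a few modes in simple integer ratios.
harmonic = sum(np.sin(2 * np.pi * 10 * k * t) / k for k in range(1, 4))
# A disordered pattern: energy spread incoherently across all modes.
noise = rng.standard_normal(t.size)

print(f"harmonic stack:  {spectral_entropy(harmonic):.2f} bits")  # low entropy
print(f"broadband noise: {spectral_entropy(noise):.2f} bits")     # high entropy
```

The only point here is that concentrating energy into a few ordered modes leaves fewer bits in the spectrum than spreading it incoherently, which is the loose sense in which fully consonant states carry less information.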
> But from the perspective of any one neuron, that information is awfully hard to access. It’s not impossible, but I think you’d need the neuron to have a bunch of inputs from across the brain hooked into complicated timing circuits etc.
If we go back to the viscosity analogy, this would be like asking how a single atom can access information about the structure of the liquid as a whole. Furthermore, top-down causality can emerge under the right conditions in a physical system.
> If “suffering” was a particular signal carried by a particular neurotransmitter, for example, we wouldn’t have that problem.
You have the worse problem, though, of showing how qualia can arise from a neurotransmitter, with what textures and with what causal influence beyond the signal itself: if the causality comes from the signal alone, why would qualia arise at all? What purpose would they serve in addition to the purpose of the signal?
> Conversely, I’m confused at how you would tell a story where getting tortured (for example) leads to suffering. This is just the opposite of the previous one: Just as a brain-wide harmonic decomposition can’t have a straightforward and systematic impact on a specific neural signal, likewise a specific neural signal can’t have a straightforward and systematic impact on a brain-wide harmonic decomposition, as far as I can tell.
Brain-wide harmonics exert a top-down influence on individual neurons (e.g. through something like EM field dynamics [3]), and individual neurons collectively create the overall brain-wide harmonics.
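For concreteness, here is a minimal sketch of what a “brain-wide harmonic decomposition” can mean mechanically: projecting an activity pattern over a network onto the eigenmodes of that network’s graph Laplacian (the idea behind connectome harmonics). The toy random graph and variable names below are mine, not taken from [3].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "connectome": a random symmetric weighted adjacency matrix over 50 nodes.
n = 50
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of the network
eigvals, modes = np.linalg.eigh(L)    # its eigenvectors = the network's "harmonics"

activity = rng.standard_normal(n)     # a snapshot of activity across the nodes
coeffs = modes.T @ activity           # the harmonic decomposition of that snapshot

# The decomposition is lossless: the modes form an orthonormal basis, so the
# global (mode-level) description and the per-node description co-determine
# each other rather than one floating free of the other.
assert np.allclose(modes @ coeffs, activity)

# A global summary of the kind STV-style accounts care about:
energy = coeffs ** 2
print("fraction of energy in the 5 smoothest modes:", energy[:5].sum() / energy.sum())
```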
1: https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
2: https://qualiacomputing.com/2021/11/23/the-supreme-state-unconsciousness-classical-enlightenment-from-the-point-of-view-of-valence-structuralism/
3: https://psyarxiv.com/jtng9/
Thanks for your reply.
If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”—questions like “When humans emit self-reports about their own conscious experience, why do they often describe it as having properties A,B,C?” or “When humans move their mouths and say words on the topic of ‘qualia’, why do they often describe it as having properties X,Y,Z?”—then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here, and I’m open to further discussion as time permits.)
BUT, I feel like that’s probably not the game you want to play here. My guess is that, even if I perfectly nail every one of those “3rd-person” questions above, you would still say that I haven’t even begun to engage with the nature of qualia, that I’m missing the forest for the trees, whatever. (I notice that I’m putting words in your mouth; feel free to disagree.)
If I’m correct so far, then this is a more basic disagreement about the nature of consciousness and how to think about it and learn about it etc. You can see my “wristwatch” discussion here for basically where I’m coming from. But I’m not too interested in hashing out that disagreement, sorry. For me, it’s vaguely in the same category as arguing with a theology professor about whether God exists (I’m an atheist): My position is “Y’know, I really truly think I’m right about this, but there’s a gazillion pages of technical literature on this topic, and I’ve read practically none of it, and my experience strongly suggests that we’re not going to make any meaningful progress on this disagreement in the amount of time that I’m willing to spend talking about it.” :-P Sorry!