Wait. You’re using “sentience” to mean “reacting and planning”, which in my understanding is NOT the same thing, and is exactly why you made the original comment: they’re not the same thing, or we’d just say “planning” instead of failing endlessly to define qualia and consciousness.
I think our main disagreement is early in your comment:
“what we are interested in is precisely subjective sensation, and the fact that there is any at all”
And then you go on to talk about objective sensations and imagined sensations, and planning to seek/avoid sensations. There may or may not be a subjective experience behind any of that, depending on how the experiencer is configured.
No, I do not mean that sentience is identical with “reacting and planning”. I am saying that in biological organisms, it is a prerequisite for some kinds of reacting and planning, namely the kinds rationalists tend to be most interested in. The idea is that phenomenal consciousness works as an input for reasoning; it distils insights from unconscious processing into a format suitable for slow analysis.
I’m not sure what you mean by “objective sensations”.
I suspect that at the core, our disagreement starts with the fact that I do not see sentience as something that happens extraneously on top of functional processes, but rather as something identical with some functional processes, where the processes that subjects experience and report as conscious share tangible characteristics. This is supported primarily by the fact that consciousness can be quite selectively disrupted while leaving unconscious processing intact, and that this disruption correlates with a distinct loss in rational functioning: fast, automatic reactions to stimuli still work fine, even though the subjects tell you they cannot see the stimuli, but a rational, planned, counter-intuitive response does not, because your rational mind no longer has access to the necessary information.
The fact that sentience is subjectively experienced with valence, and hence entails suffering, is of incredible ethical importance. But the idea that this experience can be divorced from function, that you could have a perfectly functioning brain doing exactly what your brain does while consciousness never arises, or is extinguished without any behavioural consequence (epiphenomenalism, zombies), runs into logical self-contradictions and is without empirical support. Consciousness itself enables you to do things you cannot do without it. (Or at least, a brain running under biological constraints cannot; AI might currently be brute-forcing alternative solutions that are far too inefficient to be sustainable for a biological entity that gains its energy from food alone.)
I think I’ll bow out for now; I’m not certain I understand precisely where we disagree, but it seems to be related to whether “phenomenal consciousness works as an input for reasoning” is a valid statement without being able to detect or operationally define “consciousness”. I find it equally plausible that “phenomenological consciousness is a side-effect of some kinds of reasoning in some percentage of cognitive architectures”.
It is totally okay for you to bow out and not respond further. I will leave this here in case you ever want to look into it more, or for others, because the position you describe as equally plausible is a commonly held one, but one that runs into a logical contradiction that should be better known.
If brains just produce consciousness as a side-effect of how they work (so we have an internally complete functional process that does the reasoning, but as it runs, it happens to produce consciousness, without the consciousness itself entailing any functional changes), hence without that side-effect having any impact on physical processes in the brain, then how and why the heck are we talking about consciousness? After all, speaking or writing about p-consciousness is undoubtedly a physical act controlled by our brains. These acts are not illusions; they are observable and reproducible phenomena. Humans talk about consciousness; they have done so spontaneously over the millennia, over and over. But how would our brains have knowledge of consciousness? Humans claim direct knowledge of and access to consciousness, a lot. They reflect on it, speak about it, write about it, share incredibly detailed memories of it, report the ongoing formation of more, and alter careers to pursue it.
At that point, you have to accept either interactionist dualism (that is, consciousness is magic, but magic affects physical reality, which runs counter to essentially our entire scientific understanding of the physical universe), or consciousness as a functional physical process affecting other physical processes. That is where the option “p-consciousness as input for reasoning” comes from: enabling us to talk about it is not the only thing that consciousness enables. It enables us to reason about our experiences.
I think I have a similar view to Dagon’s, so let me pop in and hopefully help explain it.
I believe that when you refer to “consciousness” you are equating it with what philosophers would usually call the neural correlates of consciousness. Consciousness as used by (most) philosophers (or, more importantly in my opinion, laypeople) refers specifically to the subjective experience, the “blueness of blue”, and is inherently metaphysically queer, in this respect similar to objective, human-independent morality (realism) or a non-compatibilist conception of free will. And, like those, it does not exist in the real world; people are just mistaken, for various reasons. Unfortunately, unlike those, it is seemingly impossible to fully deconfuse oneself from believing that consciousness exists: a quirk of our hardware is that it comes with the axiom that consciousness is real, probably because of the advantages you mention; it made reasoning and communicating about one’s own state easier. (Note: it is merely the false belief that consciousness exists which is hardcoded, not consciousness itself.)
Hopefully the answers to your questions are clear under this framework: we talk about consciousness because we believe in it; we believe in it because it was useful to believe in it, even though the belief is false; humans have no direct knowledge about consciousness, since knowledge requires the belief to be true, so they merely have a belief; consciousness IS magic by definition; unfortunately, magic (probably) does not exist.
After reading this, you might dispute the usefulness of this definition of consciousness, and I don’t have much to offer there. I simply dislike redefining things away from their original meanings just so we can assert statements we are happier with (as compatibilist, meta-ethical expressivist, naturalist, etc. philosophers do).
I am equating consciousness with its neural correlates, but this is not a result of sloppy terminology; it is a conscious choice to subscribe to identity theory and physicalism, rather than to dualism and a magical view of consciousness, which run into interactionist dilemmas.
Our traditional definitions of consciousness in philosophy do indeed sound magical. But I think this reflects the fact that our understanding of consciousness, while much improved, is still crucially incomplete and lacking in clarity; the improvements I have seen that finally make sense of this have come from philosophically informed empirical neuroscience and mathematical theory. And I think that once we have understood this phenomenon properly, it will still seem remarkable and amazing, yet no longer mysterious; rather, it will be a precise and concrete thing we can identify and build.
How and why do you think a brain would obtain a false belief in the existence of consciousness, enabling us to speak about it, if consciousness has no reality and brains have no direct access to it (while also falsely believing that they do)? Where do the neural signals about it come from, then? Why would a belief in consciousness be useful if consciousness has no reality, affects nothing in reality, and is hence utterly irrelevant, making it about as meaningful and useful to believe in as ghosts? I’ve seen attempts to counter this self-stultification through elaborate constructs, and while such constructs can be made, none have yet struck me as remotely plausible under Ockham’s razor, let alone plausible on a neurological level or backed by evolutionary observations. Animals have shown zero difficulty communicating about their internal states, a desire to mate, a threat to attack, without having to invoke a magic spirit residing inside them.
I agree that consciousness is a remarkable and baffling phenomenon. Trying to fit it into my understanding of physical reality gives me genuine, literal headaches whenever I begin to feel that I am finally getting close. It feels easier for me to retreat and say “ah, it will always be mysterious, and ineffable, and beyond our understanding, and beyond our physical laws”. But this explains nothing; it won’t enable us to figure out uploading, or diagnose consciousness in animals that need protection, or determine whether an AI is sentient, or cure disruptions of consciousness and psychiatric disease at the root, all of which are things I really, really want us to do. Calling it mysterious magic just absolves me from trying to understand a thing that I really want to understand, and that we need to understand.
I see the fact that I currently cannot piece together how my subjective experience fits into physical reality as an indication that my brain evolved for goals like “trick the other monkey out of two bananas”, not “understand the nature of my own cognition”. My conclusion from that is to team up with lots of others, improve our brains, and hit ourselves with more data and math and metaphors and images and sketches and observations and experiments until it clicks. So far, I am pleasantly surprised that clicks are happening at all, and that I no longer feel the empirical research is irrelevant to the thing I am interested in; instead, I see it as actually helping to make things clearer, and leaving us with concrete questions and approaches.

Speaking of the blueness of blue: I find this sort of thing https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you?commentId=5Z8BEFPgzJnMF3Dgr#5Z8BEFPgzJnMF3Dgr far more helpful than endless rhapsodies on the ineffable nature of qualia, which never left me wiser than I was at the start, and which seemed aimed only at convincing me that none of us ever could be. Yet apparently, the relations of a quale to other qualia are beautifully clear to spell out, and pinpointing them leads to a bunch of clearly defined questions that simultaneously make tangible progress in ruling out inverted-qualia scenarios. I love stuff like this. I look at the specific asymmetric relations of blue with all the other colours, and the way this pattern is encoded in the brain, and I increasingly think: we are narrowing down the blueness of blue. Not something that causes the blueness of blue, but the blueness of blue itself, characterised by its difference from yellow and red, its proximity to green and purple, its proximity to black; a mutually referencing network in which each individual position is ineffable in isolation, but clear as day as part of the whole.

After a long time of feeling that all this progress in neuroscience had taught us nothing about what really mattered to me, I am increasingly seeing things like this that allow an outline to appear in the dark, a sense that we are getting closer to something, and I want to grab it and drag it into the light.
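To make that last point concrete, here is a minimal toy sketch of why an asymmetric similarity structure blocks inverted-qualia mappings. This is my own construction, not taken from the linked post, and the similarity numbers are invented purely for illustration; the point is only that if each colour’s pattern of similarities to the others is unique, then no relabelling of the colours other than the identity preserves the structure.

```python
# Toy illustration of the "asymmetric relations rule out inverted qualia" idea.
# The similarity numbers are invented; only the asymmetry of the overall
# pattern matters for the argument.
from itertools import permutations

colours = ["red", "yellow", "green", "blue", "purple", "black"]

# Hypothetical pairwise similarity judgments (0 = very different, 1 = identical).
sim = {
    ("red", "yellow"): 0.6, ("red", "green"): 0.2, ("red", "blue"): 0.1,
    ("red", "purple"): 0.5, ("red", "black"): 0.2,
    ("yellow", "green"): 0.5, ("yellow", "blue"): 0.1,
    ("yellow", "purple"): 0.2, ("yellow", "black"): 0.1,
    ("green", "blue"): 0.6, ("green", "purple"): 0.3, ("green", "black"): 0.3,
    ("blue", "purple"): 0.7, ("blue", "black"): 0.5,
    ("purple", "black"): 0.4,
}

def s(a: str, b: str) -> float:
    """Order-independent similarity lookup."""
    if a == b:
        return 1.0
    return sim[(a, b)] if (a, b) in sim else sim[(b, a)]

# Try every possible relabelling of the colours (every candidate "inverted
# spectrum") and keep only those that preserve all pairwise similarities.
preserving = []
for perm in permutations(colours):
    relabel = dict(zip(colours, perm))
    if all(s(a, b) == s(relabel[a], relabel[b])
           for a in colours for b in colours):
        preserving.append(relabel)

# With an asymmetric similarity pattern, only the identity relabelling
# survives: each colour is pinned to a unique position in the network.
print(len(preserving))        # 1
print(preserving[0]["blue"])  # blue
```

On a perfectly symmetric, evenly spaced colour wheel, many relabellings would survive this test; it is precisely the asymmetries (blue sitting closer to black than yellow does, and so on) that make each position in the network unique.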