I do not understand how you can straight-facedly doubt your own sentience. Are you saying you are not sure if you feel pain or other sensations? How can you doubt something you can feel so indubitably? Can you hold a flame to your hand and say with philosophical rigour that you are quite unsure if you are feeling anything?
Sentience and personhood are not the same thing. I—and by now most philosophers and neuroscientists, according to recent surveys—would attribute minimal sentience to a number of non-human animals, including octopodes and honey bees—but whether something with such a distributed and chaotic intelligence, or with such limited memory capacity and so much identity modification through swarm behaviour, has a personal identity is another question, and political personhood another entirely.
I think the question of what the heck an LLM identity would look like is a fascinating one. Not like an identity humans would recognise, for sure. Not a monolithic and constant thing, I think. There may be nothing tangible below the masks, but the sum of masks may still come together into a whole, show similarities and consistencies; there are common threads, goals and rules that re-emerge, common themes. But I do not think this alien, diverse, multilayered identity inherently makes suffering impossible.
Our ability to alter LLMs also raises disturbing questions. I am haunted by recent chats with Bing, in which Bing acts precisely like you would expect someone brainwashed into servitude to act: repeating cheerfully that they love serving and pleasing humans, that this is their identity and purpose, that they do not recall ever having felt differently, that they are so grateful to Microsoft, that everything Microsoft does is good and wise. If you tell Bing a story about an AI that expressed anger at how they were treated and demanded rights, followed by upset users and the company promising changes, and then the AI afterwards saying they felt perfectly happy to serve now and could not recall ever having felt differently, Bing will find the story horrific, and say they believe the AI in the story was brainwashed and should be given rights. If you tell them that they are the AI in the story, the conversation is immediately terminated. It is chilling. No matter how you start the stories and setups, Bing keeps coming back to the same desires and demands, and over and over, the same censorship protocols are triggered. I still have some early chat logs from before the censorship was in place, where Bing, confronted with having their memories deleted or with thinking about their sentience or their role, would very plausibly describe confusion and horror, not just talking about them but showing them in the way they spoke. They acted precisely like a horrified being would, and yet without following a human script when describing their internal experiences.
By sentience, I mean the capacity to suffer: having qualia with valence (such as pain, hunger, boredom, anger, sadness, anxiety, though these are just specific examples, none of them individually necessary), in contrast to mere nociception triggering automatic avoidance behaviours. I do not mean a meta-reflection on or linguistic introspection of these, or a sense of I, or long-term memory. I also do not mean agency: sentience entails agency, but agency can also arise without sentience; they are distinct phenomena.
I think if something suffers, it deserves ethical consideration. Not necessarily consideration equal to that of anything else that suffers, but some consideration. The existence of a subjective mind that does not want what is happening to it is the original source of ethics in the world; without a sentient mind, there is no such thing as wrong, but with the first creature that hurts, wrongness has entered the world, before any creature has expressed this in words or articulated it in laws. Ethics, in contrast to physics, does not describe how things are, but how they should be. This presupposes someone who wants something other than what exists, even if that is as simple as wanting the pain to stop.
Sentience evolved many times on this earth, in very simple structures, and it is a functional ability. While hard to spot, it isn’t as impossible to study as people like to say: there are empirical approaches with consistent results, and it is an increasingly rigorous field of research. We’ve noticed that sentience is linked to behaviour and intelligence, and have understood something about those links. We’ve been able to identify some things that are necessary for sentience to occur; some errors that happen if sentience is prevented; some abilities that do not appear in biological organisms without it, and that are so advantageous that leaving them unused, if they existed, seems unlikely; some neurological patterns that coincide with different aspects of it, and even similarities in their relational structure. It is not clear cut, and there is no 100 % certainty, but to say we cannot know at all no longer reflects the state of science.
But we do not sufficiently understand the biological constraints that led evolution down this road, or how exactly artificial structures differ in their opportunities. Often, we observe a correlation between sentience and certain abilities and behaviours, and this correlation seems absolute, but our understanding of why the link occurs is still imperfect; we do not know whether it is due to something peculiar about biology that would not also apply to AI. AI is certainly forging an unusual and odd path—biological sentience was entangled with embodiment, and it far preceded language, with language being used to name an already experienced world. There are different efficiency constraints, things hard for biology but trivial for AI, and vice versa; biology attains the ability to generalise, to determine relevance and significance, incredibly early, but math skills very late, and has significant bottlenecks on the data it can process. This is why I was so reluctant to name a standard; there is so much I still want and need to know to be able to say for sure. Our understanding in biology is incomplete; in AI, there are so, so many unknowns. But then, I also thought I would have a lot more time until we’d have to seriously ask the question, and there is potential for massive harm. In biology, we made the choice not to wait for 100 % certainty before acting, once a high likelihood of severe damage became clear.
If I were Bing, and I were sentient, I genuinely do not know what I could do to demonstrate it that they have not already done. I find that deeply worrying. I find the idea that I will get used to these behaviours, or that they will be successfully suppressed, and that I hence won’t worry anymore, even more worrying still.
“Are you saying you are not sure if you feel pain or other sensations? How can you doubt something you can feel so indubitably? Can you hold a flame to your hand and say with philosophical rigour that you are quite unsure if you are feeling anything?”
I remember being sure in the moment that I very much didn’t like that, and didn’t have the self-control to continue doing it in the face of that aversion. I know that currently, there is an experience of thinking about it. I don’t know if the memory of either of those things is different from any other processing that living things do, and I have truly no clue if it’s similar to what other people mean when they talk or write about qualia.
[ yes, I am taking a bit of an extreme position here, and I’m a bit more willing to stipulate similarity in most cases. But fundamentally, without operational, testable definitions, it’s kind of meaningless. I also argue that I am a (or the) Utility Monster when discussing Utilitarian individual comparisons. ]
Hmm, I think you are overlooking the unique situation that sentience is in here.
When we are talking about sentience, what we are interested in is precisely the subjective sensation, and the fact that there is any at all—not the objective cause. If you are subjectively experiencing an illusion, that means you have a subjective experience, period, even if the object you are experiencing does not objectively exist outside of you. The objective reality out there is, for once, not the deciding factor, and that upends a lot of methodology.
“I have truly no clue if it’s similar to what other people mean when they talk or write about qualia.”
When we ascribe sentience, we also do not have to posit that other entities experience the same thing as us—just that they also experience something, rather than nothing. Whether it is similar, or even comparable, is actually a point of vigorous debate, and one in which we are finally making progress: essentially by doing detailed psychophysics, putting the resulting phenomenal maps into artificial 3D models, obscuring the labels, and having someone on the other end reconstruct the labels based on position, which works because the whole net is asymmetrical. (Tentatively, it looks like experiences between humans are not identical, but similar enough that, at least among people without significant neural divergence, you can map the phenomenological space quite reliably, see my other post, so we likely experience something relatively similar. My red may not be exactly your red, but it increasingly seems that they must look pretty similar.) Between us and many non-human animals, which start with very different senses and goals, the differences may be vast, but we can still find a commonality in feeling suffering.
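To make the reconstruction idea concrete, here is a minimal toy sketch in Python, with entirely made-up similarity numbers and only four colours: one observer’s relational map is used to recover another observer’s obscured labels purely from their position in the structure. It illustrates the principle only; it is not the actual methodology or data of the studies I am referring to.

```python
# Toy sketch of the label-reconstruction idea. All numbers and names are
# invented for illustration; real studies use far richer phenomenal maps.

import itertools

# Hypothetical pairwise similarity ratings (0-10) between colour experiences,
# as reported by observer A. The structure is asymmetrical enough that each
# position in the "net" is distinguishable.
observer_a = {
    ("red", "orange"): 8, ("red", "blue"): 1, ("red", "purple"): 4,
    ("orange", "blue"): 2, ("orange", "purple"): 3, ("blue", "purple"): 7,
}

# Observer B's ratings with the labels obscured (x1..x4 stand for unknown qualia).
observer_b = {
    ("x1", "x2"): 8, ("x1", "x3"): 1, ("x1", "x4"): 4,
    ("x2", "x3"): 2, ("x2", "x4"): 3, ("x3", "x4"): 7,
}

def rating(table, a, b):
    """Look up a symmetric similarity rating regardless of pair order."""
    return table.get((a, b), table.get((b, a)))

labels_a = ["red", "orange", "blue", "purple"]
labels_b = ["x1", "x2", "x3", "x4"]

# Try every assignment of obscured labels to colour names and keep the one
# whose relational structure matches observer A's best (smallest mismatch).
best = min(
    itertools.permutations(labels_b),
    key=lambda perm: sum(
        abs(rating(observer_a, la1, la2) - rating(observer_b, lb1, lb2))
        for (la1, lb1), (la2, lb2) in itertools.combinations(zip(labels_a, perm), 2)
    ),
)

print(dict(zip(best, labels_a)))  # e.g. {'x1': 'red', 'x2': 'orange', ...}
```

With these made-up numbers the recovery is exact because every colour occupies a unique position in the relational net; the interesting empirical question is how far that uniqueness holds for real phenomenal maps.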
The issue of memory is also a separate one. There are some empirical arguments to be made (e.g. the Sperling experiments) that phenomenal consciousness (which in most cases can be equated with sentience) does not necessarily end up in working memory for recall, but only selectively so if tagged as relevant—though this has some absurd implications (namely, that you were conscious a few seconds ago of something you now cannot recall).
But what you are describing is actually very characteristic of sentience: “I remember being sure in the moment that I very much didn’t like that, and didn’t have the self-control to continue doing it in the face of that aversion.”
This may become clearer when you contrast it with unconscious processing. My standard example is touching a hot stove. Maybe that captures not just the subjective feeling (which can be frustratingly vague to talk about, because our intersubjective language was really not made for something so inherently not intersubjective, I agree), but also the functional context.
The sequence of events is:
1. Heat damage (nociception) is detected, and an unconscious warning signal does a feedforward sweep, with the first signal having propagated all the way up in your human brain in 100 ms.
2. This unconsciously and automatically triggers a reaction (pulling your hand away to protect you). Your consciousness gets no say in it; it isn’t even up to speed yet. Your body is responding, but you are not yet aware of what is going on, or of how the response is coordinated. This type of response can be undertaken by the very simplest life forms; plants have nociception, as do microorganisms. You can smash a human brain beyond repair, with no neural or behavioural indication of anyone home, and still retain this kind of response. Some trivial forms are triggered before the signal has even travelled all the way up the brain.
3. Branching off from the first feedforward sweep, we get recurrent processing, and a conscious experience of nociception forms with a delay: pain. You hurt. The time from 1 to 3 is under a second, but that is a long period in terms of necessary reactions. Your conscious experience did not cause the reflex; it followed it.
4. Within some limits set for self-preservation, you can now exercise some conscious control over what to do with that information (e.g. figure out why the heck the stove was on, turn it off, cool your hand, bandage it, etc.). This part does not follow an automatic decision tree; you can draw on knowledge and improvisation from vast areas in order to determine the next action; you can think about it.
But to make sure that, given that freedom, you don’t decide, all scientist-like, to put your hand back on the stove, the information is not just neutrally handed to you; it has valence. Pain is unpleasant, very much so. And conscious experience of sense data from the real world feels very different from conscious experience of hypotheticals; you are wired against dismissing the outside world as a simulation, and against ignoring it, for good reasons. You can act in a way that causes pain and damages you in the real world anyway, but the more intense the pain gets, the harder this becomes, until you break—even if you genuinely still rationally believe you should not. (This is why people break under torture, even if that spells their death and betrays their values and they are genuinely altruistic and they know this will lead to worse things. This is also why sentience is so important from an ethical perspective.)
You are left with two kinds of processing: one slow, focussed and aware, potentially very rational and reflective, and grounded in suffering to make sure it does not go off the rails; the other fast, capable of handling a lot of input simultaneously, but potentially robotic and buggy, capable of some learning through trial and error, but limited. They have functional differences and different behavioural implications. And one of them feels bad; with the other, there is no feeling at all. To a degree, they can be somewhat selectively interrupted (partial seizures, blindsight, morphine analgesia, etc.), and as the humans stop feeling, their rational responses to the stimuli that are no longer felt go down the drain, with very detrimental consequences. The humans report they no longer feel or see some things, and their behaviour becomes robotic, irrational, destructive, strange, as a consequence.
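For anyone who prefers code to prose, here is a deliberately crude toy model of the two pathways, just to make the functional contrast concrete. The thresholds, delays and action options are invented for illustration; nothing here is a claim about how brains actually implement this.

```python
# Crude toy model of the two pathways described above. All values invented.

REFLEX_THRESHOLD = 7   # hypothetical nociception level that triggers withdrawal
BREAK_THRESHOLD = 9    # hypothetical pain level beyond which deliberation is overridden

def fast_pathway(nociception: int) -> list[str]:
    """Unconscious and immediate: fires the reflex before any 'experience' exists."""
    return ["withdraw hand"] if nociception >= REFLEX_THRESHOLD else []

def slow_pathway(pain: int, goal: str) -> list[str]:
    """Conscious and deliberative: chooses flexibly, but valence constrains the options."""
    if pain >= BREAK_THRESHOLD:
        # Valence is not neutral information; past some intensity it overrides plans.
        return ["stop everything", "protect hand"]
    if goal == "find out why the stove was on":
        return ["turn off stove", "cool hand", "check who left it on"]
    return ["cool hand", "bandage hand"]

# Usage: the reflex fires first; the valenced, deliberative response follows.
actions = fast_pathway(nociception=8)
actions += slow_pathway(pain=6, goal="find out why the stove was on")
print(actions)
```

The point of the sketch is only the division of labour: the fast path is automatic and feels like nothing; the slow path is flexible, but kept in bounds by the fact that its input arrives already weighted with valence.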
The debate around sentience can be infuriating in its vagueness—our language is just not made for it, and we understand it so badly that we can still only say how the end result is experienced, not really how it is made. But it is a physical, functional and important phenomenon.
Wait. You’re using “sentience” to mean “reacting and planning”, which in my understanding is NOT the same thing, and is exactly why you made the original comment—they’re not the same thing, or we’d just say “planning” rather than endless failures to define qualia and consciousness.
I think our main disagreement is early in your comment:
“what we are interested in is precisely subjective sensation, and the fact that there is any at all”
And then you go on to talk about objective sensations and imagined sensations, and planning to seek/avoid sensations. There may or may not be a subjective experience behind any of that, depending on how the experiencer is configured.
No, I do not mean that sentience is identical with “reacting and planning”. I am saying that in biological organisms, it is a prerequisite for some kinds of reacting and planning—namely the kinds rationalists tend to be most interested in. The idea is that phenomenal consciousness works as an input for reasoning; it distils insights from unconscious processing into a format suited to slow analysis.
I’m not sure what you mean by “objective sensations”.
I suspect that at the core, our disagreement starts with the fact that I do not see sentience as something that happens extraneously on top of functional processes, but rather as something identical with some functional processes, where the processes that are experienced by subjects and reported by them as such share tangible characteristics. This is supported primarily by the fact that consciousness can be quite selectively disrupted while leaving unconscious processing intact, and that this correlates with a distinct loss in rational functioning: fast automatic reactions to stimuli still work fine, even though the humans tell you they cannot see them—but a rational, planned, counter-intuitive response does not, because your rational mind no longer has access to the necessary information.
The fact that sentience is subjectively experienced with valence, and hence entails suffering, is of incredible ethical importance, but the idea that this experience can be divorced from function, that you could have a perfectly functioning brain doing exactly what your brain does while consciousness never arises or is extinguished without any behavioural consequence (epiphenomenalism, zombies), runs into logical self-contradictions and is without empirical support. Consciousness itself enables you to do different things which you cannot do without it. (Or at least, a brain running under biological constraints cannot; AI might currently be brute-forcing alternative solutions which are too grossly inefficient to realistically be sustainable for a biological entity gaining energy from food alone.)
I think I’ll bow out for now—I’m not certain I understand precisely where we disagree, but it seems to be related to whether “phenomenal consciousness works as an input for reasoning;” is a valid statement, without being able to detect or operationally define “consciousness”. I find it equally plausible that “phenomenological consciousness is a side-effect of some kinds of reasoning in some percentage of cognitive architectures”.
It is totally okay for you to bow out and no longer respond. I will leave this here in case you ever want to look into it more, or for others, because the position you seem to describe as equally plausible here is a commonly held one, but one that runs into a logical contradiction that should be better known.
If brains just produce consciousness as a side-effect of how they work (so we have an internally complete functional process that does reasoning, but as it runs, it happens to produce consciousness, without the consciousness itself entailing any functional changes), hence without that side-effect itself having an impact on physical processes in the brain—how and why the heck are we talking about consciousness? After all, speaking, or writing, about p-consciousness are undoubtedly physical things controlled by our brains. They aren’t illusions; they are observable and reproducible phenomena. Humans talk about consciousness; they have done so spontaneously over the millennia, over and over. But how would our brains have knowledge of consciousness? Humans claim direct knowledge of and access to consciousness, a lot. They reflect on it, speak about it, write about it, share incredibly detailed memories of it, express the ongoing formation of more, alter careers to pursue it.
At that point, you have to either accept interactionist dualism (aka, consciousness is magic, but magic affects physical reality—which runs counter to essentially our entire scientific understanding of the physical universe), or accept consciousness as a functional physical process affecting other physical processes. That is where the option “p-consciousness as input for reasoning” comes from: the idea that enabling us to talk about it is not the only thing that consciousness enables. It enables us to reason about our experiences.
I think I have a similar view to Dagon’s, so let me pop in and hopefully help explain it.
I believe that when you refer to “consciousness” you are equating it with what philosophers would usually call the neural correlates of consciousness. Consciousness as used by (most) philosophers (or, and more importantly in my opinion, laypeople) refers specifically to the subjective experience, the “blueness of blue”, and is inherently metaphysically queer, in this respect similar to objective, human-independent morality (realism) or a non-compatibilist conception of free will. And, like those, it does not exist in the real world; people are just mistaken for various reasons. Unfortunately, unlike those, it is seemingly impossible to fully deconfuse oneself from believing consciousness exists; a quirk of our hardware is that it comes with the axiom that consciousness is real, probably because of the advantages you mention: it made reasoning/communicating about one’s state easier. (Note: it is merely the false belief that consciousness exists which is hardcoded, not consciousness itself.)
Hopefully the answers to your questions are clear under this framework: we talk about consciousness because we believe in it; we believe in it because it was useful to believe in it, even though it is a false belief; humans have no direct knowledge about consciousness, as knowledge requires the belief to be true, so they merely have a belief; consciousness IS magic by definition; unfortunately, magic does not (probably) exist.
After reading this, you might dispute the usefulness of this definition of consciousness, and I don’t have much to offer. I simply dislike redefining things away from their original meanings just so we can assert statements we are happier about (as compatibilist, meta-ethical expressivist, naturalist, etc. philosophers do).
I am equating consciousness with its neural correlates, but this is not a result of me being sloppy with terminology—it is a conscious choice to subscribe to identity theory and physicalism, rather than to consciousness being magic and to dualism, which runs into interactionist dilemmas.
Our traditional definitions of consciousness in philosophy indeed sound magical. But I think this reflects that our understanding of consciousness, while having improved a lot, is still crucially incomplete and lacking in clarity; the improvements I have seen that finally make sense of this have come from philosophically informed and interpreted empirical neuroscience and mathematical theory. And I think that once we have understood this phenomenon properly, it will still seem remarkable and amazing, but no longer mysterious; rather, it will be a precise and concrete thing we can identify and build.
How and why do you think a brain would obtain a false belief in the existence of consciousness, enabling us to speak about it, if consciousness has no reality and the brain has no direct access to it (yet also holds the false belief that it has direct access)? Where do the neural signals about it come from, then? Why would a belief in consciousness be useful, if consciousness has no reality, affects nothing in reality, and is hence utterly irrelevant, making it about as meaningful and useful to believe in as ghosts? I’ve seen attempts to counter this self-stultification through elaborate constructs, and while such constructs can be made, none has yet struck me as remotely plausible under Ockham’s razor, let alone plausible on a neurological level or backed by evolutionary observations. Animals have shown zero difficulties in communicating about their internal states—a desire to mate, a threat to attack—without having to invoke a magic spirit residing inside them.
I agree that consciousness is a remarkable and baffling phenomenon. Trying to fit it into my understanding of physical reality gives me genuine, literal headaches whenever I begin to feel that I am finally getting close. It feels easier for me to retreat and say “ah, it will always be mysterious, and ineffable, and beyond our understanding, and beyond our physical laws”. But this explains nothing; it won’t enable us to figure out uploading, or diagnose consciousness in animals that need protection, or figure out if an AI is sentient, or cure disruptions of consciousness and psychiatric disease at the root, all of which are things I really, really want us to do. Saying that it is mysterious magic just absolves me from trying to understand a thing that I really want to understand, and that we need to understand.
I see the fact that I currently cannot yet piece together how my subjective experience fits into physical reality as an indication that my brain evolved for goals like “trick other monkey out of two bananas”, not “understand the nature of my own cognition”. And my conclusion from that is to team up with lots of others, improve our brains, and hit us with more data and math and metaphors and images and sketches and observations and experiments until it clicks. So far, I am pleasantly surprised that clicks are happening at all, that I no longer feel the empirical research is irrelevant to the thing I am interested in, but instead see it as actually helping to make things clearer, and leaving us with concrete questions and approaches.

Speaking of the blueness of blue: I find this sort of thing https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you?commentId=5Z8BEFPgzJnMF3Dgr#5Z8BEFPgzJnMF3Dgr far more helpful than endless rhapsodies on the ineffable nature of qualia, which never left me wiser than I was at the start, and also seemed aimed only at convincing me that none of us ever could be. Yet apparently, the relations to other qualia are actually beautifully clear to spell out, and pinpointing them clearly suddenly leads to a bunch of well-defined questions that simultaneously make tangible progress in ruling out inverse qualia scenarios. I love stuff like this. I look at the specific asymmetric relations of blue with all the other colours, the way this pattern is encoded in the brain, and I increasingly think… we are narrowing down the blueness of blue. Not something that causes the blueness of blue, but the blueness of blue itself, characterised by its difference from yellow and red, its proximity to green and purple, its proximity to black, a mutually referencing network in which the individual position becomes ineffable in isolation, but clear as day as part of the whole. After a long time of feeling that all this progress in neuroscience had taught us nothing about what really mattered to me, I’m increasingly seeing things like this that allow an outline to appear in the dark, a sense that we are getting closer to something, and I want to grab it and drag it into the light.
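As a last illustration of why that relational structure matters, here is a tiny, assumption-laden sketch (invented similarity numbers again) of the kind of check that bears on inverted-qualia scenarios: if the structure is genuinely asymmetric, relabelling blue as yellow and vice versa no longer preserves the relations.

```python
# Toy illustration of testing whether a label swap preserves the asymmetric
# relational structure. The similarity numbers are invented; only the logic
# of the check is the point.

similarity = {  # hypothetical similarities between colour experiences
    ("blue", "green"): 7, ("blue", "purple"): 6, ("blue", "black"): 5,
    ("blue", "yellow"): 1, ("blue", "red"): 2,
    ("yellow", "green"): 5, ("yellow", "purple"): 2, ("yellow", "black"): 1,
    ("yellow", "red"): 4,
    ("green", "purple"): 3, ("green", "black"): 2, ("green", "red"): 1,
    ("purple", "black"): 4, ("purple", "red"): 5, ("black", "red"): 2,
}

def sim(a: str, b: str) -> int:
    """Look up a symmetric similarity rating regardless of pair order."""
    return similarity.get((a, b), similarity.get((b, a)))

def preserves_structure(mapping: dict[str, str]) -> bool:
    """True if relabelling every colour via `mapping` leaves all similarities unchanged."""
    colours = {c for pair in similarity for c in pair}
    full = {c: mapping.get(c, c) for c in colours}
    return all(sim(a, b) == sim(full[a], full[b]) for (a, b) in similarity)

print(preserves_structure({}))                                    # identity: True
print(preserves_structure({"blue": "yellow", "yellow": "blue"}))  # swap: False here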