Unfortunately, the more we write about consciousness, the further we seem to get from confronting the hard problem of consciousness. Has the author read Nagel’s Seeing Like a Bat?
Yep (I read “What is it like to be a bat?”)! Indeed, that started the thought process that led to the above post.
Nagel’s biggest failing, as I see it, is that he makes everything boolean. “A bat’s consciousness is inaccessible to a human” differs in degree, not in kind, from “a human’s consciousness is inaccessible to another human”. There are observations that could convince me that a human does or does not have true insight into a bat’s consciousness.
I’m glad you’ve read it. It occurred to me yesterday that as modern fields (computer science, rationality studies, etc.) begin to deal with consciousness, they’ll reinvent the wheel even though other fields (e.g. philosophy, psychology) have been thinking about this for a while.
Putting bats aside, what could convince you that my subjective experience (I promise you I’m a human) is substantially similar to yours?
Mainly people describing their own subjective experience in ways that make me think “hey, that’s just like me—and I haven’t told anyone about it!” Or me modelling you as having a subjective experience close to my own, using this model to predict your actions, and being reasonably accurate.
But if I said ‘oh, that sunset makes me feel so warm and fuzzy’, yes, I’m using the same words you might use, but how can you know they’re the same subjective experience? You say ‘describing’, but that relies on a set of words; do they point to the same thing in everyone?
I can start asking you about what features “warm and fuzzy” have, what other emotions or words they connect to, what kind of things make you feel warm and fuzzy and what kind of things don’t. If enough of what you’re describing seems similar to what I feel, then I can increase my confidence that we’re feeling the same thing.
But they’re all just words and suffer the same problem. What if an AI says the right words?
What evidence do you have that other people are conscious, apart from words (and facial expressions, etc.)? And would that evidence apply or not apply to an AI?
I’m not solving the hard problem of consciousness; I’m saying that Bayesian evidence exists that some agents have subjective experiences. Compare to an AI that mouths the words but gets them wrong (“fuzzy is like being stabbed by needles”); we at least have evidence that an agent with the right words has a higher chance of having similar subjective experiences.
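To make the “Bayesian evidence” point concrete, here’s a toy update; the prior and both likelihoods below are made-up numbers for the sake of the arithmetic, not claims about any real agent:

```python
# Toy Bayesian update: how much does "getting the words right" shift my
# credence that an agent has a subjective experience similar to mine?
# All three numbers are illustrative assumptions.

prior = 0.5               # P(similar experience) before hearing the description
p_words_if_similar = 0.8  # P(gives the "right words" | similar experience)
p_words_if_not = 0.3      # P(gives the "right words" | no similar experience)

# Bayes' rule: P(similar experience | right words)
evidence = prior * p_words_if_similar + (1 - prior) * p_words_if_not
posterior = prior * p_words_if_similar / evidence

print(f"posterior = {posterior:.2f}")  # ~0.73: the right words are evidence, not proof
```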
I think a “subjective experience” (in the sense that two people can have the same subjective experience, not a particular instantiation of one) is just a particular category in a categorization of possible experiences: experiences are grouped together when they put the person into similar states (under some metric of “similar” that we care about). This recovers the ability to talk about “lies about subjective experiences” within a physicalist worldview.
In this case, we could look at how the AI internally changes in response to various stimuli, and group the stimuli on the basis of similar induced states. If this grouping doesn’t match its claims at all, then we can conclude that it is perhaps lying. (See: cleaving reality at its joints.) EDIT: Were you saying that AI cannot have subjective experience? Then I think this points at the crux; see my statements below about how I don’t see human subjectivity as fundamentally special.
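As a rough sketch of what that check could look like, assuming (hypothetically) that we can read out an internal state vector per stimulus and that the AI labels each stimulus with a claimed feeling; `grouping_matches_claims` and the toy data are placeholders of mine, not any real API:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def grouping_matches_claims(state_vectors, claimed_labels):
    """Cluster stimuli by the internal state they induce, then compare that
    clustering to the agent's own claims about which stimuli feel alike."""
    n_claimed_groups = len(set(claimed_labels))
    induced_groups = KMeans(n_clusters=n_claimed_groups, n_init=10).fit_predict(state_vectors)
    # Adjusted Rand index: near 1.0 if the groupings agree, near 0.0 if unrelated.
    return adjusted_rand_score(claimed_labels, induced_groups)

# Toy usage: six stimuli, four-dimensional "internal states", and the agent's claims.
states = np.array([[0.9, 0.1, 0.0, 0.0],
                   [0.8, 0.2, 0.1, 0.0],
                   [0.0, 0.1, 0.9, 0.8],
                   [0.1, 0.0, 0.8, 0.9],
                   [0.9, 0.0, 0.1, 0.1],
                   [0.0, 0.2, 0.9, 0.9]])
claims = ["warm", "warm", "needles", "needles", "warm", "needles"]
print(grouping_matches_claims(states, claims))  # near 1.0: the claims track the internals
```

A score near 1 means the claimed groupings cleave the induced states at their joints; a score near 0 means the words and the internals come apart, which is the “perhaps lying” case.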
Yes, this means that we can talk about any physical thing having a “subjective experience”. This is not a bug. The special thing about animals is that they have significant variance between different “subjective experiences”, whereas a rock will react very similarly to any stimuli that don’t break or boil it. Humans are different because they have very high meta-subjectivity and the ability to encode their “subjective experiences” into language. However, this still doesn’t match up very well with human intuitions: any sort of database or measurement device can be said to have significant “subjective experiences”. But my goal isn’t to describe human intuitions; it’s to describe the same thing that human intuitions describe. Human subjectivity doesn’t seem to be fundamentally different from that of any other physical system.
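If it helps, that “variance” claim can be phrased numerically. A crude sketch, assuming we summarise a system’s reaction to each stimulus as a state vector; the rock-like and animal-like vectors below are invented for illustration:

```python
import numpy as np

def reaction_spread(states):
    """Mean distance of each induced state from the average induced state."""
    states = np.asarray(states, dtype=float)
    return np.linalg.norm(states - states.mean(axis=0), axis=1).mean()

rock_states = [[1.0, 0.0], [1.0, 0.01], [0.99, 0.0], [1.0, 0.0]]   # barely reacts to anything
animal_states = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5], [0.2, 0.8]]   # distinct reactions per stimulus

print(reaction_spread(rock_states))    # near zero: almost no distinctions between stimuli
print(reaction_spread(animal_states))  # ~0.4: many distinct induced states
```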
A lot of words, but you don’t grapple with the hard problem of consciousness. When I look at the sun, how can you know I feel/see the same thing as you? Yes, I’ll use words (‘yellow’, ‘warm’, ‘bright’, etc.) because we’ve been taught that those label what we are experiencing. But that says nothing about whether my experience is the same as yours.
What do you mean by “you don’t grapple with the hard problem of consciousness”? (Is this just an abstruse way of saying “no, you’re wrong” to set up the following description of how I’m wrong? In that case, I’m not sure you have a leg to stand on when you say that I use “a lot of words”.) Edit: to be a bit more charitable, maybe it means “my model has elements that my model of your model doesn’t model”.
How can you know I see the same thing that you do? That depends on what you mean by “same”. To me, to talk about whether things are the same, we need to specify what characteristics we care about, or what category system we’re using. I know what it means for two animals to be of the same species, and what it means for two people to have the same parent. But for any two things to be the same, period, doesn’t really mean anything on its own. (You could argue that everything is the same as itself, but that’s a trivial case.)
This might seem like I’m saying that there isn’t any fundamental truth, only many ways of splitting the world up into categories. Not exactly. I don’t think there’s any fundamental truth to categories. There might be fundamental monads, or something like that, but human subjective experiences are definitely not fundamental. (And what truths can even be said of a stateless monad when considered on its own?)
I think things are not so bad. If our talk of consciousness leads to a satisfactory functional theory, we might conclude that we have solved the hard problem (at least the “how” part). Not everyone will be satisfied, but it will be hard to argue that we should care about the hard problem of consciousness more than we currently care about the hard problem of gravity.
I haven’t read Nagel’s paper, but from what I have read _about_ it, his main point seems to be that it’s impossible to fully explain subjective experience by just talking about physical processes in the brain. It seems to me that we do get closer to such an explanation by thinking about analogies between conscious minds and AIs. Whether we’ll be able to get all the way there is hard to predict, but it seems plausible that at some point our theories of consciousness will be “good enough”.