These and other definitions all seem to mostly shuffle the mysteriousness of consciousness between the words “subjective”, “experience” and “qualia”, and if you try to find a philosophical definition of those you’ll end up going in a sepulki loop pretty quickly.
Qualia are sufficiently well defined to enable us to tell that you have not solved the hard problem. Qualia are sensory qualities. “Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky.”
Qualia are not self-reflexive awareness, or awareness-of-awareness, the thing that you have offered to explain in order to solve the hard problem.
I think we can have it both ways on qualia. Compare these two posts of mine:
Form and Feedback in Phenomenology
Introduction to Noematology
Similar problem I’m afraid … I just don’t see how the things you are talking about relate to qualia.
I guess I don’t see how it doesn’t, since it’s a reification of experience, which is what qualia are.
I’ve never seen “qualia” defined that way.
I think this is basically what you’re saying, though: you’re talking about qualities of experience/sensations, but to speak of qualities is to categorize things rather than just experience them, which means some part of experience is being made into a thing.
I don’t think you can explain qualities just by explaining categorisation. If that’s what you are saying.
Whether or not they are explained, the important thing I’m pointing at is that they are of the same type, hence whether or not you find an explanation satisfying, it is aiming to explain the same kind of thing.
I can categorise stones, and I can categorise flowers, but that doesn’t mean stones are flowers. In general, the ability to categorise things of type X doesn’t exhaustively describe them, because they also have intrinsic properties. There is a theory of qualia according to which they don’t have any interesting properties other than being different from each other, the GENSYM theory, but I don’t know whether you’d be endorsing it.
For one, the part about qualia talks specifically about sensory inputs. So I’m not sure how what you’re saying is an objection, can you clarify? Also, this part leans really heavily on “Surfing Uncertainty”, not sure if you’ve read it but if you haven’t that may be another issue.
And if you say you have a good definition of quale, even if you can’t quite put it into words—can you explain to me when a sensation perceived by a human (in the biological sense of perceiving) stops being a quale? Like with your example of “redness of the evening sky”. Let’s say I sit and stare at the sky marveling at how red it is—in my understanding I definitely experience a quale. What if I’m just walking down the street and am only vaguely aware that the sky is red? What if I’m running for my life from a dog, or blind drunk, or somehow else have no clue what color the sky is even though it’s technically in my field of vision?
But not about sensory qualities.
I read Scott’s review. And it appears Clark is not saying anything about qualia. Or are you saying Clark has solved the HP?
I quoted Wikipedia. They managed to put it into words.
Why does that matter? Do you have something to say about qualia that depends on knowing the answer?
The part that you quoted doesn’t define anything, it’s just 3 examples, which together may be just as well defined simply as “sensations”. And the Wikipedia article itself lists a number of different, not equivalent definitions, none of which is anything I’d call rigorous, plus a number of references to qualia proponents who claim that this or that part of some definition is wrong (e.g. Ramachandran and Hirstein say that qualia could be communicated), plus a list of qualia opponents who have significant issues with the whole concept. That is exactly what I’m referring to as “ill-defined”.
Now you say that you think qualia is well-defined, so I’m asking you to help me understand the definition you have in mind, so we can talk about it meaningfully. That’s why the questions matter—I can’t answer you whether I think I or Clark or Graziano or whoever else solved the hard problem if I don’t understand what you mean by the hard problem (for which not all definitions even include the term “qualia”).
Do you have something to say about qualia that depends on knowing the answer?
Well of course, everything I have to say depends on knowing the answer, because the answer would help me understand what it is that you mean by qualia. So do you feel like your definition allows you to answer this question? And, while we’re at it, there’s my follow-up question of whether you assume animals have qualia, and if yes, which of them. Answers to those would be very helpful for my understanding.
The part that you quoted doesn’t define anything, it’s just 3 examples, which together may be just as well defined simply as “sensations”.
No, they are about the quality of sensations. You keep trying to pull the subject towards “explaining sensation” because you actually can explain sensation, absent the qualities of sensation. But if the HP were really about explaining sensation in that way it wouldn’t be hard. You should be using the famous hardness of the HP as a guide to understanding it … If it seems easy, you’ve got it wrong.
And the Wikipedia article itself lists a number of different, not equivalent definitions, none of which is anything I’d call rigorous
But that might be an isolated demand for rigour.
FYI, there is no precise and universally accepted definition of “matter”.
(e.g. Ramachandran and Hirstein say that qualia could be communicated)
Note that not everything that is true of qualia (or anything else) needs to be in the definition.
Now you say that you think qualia is well-defined
I didn’t say that.
I can’t answer you whether I think I or Clark or Graziano or whoever else solved the hard problem if I don’t understand what you mean by the hard problem
I’m not using an idiosyncratic definition.
So do you feel like your definition allows you to answer this question?
I would not expect a definition alone to answer every possible question. I once read a paper arguing that unseen qualia are a coherent idea, but I forget the details.
I’m not trying to pull the subject towards anything, I’m just genuinely trying to understand your position, and I’d appreciate a little bit of cooperation on your part in this. Such as answering any of the questions I asked. And “I don’t know” is a perfectly valid answer, I have no intention to “gotcha” you or anything like this, and by your own admission the problem is hard. So I’d ask you to not interpret any of my words above or below as an attack; quite the opposite, I’m doing my best to see your point.
You should be using the famous hardness of the HP as a guide to understanding it … If it seems easy, you’ve got it wrong.
With all due respect, that sounds to me like you’re insisting that the answer to a mysterious question should itself be mysterious, which it shouldn’t. Sorry if I’m misinterpreting your words; in that case, again, I’d appreciate you being a bit clearer about what you’re trying to say.
FYI, there is no precise and universally accepted definition of “matter”.
Exactly, and that is why using a Wikipedia article for definitions in such debates is not a good idea. Ideally, I’d ask you (or try myself in an identical situation) to taboo the words “qualia” and “hard problem” and try to explain exactly what question(s) you think remain unanswered by the theory. But failing that, we can at least agree on the definition of qualia.
And even if we insist on using Wiki as the source of truth, here’s the direct quote: “Much of the debate over their importance hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of various definitions of qualia remain controversial because they are not verifiable.” To me it sounds at odds with, again a direct quote: “Qualia are sufficiently well defined to enable us to tell that you have not solved the hard problem”. If the nature and even existence of something depend on the definition, it’s not sufficiently well defined to tell whether theory X explains it. (All of which is not to say that you’re wrong and Wikipedia is right; I don’t think it’s the highest authority on such matters. Just that you seem to have some different, narrower definition in mind, so we can’t use a reference to the wiki as the source of truth.)
Note that not everything that is true of qualia (or anything else) needs to be in the definition.
Yeah, I kinda hoped I wouldn’t need to spell it out, but okay, there we go. You’re correct, not everything that’s true of qualia needs to be in the definition. However I would insist that a reasonable definition doesn’t directly contradict any important true facts. Whereas one of the definitions in that wiki article (by Dennett) says that qualia are “private; that is, all interpersonal comparisons of qualia are systematically impossible.” That directly contradicts Ramachandran and Hirstein’s point that qualia can be communicated.
I would not expect a definition alone to answer every possible question.
Again, totally agree, that’s why I started with specific questions rather than definitions. So, considering that “I don’t know” is a perfectly reasonable answer, could you maybe try answering them? Or, if that seems like a better option to you, give an example of a question which you think proves Graziano’s/my theory isn’t sufficient to solve the hard problem?
With all due respect, that sounds to me like you’re insisting that the answer to a mysterious question should itself be mysterious,
I’m not saying that. But answers to questions should be relevant.
Exactly, and that is why using a Wikipedia article for definitions in such debates is not a good idea. Ideally, I’d ask you (or try myself in an identical situation) to taboo the words “qualia” and “hard problem
I’ve already done that. I can replace “qualia” with “sensory qualities”, and point out that you are not solving the hard problem because you are not explaining sensory qualities.
And even if we insist on using Wiki as the source of truth, here’s the direct quote: “Much of the debate over their importance hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of various definitions of qualia remain controversial because they are not verifiable.” To me it sounds at odds with, again a direct quote: “Qualia are sufficiently well defined to enable us to tell that you have not solved the hard problem”.
There’s no real contradiction. Even though there is disagreement about some features of qualia, there can still be agreement that they are in some sense about sensory qualities. I used a simple, almost naive, definition, consisting of a few examples, for a reason.
Or, if that seems like a better option to you, give an example of a question which you think proves Graziano’s/my theory isn’t sufficient to solve the hard problem?
I’ve said so already, haven’t I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can’t.
Replacing it with another word which you then use identically isn’t the same as tabooing; that kind of defeats the purpose.
there can still be agreement that they are in some sense about sensory qualities.
There may be, but then it seems there’s no agreement about what sensory qualities are.
I’ve said so already, haven’t I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can’t.
No, you have not; in fact, in all your comments you haven’t mentioned “predict” or “Mary” or “brain” even once. But now we’re getting somewhere! How do you tell that a certain solution can or can’t predict “sensory qualities”? Or better, when you say “predict qualities from the brain scans”, do you mean “feel/imagine them yourself as if you’ve experienced those sensory inputs firsthand”, or do you mean something else?
there’s no agreement about what sensory qualities are.
They’re things like the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky.
I don’t believe that’s difficult to understand.
How do you tell that a certain solution can or can’t predict “sensory qualities”?
How do you tell that a putative explanation can predict something? You make a theoretical prediction, and you perform an experiment to confirm it.
Otherwise, non-predictiveness is the default.
So, a solution to the HP needs to be able to make a theoretical prediction: there needs to be some gizmo where you input a brain state and get a predicted quale as output.
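Schematically, that gizmo is just a function with this signature. (A minimal sketch in Python; every name and type here is my own hypothetical illustration, not anything anyone has actually built.)

```python
from dataclasses import dataclass

@dataclass
class BrainState:
    """Hypothetical stand-in for a detailed brain scan."""
    activations: dict[str, float]  # e.g. region -> activity level

@dataclass
class Quale:
    """Hypothetical stand-in for a predicted sensory quality."""
    description: str  # e.g. "redness of an evening sky"

def predict_quale(scan: BrainState) -> Quale:
    """The gizmo a solution to the HP would have to supply:
    input a brain state, get a predicted quale as output.
    Until someone fills this in and confirms it experimentally,
    non-predictiveness is the default."""
    raise NotImplementedError("this is the hard part")
```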
Sure, I wasn’t claiming at any point to provide a precise mathematical model, let alone an implementation, if that’s what you’re talking about. What I was saying is that I have guesses as to what that mathematical model should be computing. In order to tell whether the person experiences a quale of X (in the sense of them perceiving this sensation), you’d want to see whether the sensory input from the eyes corresponding to the red sky is propagated all the way up to the top level of the predictive cascade—the level capable of modeling itself to a degree—and whether this top level’s state is altered in a way that reflects itself observing the red sky.
And admittedly what I’m saying is super high level, but I’ve just finished reading a much more detailed and I think fully compatible account of this in this article that Kaj linked. In their terms, I think the answer to your question is that the quale (perceived sensation) arises when both attention and awareness are focused on the input—see the article for specific definitions.
The situation where the input reaches the top level and affects it, but is not registered subjectively, corresponds to attention without awareness in their terms (or to the information having propagated to the top level, but the corresponding change in the top-level state not being reflected in itself). It’s observed in people with blindsight, and was also recreated experimentally.
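If it helps, here’s a toy sketch of the distinction I’m drawing (the names and structure are purely my own illustration, not Graziano’s model or a real implementation): a signal can reach the top level of the cascade (attention) with or without the top level also representing its own state being altered by it (awareness).

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    """One level of a toy predictive cascade."""
    name: str
    received: list[str] = field(default_factory=list)

    def receive(self, signal: str) -> None:
        self.received.append(signal)

@dataclass
class TopLevel(Level):
    """The top level can additionally model its own state being altered."""
    self_model: list[str] = field(default_factory=list)

    def receive(self, signal: str, reflect: bool = True) -> None:
        super().receive(signal)
        if reflect:
            # awareness: the top level represents itself observing the signal
            self.self_model.append(f"I am observing {signal}")

def registered_subjectively(lower: list[Level], top: TopLevel,
                            signal: str, reflect: bool) -> bool:
    """True only if the signal reaches the top level (attention) AND the
    top level reflects the resulting change in itself (awareness)."""
    for level in lower:
        level.receive(signal)      # lower levels just pass the signal upward
    top.receive(signal, reflect)   # the signal reaches the top level
    return bool(top.self_model)

cascade = [Level("retina"), Level("V1")]
print(registered_subjectively(cascade, TopLevel("self-model"),
                              "red evening sky", reflect=True))   # True
print(registered_subjectively(cascade, TopLevel("blindsight-like"),
                              "red evening sky", reflect=False))  # False
```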
Only you define “quale” in terms of experiencing versus not experiencing.
Looking at your debate both with me and with Gordon below, it seems like your side of the argument mostly consists of telling the opponent “no, you’re wrong” without providing any evidence for that claim. I honestly did my best to raise the sanity waterline a little, but without success, so I don’t see much sense in continuing.
We’re mostly arguing about the definition of qualia. I’ve quoted Wikipedia; you haven’t quoted anybody.
when a sensation perceived by a human (in the biological sense of perceiving) stops being a quale?
When it stops feeling like your “self-awareness” and starts feeling like “there was nobody ‘in there’”. And then it raises questions like “why not having the ability to do recursion stops you from feeling pain”.
Yeah, that sounds reasonable and in line with my intuitions. Where by “somebody” I would mean consciousness—the mind modeling itself. The difference between “qualia” and “no qualia” would be whether the signal of e.g. pain propagates all the way to the topmost, conscious level, which would predict not just receiving the signal (as all layers below also do), but also its own state being altered by receiving the signal. In the qualia case, the reason the mind knows there’s “somebody” experiencing it is that it observes (=predicts) this “somebody” experiencing it. And of course that “somebody” is the mind itself.
And then it raises questions like “why not having the ability to do recursion stops you from feeling pain”.
Well my—and many other people’s—answer to that would be that of course it doesn’t, for any reasonable definition of pain. Do you believe it does?
I believe it depends on one’s preferences. Wait, you think it doesn’t? By “ability to do recursion” I meant “ability to predict its own state altered by receiving the signal”, or whatever the distinguishing feature of the top level is supposed to be. I assumed that in your model whoever doesn’t implement it doesn’t have qualia and therefore doesn’t feel pain, because there is no one to feel it. And for those interested in the Hard Problem, the question would be “why does this specific physical arrangement, interpreted as recursive modeling, feel so different from when the pain didn’t propagate to the top level”.
I don’t think qualia—to the degree it is at all a useful term—have much to do with the ability to feel pain, or anything. In my understanding all definitions of qualia assume they are a different thing from purely neurological perceptions (which is what I’d understand by “feelings”), more specifically that the perceptions can generate qualia sometimes in some creatures, but don’t do so automatically.
Otherwise you’d have to argue one of the two:
Either even the most primitive animals, like worms which you can literally simulate neuron by neuron, have qualia as long as they have some senses and neurons…
…or “feel pain” and e.g. “feel warmth” are somehow fundamentally different, where the first necessarily requires/produces a quale and the second may or may not produce it.
Both sound rather indefensible to me, so it follows that an animal can feel pain without experiencing a quale of it, just like a scallop can see the light without experiencing a quale of it. But two caveats on this. First, I don’t have a really good grasp on what a quale is, and as Wikipedia attests, neither do the experts. I feel there’s some core of truth that people are trying to get at with this concept (something along the lines of what you said in your first comment), but it’s also very often used as a rug for people to hide their confusion under, so I’m always skeptical about using this term. Second, whether or not one should ascribe any moral worth to agents without consciousness/qualia is decisively not a part of what I’m saying here. I personally do, but as you say it depends on one’s preferences, and so is largely orthogonal to the question of how consciousness works.
In my understanding all definitions of qualia assume they are a different thing from purely neurological perceptions (which is what I’d understand by “feelings”), more specifically that the perceptions can generate qualia sometimes in some creatures, but don’t do so automatically.
Of course, the minimal definition of “qualia” I have been using doesn’t have that implication.
Your “definition” (which really isn’t a definition but just three examples) has almost no implications at all; that’s my only issue with it.
That’s a feature, since it begs the minimal number of questions.
Ok, by these definitions what I was saying is “why does not having the ability to do recursion stop you from having pain-qualia?”. Just feeling like there is a core of truth to qualia (“conceivability” in zombie language) is enough to ask your world-model to provide a reason why not everything, including recursively self-modeling systems, feels like qualialess feelings—why is recursive self-modeling not just another kind of reaction and perception?
Ah, I see. My take on this question would be that we should focus on the word “you” rather than “qualia”. If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you’d say “it felt like something”. If and only if somehow you didn’t even feel the needle piercing your skin, you’d say “I didn’t feel anything”. There were experiments proving that people can react to a stimulus they are not subjectively aware of (mostly for visual stimuli), but I’m pretty sure in all those cases they’d say they didn’t see anything—basically that’s how we know they were not subjectively aware of it. What would it even mean for a conscious mind to be aware of a stimulus without it “feeling like something”? It must have some representation in the consciousness; that’s basically what we mean by “being aware of X” or “consciously experiencing X”.
So I’d say that given a consciousness experiencing stuff, you necessarily have conscious experiences (aka qualia); that’s basically a tautology. So the question becomes why some things have consciousness, or to narrow it down to your question—why are (certain) recursively self-modeling systems conscious? And that’s kind of what I was trying to explain in part 4 of the post, and approximately the same idea, just from another perspective, is much better covered in this book review and this article.
But if I tried to put it in one paragraph, I’d start with—how do I know that I’m conscious and why do I think I know it? And the answer would be a ramble along the lines of: well, when I look into my mind I can see me, i.e. some guy who thinks and makes decisions and is aware of things, and has emotions and memories and so on and so forth. And at the same time as I see this guy, I also am him! I can have different thoughts whenever I choose to (to a degree), I can do different things whenever I choose to (to a still more limited degree), and at the same time I can reflect on the choice process. So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human minds so well that it learned to generalize to itself (see the whole post for details). Although Graziano, in the article and book I linked, provides a very convincing explanation as to why this self-modeling would also be very helpful for general reasoning ability—something I was unsuccessfully trying to figure out in part 5.
So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human minds so well that it learned to generalize to itself.
What’s your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be:
You think (and say) you have consciousness.
When you examine why you think it, you use your ability to perceive yourself as a human mind.
Therefore consciousness is your ability to perceive yourself as a human mind.
You are basically saying that the consciousness detector in the brain is an “algorithm of awareness” detector (and an algorithm of awareness can work as an “algorithm of awareness” detector). But what are the actual reasons to believe it? Only that if it is awareness, then it explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say “no, my definition of consciousness is not about awareness”. And because it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” by “able to perceive itself”.
Those are all great points. Regarding your first question, no, that’s not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. Kind of like the reason I believe I can see is that when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them; nothing more sophisticated than that. There are a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should, because I was kind of taking it as a given. It may well be me falling into the typical mind fallacy, if you say some people say otherwise. So if you have different intuitions about consciousness, can you tell me:
How do you subjectively, from the first-person view, know that you are conscious?
Can you genuinely imagine being conscious but not self-aware from the first-person view?
If you get to talk to and interact with an alien or an AI of unknown power and architecture, how would you go about finding out if they are conscious?
And because it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” by “able to perceive itself”.
Well, no, it doesn’t fit quite that simply, but overall I think it works out. If you have an agent able to reflect on itself and model itself perceiving something, it’s going to reflect on the fact that it perceives something. I.e. it’s going to have some mental representation for both the perception and for itself perceiving it. It will be able to reason about itself perceiving things, and if it can communicate it will probably also talk about it. Different perceptions will be in relation to each other (e.g. the sky is not the same color as grass, and grass color is associated with summer and warmth and so on). And, perhaps most importantly, it will have models of other such agents perceiving things, and it will know, on a high abstract level, that they have the same perceptions in them. But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
In short, I think—and please do correct me if you have a counterexample—that we have reasons to expect such an agent to make any claim humans make (given similar circumstances and training examples), and we can make any testable claim about such an agent that we can make about a human.
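A toy sketch of that asymmetry (again, every name here is my own illustrative invention): the agents share a high-level vocabulary of perception reports, but the low-level stream behind a report is readable only by the agent that has it.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # High-level representations: a shared, communicable vocabulary.
    reports: list[str] = field(default_factory=list)
    # Low-level sensory stream: accessible only to this agent.
    _raw_stream: list[float] = field(default_factory=list, repr=False)

    def perceive(self, label: str, raw: list[float]) -> None:
        self._raw_stream.extend(raw)  # private low-level data
        self.reports.append(label)    # public report, e.g. "red evening sky"

    def model_other(self, other: "Agent") -> str:
        # It can represent, abstractly, that the other agent has the same
        # kind of perception, but it has no access to other._raw_stream.
        if other.reports:
            return f"{other.name} perceives {other.reports[-1]}, like I would"
        return f"{other.name} has not reported perceiving anything"

a, b = Agent("A"), Agent("B")
b.perceive("red evening sky", raw=[0.9, 0.1, 0.2])
print(a.model_other(b))  # high-level attribution only; the raw stream stays private
```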
To me it looks like the defining feature of the consciousness intuition is one’s certainty in having it, so I define consciousness as the only thing one can be certain about, and then I know I am conscious by executing “cogito ergo sum”.
I can imagine disabling specific features associated with awareness, starting with memory: seeing something without remembering feels like seeing something and then forgetting about it. Usually when you don’t remember seeing something recent it means your perception wasn’t conscious, but you have certainly forgotten some conscious moments in the past.
Then I can imagine not having any thoughts. It is harder for long periods of time, but I can create short stretches of just seeing that, as far as I remember, are not associated with any thoughts.
At that point it becomes harder to describe this process as self-awareness. You could argue that if there is a representation of the lower level somewhere in the higher level, then it is still modeling. But there is no more reason to consider these levels parts of the same system than to consider any sender-receiver pair a self-modeling system.
I don’t know. It’s all ethics, so I’ll probably just check for some arbitrary similarity-to-human-mind metric.
we have reasons to expect such an agent to make any claim humans make
Depending on the detailed definitions of “reflect on itself” and “model itself perceiving”, I think you can make an agent that wouldn’t claim to be perfectly certain of its own consciousness. For example, I don’t see a reason why some simple Cartesian agent with direct read-only access to its own code would think in terms of consciousness.
But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
That’s nothing new, it’s the intuition that the Mary thought experiment is designed to address.