That’s starting at the finishing line. The hard problem of consciousness is about why there should be feelings at all, not about why we feel particular things.
Okay. Q: Why do I think I am conscious?
A: Because I feel conscious.
Q: Why?
A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.
Q: What situation? What behavior?
A: Modeling oneself. Paying extra attention.
Q: And how?
A: I expect a kluge befitting the blind idiot god: something like detecting when proprioception matches and/or drives agent modeling, probably with feedback loops. This would dampen perception of the environment, inhibit attention-zapping, etc., leading to how consciousness feels.
It’s a far cry from a proper explanation, yet it already makes so much sense.
Asking the right questions did dispel much of the mystery.
This is a design-stance explanation, which, firstly, is inherently problematic when applied to evolution (as opposed to a human designer), and, more importantly, doesn’t actually explain anything.
The Hard Problem of Consciousness is the problem of giving a functional (physical-stance, more or less—modulo the possibility of lossless abstraction away from “implementation details” of functional units) explanation of why we “feel conscious” (and just what exactly that alleged “feeling” consists of).
What’s more, even if we accept the rest of your (evolutionary) explanation, notice that it doesn’t actually answer the question, since everything you said about selection for certain functional properties, etc., would remain true even in the absence of phenomenal, a.k.a. subjective, consciousness (i.e., “what it is like to be” you).
You have, in short, managed to solve everything but the Hard Problem!
I worded it poorly, but evolution does produce such an apparent result.
The Hard Problem of Consciousness is way out of my league; I did not pretend to solve it: “It’s a far cry from a proper explanation.”
But pondering it led to another find: “Feeling conscious” looks like an incentive to better model oneself, by thinking oneself special, as having something to preserve… which looks a lot like the soul.
A simple, plausible explanation that dissolves a mystery works for me! (until something better is offered)
That line of thinking goes places, but here is not the place to develop it.
Again, you are assuming there is no big deal about
why do I feel (anything at all),
and therefore the only issue is
why do I feel conscious.
Try taking a step back and wondering why consciousness is considered mysterious when it has such a simple explanation.
(That may be a useful clue for identifying the meaning of the question, as understood by the people pursuing it, but not necessarily a good reason to agree that it currently should be considered mysterious or that it’s a sensible question to pursue.)
If you had a general rule that anything a particular theory cannot explain is “unimportant”, that would be an epistemological nightmare.