My own take on sentience/consciousness is somewhat different from yours: in the general case, I think Anil Seth’s theory is better than Global Workspace Theory, though Global Workspace Theory does explain the weird properties of human consciousness better. For more, read this review:
https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG
On this:
On that note… I’ll abstain from strong statements on whether various animals actually have self-models complex enough to be morally relevant. I suspect, however, that almost no-one’s planning algorithms are advanced enough to make good use of qualia — and evolution would not grant them senses they can’t use. In particular, this capability implies high trust placed by evolution in the planner-part: that sometimes it may know better than the built-in instincts, and should have the ability to plan around them.
But I’m pushing back against this sort of argument. As I’ve described, a mind in pain does not necessarily experience that pain. The capacity to have qualia of pain corresponds to a specific mental process where the effect of pain on the agent is picked up by a specialized “sensory apparatus” and re-fed as input to the planning module within that agent. This, on a very concrete level, is what having internal experience means. Just track the information flows!
And it’s entirely possible for a mind to simply lack that sensory apparatus.
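To make sure I’m engaging with the criterion you actually mean, here’s a minimal sketch of how I read that information-flow picture (the code and names are mine, purely illustrative, not anything from your post):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Mind:
    has_introspection: bool               # does the inner "sensory apparatus" exist?
    pain_level: float = 0.0               # first-order signal: shapes behavior directly
    planner_inputs: list = field(default_factory=list)

    def receive_damage(self, amount: float) -> None:
        # First-order effect: pain biases the mind whether or not it is "felt".
        self.pain_level += amount

    def introspect(self) -> Optional[dict]:
        # Second-order channel: a sensor pointed at the mind's own state.
        if not self.has_introspection:
            return None                   # pain still steers behavior, but is never re-represented
        return {"kind": "pain", "intensity": self.pain_level}

    def plan(self) -> str:
        # The planner only "experiences" what arrives as one of its inputs.
        report = self.introspect()
        if report is not None:
            self.planner_inputs.append(report)
        return "withdraw" if self.pain_level > 5.0 else "continue"


felt = Mind(has_introspection=True)
unfelt = Mind(has_introspection=False)
for mind in (felt, unfelt):
    mind.receive_damage(7.0)
    mind.plan()                           # both withdraw -- behavior looks the same
print(felt.planner_inputs)                # [{'kind': 'pain', 'intensity': 7.0}]
print(unfelt.planner_inputs)              # []  -> affected by pain, but no qualia of it, on this criterion
```

On this reading, “just track the information flows” cashes out as checking whether anything like `introspect` exists and feeds the planner.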
I don’t think high trust needs to be placed in the planner part: by John Wentworth’s gooder regulator theorem, self-modeling is basically always useful if you want to stay alive, and self-modeling/reasoning ability is more of a continuum than a sharply discrete capacity.
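As a toy illustration of both halves of that claim (my own sketch, not Wentworth’s theorem, and all numbers are made up):

```python
# Toy illustration only -- this is NOT a statement of the gooder regulator
# theorem, just a cartoon of "self-modeling helps you stay alive, and helps
# more the finer-grained it is". An agent must keep two internal reserves
# above zero; each step it tops up one of them, guided by a quantized readout
# of its own state. Resolution 1 amounts to having no usable self-model.
import random


def survives(resolution, steps=200, rng=None):
    rng = rng or random.Random()
    reserves = [50.0, 50.0]
    bucket_size = 100 / resolution
    for _ in range(steps):
        readout = [min(int(r // bucket_size), resolution - 1) for r in reserves]
        if readout[0] == readout[1]:
            choice = rng.randrange(2)                 # self-model too coarse to tell them apart
        else:
            choice = 0 if readout[0] < readout[1] else 1
        reserves[choice] = min(reserves[choice] + 11.0, 100.0)
        for i in range(2):
            reserves[i] -= rng.uniform(3.0, 7.0)      # ongoing drain on both reserves
            if reserves[i] <= 0:
                return False                          # agent dies
    return True


rng = random.Random(0)
for resolution in (1, 2, 4, 8, 32):
    rate = sum(survives(resolution, rng=rng) for _ in range(1000)) / 1000
    print(f"self-model resolution {resolution:>2}: survival ~{rate:.0%}")
```

In this cartoon, finer readouts of the agent’s own reserves only ever convert random guesses into correct ones, so survival tends to rise smoothly with self-model resolution rather than switching on at some threshold.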
On why the hard problem seems hard:
I’m not sure, but I suspect that it’s a short-hand for “has inherent moral relevance”. It’s not tied to “is self-aware” because evolution wanted us to be able to dehumanize criminals and members of competing tribes: view them as beasts, soulless barbarians. So the concept is decoupled from its definition, which means we can imagine incoherent states where things that have what we define as “qualia” don’t have qualia, and vice versa.
I think this is a likely answer, even though the post below is methodologically flawed for the reasons Paradiddle and Sunwillrise give:
https://www.lesswrong.com/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term-for
Another reason the hard problem seems hard is that too many philosophers are disinclined to gather any data on the phenomenon of interest at all. Lacking backgrounds in neuroscience, they instead try to define consciousness without reference to any empirical reality, which is a very bad approach to learning. And when they do gather data, it is usually self-report data, which is unfortunately quite unreliable, and they don’t realize this.
I have a somewhat different takeaway for ethics.
In contrast to this:
4. Moral Implications. If you accept the above, then the whole “qualia” debacle is just a massive red herring caused by the idiosyncrasies of our mental architecture. What does that imply for ethics?
Well, that’s simple: we just have to re-connect the free-floating “qualia” concept with the definition of qualia. We value things that have first-person experiences similar to ours. Hence, we have to isolate the algorithms that allow things to have first-person experiences like ours, then assign things like that moral relevance, and dismiss the moral relevance of everything else.
And with there not being some additional “magical fluid” that can confer moral relevance to a bundle of matter, we can rest assured there won’t be any shocking twists where puddles turn out to have been important this entire time.
I instead argue that we need to decouple what we value (which can be arbitrary) from which things actually have qualia (which has a single answer in general). I absolutely disagree with the claim that we have to dismiss the moral relevance of everything else, and I also disagree with the claim that we have to assign moral relevance to whatever algorithms allow things to have first-person experiences like ours.
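Concretely, the shape of the decoupling I have in mind is something like this (a sketch with made-up names and weights, not a proposal for what the weights should be):

```python
# Sketch of the decoupling (all names and numbers are illustrative stipulations):
# whether something has qualia is a factual question about its architecture;
# how much we morally weight it is a separate, freely chosen function that
# need not reduce to the first in either direction.
from typing import NamedTuple


class Entity(NamedTuple):
    name: str
    has_pain_reafference: bool   # stand-in for "has qualia" under the post's criterion


def has_qualia(e: Entity) -> bool:
    # Empirical/architectural fact about the entity -- not up to us.
    return e.has_pain_reafference


def moral_weight(e: Entity) -> float:
    # Our values: they may track qualia, but nothing forces them to.
    chosen_weights = {"person": 1.0, "ecosystem": 0.5, "qualia-bearing AI": 0.1}
    return chosen_weights.get(e.name, 0.0)


examples = [
    Entity("person", True),
    Entity("ecosystem", False),         # stipulated no qualia, still assigned weight
    Entity("qualia-bearing AI", True),  # stipulated qualia, assigned little weight
]
for e in examples:
    print(f"{e.name:>17}: has_qualia={has_qualia(e)}, moral_weight={moral_weight(e)}")
```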