How to discover the nature of sentience, and ethics
Note: This is a very rough draft of a theory. Any comments or corrections are welcome. I will try to give credit to useful observations.
To discover the nature of sentience and ethics, we can actually use scientific principles, even though these topics are sometimes considered metaphysical.
The great insight, though, is that consciousness is part of reality. It is a real phenomenon. So I believe we can study this phenomenon and assemble essentially complete theories of what it ‘is’.
What do I mean by a theory of consciousness (later a theory of ethics, or ‘what we must do’)?
A theory of physics may, for example, predict the position of particles at a later time given certain initial conditions.
A theory of consciousness can answer a number of questions:
- Given a mind (e.g. a human brain) and its initial state, when we give it a certain input, for example one corresponding to the physiological (neural) response to tasting something sweet, what taste does the mind/person feel? (Is it really sweet, or is this person different and experiences what is commonly known as salty?) This is a purely subjective phenomenon, but it emerges from information and communication in the brain.
In general, given neural activity, we are able to describe the internal feelings experienced by the person using common-sense words, because we have a theory of neural activity and a theory of how different feelings manifest neurally in relation to reasonably well-defined words and concepts (for example emotions, feelings, sensations).
- We understand all qualia and what qualia ‘are’; that is, for any system, we can tell whether a certain object with, say, certain electrical activity manifests qualia (i.e. whether it is sentient) or not.
How can we do that?
The basic principle really is observation of what people say, including, with some caveats, personal observations.
The second principle is that of a singular reality: if there is a single reality, and if things are a certain way and not another, then it may be (as a hypothesis) that there should be a single theory, both logically internally consistent and consistent with reality, that explains both physics and sentience, simply because sentience is an emergent phenomenon and hence part of reality. (In particular, there could perhaps be multiple theories giving the same predictions and thus being equivalent.) This is related to the ‘principle of explosion’ from logic: an inconsistent theory entails everything, so any candidate theory must at least be internally consistent.
The third principle is that of regularity. Take, for example, the case of the sweet taste given previously. Say we collect a large sample of people tasting sweet and salty substances (it could be a blind experiment for the researchers) along with their neural patterns. Then we ask people what they tasted and try to find correlations between the neural activity and what they reported.
This allows us to circumvent a problem: what if people lie, misunderstand, or somehow misreport what they feel? Suppose we take a sample of 50 people, of whom 45 tell the truth and 5 lie. We may be able to detect the outlier cases and tell that there is something different about those 5 people. In a statistical model, if a fixed fraction of people (say 10%) lie about what they tasted, then by the law of large numbers we not only obtain a probabilistic notion of what is sweet and what isn’t; the estimate may actually converge to an exact model of what is sweet and what isn’t. Beyond a simple correlation, we should try to find general and simple functions that predict (exactly or approximately) things like taste while ‘understanding’ their nature. With enough observations, the theory of the taste known as sweet will again converge to a singular theory (or a set of equally predictive ones).
By understanding I mean that we will have a map of exactly which conditions, variables and structures are necessary for the manifestation of the subjective phenomenon known as tasting sweet.
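To make this convergence concrete, here is a minimal simulation sketch. Everything in it is an illustrative assumption of mine: taste is reduced to a single scalar ‘neural feature’, the two tastes are equally common, and the liars misreport symmetrically; real neural data would of course be far richer.

```python
# Minimal sketch (illustrative assumptions only): subjects taste sweet or
# salty substances, a fixed fraction misreport what they tasted, and we
# estimate the neural boundary between the two reported groups anyway.
import numpy as np

rng = np.random.default_rng(0)
TRUE_BOUNDARY = 0.0   # assumed neural threshold separating the two tastes
LIAR_FRACTION = 0.10  # fixed fraction of misreported answers

def estimate_boundary(n_subjects: int) -> float:
    # Hypothetical scalar "neural activity": sweet trials cluster around +1,
    # salty trials around -1, with noise.
    truly_sweet = rng.random(n_subjects) < 0.5
    neural = np.where(truly_sweet, 1.0, -1.0) + rng.normal(0.0, 0.5, n_subjects)

    # Reports: most subjects answer truthfully, a fixed fraction lie.
    lies = rng.random(n_subjects) < LIAR_FRACTION
    reported_sweet = np.where(lies, ~truly_sweet, truly_sweet)

    # Estimate the boundary as the midpoint of the two reported groups' mean
    # neural activity; symmetric misreporting pulls both means inward by the
    # same amount, so the midpoint still converges to the true boundary.
    return 0.5 * (neural[reported_sweet].mean() + neural[~reported_sweet].mean())

for n in (50, 500, 5_000, 50_000):
    print(n, round(estimate_boundary(n), 3))  # approaches TRUE_BOUNDARY as n grows
```

Under these assumptions the estimate converges despite the 10% of false reports; an asymmetric pattern of lying would bias this simple estimator, which is one more reason to look for general predictive functions and structural understanding rather than a single correlation, as argued above.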
What about ethics?
By ethics I mean ‘what we *should* do’: a general, normative, ideal theory of what should be done, what is good, what is purpose, what is meaning.
Because we are all, again, emergent phenomena of the cosmos, minds emergent from neural structures and a feature of reality, I think a first observation should be:
- What should be done is a function of sentience, exclusively, because *we are* sentience, we are emergent phenomena of human brains.
I can declare this because to say otherwise would be inconsistent: inconsistent with the definition of sentience, and inconsistent with observations of reality.
I hypothesize a curious phenomenon: as we study sentience, and as soon as we establish with some degree of rigor what we mean by ‘what should be done’ (using the principles outlined in the previous section about sentience), a singular theory should again emerge, a kind of emergent normativity about what is ethical, what is meaning, what should be done.
As an example, take the idea of ‘literal physical constructivism’: suppose we think the meaning of life is to build structures throughout the cosmos, literal physical structures, and that the more we build the better.
But if meaning is about sentience, as argued, then this cannot be right: consider a universe of non-sentient robots building out large structures. Bringing about this universe cannot be the meaning, because in it there is no sentience. No one can, for example, experience the things that characterize sentience; therefore there cannot be meaning.
As we simultaneously understand, formalize and build theories of the questions we are interested in, and even theories of all plausible questions, and as we use the principles of singular reality and regularity, we may approach or converge toward a notion of truth. In particular, the truths that are most important (ethics), and even the notion of importance itself, can be understood this way.
Is this practical?
I think so, simply because even basic insights, such as that meaning must be a product of sentience, already seem very useful. What use is money without a mind to use and enjoy it? What use is power or wealth, if not merely instrumental?
There are many other metaphysical questions that I believe can be answered more or less definitively[1], and that are also very practical and very significant. This is only a sketch of the general plan, but I intend to elaborate further in the future. The procedure itself can be meta-improved using the initial (bootstrapping) principles outlined in this article. It is really a key to understanding anything physical or metaphysical, by slightly extrapolating scientific principles to subjective and metaphysical phenomena. It is of course full of hypotheses, but even if only very weak versions of the claims here pan out, I still think we can find out a great deal, with good reasons to be confident.
Finally, we may use this to improve our lives and lower the level of conflict in society as a whole.
In particular, I think AI may be of great help in analyzing huge quantities of data if we wish to understand the nature of various qualia, including the ‘positive’ ones like joy, awe, contentment, etc. It is also likely to be helpful in the effort to theorize, formalize, and find inconsistencies in hypotheses.
Risks
Every theory carries some risks, and it’s important to be aware of them, especially, I think, in this case. The risks posed by this kind of theory largely relate to the crystallization associated with theory-building, and with the formalizations and formal use of words (including common-sense words) that may come with it. For example, we could formalize a notion of joy and use that word within the context of a theory of joy; but the theory will likely be significantly incomplete, or the everyday usage of the word within society might shift its meaning (hopefully for the better), and over-reliance on some formal system could create a prison where change is disallowed or discouraged (for example, if ChatGPT-like reasoning systems that we interact with adopted the formal definitions). There are a few defenses against this (not mutually exclusive):
(1) Make such systems flexible in that definitions can be changed to match common usage;
(2) Avoid over-reliance on, or careless use of, formal and inflexible systems;
(3) Create cultural understanding that formal definitions and theories constitute a “closed world”, much like some words have specific meanings in science that may not correspond with common-sense notions and intuitions, creating a separation between the two (formalized meanings and daily-usage meanings).
[1] That is, asymptotically definitively, perhaps.
That is somewhat contentious. MY consciousness and internal experiences are certainly part of reality. I do not know how to show that YOUR consciousness is similar enough to say that it’s real. You could be a p-zombie that does not have consciousness, even though you use words that claim you do. Or you could be conscious, but in a way that feels (to you) so alien to my own perceptions that it’s misleading to use the same word.
Because we’re somewhat mechanically similar, it’s probable that our conscious experiences (qualia) are also similar, but we’re not identical, and we have no way, even in theory, to measure which differences between us, if any, matter for that question.
In other words, consciousness is a real phenomenon, but it’s not guaranteed that it’s the SAME phenomenon for me as for anything else.
This uncertainty flows into your thoughts on morals—there’s no testable model for which variations in local reality cause what variations in morals, so no tie from individual experience to universal “should”.
One of my objectives was to show that you can indeed infer that other consciousnesses are real, and that we can actually build theories, even though it may seem at first that we can only reach individual conclusions.
A good example is the physical world. By the same logic, there would be no way to prove that anything at all outside your own subjective experience is real: there are many other possibilities that yield identical results from first-hand experience. Yet we don’t go about our daily lives considering everyone else not to be real (and shouldn’t, for scientific beliefs, as I’ll explain). That would, at the very least, lead us to treat everyone else extremely poorly whenever we had something to gain. We instead make an estimate (not a definite proof) that other people are real. This points to something I forgot to mention: since definitive disproof is impossible, the correct logic for these formal systems is either some Bayesian logic, or even weaker, informally soft versions of formality that are easier to work with, at least until someone discovers better ways to formalize propositions with increasing rigor.
The objective of this comment isn’t to disprove solipsism (I will do so in a later post), but I believe the way to disprove it (in the soft, Bayesian way indicated earlier) is this: the arrangement necessary to provide a ‘solipsistic experience’ (i.e. a “personal universe” in which you are the only one existing, whether through some kind of simulation or many other possibilities) should be much less likely to pan out, considering all possible existences. It would be necessary to engineer a highly sophisticated system to provide this illusion, which would be, certainly in our universe, astronomically costly and wholly infeasible. There are many more existences where you exist normally than where you exist “solipsistically”. This of course relies on a development of metaphysics (more precisely, of non-directly-observable physics), and I should have noted that all of this, in particular ethics, has a critical dependence on this metaphysics.
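To show the shape of this ‘soft Bayesian’ argument, here is a toy computation. The prior and the likelihoods are numbers I invented purely for illustration: first-person experience is taken to be equally compatible with both hypotheses (which is exactly why solipsism cannot be refuted outright), so the posterior simply tracks the prior, and whatever makes engineered personal universes rare among possible existences keeps solipsism very improbable.

```python
# Toy Bayesian comparison (all numbers are illustrative assumptions, not
# measured quantities): how probable is the solipsistic arrangement given
# your first-person experience?

def posterior_solipsism(prior_solipsism: float,
                        likelihood_given_solipsism: float,
                        likelihood_given_ordinary: float) -> float:
    """P(solipsism | experience) by Bayes' rule over the two hypotheses."""
    joint_s = prior_solipsism * likelihood_given_solipsism
    joint_o = (1.0 - prior_solipsism) * likelihood_given_ordinary
    return joint_s / (joint_s + joint_o)

# Experience fits both hypotheses equally well, so the posterior follows the
# prior: a tiny prior on engineered "personal universes" stays tiny.
print(posterior_solipsism(prior_solipsism=1e-9,
                          likelihood_given_solipsism=1.0,
                          likelihood_given_ordinary=1.0))  # ~1e-9
```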
Back to your objection: just as in other aspects of reality, the principle of continuity is likely to apply to consciousness. As a very first observation, note that we can at least estimate that two very similar brains and minds (in the sense of neural state, patterns and connectivity) should be experiencing similar qualia. Advancing this to all minds, and to a way of estimating consciousness in general, would be the result of careful study and theory-building across many different minds (i.e. closely examining their neural patterns and associated behavior). From this study we will probably find many different interesting systems, structures and architectures. Suppose that, in general terms, the architecture and patterns of your mind are similar enough to other people’s, with no drastic differences between them. Then, invoking scientific principles like Occam’s razor, the Copernican Principle, etc. (and also the principle of regularity I mentioned), we should both begin to understand the elements necessary for experiencing qualia and conclude that people other than ourselves also experience qualia.
It would not only be extraordinary (in a scientific sense) to be the only person experiencing qualia (more so when other people even invented the very concept!), but since our subjective experiences are part of reality and an emergent phenomenon, if you really were the only person experiencing qualia then something different should be observed in your neural patterns. Further investigation should yield several hypotheses, one of them perhaps being that only you experience sentience, and logical constraints would, I believe, finally show this unique architecture, pattern or arrangement to be fundamental to sentience. Theoretically only, of course, because scientifically this possibility would be both extraordinary and absurd (very unlikely at a first estimate).
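To picture the kind of check this implies, here is a rough sketch under strong simplifying assumptions of my own: each mind is summarized as a small feature vector of neural patterns, and a crude distance rule flags structural outliers. If one person’s sentience rested on an architecture nobody else shares, we would expect that person to stand out in such data; the absence of outliers is, under the continuity and regularity assumptions above, evidence (not proof) for attributing qualia to other minds as well.

```python
# Rough sketch (synthetic data, invented threshold): look for a mind whose
# neural-pattern summary is structurally unlike everyone else's.
import numpy as np

rng = np.random.default_rng(1)
minds = rng.normal(0.0, 1.0, size=(200, 16))  # 200 hypothetical minds, 16 summary features

# Distance of each mind from the population centroid.
centroid = minds.mean(axis=0)
dists = np.linalg.norm(minds - centroid, axis=1)

# Flag structural outliers: more than 3 standard deviations beyond the mean distance.
threshold = dists.mean() + 3 * dists.std()
outliers = np.flatnonzero(dists > threshold)
print("structural outliers:", outliers)  # typically empty for a homogeneous sample
```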
Thank you for your comment :)
Good discussion. I don’t think anyone (certainly not me) is arguing that consciousness isn’t a physical thing (“real”, in that sense). I’m arguing that “consciousness” may not be a coherent category. In the same sense that long ago, dolphins and whales were considered to be “fish”, but then more fully understood to be marine mammals. Nobody EVER thought they weren’t real. Only that the category was wrong.
Same with the orbiting rock called “pluto”. Nobody sane has claimed it’s not real, it’s just that some believe it’s not a planet. “fish” and “planet” are not real, although every instance of them is real. In fact, many things that are incorrectly thought to be them are real as well. It’s not about “real”, it’s about modeling and categorization.
“Consciousness” is similar—it’s not a real thing, though every instance that’s categorized (and miscategorized) that way is real. There’s no underlying truth or mechanism of resolving the categorization of observable matter as “conscious” or “behavior, but not conscious”—it’s just an agreement among taxonomists.
(note: personally, I find it easiest to categorize most complex behavior in brains as “conscious”—I don’t actually know how it feels to be them, and don’t REALLY know that they self-model in any way I could understand, but it’s a fine simplification to make for my own modeling. I can’t make the claim that this is objectively true, and I can’t even design theoretical tests that would distinguish it from other theories. In this way, it’s similar to MWI vs Copenhagen interpretations of QM—there’s no testable distinction, so use whichever one fits your needs best.)
Yeah, the problem is with the external boundaries and the internal classification of “consciousness”.
I have first-hand access to my own consciousness. I can assume that others have something similar, because we are biologically similar, but even this kind of reasoning is suspect, because we already know there are huge differences between people: people in a coma are biologically quite similar to people who are awake; there are autists and psychopaths, or people who hallucinate. If there were huge differences in the quality of consciousness as a result of this, or of something else, how would we know?
And then there is the problem of those for whom we can’t reason by biological similarity: animals, AIs.