The great insight though is that consciousness is part of reality. It is a real phenomenon.
That is somewhat contentious. MY consciousness and internal experiences are certainly part of reality. I do not know how to show that YOUR consciousness is similar enough to say that it’s real. You could be a p-zombie that does not have consciousness, even though you use words that claim you do. Or you could be conscious, but in a way that feels (to you) so alien to my own perceptions that it’s misleading to use the same word.
Because we’re somewhat mechanically similar, it’s probable that our conscious experiences (qualia) are also similar, but we’re not identical, and we have no way, even in theory, to measure which differences between us, if any, are important to that question.
In other words, consciousness is a real phenomenon, but it’s not guaranteed that it’s the SAME phenomenon for me as for anything else.
This uncertainty flows into your thoughts on morals—there’s no testable model for which variations in local reality cause what variations in morals, so no tie from individual experience to universal “should”.
One of my objectives was to show that you can indeed deduce that other consciousnesses are real, and that we can actually build theories about this, even though at first it may seem we can only draw conclusions about our own individual case.
A good example is the physical world. By the same logic, there would be no way to prove that anything at all outside your own subjective experience is real: there are many other possible arrangements that yield identical results from first-hand experience. Yet we don’t go about our daily lives treating everyone else as unreal (and, as I’ll explain, we shouldn’t for scientific beliefs either); doing so would at the very least lead us to treat other people extremely poorly whenever we had something to gain. What we actually do is estimate (not definitively prove) that other people are real. This points to something important about that impossibility of disproof, which I forgot to mention: the appropriate logic for these questions is either some form of Bayesian reasoning, or even weaker, informally soft versions of formality that are easier to work with, at least until someone discovers better ways to formalize such propositions with increasing rigor.
The objective of this comment isn’t to disprove solipsism (I will attempt that in a later post), but I believe the way to disprove it (in the soft, Bayesian sense indicated earlier) is to note that the arrangements necessary to provide a ‘solipsistic experience’ (i.e. a “personal universe” in which you are the only one existing, whether through some kind of simulation or many other possibilities) should be much less likely when considered across all possible existences. It would be necessary to engineer a highly sophisticated system to provide this illusion, which would be, certainly in our universe, astronomically costly and wholly infeasible. There are many more existences in which you exist normally than ones in which you exist “solipsistically”. This of course relies on a development of metaphysics (more precisely, of non-directly-observable physics), and I should have noted that all of this, and ethics in particular, depends critically on that metaphysics.
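To make the shape of that “soft” disproof concrete, here is a minimal sketch of the Bayesian bookkeeping involved (the symbols, and the judgement that the likelihood ratio is near 1, are my own illustrative framing of the argument above, not a result):

\[
\frac{P(\text{solipsism} \mid E)}{P(\lnot\,\text{solipsism} \mid E)} \;=\; \frac{P(E \mid \text{solipsism})}{P(E \mid \lnot\,\text{solipsism})} \cdot \frac{P(\text{solipsism})}{P(\lnot\,\text{solipsism})}
\]

Here E stands for the totality of your first-hand experience. Both hypotheses are constructed to predict E equally well, so the likelihood ratio is close to 1 and the posterior odds are dominated by the prior odds, which the argument above claims are tiny: a convincing “personal universe” is a far more contrived arrangement than an ordinary shared one. This is an estimate open to revision, not a proof.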
Back to your objection: just as in other aspects of reality, the principle of continuity is likely to apply to consciousness. As a very first observation, note that we can at least estimate that two very similar brains and minds (in the sense of neural state, patterns and connectivity) should be experiencing similar qualia. Extending this to all minds, and working out how to estimate consciousness in general, would be the result of careful study and theory-building across many different minds (i.e. closely examining their neural patterns and associated behavior). From this study we would probably find many different interesting systems, structures and architectures. Suppose that, in general terms, the architecture and patterns of your mind are similar enough to other people’s, with no drastic differences between them. Then, invoking scientific principles like Occam’s razor, the Copernican Principle, etc. (also the principle of regularity I mentioned), we should both begin to understand the elements necessary for experiencing qualia and conclude that people other than ourselves also experience qualia.
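The continuity claim itself can be written down a little more explicitly (a sketch, under my own assumption that qualia can be treated as some function Q of the physical/neural state S, with suitable notions of distance on both sides):

\[
d_S(S_A, S_B) \le \delta \;\Longrightarrow\; d_Q\!\big(Q(S_A),\, Q(S_B)\big) \le \varepsilon
\]

That is, sufficiently similar neural states should produce similar qualia. The theory-building program described above then amounts to estimating how Q behaves: which structural features it actually depends on, and how far the similarity can be stretched before the inference stops being reliable.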
It would not only be extraordinary (in a scientific sense) to be the only person experiencing qualia (all the more so when other people went and invented the very concept!), but, since our subjective experiences are part of reality and an emergent phenomenon, if you really were the only person experiencing qualia then something different should be observable in your neural patterns. Further investigation should yield several hypotheses, one of them perhaps being that only you experience sentience, and logical constraints, I believe, would finally show this unique architecture, pattern or arrangement to be fundamental to sentience. Theoretically only, of course, because scientifically this possibility would be both extraordinary and absurd (very unlikely at a first estimate).

Thank you for your comment :)
Good discussion. I don’t think anyone (certainly not me) is arguing that consciousness isn’t a physical thing (“real”, in that sense). I’m arguing that “consciousness” may not be a coherent category. In the same sense that long ago, dolphins and whales were considered to be “fish”, but then more fully understood to be marine mammals. Nobody EVER thought they weren’t real. Only that the category was wrong.
Same with the orbiting rock called “Pluto”. Nobody sane has claimed it’s not real, it’s just that some believe it’s not a planet. “Fish” and “planet” are not real categories, although every instance of them is real. In fact, many things that are incorrectly thought to be them are real as well. It’s not about “real”, it’s about modeling and categorization.
“Consciousness” is similar: it’s not a real thing, though every instance that’s categorized (and miscategorized) that way is real. There’s no underlying truth or mechanism for resolving the categorization of observable matter as “conscious” versus “behavior, but not conscious”; it’s just an agreement among taxonomists.
(note: personally, I find it easiest to categorize most complex behavior in brains as “conscious”—I don’t actually know how it feels to be them, and don’t REALLY know that they self-model in any way I could understand, but it’s a fine simplification to make for my own modeling. I can’t make the claim that this is objectively true, and I can’t even design theoretical tests that would distinguish it from other theories. In this way, it’s similar to MWI vs Copenhagen interpretations of QM—there’s no testable distinction, so use whichever one fits your needs best.)
Yeah, the problem is with the external boundaries and the internal classification of “consciousness”.
I have first-hand access to my own consciousness. I can assume that others have something similar, because we are biologically similar. But even this kind of reasoning is suspicious, because we already know there are huge differences between people: people in a coma are biologically quite similar to people who are awake; there are autists and psychopaths, or people who hallucinate. If there were huge differences in the quality of consciousness, as a result of this or something else, how would we know it?
And there is the problem with those where we can’t reason by biological similarity: animals, AIs.