Is self-ignorance a prerequisite of human-like sentience?
I present here some ideas I’ve been considering recently with regard to the philosophy of mind, though I suppose the answer to this question would also have significant implications for AI research.
Clearly, our instinctive perception of our own sentience/consciousness is inaccurate and mostly ignorant: we have no knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.
Yet I take it as true that our brains—like everything else—are purely physical. No mysticism here, thank you very much. If they are physical, then everything that occurs within them is causally deterministic. I avoid here any implications regarding free will (a topic I regard as mostly nonsense anyway). I simply point out that our brain processes follow a causal narrative: input leads to brain state A, which leads to brain state B, which leads to brain state C, and so on. These processes are entirely physical, and therefore, theoretically (not practically—yet), entirely predictable.
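To make the “theoretically predictable” point concrete, here is a toy sketch (purely my own illustration, in Python, and in no sense a model of an actual brain): a fixed, deterministic transition rule under which the same starting state and the same inputs always yield the same chain of later states.

```python
# Toy illustration of a deterministic causal chain: state A leads to state B leads to state C, ...
# Hypothetical sketch only; the transition rule is arbitrary, not a model of neurons.

def next_state(state: int, stimulus: int) -> int:
    """Deterministic transition: the same (state, stimulus) pair always
    yields the same successor state."""
    return (3 * state + stimulus) % 101  # fixed, arbitrary rule

def run(initial_state: int, stimuli: list[int]) -> list[int]:
    """Trace the whole causal narrative from an initial state and a list of inputs."""
    trajectory = [initial_state]
    for s in stimuli:
        trajectory.append(next_state(trajectory[-1], s))
    return trajectory

# Same starting state, same inputs: the trajectory is fixed in advance.
assert run(7, [1, 4, 2]) == run(7, [1, 4, 2])
print(run(7, [1, 4, 2]))  # [7, 22, 70, 10]
```

The only point of the toy is that, given full knowledge of the rule and the inputs, nothing about the resulting trajectory is left open.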
Now, ask yourself this question: what would our self-perception be like if it were entirely accurate to the physical reality? If there were no barrier of ignorance between our consciousness and the inner workings of our brains?
With every idea, thought, emotion, plan, memory and action we had, we would be aware of the brain activity that accompanied it—the specific pattern of neuronal firings, and how those firings built up to create semantically meaningful information. Further, we’d see how this brain state led to the following brain state, and so on. We would perceive ourselves as purely mechanical.
In addition, since our brain is not a single entity but a massive network of neurons, collected into different systems (or modules) that work together while serving separate functions, we would not think of our mental processes as unified—at least nowhere near as much as we do now. We would no longer attribute our thoughts and mental life to an “I”, but to the totality of mechanical processes that—when we were ignorant—built up to create a unified sense of “I”.
I would tentatively suggest that such a sense of self is incompatible with our current one: how we act and behave and think, how we see ourselves and others, is intrinsically tied to the way we perceive ourselves as non-mechanical, possessing a mystical will—an I—which goes where it chooses (of course, academically you may recognise that you’re a biological machine, but instinctively we all behave as if we weren’t). In short, I would suggest that our ignorance of our neural processes is necessary for the perception of ourselves as autonomous, sentient individuals.
The implications of this, were it true, are clear. It would be impossible to create an AI that could fully perceive and alter its own programming while maintaining a human-like sentience. That’s not to say that such an AI would not be sentient—just that it would be sentient in a very different way from how we are.
Secondly, we might not even be able to recognise this other-sentience, such would be the difference. For every decision or proclamation the AI made, we would simply see the mechanical programming at work and say, “It’s not intelligent like we are, it’s just following mechanical principles.” (Think, for example, of Searle’s Chinese Room, which I take to show only that if we can fully comprehend every stage of an information-manipulation process, most people will intuitively judge it not to be sentient.) We would think our AI project unfinished, and keep trying to add that “final spark of life”, unaware that we had already completed the project.
I don’t think there are really such things as introverted and extroverted people at all. People are encouraged to think of these things as part of their “essential character” (TM), or even their biology.
Here’s some evidence the other way—paywalled, but the gist is on the first page.
Um, thanks, but I think wrong thread.
Oops, you’re right.