The question of what it is like to lack an internal monologue is distressing to me. I run a constant inner monologue, and can’t imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess.
Part of the issue here is to avoid thinking of consciousness as a discrete capacity one either has or doesn’t have, or even as existing on a continuum, such that one could have “more” or “less” of it. Instead, I think of “consciousness” as a term we use to describe a set of both qualitatively and quantitatively distinct capacities. It’d be a bit like talking about “cooking skills.” If someone doesn’t know how to use a knife, or start a fire, do they “lack cooking skills”? Well, they lack a particular cooking skill, but there is no single answer as to whether they “lack cooking skills,” because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn’t know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients.
This is why I am wary of saying that animals are “not conscious” and would instead say that whatever their “consciousness” is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is.
As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I’m not sure whether I’d want to say someone without an inner monologue “isn’t conscious,” as that seems a bit strange. Rather, I think I’d say that they may lack a feature of the kinds of consciousness most of us have, a feature that strikes me, at first glance, as fairly central and important. But perhaps it isn’t. I’d have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue. I do think it probably requires exposure to language, at least in practice, for humans (only in practice, since I don’t think an AI would have to proceed through the same developmental stages as humans do to become conscious. And, of course, in principle you could print out an adult human brain, which could be conscious without itself ever having been subjected to childhood enculturation).
However, once the relevant concepts and structures have been “downloaded,” this may not require a very specific type of phenomenology. Maybe it does, but at the very least, we could point to substantial overlap in many of the functional outputs of people who lack inner monologues: outputs analogous to those of people who do have an inner monologue, and that we would not observe in animals. People who lack inner monologues can still speak meaningfully about themselves in the past, make plans for the future, talk about themselves as agents operating within the world, employ theory of mind, would probably report that they are conscious, could describe their phenomenal experiences, and so on. In other words, there would be substantial functional overlap in the way they spoke, thought, and behaved, with only a few notable differences in how they describe their phenomenology. At least, I am supposing all this is the case. Maybe they are different in other ways, and if I knew about those differences, and really thought about this, it might have really disturbing implications. But I doubt that will turn out to be the case.
This reminds me of an idea for a science fiction novel. I don’t know where it came from, and I’m not sure I was the first to think of a scenario like this:
Suppose we discovered that some subset of the population definitely did not have conscious experiences, and that the rest of us did. And suppose we had some reliable test for determining who was or was not conscious. It was easy to administer, and we quickly found that our spouses, children, parents, closest friends, and so on, were not conscious at all. Such people were simply automata. There were no lights on inside. In short: they simply had no qualia at all.
How would society react? What would people do? One could imagine a story like this addressing both interpersonal relationships, and the broader, societal-scale implications of such discoveries. I hope someone can take that idea and run with it, and turn it into something worth reading or watching.