The reason I care if something is a person or not is that “caring about people” is part of my values. I feel pretty secure in taking for granted that my readers also share that value, because it’s a pretty common one, and if they don’t, then there’s nothing to argue about, since we just have incompatible utility functions.
What would be different if mental models of people were or weren’t people themselves, and likewise, what would be different if they were just part of our own personhood?
One difference that I would expect in a world where they weren’t people is that there would be some feature you could point to in humans which cannot be found in mental models of people, and for which there is a principled reason to say “clearly, anything missing that feature is not a person”.
> The reason I care if something is a person or not is that “caring about people” is part of my values.
If one is acting in the world, I would say one’s sense of what a person is has to be intimately connected with the value of “caring about people”. My caring about people is connected to my experience of people—there are people I’ve never met whom I care about in the abstract, but that’s from extrapolating my immediate experience of people.
> I would expect in a world where they weren’t people is that there would be some feature you could point to in humans which cannot be found in mental models of people
It seems like an easy criterion would be “exists entirely independently from me”. My mental models of just about everything, including people, are sketchy, feel like me “doing something”, etc. I can’t effortlessly have a conversation with any mental model I have of a person, for example. Oddly enough, I can have a conversation with another person while playing one of my mental models or internal characters (I’m a frequent DnD GM, and I have NPCs I often like playing). Mental models and characters seem more like add-ons to my ordinary consciousness.
I elaborated on this a little elsewhere, but the feature I would point to would be “ability to have independent subjective experiences”. A chicken has its own brain and can likely have a separate experience of life which I don’t share, and so although I wouldn’t call it a person, I’d call it a being which I ought to care about and do what I can to see that it doesn’t suffer. By contrast, if I imagine a character, and what that character feels or thinks or sees or hears, I am the one experiencing that character’s (imagined) sensorium and thoughts—and for a time, my consciousness of some of my own sense-inputs and ability to think about other things is taken up by the simulation and unavailable for being consciously aware of what’s going on around me. Because my brain lacks duplicates of certain features, in order to do this imagining, I have to pause/repurpose certain mental processes that were ongoing when I began imagining. The subjective experience of “being a character” is my subjective experience, not a separate set of experiences/separate consciousness that runs alongside mine the way a chicken’s consciousness would run alongside mine if one was nearby. Metaphorically, I enter into the character’s mindstate, rather than having two mindstates running in parallel.
Two sets of simultaneous subjective experiences: Two people/beings of potential moral importance. One set of subjective experiences: One person/being of potential moral importance. In the latter case, the experience of entering into the imagined mindstate of a character is just another experience that a person is having, not the creation of a second person.
The reason I reject all the arguments of the form “mental models are embedded inside another person, therefore they are that person” is that this argument is too strong. If a conscious AI were simulating you directly inside its main process, I think you would still qualify as a person of your own, even though the AI’s conscious experience would contain all your experiences in much the same way that your experience contains all the experiences of your character.
I also added an addendum to the end of the post which explains why I don’t think it’s safe to assume that you feel everything your character does the same way they do.
To be clear, I do not endorse the argument that mental models embedded in another person are necessarily that person. It makes sense that a sufficiently intelligent person with the right neural hardware would be able to simulate another person in sufficient detail that that simulated person should count, morally.
I appreciate your addendum, as well, and acknowledge that yes, given a situation like that it would be possible for a conscious entity which we should treat as a person to exist in the mind of another conscious entity we should treat as a person, without the former’s conscious experience being accessible to the latter.
What I’m trying to express (mostly in other comments) is that, given the particular neural architecture I think I have, I’m pretty sure that the process of simulating a character requires the use of scarce resources, such that I can only do it by being that character (feeling what it feels, seeing in my mind’s eye what it sees, etc.), not by running the character in some separate thread.

Some testable predictions: if I could run two separate consciousnesses simultaneously in my brain (me plus one other; call this person B) and then have a conversation with B, I would expect the experience of interacting with B to be more like the experience of interacting with other people, in specific ways that you haven’t mentioned in your posts. For example, I would expect B to misunderstand me occasionally, to mis-hear what I was saying and need me to repeat myself, to become distracted by its own thoughts, and to occasionally actively resist interacting with me.

Whereas the experience I have is consistent with the idea that in order to simulate a character, I have to be that character temporarily: I feel what they feel, think what they think, see what they see; their conscious experience is my conscious experience. And when I’m not being them, they aren’t being. In that sense, “the character I imagine” and “me” are one; there is only one stream of consciousness, anyway. If I stop imagining a character and then later pick back up where I left off, it doesn’t seem like they’ve been living their lives outside of my awareness and have grown and developed, the way a non-imagined person would grow and change and have new thoughts if I stopped talking to them and came back to resume the conversation in a week.
Rather, we just pick up right where we left off, perhaps with some increased insight (in the same sort of way that I can have some increased insight after a night’s rest, because my subconscious is doing some things in the background) but not to the level of change I would expect from a separate person having its own conscious experiences.
I was thinking about this overnight, and an analogy occurs to me. Suppose that in the future we know how to run minds on silicon and store them in digital form. Further suppose we build a robot with processing power sufficient to run one human-level mind. In its backpack it carries ten solid-state drives, each with a different personality and set of memories, some of which are backups, and an eleventh solid-state drive is plugged into its processor, which the robot is running as “itself” at this time. In that case, would you say the robot plus the drives in its backpack equals 11 people, or 1?
I’m not firm on this, but I’m leaning toward 1, particularly if the question is something like “how many people are having a good/bad life?”—what matters is how many conscious experiencers there are, not how many stored models there are. And my internal experience is kind of like being that robot: only able to load one personality at a time, but sometimes able to switch out, when I get really invested in simulating someone different from my normal self.
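The backpack analogy can be put as a toy program (a sketch for illustration only—the names and classes here are hypothetical, and nothing about real minds is being claimed): many stored personality models, but a processor that runs at most one at a time, so the count of conscious experiencers is independent of the count of stored models.

```python
from dataclasses import dataclass, field

@dataclass
class Drive:
    """A stored personality plus memories: inert data, not an experiencer."""
    name: str
    memories: list = field(default_factory=list)

class Robot:
    """A processor with the capacity to run exactly one mind at a time."""
    def __init__(self, drives):
        self.drives = drives      # every stored model the robot owns
        self.running = None       # the one drive currently "being" the robot

    def load(self, drive):
        # Plugging in a new drive necessarily unloads the current one:
        # the stored model persists, but its stream of experience stops.
        assert drive in self.drives
        self.running = drive

    def conscious_experiencers(self):
        # On the "count experiencers, not stored models" view,
        # this never exceeds 1, no matter how many drives exist.
        return 0 if self.running is None else 1

robot = Robot([Drive(f"personality_{i}") for i in range(11)])
robot.load(robot.drives[0])   # one drive plugged in, ten in the backpack
print(len(robot.drives), "stored models,",
      robot.conscious_experiencers(), "conscious experiencer")
# → 11 stored models, 1 conscious experiencer
```

On this way of counting, the answer to “11 people, or 1?” is whatever `conscious_experiencers()` returns, not `len(robot.drives)`.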
EDIT to add: I’d like to clarify why I think the distinction between “able to create many models of people, but only able to run one at a time” and “able to run many models of people simultaneously” is important in your particular situation. You’re worried that by imagining other people vividly enough, you could create a person with moral value who you are then obligated to protect and not cause to suffer. But: If you can only run one person at a time in your brain (regardless of what someone else’s brain/CPU might be able to do) then you know exactly what that person is experiencing, because you’re experiencing it too. There is no risk that it will wander off and suffer outside of your awareness, and if it’s suffering too much, you can just… stop imagining it suffering.