I occasionally have dreams about people who have died in which they seem really real: they’re not saying stuff they actually said when they were alive, but stuff that sounds like something they would say. It’s not profound original thoughts or anything, though. So I think what I’m imagining is pretty close to what you’re describing.
I guess if we could make one of these, then we could see how much different people’s mental models of that person vary? There’s probably stuff in my mental model that I can’t articulate, and that’s still useful information!
But maybe people would start using these instead of faking their deaths if they wanted to run away.
I’ve suspected—though we’re talking maybe p = 0.2 here—for a while that our internal representations of people we know well might have some of the characteristics of sapience. Not enough to be fully realized persons, but enough that there’s a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes. Accounts like your dreams seem like they might be weak evidence for that line of thought.
Authors commonly feel like the characters they write about are real, to various extents. On the mildest end of the spectrum, the characters will just surprise their creators, doing something completely contrary to the author’s expectations when they’re put in a specific scene and forcing a complete rewrite of the plot. (“These two characters were supposed to have a huge fight and hate each other for the rest of their lives, but then they actually ended up confessing their love for each other and now it looks like they’ll be happily married. This book was supposed to be about their mutual feud, so what the heck do I do now?”) Or they might just “refuse” to do something that the author wants them to do, and she’ll feel miserable afterwards if she nevertheless forces the characters to act in the wrong way. On the other end of the spectrum, the author can actually have real conversations with them going on in her head.
I’m not much of an author, but I’ve had this happen.
My mental character-models generally have no fourth wall, which has on several occasions led to them fighting each other for my attention so as not to fade away. I’m reasonably sure I’m not insane.
(...) but enough that there’s a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes.
That sounds mystical.
Nah, this doesn’t require any magic; just code reuse or the equivalent. If the cognitive mechanisms that we use to simulate other people are similar enough to those we use to run our own minds, it seems logical that those simulations, once rich and coherent enough, could acquire some characteristics of our minds that we normally think of as privileged. It follows that they could then diverge from their prototypes if there’s not some fairly sophisticated error correction built in.
This seems plausible to me because evolution’s usually a pretty parsimonious process; I wouldn’t expect it to develop an independent mechanism for representing other minds when it’s got a perfectly good mechanism for representing the self. Or vice versa; with the mirror test in mind it’s plausible that self-image is a consequence of sufficiently good other-modeling, not the other way around.
Of course, I don’t have anything I’d consider strong evidence for this; hence the lowish probability estimate.
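To make the “code reuse” analogy a little more concrete, here’s a toy sketch in Python. Everything in it (the trait vectors, the noise term, the periodic re-sync standing in for error correction) is made up purely for illustration, not a claim about real cognition; the point is just that a simulation running on shared machinery drifts from its prototype unless something keeps pulling it back toward fresh observations.

import random

# Toy sketch of the "code reuse" idea: one shared piece of simulation
# machinery gets reused both for the self-model and for models of other
# people. A "mind" here is just a list of trait numbers; each step adds
# noise (imperfect simulation), and "error correction" means occasionally
# re-anchoring the model to fresh observations of the actual person.

def step(traits, noise=0.05):
    # One tick of the shared machinery, reused for any simulated mind.
    return [t + random.gauss(0, noise) for t in traits]

def average_drift(prototype, ticks, correct_every=None):
    # Simulate `prototype` for `ticks` steps; optionally re-sync to it.
    model = list(prototype)
    total = 0.0
    for i in range(1, ticks + 1):
        model = step(model)
        if correct_every and i % correct_every == 0:
            model = list(prototype)  # a fresh look at the real person
        total += sum(abs(m - p) for m, p in zip(model, prototype))
    # Average distance between the simulation and the person it represents.
    return total / ticks

random.seed(0)
friend = [0.7, -0.2, 0.4]  # arbitrary "trait" values for someone you know
print("average drift, no correction:  ", round(average_drift(friend, 200), 3))
print("average drift, with correction:", round(average_drift(friend, 200, correct_every=10), 3))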
Relevant SMBC.
So, in a way Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture after all)?
I’d say that of course any high-level process running on your mind has characteristics of your mind; after all, it is running on your mind. Those, however, would still be characteristics inherent to you, not to Batman.
If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?
Having a good mental model of someone and “consulting” it (apart from that model not matching the original anyway) seems to me more like your brain playing “what if”, with the accompanying consciousness and assorted properties still belonging to you, the one pretending what-if, not to the what-if itself.
My cached reply: “taboo exist”.
This whole train of discussion started with:
(...) but enough that there’s a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes.
I’d argue that those characteristics of sapience still belong to the system that’s playing “what-if”, not to the what-if itself. There, no exist :-)
I was wondering whether things might be slightly different if you simulated Batman-sapience by running the internal representation through simulations of self-awareness and decision-making, using your own black boxes as substitutes, attempting to mentally simulate in as much detail as possible every conscious mental process while sharing braintime on the subconscious ones.
Then I got really interested in this crazy idea and decided to do science and try it.
Shouldn’t have done that.
It might not be entirely off base to say that a Batman, or at least part of a Batman, exists under those circumstances, if your representation of Batman is sophisticated enough and if this line of thought about modeling is accurate. It might be quite different from someone else’s Batman, though; fictional characters kind of muddy the waters here, especially ones who’ve been interpreted in so many different ways.
The line between playing what-if and harboring a divergent cognitive object (I’m not sure I want to call it a mind) seems pretty blurry to me; I wouldn’t think there’d be a specific point at which your representation of a friend stops being a mere what-if scenario, just a gradually increasing independence and fidelity as your model gets better and thinking in that mode becomes more natural.
I think the best way to put it is that Batman-as-Batman does not exist, but Batman-as-your-internal-representation-of-Batman does exist. I most certainly agree, though, that the distinction can be extremely blurry.
Has there been any work on how our internal representations of other people get built? I’ve only heard about the thin-slicing phenomenon, but not much beyond that. I feel like people sometimes extrapolate pretty accurately, like “[person] would never do that” or “[person] will probably just say this”, but I don’t know how we know. I just kind of feel that a certain thing is something a certain person would do, but I can’t always tell what they did that makes me think so, or whether I’m simulating a state machine or anything.
Exercise: pick a sentence to tell someone you know well, perhaps asking a question. Write down ahead of time exactly what you think they might say. Make a few different variations if you feel like it. Then ask them and record exactly what they do say. Repeat. Let us know if you see anything interesting.
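If anyone wants to actually keep score on this, here’s a minimal sketch of one way to log the trials; the file name, the fields, and the example entry are arbitrary choices for illustration, not any kind of standard.

import csv
import datetime
import os

LOG = "prediction_log.csv"  # arbitrary file name
FIELDS = ["date", "person", "prompt", "predictions", "actual", "hit"]

def record(person, prompt, predictions, actual, hit):
    # Append one trial: what you predicted they would say, what they actually
    # said, and whether you would count it as a hit. Creates the log file
    # with a header row on first use.
    new_file = not os.path.exists(LOG)
    with open(LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "person": person,
            "prompt": prompt,
            "predictions": " | ".join(predictions),
            "actual": actual,
            "hit": hit,
        })

# Hypothetical example entry:
# record("Alex", "Want to grab lunch at noon?",
#        ["Sure, the usual place?", "Can't, I have a meeting until one."],
#        "Can't today, I've got a call at noon.", True)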
There’s been some, yeah. I haven’t been able to find anything that looks terribly deep or low-level yet, and very little taking a cognitive science rather than traditional psychology approach, but Google and Wikipedia have turned up a few papers.
This isn’t my field, though; perhaps some passing psychologist or cognitive scientist would have a better idea of the current state of theory.