Nah, this doesn’t require any magic; just code reuse or the equivalent. If the cognitive mechanisms that we use to simulate other people are similar enough to those we use to run our own minds, it seems logical that those simulations, once rich and coherent enough, could acquire some characteristics of our minds that we normally think of as privileged. It follows that they could then diverge from their prototypes if there’s not some fairly sophisticated error correction built in.
This seems plausible to me because evolution’s usually a pretty parsimonious process; I wouldn’t expect it to develop an independent mechanism for representing other minds when it’s got a perfectly good mechanism for representing the self. Or vice versa; with the mirror test in mind it’s plausible that self-image is a consequence of sufficiently good other-modeling, not the other way around.
Of course, I don’t have anything I’d consider strong evidence for this; hence the lowish probability I’d assign it.
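Here’s a toy sketch of what I mean by “code reuse plus error correction” (Python, with every name and number invented purely for illustration; a random walk over trait values obviously proves nothing about brains). The same modeling routine serves as both self-model and other-model, and the other-model drifts from its prototype exactly when observation-driven correction is unavailable:

```python
import random

class MindModel:
    """One modeling mechanism, reused: the same kind of model runs
    for 'me' and for 'Batman'; only the inputs differ."""

    def __init__(self, traits):
        self.traits = dict(traits)  # e.g. {"bravery": 0.9, "brooding": 0.8}

    def simulate_step(self):
        # Each imagined episode perturbs the model a little (divergence).
        for k in self.traits:
            self.traits[k] += random.gauss(0, 0.05)

    def error_correct(self, prototype, rate=0.5):
        # Observation pulls the model back toward its prototype.
        for k in self.traits:
            self.traits[k] += rate * (prototype.traits[k] - self.traits[k])

prototype = MindModel({"bravery": 0.9, "brooding": 0.8})
corrected = MindModel(prototype.traits)
uncorrected = MindModel(prototype.traits)

for _ in range(100):
    corrected.simulate_step()
    corrected.error_correct(prototype)  # fresh observations available
    uncorrected.simulate_step()         # fictional character: no feedback

# The uncorrected model random-walks away from its prototype;
# the corrected one stays close.
```

The point of the toy is that fidelity isn’t a property of the modeling machinery; it’s a property of the feedback loop. A model of a friend you see daily gets corrected; a model of a fictional character never does.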
Relevant SMBC.
So, in a way, Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture, after all)?
I’d say that of course any high-level process running on your mind has characteristics of your mind; after all, it is running on your mind. Those, however, would still be characteristics inherent to you, not to Batman.
If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?
Having a good mental model of someone and “consulting” it (apart from that model not matching the original anyway) seems to me more like your brain playing “what if,” with the accompanying consciousness and assorted properties still belonging to you, the one doing the pretending, not to the what-if itself.
My cached reply: “taboo exist”.
This whole train of discussion started with the claim that sufficiently rich simulations could acquire characteristics we normally think of as privileged, sapience included. I’d argue that those characteristics of sapience still belong to the system that’s playing “what-if,” not to the what-if itself. There, no “exist.” :-)
I was wondering whether things might be slightly different if you simulated Batman-sapience by running the internal representation through simulations of self-awareness and decision-making, using your own black boxes as substitutes: attempting to mentally simulate every conscious mental process in as much detail as possible while sharing brain time on the subconscious ones.
Then I got really interested in this crazy idea and decided to do science and try it.
Shouldn’t have done that.
It might not be entirely off base to say that a Batman, or at least part of a Batman, exists under those circumstances, if your representation of Batman is sophisticated enough and if this line of thought about modeling is accurate. It might be quite different from someone else’s Batman, though; fictional characters muddy the waters here, especially ones who’ve been interpreted in as many different ways as he has.
The line between playing what-if and harboring a divergent cognitive object (I’m not sure I want to call it a mind) seems pretty blurry to me; I wouldn’t think there’d be a specific point at which your representation of a friend stops being a mere what-if scenario, just a gradually increasing independence and fidelity as your model gets better and thinking in that mode becomes more natural.
I think the best way to put it is that Batman-as-Batman does not exist, but Batman-as-your-internal-representation-of-Batman does. I certainly agree, though, that the distinction can be extremely blurry.