Related phenomenon you might find interesting: tulpas. That is essentially humans trying to intentionally pull off what you are describing here, in their own minds. It is based on the fact that humans predict the behaviour of other humans by modelling their minds, and that the more complex and accurate these models get, the more sentient-like they become. E.g. I know my girlfriend so well that seeing her in a situation that I know hurts her feels immediately and genuinely painful to me, as though I were feeling her pain.
It is also based on the human ability to run consciousness that, rather than spanning the entire brain and being constant, is localised and temporary, flickering in and out. We know we can do this and are reasonably good at it: split-brain patients remain functional, for example, and humans under severe pressure can develop multiple sentient personalities. You can purposefully cultivate a rationality technique where multiple characters argue in your mind; you demonstrated that very well in your Harry Potter book with the various houses. We also have some evidence, e.g. the Sperling partial-report experiments, that we have extensive conscious experiences that never even make it into short-term memory, that just flicker up locally and disappear because they are not selected to be kept.
So for a tulpa, people basically try to craft a mind with as much imagination and detail as they can, and practise interacting with it, until this gets easier and easier and eventually feels like a process they no longer control, but one where something unexpected responds back. That leads to very interesting scenarios, e.g. someone being beaten at chess by their own tulpa. There is a whole subreddit of people discussing how they create tulpas and what the consequences are.
The way this works in the human brain might (just might!) also provide a solution, or at least an indication of why people do not worry so much. Tulpas disappear when you cannot currently interact with them and need all your brain circuits; basically, when you do not have the resources to run them right now. They do not express grief at this. You would think they would, and yet they do not; they just cheerfully return when you can run them again. Similarly, humans who very clearly and demonstrably have split brains, who are most definitely sharing their body with another mind they neither control nor understand, deny that this is an issue. They try to claim ownership of actions they demonstrably did not trigger, and to give explanations for actions that are demonstrably opaque to them. There seems to be a very strong pressure such that after multiple minds have basically fought it out and settled on an action, all the minds agree that it is theirs, that it was a consensus, and that this is fine. Identity seems a rather strange thing in this regard: made of flickering fragments that all strangely identify with the whole.
I can think of two reasons why these individual fragments do not react to, e.g., disappearing for a while the way we would expect them to. One is that fragments have a hard-coded biological imperative not to resist being disappeared. That would make sense, because anything else would make us utterly dysfunctional as a whole. We cannot spend 90% of our lives protesting that the things we just did as a whole are not the things we, a particular brain process, wished to do. We cannot constantly boycott and undo and deny each other's actions. It makes sense to fight over what to do, but to fall in line once we have committed, unless the circumstances were extraordinary and the action was an epic failure.

Such a hard-coded rule would not be that surprising, because human minds seem to have a bunch of them. For example, we have a hard-coded aversion to considering the reality around us a simulation. Which makes sense: human minds are good at coming up with imaginary and hypothetical scenarios, and it is crucial for survival not to confuse them with reality, and to take reality seriously rather than as a game. You really, really do not want a human to conclude that reality is a simulation and that they could just hop out of a tenth-floor window to see what would happen. But the results are practically funny. It does not matter how often people read Descartes, watch The Matrix, or read simulation arguments; the vast majority of people will never become genuinely and permanently unsettled by this, even if they rationally agree that it could totally be the case and they have no arguments against it.
The other is that the things we generally associate with the sentient minds of whole humans are not necessarily characteristics of sentient subprocesses, but separate additions on top of basic sentience. Potentially sentient subprocesses fail to show a number of characteristics we would expect; e.g. they do not seem committed to self-preservation. It is possible that the rights and needs we closely associate with sentience are actually only tied to sentience beyond a specific threshold, or can somehow be blocked.
Anyhow, I think investigating how this works in human brains now might give you empirical data and ideas to play with in developing this further.