Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generation is done before credit is assigned to either the “self” or the “tulpa”.
What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don’t share all their host’s preconceptions. This is not exactly smarts, though, or even general-purpose formal rationality.
One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as “smarter than me” so that all the brain’s good ideas get credited to it.
Disclaimer: this is based on nothing but lots of anecdotes I’ve read, gut feeling, and basic stuff that should be common knowledge to any LWer.
I’m reminded of a time many years ago when a coworker came into my office to ask me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said “Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I… um… completely failed to notice?”
I could certainly describe that as having a “Mark” in my head who is smarter about tax-code-related designs than I am, and there’s nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But “Mark” in this case would just be pointing to a subset of “Dave”, just as “Dave’s fantasies about aliens” does.
See also ‘rubberducking’ and previous discussions of this on LW. My basic theory is that reasoning developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an ‘adversary’, which triggers deeper processing (if we ever get brain imaging of System 1 vs. System 2 thinking, I’d expect that adversarial thinking triggers System 2 more than ‘normal’ self-centered thinking does).
Yes. Indeed, I suspect I’ve told this story before on LW in just such a discussion.
I don’t necessarily buy your account—it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.
This is also related to the circumlocution strategy for dealing with aphasia.
Obligatory link.
Yeah, in that case presumably the tulpa would help—but not necessarily significantly more than such a non-tulpa model, which requires considerably less work and risk.
Basically, a tulpa can technically do almost anything you can… but you can also do all of those things without one, and for almost all of them there’s some much easier and at least as effective way to do the same thing.
Mental processes like waking up at a specific time without an alarm clock aren’t easy to acquire. I know a bunch of people who have that skill, but it’s not like there’s a step-by-step manual you can easily follow that gives you that ability.
A tulpa can do things like that. There are many mental processes that you can’t access directly but that a tulpa might be able to access.
I am surprised to hear there isn’t such a step-by-step manual, suspect that you’re wrong about there not being one, and in either case know of a few people who could probably easily write one if motivated to do so.
But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it’s less powerful and has a bunch of logistical and moral problems. I don’t like it, but I can’t think of any counterarguments other than that it’s lazy and unaesthetic, and that the kind of meditative people who make tulpas shouldn’t be the kind to take the easy way out.
My point isn’t so much that it’s impossible but that it isn’t easy.
Creating a mental device that only wakes me up would be easier than creating a whole tulpa, but once you do have a tulpa you can reuse it a lot.
Let’s say I want to practice salsa dance moves at home. Visualising a complete dance partner just for the purpose of having a dance partner at home wouldn’t be worth the effort.
I’m not sure how much you gain by pair programming with a tulpa, but the tulpa might be useful for that task.
It takes a lot of energy to create it the first time, but afterwards you reap the benefits.
Tulpa creation involves quite a lot of effort, so it doesn’t seem like the lazy road.
Hmm, you have a point; I hadn’t thought about it that way. If it weren’t so dangerous I would have asked you to experiment.
I do not have “wake up at a specific time” ability, but I have trained myself to have “wake up within ~1.5 hours of the specific time” ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.
The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.
The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):
1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.
2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.
3. Do that again, but with a ~3.5-hour block. Take two 1.5-hour sleep-cycle naps one after another (wake up in between).
4. During a night’s sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.
5. Block off a ~3.5-hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order of this step and the previous one. Did I do them in the opposite order? I’m reconstructing from memory here. It’s probably possible to make this work in either order.)
You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.
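The sanity check in step 4 is simple arithmetic; here is a minimal sketch, assuming a fixed 90-minute cycle (real cycles vary between people and between nights, so treat the timestamps as rough estimates, not a schedule):

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # ~1.5 h per sleep cycle, as assumed above

def wake_points(bedtime, total_sleep_hours):
    """Approximate between-cycle wake points for a night's sleep.

    Number of cycles = total sleep time / 1.5 h, which is the
    step-4 sanity check from the list above.
    """
    n_cycles = round(total_sleep_hours / 1.5)
    return [bedtime + CYCLE * i for i in range(1, n_cycles + 1)]

# 7.5 hours of sleep starting at 23:00 -> 5 cycles
for p in wake_points(datetime(2024, 1, 1, 23, 0), 7.5):
    print(p.strftime("%H:%M"))  # 00:30, 02:00, 03:30, 05:00, 06:30
```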
I then spent the rest of summer break with a biphasic “first/second sleep” rhythm, which disappeared once I was in school and had to wake up at specific times again.
To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I’ve had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they’re at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It’s a cool skill to have, but it has its downsides.
Yes, I would expect this.
Indeed, I’m surprised by the “almost”—what are the exceptions?
Anything that requires you using your body and interacting physically with the world.
I’m startled. Why can’t a tulpa control my body and interact physically with the world, if it’s (mutually?) convenient for it to do so?
Well, if you count that as the tulpa doing it on its own, then no, I can’t think of any specific exceptions. Most tulpas can’t do that trick, though.
Well, let me put it this way: suppose my tulpa composes a sonnet (call that event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet down using my fingers (E3).
I would not consider any of those to be the tulpa doing something “on its own”, personally. (I don’t mean to raise the whole “independence” question again, as I understand you don’t consider that very important, but, well, you brought it up.)
But if I were willing to consider E1 an example of the tulpa doing something on its own (despite using my brain), I can’t imagine a justification for not considering E2 and E3 equally good examples of the tulpa doing something on its own (despite using my muscles).
But I infer that you would consider E1 (though not E2 or E3) the tulpa doing something on its own. Yes?
So, that’s interesting. Can you expand on your reasons for drawing that distinction?
I feel like I’m tangled up in a lot of words, and would like to point out that I’m not an expert and don’t have a tulpa; I just got the basics from reading lots of anecdotes on reddit.
You are entirely right here, although I’d like to point out that most tulpas wouldn’t be able to do E2 and E3, independent or not. Also, something like “composing a sonnet” is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do themselves, and tulpas are mainly just identities. But I could be wrong both about that and about what kind of activity sonnet composing is.
Interesting! OK, that’s not a distinction I’d previously understood you as making.
So, what do identities do, as distinct from what brains can be directed to do?
(In my own model, FWIW, brains construct identities in much the same way brains compose sonnets.)
I guess I basically think of identities as user accounts, in this case. I just grabbed the closest-fitting language dichotomy to “brain” (which IS referring to the physical brain), and trying to define it further now will just lead to overfitting, especially since it almost certainly varies far more from brain to brain than either of us expects (due to the typical mind fallacy).
And yeah, brains construct identities the same way they construct sonnets. Just like music, an identity can be small (a jingle, a minor character in something you write) or big (a long symphony, a tulpa). And identities compose sonnets only slightly more than sonnets create identities.
It’s all just mental content, which can be composed, remixed, deleted, executed, etc. Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access, and this identity ends up WAY more developed and powerful than even the most ancient and powerful tulpas, but there is probably little or no qualitative difference.
There are a lot of confounding factors. For example, something that I consider impossibly absurd seems to be the norm for most humans: considering their physical body a part of “themselves” and feeling violated if their body is. From their perspective, it’s not surprising most people can’t disentangle parts of their own brain(s), mind(s), and identities without meditating for years until it gets shoved in their face via direct perception, and even then they probably often get it wrong. Although I guess my illness has shoved it in my face just as anviliciously.
Disclaimer: I got tired of trying to put disclaimers about the dubious sources on each individual sentence, so just take it all with a grain of salt, and don’t assume I believe everything I say in any persistent way.
OK… I think I understand this. And I agree with much of it.
Some exceptions...
I don’t think I understand what you mean by “root access” here. Can you give me some examples of things that an identity with root access can do, that an identity without root access cannot do?
This is admittedly a digression, but for my own part, treating my physical body as part of myself seems no more absurd or arbitrary to me than treating my memories of what I had for breakfast this morning as part of myself, or my memories of my mom, or my inability to juggle. It’s kind of absurd, yes, but all attachment to personal identity is kind of absurd. We do it anyway.
All of that said… well, let me put it this way: continuing the sonnet analogy, let’s say my brain writes a sonnet (S1) today and then writes a sonnet (S2) tomorrow. To my way of thinking, the value-add of S2 over and above S1 depends significantly on the overlap between them. If the only difference is that S2 corrects a misspelled word in S1, for example, I’m inclined to say that value(S1+S2) = value(S2) ~= value(S1).
For example, if S1 → S2 is an improvement, I’m happy to discard S1 if I can keep S2, but I’m almost as happy to discard S2 if I can keep S1 -- while I do have a preference for keeping S2 over keeping S1, it’s noise relative to my preference for keeping one of them over losing both.
I can imagine exceptions to the above, but they’re contrived.
So, the fix-a-misspelling case is one extreme, where the difference between S1 and S2 is very small. But as the difference increases, the value(S1+S2) = value(S2) ~= value(S1) equation becomes less and less acceptable. At the other extreme, I’m inclined to say that S2 is simply a separate sonnet, which was inspired by S1 but is distinct from it, and value(S1+S2) ~= value(S2) + value(S1).
And those extremes are really just two regions in a multidimensional space of sonnet-valuation.
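One toy way to make the two extremes concrete (my own sketch, not anything stated in the discussion): linearly interpolate between full overlap, where the pair is worth only the better sonnet, and zero overlap, where the values simply add.

```python
def combined_value(v1, v2, overlap):
    """Toy model of value(S1+S2), where overlap is in [0, 1].

    overlap=1 (S2 is S1 with a spelling fix): worth max(v1, v2).
    overlap=0 (S2 is a distinct sonnet):      worth v1 + v2.
    Anything in between interpolates linearly.
    """
    return max(v1, v2) + (1 - overlap) * min(v1, v2)

print(combined_value(3, 5, overlap=1.0))  # 5.0 -> keeping both adds ~nothing over S2
print(combined_value(3, 5, overlap=0.0))  # 8.0 -> the values simply add
```

A single scalar of course flattens the “multidimensional space” into one axis; it is only meant to show how the two endpoint equations can sit on one continuum.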
Does that seem like a reasonable way to think about sonnets? (I don’t mean is it complete; of course there’s an enormous amount of necessary thinking about sonnets I’m not including here. I just mean have I said anything that strikes you as wrong?)
Does it seem like an equally reasonable way to think about identities?
Root access was probably a too metaphorical choice of words. Is “skeletal musculature privileges” clearer?
All those things you list as part of identity, like memories or skillsets, do seem weird, but even irrelevant software is not nearly as weird as specific hardware. I mean, seriously, attaching significance to specific atoms? Wut? But of course, I know it’s really me that’s weird, and most humans do it.
I agree with what you say about sonnets; it’s very well put, in fact. And yes, identities do follow the same rules. I’ve been trying to come up with fitting tulpa examples within the metaphor, but it doesn’t really work because I don’t know enough about them.
This is getting a wee bit complicated, and I think we’re reaching the point where we have to dissolve the classifications and actually model things in detail on continuums, which means more conjecture and guesswork, less data, and what data we have being less relevant. We’ve been working mostly in metaphors that don’t really stretch this far without breaking down. Also, since we’re getting into more and more detail, the stuff we are examining is likely to be drowned out by the differences between brains, and the conversation will turn into nonsense due to the typical mind fallacy.
As such, I am unwilling to publicly spout what’s likely to end up at least half nonsense. Contact me by PM if you’re really that interested in my working model of identities and mental bestiary.