Just to make explicit a connection that seems obvious to me but I’m not sure how obvious it is to others: the existence of this phenomenon fits nicely with a global workspace model of the mind, where the brain may spawn new subroutines that plug into the workspace, then learn various rules for when to activate them, and fine-tune their properties when the system becomes aware of a mismatch between the model’s predicted outcome and what the target person would actually have said.
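To make that picture concrete, here is a minimal toy sketch. It is purely illustrative: the `Advisor`/`Workspace` classes, the `spawn`/`broadcast` operations, and the fine-tune-on-mismatch step are all hypothetical names I made up for this comment, not any real cognitive-architecture API.

```python
# Toy sketch of the global-workspace picture above. Purely illustrative:
# every class and method name here is hypothetical, not a real library API.

class Advisor:
    """A spawned subroutine that predicts what a modeled person would say."""

    def __init__(self, name):
        self.name = name
        self.responses = {}             # learned mapping: topic -> predicted reply
        self.activation_topics = set()  # learned rules for when to speak up

    def relevant(self, topic):
        return topic in self.activation_topics

    def predict(self, topic):
        return self.responses.get(topic, "...")

    def fine_tune(self, topic, actual_reply):
        # Run when the system notices a mismatch between the advisor's
        # prediction and what the target person actually said.
        self.responses[topic] = actual_reply
        self.activation_topics.add(topic)


class Workspace:
    """Global workspace: broadcasts a topic and collects advisor outputs."""

    def __init__(self):
        self.advisors = []

    def spawn(self, name):
        advisor = Advisor(name)
        self.advisors.append(advisor)
        return advisor

    def broadcast(self, topic):
        # Only advisors whose learned activation rules match the topic fire.
        return {a.name: a.predict(topic) for a in self.advisors if a.relevant(topic)}


ws = Workspace()
mentor = ws.spawn("shoulder-mentor")
mentor.fine_tune("procrastination", "Just start with five minutes.")
print(ws.broadcast("procrastination"))
# -> {'shoulder-mentor': 'Just start with five minutes.'}
```

The point of the sketch is just the division of labor: the workspace routes a topic to whichever spawned subroutines have learned activation rules for it, and the mismatch signal is what drives each subroutine’s updates.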
If that is true, that would also mean that the ego could be just one of these shoulder advisors—albeit a privileged one.
I would guess that there is some additional machinery involved in the ego compared to shoulder advisors (this might not contradict your description of ego as privileged shoulder advisor), as tulpas seem to be quite related to shoulder advisors while being ‘closer to ego’ in some sense.
Probably this distinction is an important reason why shoulder advisors seem much less problematic from the standpoint of mental health.
What additional machinery do you have in mind or what else makes you think that?
My thoughts on this are mostly from introspection. When I try to imagine a shoulder advisor in comparison to my self (note that I do not currently have shoulder advisors), there seem to be some additional properties of my self which a shoulder advisor would not have.
Trying to get at the differences, what comes up is:
bodily sensations and urges are ‘directly fed into and fuel (/delegate vote power to)’ my self, but not shoulder advisors
decisions on movement are likewise directly connected to my self, while shoulder advisors only influence my mental dialogue/perception
similarly with things like ‘felt responsibility for actions’, ‘identity’ etc.
I am not sure that ‘additional machinery’ is the right term for these differences. My impression is rather that ‘the ego is much more strongly connected to and fused with these other parts’.
You describe it as a matter of degree and I can’t disagree with that.
Thanks for your reply, I am unsure whether I am correctly understanding your position:
Would you agree that some of the aspects that make the ego/self different from shoulder advisors are the ones that I stated? (This doesn’t seem to contradict the formulation ‘privileged shoulder advisor’ as far as I can tell.)
The ‘matter of degree’ question where our views differ is then: are there such things as ‘shoulder advisors+’ that are, e.g., halfway between a pure shoulder advisor and the ego?
If I am not misunderstanding you, this is a really interesting disagreement. To me, this topic is almost an alief-level view, and I might have fallen for cognitive fusion. To possibly help with misunderstandings, I’ll just throw out my current thoughts on the topic. I expect there to be major mistakes, or at least gaps, in there, but I will not be able to improve them without working with them. Of course, it is also not necessary to resolve our disagreement if you find it less interesting.
My current position: In principle, the agentic unity of the ego can be lost and instead scattered onto different agentic-type thought processes (think trauma, dissociation, identity disorders etc.). My impression is that this is usually quite harmful to the individual and generally not something to strive for.
Then there is a more or less separate ‘space of agentic-type thought processes’ that leans more toward ‘abstract world-modelling’, ‘agent-modelling’ and ‘abstract-goal-pursuing’ and less toward ‘episodic memories’, ‘bodily sensations’, ‘bodily urges’ and ‘fight or flight response’.
Here, although most people tend towards unity, with the ego plus a large number of simple goal-processes plus models of other people, it is possible and largely non-dangerous to build up a number of larger bundles of thought processes, such as shoulder advisors.
Ah, I missed a “t”: “can” → “can’t”. Sorry about that typo. I mostly agree with it being a matter of degree. But I want to respond to this part of your comment:
In principle, the agentic unity of the ego can be lost and instead scattered onto different agentic-type thought processes
I wouldn’t say that this is what happens with Shoulder Advisors or with the no-self experience of meditation. There are many failure modes of the brain making sense of agency and identity. I think the default mode of society is to encourage and reinforce an interpretation around ego, identity, and agency which is stable and beneficial (at least in the sense of societal productivity; I guess there are cultures with very different patterns that are stable but probably less scalable, e.g. the Piraha).
Ah, this makes sense, thanks!
I wouldn’t say that this is what happens with Shoulder Advisors or with the no-self experience of meditation. There are many failure modes of the brain making sense of agency and identity.
This sounds right. Maybe the cases that I am concerned about additionally contain fear responses, and purely having a non-unified or unclear sense of self is more normal/safer than I thought.
Scott’s book review of Origin Of Consciousness In The Breakdown Of The Bicameral Mind also seems related.
Consider that GPT doesn’t have any of that fancy stuff and yet can generate dialogues of semi-consistent characters. Shoulder advisors can be slightly fancier text bots, just by adding audio tone and facial expressions to what is being prompted and predicted.
True, though much of that fancy stuff is related to things that GPT doesn’t need to do. E.g. it doesn’t have any of the learning components, because it comes pre-trained and doesn’t do any online training; and it’s also just one predictive engine, rather than being embedded in a larger system that needs to decide when to apply that predictive engine and when to apply something else.
I do agree that if you look at just any given “shoulder advisor module” itself, and not any of the components concerned with updating it or deciding when it should run, it does seem quite similar to something like GPT.
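To illustrate that division of labor, here is a hedged toy sketch. Every name in it is invented for this comment (this is not a real GPT API or anyone’s actual architecture): a frozen function stands in for the pre-trained predictive engine, and a surrounding controller owns the decision of when to consult it at all.

```python
# Hedged sketch of the distinction above: a frozen predictive engine versus
# the larger system around it. All names are invented for illustration;
# this is not a real GPT API.

def frozen_text_model(prompt: str) -> str:
    """Stands in for a pre-trained model: nothing updates at call time."""
    canned = {"What would my mentor say?": "Take a break, then re-read the draft."}
    return canned.get(prompt, "(model continues the text)")


class Controller:
    """The surrounding system: decides when to run the predictor at all,
    and would own any updating, which the frozen model itself lacks."""

    def step(self, situation: str) -> str:
        # Gate the predictive engine: only consult it for advice-shaped queries.
        if situation.endswith("?"):
            return frozen_text_model(situation)
        return "(handled by some other subsystem, e.g. motor control)"


controller = Controller()
print(controller.step("What would my mentor say?"))  # consults the predictor
print(controller.step("Raise left arm."))            # routed elsewhere
```

The design choice mirrors the comment above: all the “fancy stuff” (gating, routing, updating) lives outside the predictor, which stays frozen, just as GPT does.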