Does internal bargaining and geometric rationality explain ADHD & OCD?
Self-rituals as Schelling loci for self-control and OCD
Why do people engage in non-social rituals ('self-rituals')? These are very common and can even become pathological (OCD).
People with high self-control seem to have OCD-like symptoms more often.
One way to think about self-control is as a form of internal bargaining between internal subagents. From this perspective, self-control and low time-discounting can be seen as resources. In the absence of self-control, the superagent's internal bargaining breaks down.
Do humans engage in self-rituals to create Schelling points for internally bargaining agents?
Exploration, self-control, internal bargaining, ADHD
Why are exploration behaviour and lack of self-control linked? For example, people with ADHD often lack self-control and conscientiousness; at the same time, they explore more. These behaviours are often linked, but it's not clear why.
It's perfectly possible to explore deliberately. Yet the best explorers seem to be strongly correlated with low self-control. How could that be?
There is a boring social reason: doing a lot of exploration often means shirking social obligations. Self-deceiving about your true desires might be the only way to avoid social repercussions. This probably explains a lot of ADHD—but not necessarily all.
If self-control = internal bargaining, then a lack of self-control is a failure of internal bargaining. Note that by subagents I mean subagents in space *and* in time. From this perspective, an agent extended through time can alternatively be seen as a series of temporal subagents of a 4d-worm superagent.
This explains many of the salient features of ADHD:
[Claude, list salient features and explain how these are explained by the above]
Impulsivity: A failure of internal subagents to reach an agreement intertemporally, leading to actions driven by immediate desires.
Difficulty with task initiation and completion: The inability of internal subagents to negotiate and commit to a course of action.
Distractibility: A failure to prioritize the allocation of self-control resources to the task at hand.
Hyperfocus: A temporary alignment of internal subagents’ interests, leading to intense focus on engaging activities.
Disorganization: A failure to establish and adhere to a coherent set of priorities across different subagents.
Emotional dysregulation: A failure of internal bargaining to modulate emotional reactions.
Arithmetic vs geometric exploration: entropic drift towards geometric rationality
[this section obviously owes a large intellectual debt to Garrabrant’s geometric rationality sequence]
People sometimes say that geometric exploration = Kelly betting = maximizing the geometric mean is 'better' than maximizing the arithmetic mean.
The problem is that simply maximizing (arithmetic) expected value, rather than geometric expected value, does in fact maximize total expected value, even in repeated games (duh!). So it's not clear in what naive sense geometric maximization is supposed to be better.
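A minimal simulation sketch of the tension (not from the post; the win probability, horizon, and trial counts below are made-up illustrative parameters, and `simulate` is a hypothetical helper): on a repeated even-money bet, staking the whole bankroll each round maximizes arithmetic expected value, but almost every future copy of the agent ends up broke, while the Kelly fraction maximizes the typical (median) outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(fraction, p=0.6, rounds=100, trials=50_000):
    """Bet `fraction` of the bankroll each round on an even-money bet won with probability p."""
    wins = rng.random((trials, rounds)) < p
    growth = np.where(wins, 1.0 + fraction, 1.0 - fraction)
    wealth = growth.prod(axis=1)                      # final bankroll, starting from 1
    arithmetic_ev = (1.0 + fraction * (2 * p - 1)) ** rounds   # exact expected value
    return arithmetic_ev, wealth.mean(), np.median(wealth)

# All-in maximizes arithmetic EV; the Kelly fraction 2p - 1 maximizes geometric growth.
for f in (1.0, 2 * 0.6 - 1):
    ev, sim_mean, median = simulate(f)
    print(f"fraction={f:.2f}  arithmetic EV={ev:.3g}  simulated mean={sim_mean:.3g}  median={median:.3g}")
```

The all-in bettor's enormous expectation lives entirely in a branch too unlikely to ever show up in the sample (its simulated mean and median are both ~0), while the Kelly bettor's median copy actually grows.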
Instead, Garrabrant suggests thinking of geometric maximization as part of a broader framework of geometric rationality, in which Kelly betting, Nash bargaining, and geometric expectation are all forms of cooperation between various kinds of subagents.
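A concrete instance of that framing (standard Kelly algebra, my gloss rather than a quote from the sequence): for an even-money bet won with probability $p \ge \tfrac{1}{2}$, maximizing expected log-wealth is literally the same optimization as maximizing a probability-weighted Nash bargaining product between the future self who wins and the future self who loses:

$$
f^{*} \;=\; \arg\max_{f}\,\mathbb{E}\!\left[\log W_f\right]
\;=\; \arg\max_{f}\,\bigl[\,p\log(1+f) + (1-p)\log(1-f)\,\bigr]
\;=\; \arg\max_{f}\,(1+f)^{p}\,(1-f)^{1-p}
\;=\; 2p-1,
$$

where $W_f$ is the wealth multiplier when a fraction $f$ of the bankroll is staked. The Kelly bettor is exactly the bargain those two future selves would strike.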
If self-control is a form of successful internal bargaining, then it is best thought of as a resource. Maximizing the arithmetic mean yields more in expectation, but it requires subagents to cooperate with and trust each other far more: arithmetic maximization produces much larger variance in outcomes between future copies of the agent than geometric maximization does, so subagents must be willing to take a loss in one world to make up for it in another.
It is hard to be coherent
It is hard to be a coherent agent. Coherence and self-control are resources. Note that having low time-discounting is also a form of coherence: it means the subagents of the 4d-worm superagent are cooperating.
Having subagents that are more similar to one another means it will be easier for them to cooperate. Conversely, the less they are alike the harder it is to cooperate and to be coherent.
Over time, this means there is a selective force against an arithmetic-mean-maximizing superagent.
Moreover, if the environment is highly varied (for instance, when the agent selects a more variable environment because it is exploring), the outcomes for its subagents are more varied, so there is more entropic pressure on the superagent.
In particular, this means we would expect superagents that explore more (ADHDers) to be less coherent over time (higher time-discounting) and across space (more internal conflict, etc.).
I feel like the whole "subagent" framework suffers from the homunculus problem: we fail to explain behavior using the abstraction of a coherent agent, so we move to the abstraction of multiple coherent agents, and while that can be useful, I don't think it reflects any actual mechanistic truth about minds.
When I plan something and then fail to execute the plan, it's mostly not a "failure to bargain". It's just that when I plan, I imagine the good consequences of the plan, those consequences make me excited, and then I start executing and get hit by the many unpleasant details of reality. Coherent structure emerges from multiple not-really-agentic pieces.
You are taking subagents too literally here. If you prefer, take another word: shard, fragment, component, context-dependent action-impulse generator, etc.
When I read the word "bargaining" I assume we are talking about entities that have preferences, an action set, and beliefs about the relations between actions and preferences, and that exchange information (modulo acausal interaction) with other entities of the same kind. For instance, Kelly betting is good because it equals Nash bargaining between versions of yourself inside different outcomes, and this works because we assume that you-in-different-outcomes are actually agents with all the attributes of an agentic system. Saying "systems consist of parts, these parts interact, and sometimes the result is a horrific incoherent mess" is true, but doesn't convey much useful information.