There is an ACX article on “trapped priors”, which in the Ayn Rand analogy would be… uhm, dunno.
The idea is that a subagent can make a self-fulfilling prophecy like “if you do X, you will feel really bad”. You use some willpower to make yourself do X, but the subagent keeps screaming at you “now you will feel bad! bad!! bad!!!” and the screaming ultimately makes you feel bad. Then the subagent says “I told you so” and collects the money.
The business analogy could be betting on a company-internal prediction market, where some employees figure out that they can bet on their own work ending up bad, and then sabotage it and collect the money. And you can’t fire them, because HR does not allow you to fire your “best” employees (where “best” is operationalized as “making excellent predictions on the internal prediction market”).
in my model that happens through local updates, rather than a global system
for instance, if i used my willpower to feel my social anxiety completely (instead of the usual strategy of suppression) while socializing, i might get some small or large reconsolidation updates to the social anxiety, such that that part thinks it’s needed in fewer situations or not at all
alternatively, the part that has the strategy of going to socialize and feeling confident may gain some more internal evidence, so it wins the internal conflict slightly more (but the internal conflict is still there and causes a drain)
i think the sort of global evaluation you’re talking about is pretty rare, though something like it can happen when someone e.g. reaches a deep state of love through meditation, and then is able to access lots of their unloved parts that are downstream TRYING to get to that love, and suddenly a big shift happens to the whole system simultaneously (another type of global reevaluation can take place through reconsolidating deep internal organizing principles like fundamental ontological constraints or attachment style)
“Global evaluation” isn’t exactly what I’m trying to posit; more like a “things bottom-out in X currency” thing.
Like, in the toy model about $ from Atlas Shrugged, an heir who spends money foolishly eventually goes broke, and can no longer get others to follow their directions. This isn’t because the whole economy gets together to evaluate their projects. It’s because they spend their currency locally on things again and again, and the things they bet on do not pay off, do not give them new currency.
I think the analog happens in me/others: I’ll get excited about some topic, pursue it for a while, get back nothing, and decide the generator of that excitement was boring after all.
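A toy sketch of that "things bottom out in currency" dynamic (the cost and payoff numbers are illustrative assumptions): a part spends attention-currency locally on pursuits each round; if its bets never pay off, it goes broke and can no longer bid for behavior, without any global evaluator stepping in.

```python
def pursue(currency, payoff_per_round, cost_per_round=10, rounds=10):
    """Each round the part pays to act; payoff is whatever its bets return."""
    for _ in range(rounds):
        if currency < cost_per_round:
            break  # broke: the part can no longer drive behavior
        currency -= cost_per_round
        currency += payoff_per_round
    return currency

print(pursue(50, payoff_per_round=0))   # bets never pay off -> 0, part goes quiet
print(pursue(50, payoff_per_round=15))  # bets pay off -> 100, part gains influence
```

The point of the sketch is that the outcome is driven entirely by repeated local transactions, matching the heir-goes-broke story rather than an economy-wide audit.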
Hmm. Under your model, are there ways that parts gain/lose (steam/mindshare/something)?
ah that makes sense
in my mind this isn’t resources flowing to elsewhere, it’s either:
An emotional learning update
A part of you that hasn’t been getting what it wants speaking up.