Could you explain more about being coercive towards subagents? I’m not sure I’m picking up exactly what you mean.
A (probably-fake-)framework I’m using is to imagine my mind being made up of subagents with cached heuristics about which actions are good and which aren’t. They function in a sort-of-vetocracy—if any one subagent doesn’t want to engage in an action, I don’t do it. This can be overridden, but doing so carries the cost of the subagent “losing trust” in the rest of the system and next time putting up even more resistance (this is part of how ugh fields develop).
The “right” way to solve this is to find some representation of the problem-space in which the subagent can see how its concerns are addressed or not relevant to the situation at hand. But sometimes there’s not enough time or mental energy to do this, so the best available solution is to override the concern.
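To make the (again, probably-fake) mechanics concrete, here’s a toy sketch in Python. The names, trust numbers, and decay rule are all invented for illustration, not claims about how minds actually work:

```python
# Toy model of the vetocracy-with-trust dynamic described above.
# All quantities here are made-up assumptions for illustration.

class Subagent:
    def __init__(self, name, aversions, trust=1.0):
        self.name = name
        self.aversions = set(aversions)  # actions this subagent vetoes
        self.trust = trust               # trust in the rest of the system

    def objects_to(self, action):
        return action in self.aversions

    def resistance(self):
        # Lower trust -> more resistance next time (the seed of an ugh field).
        return 1.0 / max(self.trust, 0.05)

def attempt(action, subagents, budget):
    """Do `action` only if no subagent vetoes, or the veto is overridden.

    Overriding costs willpower proportional to the vetoer's resistance
    and erodes its trust, so each future override gets pricier.
    """
    for agent in subagents:
        if agent.objects_to(action):
            cost = agent.resistance()
            if cost > budget:
                return False, budget       # veto stands; action skipped
            agent.trust *= 0.7             # overridden subagent loses trust
            return True, budget - cost
    return True, budget                    # no objections; action is free

# Repeatedly overriding the same subagent gets more expensive each time,
# until the veto finally sticks:
explorer = Subagent("novelty-seeker", aversions={"boring admin"})
budget = 10.0
for week in range(5):
    done, budget = attempt("boring admin", [explorer], budget)
    print(f"week {week}: done={done}, trust={explorer.trust:.2f}, budget={budget:.2f}")
```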
This seems right. One thing I would say is that, somewhat surprisingly, it hasn’t been the most aversive tasks where the app has made the biggest difference; it’s the larger number of moderately aversive tasks. It makes expensive commitments cheap and cheap commitments even cheaper, and for me it has turned out that cheap commitments have made up most of the value.
Maybe for me the transaction costs are still a bit too high to be using commitment mechanisms, which means I should take a look at making this smoother.
I really like this. I wish this would become a top-level post.
If you posted this comment with minimal editing, I think it would be worthwhile. Top-level LW posts are too nowadays
Huh. Intuitively this doesn’t feel like it rises to the quality needed for a post, but I’ll consider it. (It’s in the rat’s tail of all the thoughts I have about subagents :-))
(Also: Did you accidentally a word?)
There used to be a lot more ‘conversation starter’ LW posts. Nowadays posts are generally longer, but I feel those short ones were often highly valuable.
E.g. some of Wei Dai’s single-pagers from a decade ago.
This is complete speculation on my part. I think the general model regarding subagents is correct, but I don’t think that using apps like this is purely coercion. Are you averse to this because it feels like putting yourself in a Skinner box?
I have no evidence to back this up (my experience is based on doing IFS therapy), but I think a lot of subagents have blind spots, and these apps help alleviate those blind spots and actually spare the subagent from future pain.
Let’s say your exploratory subagent is not on board with a boring admin task, cannot be convinced of the future pain that will come from not completing it, and also cannot anticipate the shaming your other subagents will dish out on it for causing issues. I would venture that incentive structures like forfeits can make it clear to the subagent that there is a cost to ignoring the task, one that most of the system is on board with, and so help it fall in line.
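To make that concrete, here’s a toy version of that calculus. The numbers and function names are mine and purely illustrative; this isn’t anything the app actually computes:

```python
# Decision from the vetoing subagent's point of view.
# It is "time blind": it discounts future pain to roughly zero.
# All numbers below are invented for illustration.

def skips_task(boredom=5.0, future_pain=20.0, discount=0.0, forfeit=0.0):
    # The subagent skips iff skipping looks cheaper than the boredom of doing.
    # An immediate forfeit, unlike future pain, can't be discounted away.
    return discount * future_pain + forfeit < boredom

print(skips_task())              # True: future pain discounted to ~0, skipping looks free
print(skips_task(forfeit=10.0))  # False: an immediate $10 stake outweighs the boredom
```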
In my particular case, the subagents that resist any boring or even just routine work are time-blind (they always assume there will be time to complete the task later) and hunger for novelty (I have never had a consistent routine so far).
I’m on day 2 of using the app, so this is probably just early enthusiasm, but I would lean towards viewing these apps as corrective lenses for these subagents rather than pure coercion: mildly uncomfortable to put on and wear (maintain), but likely worth it overall (based on William’s data and the rave reviews).