Agree with the recommendation of using such websites.
I’ve found them to be very effective, especially for highly aversive medium-term tasks like applying for jobs or finishing a project. Five times I wanted a thing to happen but was really procrastinating on it, and five times pulling out the big guns (Beeminder) got it done.
I haven’t tried using them for long-term commitments, since my intuition is that doing so is highly coercive towards subagents, which then further entrench their opposition. So I’ve used these apps sparingly. Maybe I’ll still give it a try.
I think these apps work best if really aversive tasks are set to an estimated-bare-minimum commitment, which in reality is probably the median amount of what one can get done.
Could you explain more about being coercive towards subagents? I’m not sure I’m picking up exactly what you mean.
Some thoughts related to what I am interpreting you as meaning: I have found that my problems have shifted from not being able to follow through on things to being almost too able to follow through on things, and thus getting entrenched in a pre-committed plan when I should change it (for instance, repeating a time-consuming daily habit that isn’t doing much for me).
I see this as a good problem to have. When I was less able to execute on things, my plans were just as bad; I just never found out, because I never got as far as completing them. The problem of planning is still hard, though, and isn’t solved by the app, although I have found that it helps a lot.
I think these apps work best if really aversive tasks are set to an estimated-bare-minimum commitment, which in reality is probably the median amount of what one can get done.
This seems right. One thing I would say is that, kind of surprisingly, it hasn’t been the most aversive tasks where the app has made the biggest difference; it’s the larger number of moderately aversive tasks. It makes expensive commitments cheap and cheap commitments even cheaper, and for me it has turned out that cheap commitments have made up most of the value.
It has made a difference to the most aversive ones; I just don’t think the category of extremely aversive things makes up that much of the value in my life.
This is possibly a way in which it’s better than Beeminder, which I understand is oriented towards at least medium-term goals (where you make several increments of progress along the yellow brick road). By number, most of the things I have put in Forfeit have been one-off things that would be too small to be worth Beeminding (unless you could group them into a higher level of abstraction).
Could you explain more about being coercive towards subagents? I’m not sure I’m picking up exactly what you mean.
A (probably-fake-)framework I’m using is to imagine my mind as being made up of subagents with cached heuristics about which actions are good and which aren’t. They function in a sort of vetocracy: if any one subagent doesn’t want to engage in an action, I don’t do it. This can be overridden, but doing so carries the cost of the subagent “losing trust” in the rest of the system and putting up even more resistance next time (this is part of how ugh fields develop).
The “right” way to solve this is to find some representation of the problem-space in which the subagent can see how its concerns are addressed, or why they’re not relevant to the situation at hand. But sometimes there’s not enough time or mental energy to do this, so the best available solution is to override the concern.
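To make that dynamic concrete, here is a toy sketch of the vetocracy-with-trust-decay idea in Python. It is only an illustration of the model described above: the subagent names, the override-cost formula, and the trust-decay factor are all invented, not anything the comment specifies.

```python
# Toy sketch only: one way the "vetocracy with trust decay" dynamic could look.
# All names, costs, and decay factors are invented for illustration.
from dataclasses import dataclass

@dataclass
class Subagent:
    name: str
    trust: float = 1.0  # 0..1; lower trust means stronger resistance next time

    def vetoes(self, action: str) -> bool:
        # Stand-in for the cached heuristic about which actions are good.
        return self.name == "explorer" and action == "boring admin task"

def attempt(action: str, agents: list[Subagent], willpower: float) -> bool:
    objectors = [a for a in agents if a.vetoes(action)]
    if not objectors:
        return True  # vetocracy: nobody objects, so the action happens
    # Overriding a veto is possible, but costs more the less the objector trusts the system.
    cost = sum(1.0 + 1.5 * (1.0 - a.trust) for a in objectors)
    if willpower >= cost:
        for a in objectors:
            a.trust *= 0.8  # the overridden subagent loses trust...
        return True         # ...so the next override is more expensive
    return False            # the veto stands; the action doesn't happen

agents = [Subagent("explorer"), Subagent("planner")]
for day in range(4):
    done = attempt("boring admin task", agents, willpower=1.4)
    print(f"day {day}: done={done}, explorer trust={agents[0].trust:.2f}")
```

In this toy version each override succeeds but makes the next one more expensive, until the veto finally sticks, which is roughly the entrenched-opposition / ugh-field worry described above.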
This seems right. One thing I would say is that, kind of surprisingly, it hasn’t been the most aversive tasks where the app has made the biggest difference; it’s the larger number of moderately aversive tasks. It makes expensive commitments cheap and cheap commitments even cheaper, and for me it has turned out that cheap commitments have made up most of the value.
Maybe for me the transaction costs are still a bit too high to be using commitment mechanisms, which means I should take a look at making this smoother.
I really like this. I wish this would become a top-level post.
If you would post this comment with minimal editing, I think it would be worthwhile. Top-level LW posts are too nowadays
Huh. Intuitively this doesn’t feel like it rises to the quality needed for a post, but I’ll consider it. (It’s in the rat’s tail of all the thoughts I have about subagents :-))
(Also: Did you accidentally a word?)
There used to be a lot more ‘conversation starter’ LW posts. Nowadays posts are generally longer, but I feel those short ones were often highly valuable.
e.g. some of Wei Dai’s single-pagers from a decade ago
This is complete speculation on my part, and I think the general model regarding subagents is correct, but I don’t think that using apps like this is purely coercion. Are you averse to this because it feels like putting yourself in a Skinner box?
I have no evidence to back this up (my experience is based on doing IFS therapy), but I think a lot of subagents have blind spots, and these apps help alleviate those blind spots and actually spare the subagent from future pain.
Let’s say your exploratory subagent is not on board with a boring admin task, cannot be convinced about the future pain that will come from not completing the task, and also cannot anticipate the shaming your other subagents will dish out on it for causing issues. I would venture that incentive structures like Forfeit can help make it clear to the subagent that there is a cost to ignoring the task, one that most of the rest of the system is on board with, and so help it fall in line.
In my particular case, the subagents that resist any boring or even just routine work are time-blind (they always assume there will be time to complete the task later) and hunger for novelty (I have never had a consistent routine so far).
I’m on day 2 of using the app, so this is probably just early enthusiasm, but I would lean towards viewing these apps as corrective lenses for these subagents rather than pure coercion: mildly uncomfortable to put on and wear (maintain), but likely worth it overall (based on William’s data and the rave reviews).
In the subagent view, a financial precommitment another subagent has arranged for the sole purpose of coercing you into one course of action is a threat.
Plenty of branches of decision theory advise you to disregard threats, because consistently doing so means that instances of you will more rarely find themselves in a position to be threatened.
Of course, one can discuss how rational these subagents are in the first place. The “stay in bed, watch Netflix and eat potato chips” subagent is probably not very concerned with high-level abstract planning, might have a bad discount function for future benefits, and might not be all that interested in the utility it gets from being principled.
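To make the “bad discount function” point concrete, here is a small, purely illustrative calculation. Hyperbolic discounting is just one common functional form, and the payoffs and k values below are made up; none of this comes from the comment itself.

```python
# Illustrative only: a steep discount makes a large future benefit look tiny right now.
def present_value(value: float, delay_days: float, k: float) -> float:
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay_days)

print(present_value(10, 0, k=2.0))       # 10.0  -> chips now, undiscounted
print(present_value(1000, 30, k=2.0))    # ~16.4 -> big payoff a month out, steep discounter
print(present_value(1000, 30, k=0.01))   # ~769  -> same payoff as seen by a patient subagent
```

With numbers like these, the couch subagent comparing 10 now against an effective 16 later is not being incoherent by its own lights; it is just discounting very steeply.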