But at each step, you discover another step you didn’t originally anticipate.
That is the core of your problem. Since it’s happening repeatedly, you should stop assuming that you know the distance to completion and instead assign a probability distribution to the number of steps (or the time) needed to get the project done, likely one with a long right tail.
If you constantly encounter the unexpected, you should acknowledge that your expectations are faulty and start to expect the unexpected.
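For concreteness, a minimal sketch of what that might look like, with the caveat that the lognormal shape and every number below are illustrative assumptions, not anything from this thread:

```python
import random

# Minimal sketch: model remaining effort as a long-right-tailed
# distribution (lognormal here) rather than a point estimate, then
# decide on expected cost vs. expected value. All parameters made up.
random.seed(0)

# Hypothetical shape: median ~5 days (exp(1.6) ~ 5), heavy right tail.
samples = [random.lognormvariate(1.6, 0.8) for _ in range(100_000)]
expected_days = sum(samples) / len(samples)  # mean > median, thanks to the tail

COST_PER_DAY = 1_000    # hypothetical
PROJECT_VALUE = 12_000  # hypothetical
expected_cost = expected_days * COST_PER_DAY
print(f"E[days] = {expected_days:.1f}, continue? {expected_cost < PROJECT_VALUE}")
```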
You’ve already done this when you updated your priors. If you wish, assume you calculate the expected cost given that probability distribution, and it still comes out less than the expected value.
That doesn’t actually help you decide what to do, however.
I assume you’re familiar with Hofstadter’s Law, as it seems to describe your situation.
If you updated your expectations and they turned out to be wrong again, then your update was incorrect. If you have a pattern of incorrect updates, you should go meta and figure out why this pattern exists.
All in all, if you still believe the cost/benefit ratio is favorable, you should continue. Or is the problem that you don’t believe your estimates any more?
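One cheap way to “go meta” here, sketched with made-up numbers: check whether your successive forecast errors all share a sign, which points at a biased updating procedure rather than bad luck.

```python
# Hypothetical forecast history for three past projects (made up).
forecasts = [6, 9, 13]   # estimated durations, in days
outcomes  = [9, 14, 20]  # what they actually took

errors = [o - f for f, o in zip(forecasts, outcomes)]
if all(e > 0 for e in errors):
    # Same sign every time: the updating procedure is biased, not unlucky.
    fudge = sum(o / f for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"systematic underestimation {errors}; scale estimates by ~{fudge:.2f}")
```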
Very rough toy example.
Say I’ve started a project in which I can definitely see 5 days’ worth of work. I estimate there’ll be some unexpected work in there somewhere, maybe another day, so I estimate 6 days.
I complete day one, but I’ve found another day’s work. When should I estimate completion now? Taking the outside view, finishing in 6 more days (on day 7) is too optimistic.
Implicit in my original estimate was a “rate of finding new work” of about 0.2 days per day. But now I have more data, so I should update that 0.2 figure. Let’s see: 0.2 is my prior, so I should build a model for “rate of finding new work” and figure out what the correct Bayesian update is … screw it, let’s assume I won’t find any more work today and estimate the rate by Laplace’s rule of succession. My updated rate of finding new work is 0.5. Hmmm, that’s pretty high; the new work I find is itself going to generate new work, so I’d better sum the geometric series … 5 known days’ work plus 5 more unknown, so I should finish in 10 days (i.e. on day 11).
I complete day 2 and find another day’s work! Crank the handle around: I should finish in 15 days (i.e. on day 17).
… etc …
If this state of affairs continues, my expected total amount of work grows really fast, and it won’t be long before it becomes clear that the project is not profitable.
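The crank can be turned mechanically. Here’s a sketch that reproduces the numbers above; the rate formula is reverse-engineered from the quoted figures (0.5 after day 1, 2/3 after day 2), so treat it as one possible reading of the rule-of-succession step, not the exact calculation intended:

```python
# Toy reproduction of the estimate-updating loop above.

KNOWN_BACKLOG = 5  # each day completes one day of work but uncovers another,
                   # so the visible backlog stays at 5 days

def estimated_completion_day(days_worked: int) -> float:
    rate = days_worked / (days_worked + 1)  # updated rate of finding new work
    multiplier = 1 / (1 - rate)             # geometric series 1 + r + r^2 + ...
    return days_worked + KNOWN_BACKLOG * multiplier

for day in (1, 2, 3):
    print(f"after day {day}: finish on day {estimated_completion_day(day):.0f}")
# after day 1: finish on day 11
# after day 2: finish on day 17
# after day 3: finish on day 23
```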
Contrast this with: I can see 5 days of work, but experience tells me that the total work is about 15 days. The first couple of days I turn up additional work, but I don’t start to get worried until around day 3.
Assume it is. You continue, and your situation continues to get worse. Now what?
Why do I have a feeling you’re playing a “Yes, but...” game with a predetermined conclusion that you want us to reach?
And, by the way, if your workload is “getting adjusted” you’re not dealing with updating probabilities about uncaring Nature, but you’re in a game-theoretic situation which requires an entirely different line of analysis.
Because I’m playing a “Yes, but...” game with you.
From the summary: “There are game-theoretic ways of extracting value from those who follow a strict policy of avoiding engaging in the Sunk Cost Fallacy which happen all the time in IT”.
That’s exactly what this post is about: the introduction was intended to illustrate what that situation -feels like-. Seeing the Planning Fallacy in that situation makes you -more- vulnerable to this kind of mugging; you keep doing what you were doing, keep getting mugged, and each time assume you’re the one at fault. I have seen people try to gaslight coworkers into coming back for another round of mugging. No exaggeration: a phrase that gets bandied around in my company now comes from one of those attempts, “The requirements haven’t changed, your understanding of the requirements changed”, said after a database we were depositing data into had columns removed, added, and renamed for the umpteenth time.
Would it clarify things if I changed the second part of the title to “Sunk Cost Mugging”?
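A toy rendition of that mugging, entirely my own sketch: an agent with a strict “ignore sunk costs” policy re-decides each round on remaining cost alone, so a counterparty who can keep adding “one more day” of work extracts effort far beyond the project’s worth.

```python
# Toy "Sunk Cost Mugging" (my sketch, not from the post). The agent
# ignores sunk costs and compares the payoff only against the work
# still visible, so the decision to continue passes every round while
# the counterparty keeps moving the goalposts.

PROJECT_VALUE = 10  # payoff on completion, in day-equivalents
total_spent = 0

for round_number in range(25):
    remaining = 1                  # always looks like just one more day
    if PROJECT_VALUE > remaining:  # forward-looking check: passes every time
        total_spent += remaining   # do the day of work...
        # ...and the counterparty "adjusts" the requirements again
    else:
        break

print(f"spent {total_spent} days chasing a {PROJECT_VALUE}-day payoff")
# spent 25 days chasing a 10-day payoff -- and counting
```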
You’re strongly provoking category confusion in this subthread.
In game-theoretic scenarios where the other party can change your payoffs (or the rules of the game), notions like the Sunk Cost Fallacy are not operational; it’s the wrong approach to introduce them into the analysis. Of course it can be gamed, that’s a pretty obvious observation. It’s like trying to run regressions in Milton Friedman’s thermostat situation.
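For readers who haven’t met the thermostat parable, a quick simulation of the point (my construction, not anyone’s in this thread): when a controller perfectly offsets disturbances, regressing the controlled variable on the control input shows nothing, even though the input is doing all the causal work.

```python
import random

# Friedman's thermostat, simulated: fuel exactly cancels the weather,
# so indoor temperature stays flat and a regression of temperature on
# fuel finds a slope of ~0 despite fuel doing all the causal work.
random.seed(1)

weather = [random.gauss(0, 5) for _ in range(1000)]      # disturbance
fuel = [-w for w in weather]                             # perfect control
temp = [20 + w + f + random.gauss(0, 0.1)                # held near 20
        for w, f in zip(weather, fuel)]

mean_f = sum(fuel) / len(fuel)
mean_t = sum(temp) / len(temp)
cov = sum((f - mean_f) * (t - mean_t) for f, t in zip(fuel, temp))
var = sum((f - mean_f) ** 2 for f in fuel)
print(f"slope of temperature on fuel: {cov / var:.4f}")  # ~ 0.0000
```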
There are about a dozen ways of interpreting this statement. I’ll assume you mean that I’m causing you category confusion? The post is designed to confuse.
Then the Sunk Cost Fallacy is never operational in the real world, because there are always parties which can change the payoffs.
Yes, but :-P
It’s entertaining “mugging” the people who keep insisting the issue is with the calculations, rather than the game they’re inadvertently playing. Okay, that’s another round of mugging for you.