OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation?
Yes. Three cases:
First, the trivial case: You have no choice about whether or not to continue, and there are no alternatives.
Second, the slightly less trivial case: Every slippage is entirely unrelated. The project wasn't poorly scheduled and was given adequate room for above-average slippage, but the number of things that have gone wrong is -far- above average. (We should expect a minority of projects to fit this description, but over the course of an IT career, everybody should encounter at least one such project.)
Third, the mugging case: The slippages are being introduced by another party that is calibrating what they’re asking for to ensure you agree.
The mugging case is actually the most interesting to me, because the company I've worked for has been mugged in this fashion and has developed anti-mugging policies. Ours are simply to refuse projects liable to this kind of mugging, e.g., payment-on-delivery fixed-cost projects. There are also reputational solutions, as with dollar auctions: develop a reputation for -not- ignoring sunk costs, and you become a less desirable target for such mugging attempts.
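A minimal simulation may make the mugging dynamic concrete (a sketch with invented numbers, not a model of any particular project): the per-round decision correctly ignores sunk costs, yet because each scope addition is calibrated to stay just under the fee, the marginal-value rule never walks away and total expenditure far exceeds the payoff.

```python
# Sketch of the payment-on-delivery "mugging": the client keeps adding scope,
# calibrated so that finishing always looks better than walking away unpaid.
# All numbers are invented for illustration.

PAYMENT = 100.0        # fixed fee, paid only on delivery
work_done = 0.0        # cost already sunk (correctly ignored each round)
work_left = 60.0       # honest estimate of the remaining cost

for rnd in range(1, 9):
    # Per-round decision ignores sunk costs: finish iff remaining work < fee.
    if work_left >= PAYMENT:
        print(f"round {rnd}: walk away, eating {work_done:.0f} in sunk cost")
        break
    work_done += work_left          # do the remaining work...
    work_left = 0.8 * PAYMENT       # ...and new scope appears, just under the fee
    print(f"round {rnd}: sunk {work_done:.0f} so far, new demands cost {work_left:.0f}")
else:
    print(f"never walked away: {work_done:.0f} spent chasing a {PAYMENT:.0f} fee")
```

This is also why the reputation defence works: if the client expects you to walk away at the first calibrated addition, the mugging never pays for them in the first place.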
Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn’t really matter whether or not you estimate that it’s worth continuing.
Slightly less trivial case: If you observe a lot more apparently-unrelated slippages than you expected, then they aren't truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren't very good at it for this project). That would lead you to increase your subsequent time estimates. (A toy model of this updating follows after this comment.)
Mugging: as with the “slightly less trivial” case but more so, I don’t think this is actually an example, because once you start to suspect you’re getting mugged your time estimates should increase dramatically.
(There may be constraints that forbid you to consider the possibility that you’re getting mugged, or at least to behave as if you are considering it. In that case, you are being forced to choose irrationally, and I don’t think this situation is well modelled by treating it as one where you are choosing rationally and your estimates really aren’t increasing.)
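To spell out the updating step in the "slightly less trivial" case (a toy model; the probabilities are invented): treat "I estimated this project well" and "I estimated it badly" as competing hypotheses, with each slippage as evidence between them. Even if the slippages are causally unrelated, they are correlated through what they say about your own estimating.

```python
# Two hypotheses about your own estimating, updated as slippages arrive.
# The probabilities are invented for illustration.

p_miscal = 0.10          # prior: 10% chance my estimates for this project are bad
P_SLIP_IF_GOOD = 0.2     # chance a given milestone slips if I estimate well
P_SLIP_IF_BAD = 0.6      # chance it slips if I estimate badly

for milestone in range(1, 7):
    # Suppose every milestone slips; apply Bayes' rule to each observation.
    like_bad, like_good = P_SLIP_IF_BAD, P_SLIP_IF_GOOD
    p_miscal = (like_bad * p_miscal) / (
        like_bad * p_miscal + like_good * (1 - p_miscal))

    # Predicted slippage rate for the NEXT milestone mixes both hypotheses,
    # so repeated "unrelated" slips should stretch your future estimates.
    p_next = p_miscal * P_SLIP_IF_BAD + (1 - p_miscal) * P_SLIP_IF_GOOD
    print(f"after slip {milestone}: P(bad estimator) = {p_miscal:.2f}, "
          f"P(next milestone slips) = {p_next:.2f}")
```

With a likelihood ratio of 3 per slip, the posterior on "bad estimator" climbs from 0.10 to about 0.99 after six slips, and the predicted per-milestone slippage rate rises with it.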
> Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn’t really matter whether or not you estimate that it’s worth continuing.
Not irrelevant from a prediction perspective.
> If you observe a lot more apparently-unrelated slippages than you expected, then they aren’t truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren’t very good at it for this project). That would lead you to increase your subsequent time estimates.
If this happens consistently across your projects, yes. But if only 5% of your projects fall outside their 95% confidence intervals, then your estimates were good: that's exactly the miss rate a well-calibrated estimator should produce.
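A concrete version of that calibration check (a sketch over invented data, assuming you log interval estimates alongside actuals; with only a handful of projects the test is noisy, but the principle is the same):

```python
# Check calibration of 95%-interval estimates against actual durations (weeks).
# The data below is invented for illustration.

projects = [
    # (estimated_low, estimated_high, actual)
    (4, 8, 6), (10, 16, 15), (2, 5, 4), (6, 12, 11),
    (3, 7, 9),   # a genuine outlier: fell outside its interval
    (8, 14, 12), (5, 9, 8), (12, 20, 18), (1, 3, 2), (7, 13, 10),
]

misses = sum(1 for low, high, actual in projects if not (low <= actual <= high))
miss_rate = misses / len(projects)

# A well-calibrated 95% interval should miss about 5% of the time.
# A much higher rate suggests the estimates, not the projects, are the problem.
print(f"missed {misses}/{len(projects)} intervals ({miss_rate:.0%}; target ~5%)")
```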
> as with the “slightly less trivial” case but more so, I don’t think this is actually an example, because once you start to suspect you’re getting mugged your time estimates should increase dramatically.
That assumes you realize you are being mugged. As one example, we had a client (since fired) who added increasingly complex-to-calculate database fields as the project went on, with each new set of sample files. (They were developing a system concurrently with ours to process our output, and were basically dumping the work they didn't want to do on us.) We caught on that we were getting mugged when they deleted and renamed some columns; until then, we operated on an assumption of good faith, but the project just never went anywhere.