I think the objections raised by (e.g.) Unknowns, Lumifer and shminux are basically correct but they aren’t (I think) phrased so that they exactly match the scenario OrphanWilde is proposing. Let me try to reduce the impedance mismatch a little.
OrphanWilde’s scenario—where your schedule keeps slipping but even with perfectly rational updating continuing always looks like a win—is possible. But: it’s really weird and I don’t think it actually occurs in real life; that is, in reality, the scenarios that most resemble OrphanWilde’s are ones in which the updating isn’t perfectly rational and you would do well to cut your losses and reflect on your cognitive errors.
What would a real OrphanWilde scenario look like? Something like this.
You begin a project to (let’s say) build a bridge. You think it should be done in six months.
After four months of work, it’s clear that you underestimated and it’s now going to take longer. Your new estimate is another six months.
After another four months, it’s now looking like it will take only three months more—so you’re still going to be late, but not very. You no longer trust your prediction abilities, though (you were wrong the last two times), so you adjust your estimate: another six months.
After another four months, you’ve slipped further. Your error bars are getting large now, but you get a message from God telling you it’ll definitely be done in another six months.
After another four months, you’ve lost your faith and now there’s probably nothing that could (rationally) convince you to be confident of completion in 6 months. But now you get information indicating that completing the bridge is more valuable than you’d thought. So even though it’s likely to be 9 months now, it’s still worth it because extra traffic from the new stadium being built on the other side makes the bridge more important.
After another six months, you’re wearily conceding that you’ve got very little idea how long the bridge is going to take to complete. Maybe a year? But now they’re planning a whole new town on the other side of the bridge and you really need it.
After another nine months, it seems like it might be better just to tell the townspeople to swim if they want to get across. But now you’re receiving credible terrorist threats saying that if you cancel the bridge project the Bad Guys are going to blow up half the city. Better carry on, I guess...
What we need here is constant escalation of evidence for timely completion (despite the contrary evidence of the slippage so far) and/or of expected value of completing the project even if it’s really late—perhaps, after enough slippage, this needs to be escalating evidence of the value of pursuing the project even if it’s never finished. One can keep that up for a while, but you can see how the escalation had to get more and more extreme.
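To make the decision rule behind this concrete, here is a minimal sketch in Python (all numbers are made-up illustrations, not figures from the bridge story): at each review point sunk costs are ignored and the project continues only if the expected value of completion still exceeds the expected remaining cost, so once the remaining-cost estimate stops shrinking, only an escalating value estimate can keep the comparison coming out in favour of continuing.

```python
# Sketch of the per-review decision rule: ignore sunk cost, compare the
# expected value of a finished bridge against the expected *remaining* cost.
# All numbers are illustrative assumptions, not figures from the story above.

def should_continue(expected_value_of_completion: float,
                    expected_remaining_cost: float,
                    value_of_best_alternative: float = 0.0) -> bool:
    """Rational continuation test at a single review point (sunk cost excluded)."""
    return (expected_value_of_completion - expected_remaining_cost
            > value_of_best_alternative)

# A run of reviews in which the remaining-cost estimate keeps growing after
# each slippage.  Continuing only stays attractive if the value estimate
# escalates at least as fast (the stadium, the new town, the threats...).
reviews = [
    # (expected remaining cost, expected value of completion)
    (6, 10),    # original plan: 6 units of cost, 10 of value
    (7, 10),    # first slippage: still worth it
    (9, 10),    # second slippage: barely worth it
    (12, 10),   # third slippage: not worth it *unless* the value is revised up
    (12, 20),   # ...which is exactly the escalation the story requires
]

for cost, value in reviews:
    print(f"remaining cost {cost:>2}, value {value:>2} -> continue: "
          f"{should_continue(value, cost)}")
```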
OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation? If so, how? If not, isn’t this going to be rare enough that we can safely ignore it in favour of the much commoner scenarios where the project keeps looking worthwhile because we’re not looking at it rationally?
OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation?
Yes. Three cases:
First, the trivial case: You have no choice about whether or not to continue, and there are no alternatives.
Second, the slightly less trivial case: Every slippage is entirely unrelated. The project wasn't poorly scheduled and was given adequate room for above-average slippage, but the number of things that have gone wrong is -far- above average. (We should expect only a minority of projects to fit this description, but over the course of an IT career everybody should encounter at least one such project.)
Third, the mugging case: The slippages are being introduced by another party that is calibrating what they’re asking for to ensure you agree.
The mugging case is actually the most interesting to me, because the company I've worked for has been mugged in this fashion and has developed anti-mugging policies. Ours are simply to refuse projects liable to this kind of mugging, e.g. payment-on-delivery fixed-cost projects. There are also reputation solutions, as with dollar auctions: develop a reputation for -not- ignoring sunk costs, and you become a less desirable target for such mugging attempts.
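A minimal sketch of what one such policy could look like (the cap and the notion of "scope units" are hypothetical, not our actual written policy): precommit, before the project starts, to a limit on accepted scope additions, so that an adversary who calibrates each individual request to look acceptable still hits a hard stop.

```python
# Hypothetical precommitment rule against "mugging" by incremental scope creep.
# The cap is fixed before the project starts; each individual request may look
# cheap, but it is the cumulative total that the rule tracks.

class ScopeCreepGuard:
    def __init__(self, cap: float):
        self.cap = cap          # maximum extra scope accepted over the whole project
        self.accepted = 0.0     # cumulative extra scope accepted so far

    def consider(self, extra_scope: float) -> bool:
        """Accept a change request only if the precommitted cap is not exceeded."""
        if self.accepted + extra_scope > self.cap:
            return False        # refuse, no matter how small this request looks
        self.accepted += extra_scope
        return True

guard = ScopeCreepGuard(cap=10.0)
for request in [2.0, 3.0, 2.5, 4.0, 1.0]:   # each request looks individually modest
    print(request, "->", "accepted" if guard.consider(request) else "refused")
```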
Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn’t really matter whether or not you estimate that it’s worth continuing.
Slightly less trivial case: If you observe a lot more apparently-unrelated slippages than you expected, then they aren’t truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren’t very good at it for this project). That would lead you to increase your subsequent time estimates.
Mugging: as with the “slightly less trivial” case but more so, I don’t think this is actually an example, because once you start to suspect you’re getting mugged your time estimates should increase dramatically.
(There may be constraints that forbid you to consider the possibility that you’re getting mugged, or at least to behave as if you are considering it. In that case, you are being forced to choose irrationally, and I don’t think this situation is well modelled by treating it as one where you are choosing rationally and your estimates really aren’t increasing.)
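Returning to the "slightly less trivial" case above: one way to make the "update on your own estimating ability" step concrete is a toy Bayesian model (a sketch with an assumed discrete prior and a deliberately crude likelihood; all numbers are illustrative). Treat the actual duration of each phase as your estimate times an unknown bias factor; after each observed slippage, probability mass shifts toward larger bias factors, which mechanically inflates the next estimate.

```python
# Toy Bayesian update of "how badly do I underestimate?"
# Model: actual duration = estimate * bias, with bias one of a few discrete values.
# The prior and the likelihood below are illustrative assumptions.

bias_values = [1.0, 1.5, 2.0, 3.0]          # candidate bias factors
prior       = [0.4, 0.3, 0.2, 0.1]          # initial belief about our own bias

def update(prior, observed_ratio, width=0.5):
    """Reweight each bias hypothesis by how well it explains the actual/estimated ratio."""
    likelihoods = [max(width - abs(observed_ratio - b), 1e-6) for b in bias_values]
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = prior
for observed in [1.6, 1.8, 2.1]:            # three phases, each slipping worse
    belief = update(belief, observed)
    mean_bias = sum(b * p for b, p in zip(bias_values, belief))
    print(f"after observing ratio {observed}: mean bias now {mean_bias:.2f}")
# Subsequent estimates should be multiplied by the (growing) mean bias.
```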
Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn’t really matter whether or not you estimate that it’s worth continuing.
Not irrelevant from a prediction perspective.
If you observe a lot more apparently-unrelated slippages than you expected, then they aren’t truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren’t very good at it for this project). That would lead you to increase your subsequent time estimates.
If this happens consistently across projects, yes. But if only 5% of your projects fall outside the range implied by your 95% confidence intervals, then your estimates were good.
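For what it's worth, here is a quick sketch of that check (the project counts are made up): if the intervals really were 95% intervals, the number of projects that blow through them should look like a draw from Binomial(n, 0.05), so an unusually large count is evidence of overconfidence rather than bad luck.

```python
# Were my "95% confidence" schedule intervals actually 95% intervals?
# If so, the count of projects that blew through them is ~ Binomial(n, 0.05).
# The counts below are made up for illustration.
from math import comb

def prob_at_least(k: int, n: int, p: float = 0.05) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more blown intervals by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_projects, blown = 20, 1
print(f"{blown}/{n_projects} blown: P(>= {blown} by luck) = {prob_at_least(blown, n_projects):.2f}")
# ~0.64: entirely consistent with well-calibrated estimates.

n_projects, blown = 20, 5
print(f"{blown}/{n_projects} blown: P(>= {blown} by luck) = {prob_at_least(blown, n_projects):.4f}")
# ~0.0026: strong evidence the intervals were never really 95% intervals.
```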
as with the “slightly less trivial” case but more so, I don’t think this is actually an example, because once you start to suspect you’re getting mugged your time estimates should increase dramatically.
That assumes you realize you are being mugged. As one example, we had a client (since fired) who added increasingly complex-to-calculate database fields with each new set of sample files as the project went on (they were developing a system concurrently with ours to process our output, and were basically dumping the work they didn't want to do onto us). We only caught on that we were getting mugged when they deleted and renamed some columns; until then we operated on an assumption of good faith, but the project just never went anywhere.
The ‘even if never finished’ part resembles childrearing:)
A nice example of a task whose value (1) is partly attached to the work rather than its goal and (2) doesn’t depend on completing anything.