It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.
Here is an application for consideration. I’m not a software developer, but I get to specify the requirements for software that a team develops. (I’d be the “business owner” or “product owner”, depending on the lingo.) The agile+scrum approach to software development notionally assigns points to each “story” (roughly, a task that a software user wants to accomplish). The team assigns the points ahead of time, so it is a forecast of how much effort will be required, and notionally these points can be used for forecasting. The problem I have encountered is that the software developers don’t really see the forecasting benefit, so they don’t embrace it fully. For example, in my experience, they don’t (1) spend much of their “retrospectives” (internal meetings after finishing software) on why forecasts were wrong, or (2) assign points to every single story that is written, which would allow others to use their knowledge. They are satisfied if their ability to forecast is good enough to tell them how much work to take on in the current “sprint” (a set period, usually 2 or 3 weeks, during which they work on the stories pulled into that sprint), and that requires only an approximation of the work that might go into the current sprint.
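To make the forecasting use concrete, here is a minimal sketch, with invented numbers, of how points plus historical velocity could in principle be turned into a completion forecast (nothing here comes from a real team):

```python
# Illustrative only: a naive points-and-velocity forecast with made-up numbers.

past_sprint_points = [21, 18, 25, 20]   # points completed in the last few sprints
remaining_backlog_points = 130          # points on stories not yet done
sprint_length_weeks = 2

# Average historical velocity, carried forward unchanged.
velocity = sum(past_sprint_points) / len(past_sprint_points)
sprints_needed = remaining_backlog_points / velocity

print(f"average velocity: {velocity:.1f} points per sprint")
print(f"forecast: about {sprints_needed:.1f} sprints "
      f"(~{sprints_needed * sprint_length_weeks:.0f} weeks) to clear the backlog")
```

Even this crude version only works if every story gets points and someone checks the forecasts against what actually happened, which is exactly the part I don’t see the teams doing.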
Some teams use the points as a performance measure, so that they feel better if the number of points completed per sprint (which they call “velocity”) increases over time. Making increased velocity into a goal gives the developers an incentive for point inflation, and I think that use makes the approach less valuable for forecasting.
I believe there are a number of software developers in these forums. What is the inside perspective?
My inside perspective is that story points aren’t an estimate of how much time a developer expects to spend on a story (we have separate time estimates), but an effort to quantify the composite difficulty of the story into some sort of “size” for non-developers (or devs not familiar with the project). As such it’s more of an exercise in expectation management than forecasting.
This is something I have tried explaining multiple times, but I can’t really say that I understand the point. It’s harder, so it takes longer, right? My response is that it is a combination of the time to complete and the probability that the estimate is wrong and the work takes a lot longer. But it seems to me that it would be better to decompose those aspects. The benefit of putting it in one number is that it is easier to use for managing expectations. It’s like giving an estimate that is higher than your true estimate to account for risk: frequently you end up with spare time, which offsets the reputational impact of the occasional total miss. From a manager’s perspective, it looks a bit like padding the estimates systematically, to offset all the biases in the system toward only hearing the earliest possible time.
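To spell out that combination with invented numbers: collapsing the time-if-all-goes-well and the chance of a blowout into one figure is just an expected value, whereas decomposing keeps both pieces visible.

```python
# Hypothetical decomposition of one composite estimate; all figures are invented.

base_days = 3.0       # time if nothing goes wrong
overrun_prob = 0.25   # chance the estimate is badly wrong
overrun_days = 12.0   # time in the bad case

# Collapsed into a single number, the way a padded estimate does:
expected_days = (1 - overrun_prob) * base_days + overrun_prob * overrun_days
print(f"single composite estimate: {expected_days:.2f} days")  # 5.25 days

# Kept separate, which is what I'd prefer for forecasting:
print(f"base case {base_days:.0f} days, "
      f"with a {overrun_prob:.0%} chance of roughly {overrun_days:.0f} days")
```

The single number (about 5 days here) is good for managing expectations; the separated version tells you where the risk actually is.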
My inside perspective from using this system is that, at least the way we use it, it is not useful for forecasting. In each sprint, only around 40-50% of the tasks actually get finished. Most of the rest we carry over, and occasionally a few are deprioritized and removed.
The point values are not used very systematically. Some items aren’t assigned a point value at all. For those that are, the values do roughly correspond to effort on the order of ‘a day’, ‘a week’, or ‘a month’, but without much precision.
We certainly don’t use retrospectives to check whether our point predictions matched the amount of time actually taken. In fact, I think all but one or two of our retrospectives in the past 12 months ended up being cancelled, because they aren’t seen as very important.
We are probably not using this system to its fullest capacity, but our group isn’t dysfunctional or anything. The system seems to work pretty well as a management tool for us (but is not so useful for forecasting!).