Evangelizing calibrated-prediction tools to the Agile software community.
Why? I have several reasons for doing this. One is that some Agile discourse is tantalizingly close to (what looks to me like) Bayes-thinking; for instance, it makes frequent references to the “PDCA loop”. On the other hand, there is (to my taste) little enthusiasm for actually putting some of the theses bandied about in the community to the test of experimental evidence.
So I tried for myself the exercises of measuring my calibration on known but uncertain questions, and of venturing predictions with attached probabilities about the real world and checking them against reality. I found these exercises both unsettling and ultimately useful, and I’ve now become interested in sharing the reasoning behind those conclusions with others.
Another reason is that this topic is closely related to task estimation, which has been a constant source of pain for software development professionals.
Finally, my earlier efforts at collecting empirical data on Agile projects were hampered by low participation. Trying to work around that, I reasoned that it was hard to get people to participate in something that gave them no discernible benefit. So I hit on the idea of something like a “prediction game” to give people actionable feedback on how accurately calibrated they were with respect to the future evolution of their software projects; this would be a direct benefit to them, and would also generate data (when arbitrating predictions) as a side effect.
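To make the kind of feedback such a game could give a little more concrete, here is a minimal sketch in Python of how calibration might be reported from a record of arbitrated predictions. The data layout, the probability buckets, and the sample numbers are purely my own assumptions for illustration, not a description of any existing tool: each record is just a stated probability plus the eventual true/false outcome.

```python
# Minimal calibration-feedback sketch (illustrative assumptions only):
# each record is (stated probability that the event will happen, actual outcome).
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False),   # "90% sure" claims
    (0.6, True), (0.6, False), (0.6, False),  # "60% sure" claims
    (0.3, False), (0.3, True), (0.3, False),  # "30% sure" claims
]

def calibration_report(records, bucket_width=0.1):
    """Group predictions by stated probability and compare with observed frequency."""
    buckets = defaultdict(list)
    for prob, outcome in records:
        # Snap the stated probability to the nearest bucket (e.g. 0.87 -> 0.9).
        bucket = round(prob / bucket_width) * bucket_width
        buckets[bucket].append(outcome)
    for bucket in sorted(buckets):
        outcomes = buckets[bucket]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {bucket:.0%}: {observed:.0%} came true ({len(outcomes)} predictions)")

def brier_score(records):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    return sum((prob - outcome) ** 2 for prob, outcome in records) / len(records)

calibration_report(predictions)
print(f"Brier score: {brier_score(predictions):.3f}")
```

On the sample data above, the report would show that events the (hypothetical) forecaster rated at 90% came true only two times out of three, which is exactly the kind of concrete, slightly uncomfortable feedback the game is meant to produce.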
I now have a medium-priority project, conducted more or less according to Lean Startup principles, to develop this idea into a product. At the current stage I’m trying to gauge how much interest there is in predictions in general within the software community, and which presentations of the topic attract people and which turn them off.