This is not true? It seems to me that iterative agile approaches are much more popular right now, and advise explicitly against this kind of waterfall process.
That’s great to know. I’m learning to code but am getting all my advice off the internet—never worked in the industry. Guess it’s been some bad advice I’ve been reading!
At the beginning of the project, ask yourself this:
- Do I know all relevant facts about the planned project?
- What is the chance that something is wrong, because the customer didn’t think about their needs sufficiently, or forgot something, or there was a miscommunication with the customer?
- What is the chance that during the project either the customer will change his mind, or the external situation will change so that the customer will need something different than originally planned?
If you are confident that you know all you need to know, and the situation is unlikely to change, then I would agree: a week of planning can save a month of coding.
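To make that trade-off concrete, here is a toy back-of-the-envelope model. Both formulas and every number in them are illustrative assumptions of mine, not anything taken from real project data:

```python
# Toy expected-cost model (in developer-days) for "plan first" vs
# "just iterate". All numbers are illustrative assumptions.

def plan_first(plan_days, build_days, expected_changes, wasted_fraction):
    # Upfront design makes the build short, but each requirements
    # change invalidates a large fraction of the planned work.
    return plan_days + build_days * (1 + expected_changes * wasted_fraction)

def iterate(build_days, expected_changes, wasted_fraction):
    # No upfront design costs more raw build time, but a change only
    # invalidates the current small slice of work.
    return build_days * (1 + expected_changes * wasted_fraction)

# Stable requirements: a week of planning beats a month of extra coding.
print(plan_first(5, 20, expected_changes=0.2, wasted_fraction=0.5))  # 27.0
print(iterate(40, expected_changes=0.2, wasted_fraction=0.1))        # 40.8

# Volatile requirements: the plan is mostly wasted work.
print(plan_first(5, 20, expected_changes=3, wasted_fraction=0.8))    # 73.0
print(iterate(40, expected_changes=3, wasted_fraction=0.1))          # 52.0
```

The only point is that the same week of planning is a good bet or a bad one depending on how often requirements churn.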
The problem is that this type of situation happens frequently at school, but rarely in real life. In 15 years of my career, it has happened to me twice. Those were the best projects of my life: I did some research and planning first, then wrote the software according to the plan. Relaxed pace, low stress. Those were the good times.
Unfortunately, most situations are different. Sometimes there are good reasons. Most of the time, in my opinion, someone just didn’t do their job properly, but it’s your problem anyway, because as a software developer you are at the end of the chain. You have two options: start your own company, or develop proper learned helplessness and accept the lack of analysis and planning as a fact of life.
Yes, this opinion is controversial. What happened, in my opinion, is that at first, there were good insights like “mistakes happen, circumstances change, we should program in a way that allows us to flexibly adapt”. However, when this became common knowledge, companies started using it as an excuse to not even try. Why bother doing analysis, if you can just randomly throw facts at programmers, and they are supposed to flexibly adapt? What was originally meant as a way to avoid disasters became the new normal.
Now people talk a lot about being “agile”. But if you study some agile methodologies, and then look at what companies are actually doing, you will notice that they usually choose the subset that is most convenient for management. The parts they choose are “no big analysis” and “daily meetings”. The parts they ignore are “no deadlines” and “the team decides its own speed”. (With automated testing they usually go halfway: it is important enough that there is no need to hire specialized testers, but not important enough to allocate part of the budget to actually doing it. So you end up with 5% code coverage and no real testing, and if something breaks in production, it’s the developer’s fault. Or no one’s fault.) This is how you get the usual pseudo-agile, where no one does proper analysis at the beginning, but there are still deadlines by which the yet-unknown functionality must be completed. The team is then free to choose which features to implement in which week, under the assumption that after six months they will have implemented all of them, including the ones even management doesn’t know about yet.
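(For contrast, “actually doing it” need not be expensive. Here is a minimal sketch of enforcing a coverage floor with pytest and pytest-cov, assuming a Python codebase; the package mypkg, the function under test, and the 80% threshold are all hypothetical:)

```python
# tests/test_invoice.py -- minimal sketch. mypkg.invoice.total() is a
# hypothetical function that sums an invoice's line-item prices.
from mypkg.invoice import total

def test_total_sums_line_items():
    assert total([10.0, 2.5]) == 12.5

def test_total_of_empty_invoice_is_zero():
    assert total([]) == 0.0

# Run the suite with a coverage floor so CI fails when coverage
# regresses, instead of letting it quietly rot down to 5%:
#   pytest --cov=mypkg --cov-fail-under=80
```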
Yep, I am quite burned out. Seen too much bullshit to believe the usual excuses.
Anyway, even on a shorter scale it makes sense to plan first and code later. If your “sprint” takes two weeks, it still makes sense to spend the first day thinking carefully about what you are going to do. But again, management usually prefers to see your fingers moving. Thinking without typing may result in greater productivity, but it often creates a bad impression. And where productivity is hard to measure, impressions are everything.
I wouldn’t say it’s bad advice; it depends heavily on the context of the work. In an environment where you have some combination of:
1) a tight feedback loop with the relevant stakeholder (ideally the individual(s) who are going to be using the end product),
2) the product itself is amenable to quick iteration (i.e. composed of many smaller features, ideally with a focus on the presentation),
3) the requirements aren’t clear (for example, the client has a mostly intuitive sense of how certain features should work; perhaps there are many implicit business rules that aren’t formally written down anywhere but will come up as obviously “oh, it’s missing the ability to do [x]” as the product gains capabilities)
...then avoiding significant investment in upfront design and adopting an iterative approach will very often save you from spending a bunch of time designing something that doesn’t fit your stakeholder’s needs.
On the other hand, suppose those conditions don’t hold: you’re mostly working on features or products that aren’t easily broken down into smaller components that can be individually released or demoed to a stakeholder, and you have fairly clear upfront requirements that don’t often change (or a product manager you can work with to iterate on the requirements until they’re sufficiently well-detailed). Then doing upfront design can often save you a lot of headache wandering down dark alleys of “oops, we totally didn’t account for how we’d incorporate this niche but relatively predictable use-case, so we optimized our design in ways that make it very difficult to add without redoing a lot of work”.
Having some experience with both, I’ll say that the second seems better, in the sense that there are fewer meetings and interruptions, and the work is both faster and more pleasant since there’s less context-switching, conditional on the planning and product design being competent enough to come up with requirements that won’t change too often. The downsides when it goes wrong do seem larger (throwing away three months of work feels a lot worse than throwing away two weeks), but ultimately that degenerates into a question of mitigating tail risk vs. optimizing for upside, and I have yet to lose three months of work (though I did manage to lose almost two consecutive months working at an agile shop prior to this, which was part of a broader pattern that motivated my departure). I would recommend side-stepping that trade-off by finding a place that does the “planning” thing well; at that point, whether the team you’re on is shipping small features every week or two or working on larger projects that span months is more a question of domain than of effective strategy.
That was my experience at the last corporation I worked for: they were traditionally waterfall and trying to move towards agile. As T3t notes below, it’s best not to think of these things as either-or (and I suspect you are not suggesting such).
A couple of related thoughts come to mind, though. One is clearly the cost of the bug and the effort to fix it; I don’t think all environments make it easy to update the code base or its modules. Additionally, in some settings having something go wrong for 30 minutes might be a minor inconvenience (I’ll do something else this morning and come back this afternoon), while in other cases you might be talking about billions in damages/losses, or even lives lost.
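As a toy illustration of that asymmetry (all figures below are made up for the example):

```python
# Same failure probability, wildly different stakes: the expected loss,
# and hence the justified process investment, differs by many orders
# of magnitude. All figures are made-up assumptions.

def expected_loss(p_failure: float, cost_per_failure: float) -> float:
    return p_failure * cost_per_failure

# Internal dashboard down for 30 minutes: a minor inconvenience.
print(expected_loss(0.10, 500))             # 50.0
# Safety- or money-critical system with the same failure probability.
print(expected_loss(0.10, 2_000_000_000))   # 200000000.0
```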
There is also something of a culture aspect here. Organizations and staff who lived and breathed waterfall have a lot of business processes, procedures, and habits of thought in place that don’t really support agile, and vice versa.
However, I think the approach in the OP is fully compatible with either waterfall or agile development, though it might actually be more valuable to the former. It might also generalize pretty well to things like, say, vacation planning?
Yes, the value of half-baked kernels and quick iteration loops generalizes to most “projects”, including vacation planning :)