Test-driven development is a common subject of affective death spirals, and this post seems to be the product of one. In general, programmers ought to write more unit tests than they usually do, but not everything should be unit tested, and unit testing alone is absolutely not sufficient to ensure good code or good abstractions.
Writing tests is not free; it takes time, and while it often pays for itself, there are plenty of scenarios where it doesn’t. Time is a limited resource which is also spent on thinking about abstractions, coding, refactoring, testing by hand, documenting, and many other worthy activities. The amount of time required to write a unit test depends on the software environment and the specific thing being tested, and it is not guaranteed to be reasonable. Things that involve interactions crossing out of your program’s domain, like user interfaces, tend to be hard to test automatically. The benefits of a test also vary, depending on the thing being tested. Trivially simple code is not worth testing; for example, it would be wrong to test Java getters and setters except as an implicit part of a larger test.
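To make the getter/setter point concrete, here is the kind of test I mean; this is a minimal sketch, with JUnit 4 assumed and the Person class invented purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PersonTest {
        // Hypothetical class used only for illustration.
        static class Person {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        // This test can only fail if the trivial accessors are typed wrong;
        // it restates the code line for line, buying almost no risk
        // reduction for the time spent writing and maintaining it.
        @Test
        public void setterStoresAndGetterReturnsName() {
            Person p = new Person();
            p.setName("Alice");
            assertEquals("Alice", p.getName());
        }
    }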
All of the above is true.
Also true is that most of the specific cases where you think you should skip testing first are errors, and you should have started with the test.
(Extensive unit testing without a good mocking and stubbing framework is hard. Testing around external interfaces is also hard (but not impossible).)
(“most” != “all”; and jim, your beard may be longer than mine, in which case you are assumed to be an exception to the vast over-generalisation I commit above.)
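On the mocking point: when no framework is handy, a hand-rolled stub behind a small interface often gets you most of the way. A minimal sketch, with JUnit 4 assumed and PaymentGateway/Checkout invented for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import static org.junit.Assert.fail;

    public class CheckoutTest {
        // Hypothetical external interface, standing in for something like a
        // remote payment service that a unit test cannot call for real.
        interface PaymentGateway {
            boolean charge(String account, long cents);
        }

        // The logic under test takes the gateway as a constructor argument,
        // so a test can substitute a fake for the real network client.
        static class Checkout {
            private final PaymentGateway gateway;
            Checkout(PaymentGateway gateway) { this.gateway = gateway; }
            boolean placeOrder(String account, long cents) {
                return cents > 0 && gateway.charge(account, cents);
            }
        }

        @Test
        public void successfulChargeCompletesOrder() {
            // Hand-rolled stub: always approve the charge.
            PaymentGateway alwaysApprove = (account, cents) -> true;
            assertTrue(new Checkout(alwaysApprove).placeOrder("acct-1", 500));
        }

        @Test
        public void nonPositiveAmountNeverReachesGateway() {
            // Hand-rolled mock: fails the test if the external call happens.
            PaymentGateway mustNotBeCalled = (account, cents) -> {
                fail("gateway should not be called for a zero charge");
                return false;
            };
            assertFalse(new Checkout(mustNotBeCalled).placeOrder("acct-1", 0));
        }
    }

The design choice doing the work is constructor injection: the test swaps the external interface for a fake without touching the logic under test.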
This seems like it ought to have some numbers attached. The project I’m currently working on is about 30% tests and test scaffolding by line count, and I think that’s about right for it. Another project I’m working on has a whole bunch of test data, driven by a small bit of code, for one particularly central and particularly hard algorithm, but no tests at all for the rest of it.
A 1:1 ratio of test code to app code is usually about right for highly testable languages like Ruby. The reason people don’t test much outside the Ruby world has less to do with testing and more to do with their language being so bad at it. Have you ever seen properly unit-tested C++ code or layout CSS?
CSS is hard to unit-test because nearly all the ways it can be messed up are detected by a human who says “Hey, this looks really ugly/hard to read/misorganized”, a category of problems that is generally hard to write automated tests for. I don’t think that’s a fault of the language, but of the application domain.
C++ is also hard to unit-test, but in that case I agree that it really is part of the language. I enjoy working with C++ and use it for some of my own projects, but if I’m being honest I have to admit that its near-total lack of reflectivity and its numerous odd potholes and tripwires make it much less convenient to do certain sorts of things with it, in-language automated testing being a prominent one of those.
I’m optimistic about Vala, an in-development C#/Java-ish language that compiles to GLib-using C and supports native (but language-mediated) access to C libraries, so you get all the performance and platform-specific benefits of working in C/C++, but with modern language features and a noticeable lack of C++’s slowly expanding layers of cruft.
Much of the benefit of systematic testing shows up much later, in the maintenance and reuse phases of the software development cycle. But only if maintaining the test code is given as high a priority and as much visibility as maintaining the product code.
One test-intensive discipline of software development is “Extreme Programming”, which adds pair programming, frequent refactoring, and short release cycles to TDD.
I agree with all you’ve said. I didn’t mean to imply in my article that TDD is a panacea, and perhaps I should put in a sentence or two describing that. Mostly the reason I didn’t list all the practical exceptions was that I was targeting my article at people with little or no programming experience (hence the lack of pseudocode or particularly realistic examples).
Okay, I’ve added some text about this to the paragraph starting with “Having a sincere desire...”
There’s a post or six in that. :)
One of the best examples of when TDD doesn’t apply is when you’re writing a version 0 that you expect to throw away, just to get a feel for what you should have done. When you do that, screw extensive test suites and elegance and maintainability; you just want to get something working as quickly as you can, so that you can get quickly to the part where you rewrite from scratch.
Often you can’t really know what the issues are until you’ve already run afoul of them. Exploratory programming is a way of getting to that point, fast. You can bring out the tests when you’ve got a project (or piece of a project) that you’re pretty sure you’ll be using for a while.
There are two problems with this idea. First, I’ve found TDD to be extraordinarily effective at helping break down a problem that I have no idea how to solve. That is, if I don’t even know what to sketch, I sometimes start with the tests. (Test-Driven Development By Example has some good examples of when/why you’d do that.)
Second: we can be really bad at guessing whether or not something will get thrown away. Rarely does version 0 really get thrown away, and so by the time you’ve built up a bunch of code, the odds that you’ll go back and write the tests are negligible.
That’s a choice. Some of us deliberately throw away the first draft for anything that matters, no exceptions.
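To make the “start with the tests” point a few comments up concrete, here is roughly what breaking down an unclear problem test-first looks like; the word-wrap problem, JUnit 4, and the wrap signature are all assumptions for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class WordWrapTest {
        // Step 1: the degenerate case pins down a signature before any
        // algorithm exists; this test is written first and fails first.
        @Test
        public void shortLineIsUnchanged() {
            assertEquals("word", wrap("word", 10));
        }

        // Step 2: the next failing test forces the first real design
        // decision: break at the last space before the limit.
        @Test
        public void breaksAtLastSpaceBeforeLimit() {
            assertEquals("hello\nworld", wrap("hello world", 6));
        }

        // Just enough implementation to pass the tests so far; further
        // failing tests (e.g. a long unbroken word) would grow it.
        static String wrap(String text, int width) {
            if (text.length() <= width) return text;
            int space = text.lastIndexOf(' ', width);
            return text.substring(0, space) + "\n"
                    + wrap(text.substring(space + 1), width);
        }
    }

Each new failing test forces one small design decision, which is the “breaking down” effect described above.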