That sounds unnecessarily uncomfortable
A response to “Cram your test driven development up your ass….” by
Samuel N. Hart
I was surfing the Web this evening and stumbled upon a rather uncomfortable-sounding suggestion by Samuel Hart regarding the disposition of Test-Driven Development (TDD). Not wanting to spend my working hours squirming around in my seat, I felt that his post deserved something of a reply, in an attempt to address some of Hart’s objections to the practice.
Hart lists the following objections to the practice of TDD (summaries added by me):
- Extra, often useless, up-front development
- Much if not most development follows a path from some preconceived notion of what the customer wants, through a prototype stage towards the final goal of a piece of software that the customer can accept. The transition from stage to stage involves much refactoring and prototyping, with early code being discarded as the notions behind that code are overtaken by growing insight into the actual problem domain. Having to develop upfront tests (which will be discarded together with the tested code) at each stage is vastly inefficient.
- Development with blinders on
- A priori tests have the same effect on a developer as blinders have on a racehorse: they blind him to all outside influence and draw him towards a solution consisting entirely of his own preconceptions. The result is often an unwillingness to accept that the correct solution is a different one than was developed.
- Tests are weighted more heavily than the code
- The emphasis on composing tests before the actual code of the application gives the tests more importance than the code itself, whereas the only important outcome of the development process is the production code, not the tests.
- Coding yourself into a corner
- The emphasis on the importance of testing makes it tempting to believe the test above all else and adapt code to fit the tests rather than to accept that the test is wrong and start over.
- Narrowly applicable uses
- It is generally known that TDD is not very applicable to UIs. However, since creating tests upfront can only distract rather than help with development, TDD is really not very applicable to much at all.
- Tests solve problems that don’t exist
- Tests written a priori don’t prevent bugs being found by testers or in production, nor are they expected to. Bugs found in these stages may be recorded in tests for later regression testing, but what do the upfront tests have to do with these bugs? And if the testers find a bug that conflicts with an upfront test, which is more important: the upfront test or what the testers found?
- Scope creep
- Part of TDD’s method involves breaking up larger tasks into smaller, more easily testable ones. This generally implies insufficient design and requirements documentation up front, which invariably leads to scope creep.
- “If the same area of code needs to be re-engineered via pre-loaded test-driven development over and over again due to things like scope creep, it is much less efficient than just planning the project out before-hand and sticking to the project design documentation all along.” This is not a summary, but taken verbatim from Hart’s blog.
- Impossible to develop realistic estimates of work
- Adequate and accurate planning can only be done with design and requirements documents written upfront. This makes TDD unsuitable for anything but projects without a deadline, like OSS projects.
What I say to all that
Extra, often useless, up-front development and Inefficiency
I’ll take these together, since they seem to be more or less the same complaint. Let’s start with the complaint about testing being inefficient in combination with prototyping.
It seems to me that this complaint hinges on one’s definition of a prototype. To me, a prototype is a piece of investigative code that is designed for two purposes:
- To prove a point of some sort
- To be thrown away
That last part, I believe, is the heart of the matter. If I start development with a prototype, I wouldn’t dream of allowing a letter of that code onto the live system. In fact, one of my cardinal rules for prototyping is that it must occur in a separate location from the actual development work and within a containing unit that can be deleted as a whole. And if I start with a prototype there isn’t a living cell in my body that would consider refactoring it into a final solution – once the prototype is done and ready for integration into the actual project codebase, only the lessons learned from the prototype will survive the integration process.
The above being the case, of course, it is unsurprising that I usually create prototypes without the aid of unit testing (although there have been exceptions). But once the time comes to transfer the ideas from the prototype to the real codebase, you had better believe that tests precede the final code.
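To make that concrete, here is a minimal sketch of what “tests precede the final code” looks like in practice. The rounding function and its expected behaviour are entirely hypothetical, invented for illustration; they come neither from Hart’s post nor from any real project:

```python
# Hypothetical example: suppose the lesson learned from a prototype was
# how to round prices to the nearest five cents. Under TDD the test
# below is written first, fails (because round_to_nickel does not yet
# exist), and only then is the function written to make it pass.

def round_to_nickel(amount: float) -> float:
    """Round a price to the nearest 0.05."""
    return round(amount * 20) / 20

def test_round_to_nickel():
    # This test is the written-down postcondition of the code.
    assert round_to_nickel(1.02) == 1.00
    assert round_to_nickel(1.03) == 1.05
    assert round_to_nickel(2.00) == 2.00

test_round_to_nickel()
```

The point is the order of events, not the rounding logic: the test captures the lesson from the prototype, and the production code is then written to satisfy it.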
A more general form of the prototype complaint is that it is vastly inefficient to use tests if you must completely re-engineer your code over and over again. But let me ask you (the reader) a few questions on this point. First, how often do you completely re-engineer your codebase? And if it really happens so often that upfront testing becomes inefficient, don’t you think that might be a symptom of a deeper problem?
A mistake that Hart seems to be making here is to believe that TDD is a methodology in and of itself, whereas TDD is in fact a practice that must be embedded in a full methodology. As such, TDD does not contradict any measure of upfront design (be it just enough to get going, as in XP, or a full design, as in waterfall methodologies). In fact, it is bloody useless to employ TDD if you have absolutely no idea of where you are going; the tests you compose to guide you on your way are supposed to capture the desired postcondition of the code you are developing, after all.
Development with blinders on, Tests are weighted more heavily than the code and Coding yourself into a corner
The general tenor of all of these is that wrong tests can lead you astray, and that you won’t want to come back because you will believe your tests before you will believe anything else. These objections have in common that they are not failings of TDD.
It is very true that incorrect tests can lead you astray in your development. However, one must always remember that the tests are a means of capturing the intended postcondition of a piece of code, just like the Technical Requirements Document, the formalized postcondition and any number of other means of writing down what you think the customer wanted from you. And guess what? With all of them, if you misunderstood what the customer wanted, you’ll develop the wrong thing. TDD will not prevent this and that is not its function.
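To illustrate, with an invented requirement that appears nowhere in Hart’s post or mine: a test is just another notation for “what I think the customer asked for”, and it is exactly as fallible as the prose version.

```python
# Invented requirement: orders of 100.00 or more ship for free; smaller
# orders pay 4.95 shipping. The test captures that postcondition. If the
# requirement was misread (say the customer meant strictly *over*
# 100.00), then the test and the code are wrong together, just as a
# misread requirements document would make them wrong together.

def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 100.0 else 4.95

def test_shipping_cost():
    assert shipping_cost(100.00) == 0.0
    assert shipping_cost(99.99) == 4.95
    assert shipping_cost(250.00) == 0.0

test_shipping_cost()
```

No testing discipline can detect that the boundary condition itself was misunderstood; only the customer can.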
The other part (about not wanting to come back) is also not a failing of TDD, but rather a failing of its practitioners. This reminds me of the current political debates about Islamic fundamentalism: if you have an Islamic zealot, whom do you blame for his extremism? The Qur’an, or the zealot? There are plenty of politicians who get that one wrong, by the way.
Tests solve problems that don’t exist
I’m hard pressed to consider this a complaint so much as a question, since the answer is so self-evident. Concerning the remark that a priori tests do not address the bugs found later on and are therefore useless: it must be nice to be able to know the future. To know that, had you not had the a priori tests, you would never have run into the problems they prevented. Very strange also, because in my case the code I write without a priori tests is always the code with the highest bug density in a posteriori testing….
Concerning the latter part, if there is a conflict between the a priori tests and what the testers discover then there are two possibilities:
- The a priori tests were wrong, meaning they don’t correctly capture the desired postcondition of the code. The tests must then be fixed or discarded and the code adjusted accordingly.
- The testers misunderstood the requirements and not the developer; their test scenarios must be adjusted.
Who is the judge of these possibilities? Why, the customer of course.
Narrowly applicable uses
Ahem. Let me rephrase the complaint: since TDD doesn’t work, TDD doesn’t work. Very nice, but it does rely on “TDD being no good” being an axiom. Let’s just say I disagree and leave it at that.
Scope creep
Insufficient design upfront often does lead to scope creep, indeed. But I’m somewhat stymied by the notion that that is a failing of TDD rather than of software development in general. Nor do I see why TDD’s tendency towards smaller code blocks (one per responsibility, say, as dictated by Responsibility-Driven Design) is an indication of insufficient design. Designing involves modeling the software, but a model is always a simplification of the truth. It seems somewhat nonsensical to me to assume that any upfront design can or should fully dictate what the eventual code will look like.
Impossible to develop realistic estimates of work
This is one I just flat-out disagree with. Estimation is ruled by a number of factors, but the most important one is experience, not the presence of an upfront design. You can learn to deliver accurate estimates using TDD in just the same way as you learn to do it with upfront design: by practice.
So what do I make of it all?
All in all it seems to me that Hart is getting some things mixed up and believes that Agile development (or XP, one of the two) is exactly the same as TDD. That is distinctly not the case: the one is a practice embedded in the others. An important part of being a software engineer, I think, is knowing your way around your tools, your techniques, your practices and your methodologies. Which includes being able to tell them apart. I have an idea that Hart went off track somewhere in that area and is allowing his misunderstanding of what TDD actually is to blind him as much as he feels a wrong test will blind an aberrant developer.
So, I think I’ll leave my test-driven development exactly where it is for now.