That sounds unnecessarily uncomfortable

A response to “Cram your test driven development up your ass….” by
Samuel N. Hart

I was surfing the Web this evening and stumbled upon a rather uncomfortable-sounding suggestion by Samuel Hart regarding the disposition of Test-Driven Development (TDD). Not wanting to spend my working hours squirming in my seat, I felt that his post deserved something of a reply, in an attempt to address some of Hart’s objections to the practice.

Hart lists the following objections to the practice of TDD (summaries added by me):

Extra, often useless, up-front development
Much if not most development follows a path from some preconceived notion of what the customer wants, through a prototype stage towards the final goal of a piece of software that the customer can accept. The transition from stage to stage involves much refactoring and prototyping, with early code being discarded as the notions behind that code are overtaken by growing insight into the actual problem domain. Having to develop upfront tests (which will be discarded together with the tested code) at each stage is vastly inefficient.
Development with blinders on
A priori tests have the same effect on developers as blinders have on a racehorse: they blind the developer to all outside influences and draw him towards a solution consisting entirely of his own preconceptions. The result is often an unwillingness to accept that the correct solution is a different one than the one that was developed.
Tests are weighted more heavily than the code
The emphasis on composing tests before the actual code of the application assigns more importance to the tests than to the code itself, whereas the only important outcome of the development process is the production code, not the tests.
Coding yourself into a corner
The emphasis on the importance of testing makes it tempting to believe the test above all else and adapt code to fit the tests rather than to accept that the test is wrong and start over.
Narrowly applicable uses
It is generally known that TDD is not very applicable to UIs. However, since creating tests upfront can only distract rather than help with development, TDD is really not very applicable to much at all.
Tests solve problems that don’t exist
Tests written a priori don’t prevent bugs being found by testers or on live, nor are they expected to. Bugs found in these stages may be recorded in tests for later regression testing, but what do the upfront tests have to do with these bugs? And if the testers find a bug that conflicts with an upfront test, which is more important? The upfront test or what the testers found?
Scope creep
Part of TDD’s method involves breaking up larger tasks into smaller, more easily testable, ones. This generally implies insufficient design and requirements documentation up front, which invariably leads to scope creep. (This one is not a summary, but taken verbatim from Hart’s blog.)
Inefficiency
If the same area of code needs to be re-engineered via pre-loaded test-driven development over and over again due to things like scope creep, it is much less efficient than just planning the project out before-hand and sticking to the project design documentation all along. (This one is not a summary, but taken verbatim from Hart’s blog.)
Impossible to develop realistic estimates of work
Adequate and accurate planning can only be done with design and requirements documents written upfront. This makes TDD unsuitable to anything but projects without a deadline, like OSS projects.

What I say to all that

Extra, often useless, up-front development and Inefficiency

I’ll take these together, since they seem to be more or less the same complaint. Let’s start with the complaint about testing being inefficient in combination with prototyping.

It seems to me that this complaint hinges on one’s definition of a prototype. To me, a prototype is a piece of investigative code that is designed for two purposes:

  1. To prove a point of some sort
  2. To be thrown away

That last part, I believe, is the heart of the matter. If I start development with a prototype, I wouldn’t dream of allowing a letter of that code onto the live system. In fact, one of my cardinal rules for prototyping is that it must occur in a separate location from the actual development work and within a containing unit that can be deleted as a whole. And if I start with a prototype there isn’t a living cell in my body that would consider refactoring it into a final solution – once the prototype is done and ready for integration into the actual project codebase, only the lessons learned from the prototype will survive the integration process.

The above being the case, of course, it is unsurprising that I usually create prototypes without the aid of unit testing (although there have been exceptions). But once the time comes to transfer the ideas from the prototype to the real codebase, you had better believe that tests precede the final code.

A more general form of the prototype complaint is that it is vastly inefficient to use tests if you must completely re-engineer your code over and over again. But let me ask you (the reader) a few questions on this point. First, how often do you completely re-engineer your codebase? And if that really happens so often that testing upfront becomes inefficient, don’t you think that might be a symptom of a deeper problem?

A mistake that Hart seems to be making here is to believe that TDD is a methodology in and of itself, whereas TDD is in fact a practice that must be embedded in a full methodology. As such, TDD does not contradict any measure of upfront design (be it just enough to get going, as in XP, or a full design, as in waterfall methodologies). In fact, it is bloody useless to employ TDD if you have absolutely no idea of where you are going; the tests you compose to guide you on your way are supposed to capture the desired postcondition of the code you are developing, after all.
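To make the point concrete, here is a minimal sketch of what “capturing the desired postcondition” looks like in practice, using Python’s unittest. The `slugify` function and its behaviour are my own illustration, not anything from either article:

```python
import unittest

# Written BEFORE the implementation: these tests pin down the desired
# postcondition of the code we are about to write. They are the executable
# form of "here is where we are going".
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Only now do we write just enough code to make the tests pass.
def slugify(title):
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()
```

The tests only earn their keep because there was a destination to encode in the first place; with no idea of the desired outcome, there is nothing to write a test against.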

Development with blinders on, Tests are weighted more heavily than the code and Coding yourself into a corner

The general tenor of all of these is that wrong tests can lead you astray and that you won’t want to come back, because you will believe your tests before you will believe anything else. What these objections have in common is that they are not failings of TDD.

It is very true that incorrect tests can lead you astray in your development. However, one must always remember that the tests are a means of capturing the intended postcondition of a piece of code. The same is true of the Technical Requirements Document, the formalized postcondition, and any number of other means of writing down what you think the customer wanted from you. And guess what? With all of them, if you misunderstood what the customer wanted, you’ll develop the wrong thing. TDD will not prevent this, and that is not its function.
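A hypothetical illustration of this (the rounding requirement and function names are invented for the example): a test faithfully encodes a misunderstanding, so code and test agree with each other while both disagree with the customer.

```python
import math

# Suppose the customer wanted prices rounded half-UP (2.5 -> 3), but the
# developer assumed Python's default round(), which rounds half-to-even
# (2.5 -> 2). The a priori test encodes the MISUNDERSTANDING, so the code
# passes it -- and both are wrong.
def round_price(euros):
    return round(euros)            # satisfies the flawed test below

def test_round_price():            # captures the wrong postcondition
    assert round_price(2.5) == 2   # customer actually expects 3 here

# The remedy is not a property of TDD: correct the test against the
# customer's real requirement, then change the code to match.
def round_price_fixed(euros):
    return math.floor(euros + 0.5)   # half-up, as the customer wanted

def test_round_price_fixed():
    assert round_price_fixed(2.5) == 3

test_round_price()
test_round_price_fixed()
```

The flawed pair and the fixed pair each pass internally; only checking against the actual requirement reveals which pair is right, which is exactly why TDD cannot substitute for understanding the customer.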

The other part (about not wanting to come back) is also not a failing of TDD but rather a failing of its failing practitioners. This reminds me of the current political debates about Islamic fundamentalism: if you have an Islamic zealot, whom do you blame for his extremism? The Qur’an, or the zealot? There are plenty of politicians who get that one wrong, by the way.

Tests solve problems that don’t exist

I’m hard-pressed to consider this a complaint as much as a question, since the answer is so self-evident. Concerning the remark that a priori tests do not address the bugs found later on and are therefore useless: it must be nice to be able to know the future, and to know that you wouldn’t have run into the problems the a priori tests prevented if you hadn’t had them. Strange, too, because in my case the code I write without a priori tests is invariably the code with the highest bug density in a posteriori testing….

Concerning the latter part, if there is a conflict between the a priori tests and what the testers discover then there are two possibilities:

  • The a priori tests were wrong, meaning they don’t correctly capture the desired postcondition of the code. The tests must then be fixed or discarded and the code adjusted accordingly.
  • The testers, not the developer, misunderstood the requirements; their test scenarios must then be adjusted.

Who is the judge of these possibilities? Why, the customer of course.

Narrowly applicable uses

Ahem. Let me rephrase the complaint: since TDD doesn’t work, TDD doesn’t work. Very nice, but it does rely on “TDD being no good” being an axiom. Let’s just say I disagree and leave it at that.

Scope creep

Insufficient design upfront often does lead to scope creep, indeed. But I’m somewhat stymied by the notion that that is a failing of TDD rather than of software development in general. Nor do I see why TDD leading to smaller code blocks (one per responsibility, say, as dictated by Responsibility Driven Development) is an indication of this. Designing involves modeling of software, but a model is always a simplification of the truth. It seems somewhat nonsensical to me to assume that any upfront design can or should fully dictate what the eventual code will look like.

Impossible to develop realistic estimates of work

This is one I just flat-out disagree with. Estimation is ruled by a number of factors, but the most important one is experience, not the presence of an upfront design. You can learn to deliver accurate estimates using TDD in just the same way as you learn to do it with upfront design: by practice.

So what do I make of it all?

All in all it seems to me that Hart is getting some things mixed up and believes that Agile development (or XP, one of the two) is exactly the same as TDD. That is distinctly not the case: TDD is a practice embedded within those methodologies. An important part of being a software engineer, I think, is knowing your way around your tools, your techniques, your practices and your methodologies. Which includes being able to tell them apart. I have an idea that Hart went off track somewhere in that area and is allowing his misunderstanding of what TDD actually is to blind him as much as he feels that a wrong test will blind an aberrant developer.

So, I think I’ll leave my test-driven development exactly where it is for now.


6 thoughts on “That sounds unnecessarily uncomfortable”

  • Pingback: Not Another TDD Fight! – Jamie Dobson

  • Pingback: Le Touilleur Express » TDD : avantages et dangers

  • September 26, 2008 at 2:51 pm

    It may well have been useful to be not so black-and-white, I agree. I suppose I was thinking that the statement was so outrageous, in the mode of “the world is flat”, that I felt saying “that’s a lie” was appropriate. I see now, maybe it wasn’t. It felt like a straw man argument: “TDD makes people want to pass tests, therefore people will tweak code to pass the test instead of getting it working”. And I was trying to point out that that wasn’t an argument, because all types of people game results.

  • September 26, 2008 at 2:04 pm

    >>“and you’ll often find test-driven developers more willing to tweak their final code to match a flawed test than actually fix the test and change their initial design.”

    >That is a lie.

    It might be useful to note the difference between “that is a lie” and “that is not my experience”.

    The debate is hugely entertaining, in the same kind of way that the Wars of the Roses were. At some point, though, it might be useful to note that software development and engineering are governed by heuristics–fallible methods for solving problems, making decisions, and accomplishing tasks. Heuristics are presumed to be fallible, context-dependent, constrained by circumstance, enabled by available resources, used by someone with sufficient skill and judgment to make reasonable choices, and conducive to learning. TDD is a heuristic approach to software development: often helps; might fail.

    —Michael B.

  • September 25, 2008 at 12:34 pm

    Well,

    It’s one of them isn’t it? A storm in a teacup? Sam, the author of the other article (and I have tried to log in to tell him but his site is broken) is, I think, a bit mixed up. Here is an example: “and you’ll often find test-driven developers more willing to tweak their final code to match a flawed test than actually fix the test and change their initial design.”

    That is a lie. I have seen many people, testers, managers and developers, drop tests, ignore them, comment them out and ignore much more evidence just to push a project through a gate on a spreadsheet. This is a problem with human nature, not with test-driven developers. I know, because I use test-driven development and I have never changed code to make a flawed test work.

    I won’t come back on every point, I value my time too much. But I will say this: in regards to the religious nature of Agile and TDD, he is right (it’s a shame that he then does what all religious zealots do. That’s to say, he sets up an artificial, black-and-white argument, putting himself on one side and his, for want of a better word, enemies on the other. His comparison to Dawkins is powerful. As an evolutionary biologist Dawkins should be focussing on why we keep reinventing religion and asking what, if any, is the evolutionary payback of religion. Sam, like Dawkins, is focussing on the wrong thing. Dawkins makes my life as an atheist hard because now the whole world hates us and won’t listen. His rhetoric is too strong).

    The Americans are somewhat more zealous than we are over here in Europe. I am thinking of people like Bob Martin and Ron Jeffries. Martin’s writing I like, Jeffries I don’t. They sell TDD quite hard, and I think in a very evangelistic manner. The foot soldiers then pick this up, drive the whole world mad with their self-righteous rhetoric, and drive people like poor Sam mad. He, in an instant of rage (and good for him for having the courage to question what he sees as dogma), becomes what he beholds – uninformed and ignorant about this matter. However, if his central thesis is that TDD needs less rhetoric and fewer religious overtones, I agree (but, of course, agree with hardly anything else in his rant). I am well in with the agile lot and even I turn green when I hear bullshit spouting from uninformed religious warriors. Moral of the story: we are all engineers, let’s stick to engineering. Making stuff, breaking stuff, understanding our tools, writing rants that make people write responses and hence increase knowledge… Well done Sam, and well done Ben (who I mainly agree with).

    And, I hope that we can log into Sam’s site and tell him about this feedback and maybe let him respond.

    I have posted both sites on my site, so brace yourselves for a discussion. http://jamiedobson.co.uk/?q=node/65

    Jamie.

  • September 17, 2008 at 12:46 pm

    Again a great article, Ben. I too am a strong believer in TDD, not that I apply it to every line of code I write. Good tests give me the comfort of seeing which tests fall down when I change something. Of course I have to trust the tests, but when making a change to code without tests, what should I trust then? My own eyes? Tests make sure a developer starts thinking about what his code should actually do, and as a good side effect, he also checks whether the code is usable. To me there are a lot of reasons why you should write tests, and they have the most effect when done upfront. When done upfront, they can challenge your design without actually writing code, so you can improve the design of your code before you have even started.

    greetz Jettro

