It isn't quite TDD, but I like it

masak on 2010-06-17T13:04:32

In several tightly controlled projects over the past few years, I seem to either follow or approximate this sequence of steps:

  1. Write a test suite skeleton.
  2. Flesh it out into a test suite. (Make the tests run with a minimal implementation skeleton.)
  3. Make the tests pass by fleshing out the implementation.

I haven't seen such a way of working mentioned elsewhere, so I thought I'd make note of it here.

The idea with the first step is to separate most of the thinking from the relatively mechanical task of writing the tests. I find that if I do this, I get better test coverage, because the separation allows me to retain an eagle-eye view of the model, whereas if I were to switch back and forth between thinking about the whole and writing tests for the parts, I'd lose sight of the whole, at least to some degree. Also, having something to flesh out cancels out the impulse to cheat and skip writing tests.
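
To make that concrete, here is roughly what such a skeleton might look like for me. (The file name and the behaviours listed are invented for this post, not taken from any particular project.)

  # t/install-order.t -- skeleton stage: just prose, no test code yet
  #
  # - a project with no dependencies installs just itself
  # - dependencies are installed before the projects that need them
  # - a project needed by several others is installed only once
  # - asking for an unknown project gives an empty plan (or should it die?)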

Step two ignores the mandate of TDD to write only one failing test at a time. I still prefer to have the whole test suite done before starting the implementation, again because it gets rid of some context-switching. Usually I then treat the implementation process in much the same way as if I had written the tests on demand. It occasionally happens that a test already passes as soon as I write the minimal scaffold needed to run the tests; as I currently understand TDD, this is also "frowned upon". I leave such tests in anyway, because they're still part of the specification, and they might even catch regressions in the future.
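
Continuing the made-up example from above, fleshing out the skeleton might give something like the following, together with the minimal implementation skeleton that makes the suite runnable. All the module, class and method names here are hypothetical.

  # t/install-order.t -- fleshed out
  use v6;
  use Test;
  use Installer::Core;

  plan 3;

  my $core = Installer::Core.new(projects => {
      'app'   => <lib-a lib-b>,
      'lib-a' => <lib-b>,
      'lib-b' => (),
  });

  is ~$core.install-order('lib-b'), 'lib-b',
      'a project with no dependencies installs just itself';
  is ~$core.install-order('lib-a'), 'lib-b lib-a',
      'dependencies are installed before the projects that need them';
  is ~$core.install-order('app'), 'lib-b lib-a app',
      'a project needed by several others is installed only once';

  # lib/Installer/Core.pm6 -- just enough for the suite to run (and fail)
  class Installer::Core {
      has %.projects;
      method install-order($name) { () }   # stub; fleshed out in step three
  }

With the stub in place, all three tests run and fail, which is exactly the state that step three starts from.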

I tried this out last weekend, and it was a really nice match with the problem domain — an I/O-free core of a package installer:

  1. Write a test suite skeleton: Just a bunch of prose comments.
  2. Flesh it out into a test suite: one commit per skeleton test file.
  3. Make the tests pass: one commit per subpart.

And presto, a complete (core) implementation with great test coverage.
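
For those wondering what "I/O-free core" can mean in practice, here is a sketch in the same made-up vocabulary as above (so not the actual code from the commits): the core only decides in what order things get installed, and a thin outer layer, not shown here, does the actual downloading and building.

  # lib/Installer/Core.pm6 -- the stub from above, fleshed out
  class Installer::Core {
      has %.projects;    # project name => list of dependency names

      # dependencies first, each project at most once; note that
      # nothing in here touches the disk or the network
      method install-order($root) {
          my %seen;
          my @order;
          my &visit = -> $name {
              unless %seen{$name}++ {
                  visit($_) for (%!projects{$name} // ()).list;
                  @order.push: $name;
              }
          };
          visit($root);
          @order;
      }
  }

Because no I/O happens in the core, the tests can hand it a small fake project graph and never touch the file system, which is a big part of why the problem domain and the workflow matched so well.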

Those who follow the links to actual commits will note that mistakes are corrected during the implementation phase. That's a symptom of the halting-problem-esque nature of code in general: you don't know its true quality until you've run it in all possible ways.


Inspiring....

larard on 2010-06-17T15:44:45

I always find your stories of TDD inspiring, almost to the point that I begin to write tests ;)

But what I'm waiting for is tote! Or tote for perl 5, perhaps ;) http://use.perl.org/user/masak/journal/39639

Re:Inspiring....

masak on 2010-06-17T15:54:04

Oh, nice, someone who is actually waiting for tote. :)

Now that I know that's the case, I will re-prioritize it to arrive sooner. The short story of my discoveries since the original post is that one shouldn't over-design tote; it mostly needs to be a loop that re-runs the tests whenever something changes, and has sensible rules for what constitutes "regression" and "progress". Having it do more (for example the double loop suggested in the original post) tends to be more of an obstacle than an actual help.
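
To give an idea of just how little is needed, a loop along these lines would already be most of it. This is only a sketch, not the actual tote code; the paths, the prove invocation and the overly crude "rules" are stand-ins.

  # re-run the suite whenever something under lib/ or t/ changes;
  # polling, and looking only one directory level deep, because this
  # is a sketch rather than the real thing
  sub newest-change(*@dirs) {
      my @times = @dirs.grep(*.IO.d)
                       .map({ dir($_).grep(*.f).map(*.modified) })
                       .flat;
      @times ?? @times.max !! Instant.from-posix(0);
  }

  my $last-run = Instant.from-posix(0);
  loop {
      if newest-change('lib', 't') > $last-run {
          $last-run = now;
          my $proc = shell 'prove -e perl6 -r t/';
          say $proc.exitcode == 0
              ?? 'progress: all green'
              !! 'still failing (or a regression)';
      }
      sleep 2;
  }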

Using tote works very well with the workflow outlined in this post.

I use a "fake" variant of tote quite a lot locally. I should get my act together and publish it as a real project. Thanks for reminding me. :)

Re:Inspiring....

Aristotle on 2010-06-18T00:00:59

Count me among those who’d like to see tote exist as a project.

(Btw: elsewhere, someone I mentioned tote to suggested adding to it something like unit testing achievements, for unparalleled addictiveness.)

Unit testing achievements

masak on 2010-06-18T00:16:41

Ooh! Yes, that's certainly worth remembering. Many of these can probably be added quite easily.

(I let out an involuntary snort when reading the first one: "A suite of at least 50 tests takes less than a second to run." I wish. On my laptop, a 2.4 GHz Intel Core 2 Duo, it never takes less than 1.1 seconds to run a program consisting purely of an empty for statement doing 50 iterations. There are still improvements to be made on the speed front.)