A testing story with a happy ending

petdance on 2004-03-22T21:22:20

It all started easily enough. There was a little note sent to the Perl bug database:

    perlop man page mentions: Binary "x" is the repetition operator
    ... repeated the number of times specified by the right operand.
    It should mention what about if the right operand is negative, e.g.,
    print '-' x -80

I figured I could make a quick documentation fix, and maybe even add some automated tests to the Perl test suite.

The x operator in Perl does repetition on a scalar or list, as appropriate. For example:

    $a = "abc" x 2;     # $a = "abcabc";
    @a = ("abc") x 2;   # @a = ("abc","abc");

If the right-hand operand is 0, then you get an empty scalar or list, as appropriate. If the right-hand operand is negative, the effect is the same as if it were zero. As the bug said, the man page didn't say anything about that.
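That is, a negative count behaves exactly like a zero count (my own quick illustration, not from the man page):

    $s = 'abc' x 0;         # $s = ''
    $s = 'abc' x -5;        # $s = '' (same as x 0)
    @l = ('a', 'b') x -1;   # @l = () (same as x 0)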

I added a little sentence to the paragraph describing the operator, and then I added some tests. If it's worth documenting, it's worth testing. Documentation and tests are as much a part of the code as the code itself.

t/op/repeat.t already had a lot of tests in it, like:

    is('-' x 5, '-----',    'compile time x');
    is('-' x 1, '-',        '  x 1');
    is('-' x 0, '',         '  x 0');

So I added the obvious add-ons:

    is('-' x -1, '',        '  x -1');
    is('-' x undef, '',     '  x undef');

Then I went to add them to the list-related sections:

    @x = qw( a b c );
    is(join('', (@x) x -14), '', '(@x) x -14');

Before I sent the patch in, I ran a full make test and found that the last test didn't pass. In fact, it caused a panic in Perl, and the program died. I boiled it down to a simple one-liner:

    perl -e'@x=(1);@y=(@x)x-1'

It turned out that the negative-or-zero-operand case wasn't handling the stack correctly (in bleadperl only, fortunately). A quick patch made it all better.
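For the record, that one-liner translates into a couple of tests in the style of t/op/repeat.t (a sketch of the idea, not the actual committed tests):

    # The case that used to panic: list repetition with a negative count.
    @x = (1);
    @y = eval { (@x) x -1 };
    is( $@, '',        '(@x) x -1 does not die' );
    is( scalar(@y), 0, '(@x) x -1 yields an empty list' );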

Some morals to this story:

  1. Never underestimate the power of one little test.
  2. There is no such thing as a dumb test.
  3. Your tests can often find problems where you're not expecting them.
  4. Test that everything you say happens actually does happen.
  5. If it's worth documenting, it's worth testing.


May I quote you?

VSarkiss on 2004-03-23T02:43:25

I want to get automated tests going in our shop, and I'd like to frame your points 1 and 2, and put them on the wall.

Re:May I quote you?

petdance on 2004-03-23T02:57:45

Sure, go ahead. Point #2 is in one of my presentations at http://petdance.com/perl/. I also made a slightly different version that's less code-intensive, if that would help.

Heck, I'll come talk to your user group about testing, if you want...

Re:May I quote you?

JerseyTom on 2004-03-23T04:19:34

I can't agree more. Testing is underappreciated in coding, as well as in system administration. I've seen people do major upgrades without doing any testing afterwards. Ugh. (And if it's a good enough test to do after an upgrade, why isn't it automated and added to your Nagios configuration?)
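For instance, a post-upgrade smoke test wraps up as a Nagios plugin in a few lines of Perl (a hypothetical sketch; t/smoke.t is a made-up test file, and the exit codes follow the standard Nagios plugin convention):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # prove exits nonzero if any test fails.
    my $failed = system( 'prove', 't/smoke.t' );

    if ( $failed ) {
        print "CRITICAL - smoke tests failing\n";
        exit 2;    # Nagios CRITICAL
    }
    print "OK - smoke tests pass\n";
    exit 0;        # Nagios OK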

Oh well. I'm preaching to the converted.

a few more

goon on 2004-03-25T11:22:36


Hi Andy,

I'll add a few to the list (for what it's worth, after spending some time writing test code today), and a few questions. By the way, I'm coming from a Python testing point of view, so I'm curious how the testing approach differs for Perl.

  • Automate where possible: write automated test tools to generate test code stubs from source code, to save time and effort and concentrate on thinking about tests (see the sketch after this list).
  • Make tests pass by default: there's lots of talk about code failing by default, but the reverse is faster, since reports are cleaner and it's easier to see if the current test you're working on is OK.
  • Test by intention: comment what the test is trying to achieve. Try reading old test code after many months and understanding *why* you wrote it.
  • Ship test code: I see this with most Perl code anyway, but it's worth including.
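
To show what I mean by a stub generator, here's a rough sketch (my own toy, nothing polished): scan a module for sub declarations and print a skeleton .t file with one failing stub per sub.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = shift or die "usage: $0 Module.pm\n";
    open( my $fh, '<', $file ) or die "Can't read $file: $!\n";

    # Collect every "sub foo" declaration in the module.
    my @subs;
    while ( <$fh> ) {
        push @subs, $1 if /^\s*sub\s+(\w+)/;
    }
    close $fh;

    # Emit a skeleton test file: one deliberately-failing stub per sub.
    print "use Test::More tests => ", scalar @subs, ";\n\n";
    for my $sub ( @subs ) {
        print "ok( 0, 'TODO: test $sub' );\n";
    }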

What I would be interested to know is:

  • Do you write the tests before writing the code? When do you write them? For me it's: think about errors -> code -> auto-generate test -> write test code.
  • What's your average number of tests per module?

When to write tests? How many?

petdance on 2004-03-25T15:32:22

    Do you write the tests before writing the code? When do you write them?

Short version:

  1. Think about what the code should do.
  2. Think about the API.
  3. Write the documentation for the code, explaining what the parms do.
  4. Write the test code that uses the API.
  5. Keep doing #3 and #4 until all the cases are covered. "Oooh, I hadn't thought of the case where a length is negative." So you type in the docs "If the length passed is negative, then a warning is thrown," and then write the test that tests that. Or vice versa. (See the sketch after this list.)
  6. Write the code. You may go back to #3 and #4 as necessary because you think of more things as you write.
  7. Check it into CVS. Move on, knowing that you're covered.

Tests and documentation are as much a part of the code as the code itself.
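
To make step 5 concrete, here's a sketch (trim_to() is a hypothetical function invented for this example, not real code): the docs promise a warning on a negative length, and the test pins that promise down.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Capture warnings so the test can inspect them.
    my @warnings;
    local $SIG{__WARN__} = sub { push @warnings, $_[0] };

    trim_to( 'hello', -3 );
    is( scalar @warnings, 1, 'negative length warns exactly once' );
    like( $warnings[0], qr/negative/, 'warning mentions the negative length' );

    # Stub so the sketch runs stand-alone; writing the real thing is step 6.
    sub trim_to {
        my ( $str, $len ) = @_;
        if ( $len < 0 ) {
            warn "trim_to: negative length $len\n";
            return '';
        }
        return substr( $str, 0, $len );
    }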

    What's your average number of tests per module?

As many as it takes to cover it. As many as you need to get all the weird corner cases handled. There's no way to put a number on it.

A test coverage tool like Devel::Cover can help make sure that you've exercised all your cases.
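
For a MakeMaker-based module, the usual Devel::Cover recipe is roughly this (per its documentation; adjust for your build setup):

    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover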