Consider the following:
sub reciprocal { return 1/shift; }
Of course, you see the bug right away, but you don't throw an exception, as Perl will thoughtfully die for you (though you might want to croak() to correctly report where the error came from).
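As an aside, here is a minimal sketch of what that croak()-based variant might look like (the message is just a placeholder; the tests below stick with the original, check-free version):

use Carp 'croak';

sub reciprocal {
    my $num = shift;
    # croak() reports the error from the caller's line rather than from here
    croak 'Cannot take the reciprocal of zero' unless $num;
    return 1 / $num;
}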
So what do you test?
is reciprocal(2), .5, 'The reciprocal of 2 should be correct';
throws_ok { reciprocal(0) } qr/Illegal division by zero/,
    "division by 0 is illegal";
But what if you pass "undef" or a string? Should be the same thing. Do you test that? What if you pass no arguments or more than one? How large or small of a number can you pass in? Can I pass in overloaded objects? What about complex numbers?
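To make those questions concrete, a few of the extra tests in question might look like the sketch below (assuming Test::More and Test::Exception are loaded; whether they're worth writing is exactly the point):

# undef, a missing argument, and a non-numeric string all numify to 0,
# so each one dies with the same "Illegal division by zero" error
throws_ok { reciprocal(undef) } qr/Illegal division by zero/,
    'undef behaves like zero';
throws_ok { reciprocal('oops') } qr/Illegal division by zero/,
    'a non-numeric string behaves like zero';
throws_ok { reciprocal() } qr/Illegal division by zero/,
    'no arguments behaves like zero';

# shift only takes the first argument, so extras are silently ignored
is reciprocal(2, 3), .5, 'extra arguments are ignored';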
Obviously, different languages may call for different tests, depending on their capabilities, but a more complex function (factorial, for example) may have more stringent testing requirements.
Me? I'd probably be happy in Perl with the two tests above, but not if I were writing in C.
How do you decide when you've tested enough? It's obviously not when you've hit 100% coverage, because the first test I wrote above would hit 100% for this function. Adding in all of the other "silly" tests would likely be a waste of time, but not if you're writing software to control a nuclear reactor.
When I write a test, I tend to subconsciously run through a checklist.
What do you do to figure out a testing strategy?
"Adding in all of the other 'silly' tests would likely be a waste of time, but not if you're writing software to control a nuclear reactor."
I like Schwern's "test bugs" philosophy for any sort of maintenance testing.
For new code, I probably wouldn't even write a unit test for reciprocal() -- but I wouldn't have a function for reciprocal() except in a general-purpose math library. And in such a library, yes I would probably have more "silly" tests, but also better diagnostics built into the function.
But for specific project code, I tend to focus tests at a level about halfway between the highest-level API and unit tests. I do this because fine-grained unit tests turn into makework when you're evolving an architecture/API -- and they create too much momentum against change. If you're testing functions at about 1-2 levels above something like reciprocal(), you still get adequate coverage and your test code looks a lot like the code you plan to use in production.
I guess it comes down to a matter of abstraction-size, so I would say "test the problem you're trying to solve". If you're breaking the problem into sub-problems, test those solutions. Or: anything that took you more than a few minutes to think through.
I'm probably not doing a good job of answering "what do you test", but that's okay, my test suite gives me a pass [grin].
I'd be worried enough that someone would accidentally throw some code together that uses reciprocal as an object method (and therefore uses an int-ified, string-ified object as the divisor) that I would probably add some parameter validation, and then just test that my validation worked. I wouldn't feel the need to test the / operator.
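A sketch of the kind of validation being described (the helpers chosen here are illustrative, not from the comment):

use Scalar::Util 'looks_like_number';
use Carp 'croak';

sub reciprocal {
    my $num = shift;

    # called as $object->reciprocal, the invocant lands in $num and would
    # otherwise numify to its reference address
    croak 'reciprocal() expects a plain number'
        if !defined $num || ref $num || !looks_like_number($num);

    return 1 / $num;
}

The tests then only need to prove that the validation rejects bad input, not that / itself works.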
-- Douglas Hunter
This is one of the cases where a good strict typing system really helps (though I'd want the system to enforce all overloading of the division operator such that mathematical invariants hold).
Re:Don't forget to test ...
Ovid on 2009-04-06T13:58:08
You're just making life difficult. :)
You can simply enforce typing:
use Carp::Assert::More;

sub recip {
    my ($num) = @_;
    assert_nonzero($num);
    return 1/$num;
}
Then write one test to make sure that it works:
is( recip(2), 0.5 );
one to show that it dies correctly:
dies_ok sub{recip(0)};
and you're done; any bugs that come up later can be tested then.
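Pulling the pieces of this reply together, a complete, runnable test file might look roughly like this (the test descriptions are placeholders):

use strict;
use warnings;
use Test::More;
use Test::Exception;
use Carp::Assert::More;

sub recip {
    my ($num) = @_;
    assert_nonzero($num);    # fails loudly for 0 (and for undef, which numifies to 0)
    return 1 / $num;
}

is( recip(2), 0.5, 'recip(2) should be 0.5' );
dies_ok sub { recip(0) }, 'recip(0) should die';

done_testing();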