The Test::Most From HELL!

Ovid on 2008-06-20T12:52:24

So far Test::Most has garnered me nothing but kudos. People love the fine-grained control over their test suites and now it makes plans almost irrelevant. However, here's what it exports into your namespace:

TODO
BAIL_OUT
all
all_done
any
array
array_each
arrayelementsonly
arraylength
arraylengthonly
bag
bail_on_fail
blessed
bool
can_ok
cmp_bag
cmp_deeply
cmp_methods
cmp_ok
cmp_set
code
diag
die_on_fail
dies_ok
eq_array
eq_deeply
eq_hash
eq_or_diff
eq_or_diff_data
eq_or_diff_text
eq_set
explain
fail
hash
hash_each
hashkeys
hashkeysonly
ignore
is
is_deeply
isa
isa_ok
isnt
like
listmethods
lives_and
lives_ok
methods
noclass
num
ok
pass
plan
re
reftype
regexpmatches
regexponly
regexpref
regexprefonly
require_ok
restore_fail
scalarrefonly
scalref
set
shallow
skip
str
subbagof
subhashof
subsetof
superbagof
superhashof
supersetof
throws_ok
todo
todo_skip
unlike
use_ok
useclass
warning_is
warning_like
warnings_are
warnings_like

Today I tracked down a nasty bug where someone was importing these (but not actually using them) into a module which inherits from Class::Accessor. The import wound up inappropriately overriding the inherited &set method, and it was no end of trouble tracking it down.
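
Here's a minimal sketch of that failure mode, assuming Test::Most and Class::Accessor are installed; My::Widget and its name attribute are made up. Class::Accessor's generated accessors call $self->set(...) internally, so an imported set() in the subclass quietly wins over the inherited method:

use strict;
use warnings;
use Test::More tests => 1;

{
    package My::Widget;
    use base 'Class::Accessor';
    __PACKAGE__->mk_accessors('name');

    # Importing Test::Most here, even without calling anything from it,
    # also drops Test::Deep's set() comparator into My::Widget, where it
    # shadows the set() method inherited from Class::Accessor.
    use Test::Most;
}

my $widget = My::Widget->new;

# The generated name() accessor calls $widget->set('name', 'foo')
# internally, but set() now resolves to Test::Deep::set(), which builds
# a comparator object and stores nothing.
$widget->name('foo');

is $widget->name, 'foo', 'name should have been stored';    # fails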

I'm trying to work out how to deal with this. One way is to check whether caller->can($function) before exporting, but that doesn't work if the offending function is created after Test::Most is used. I could defer the check to a CHECK block, but I'm not sure that's the best way to manage it. There must be other strategies here, but my primary goal is to keep this simple.
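
For what it's worth, here's a rough sketch of the CHECK-block idea (a hypothetical helper, not anything Test::Most actually does): record what was exported where at import time, then after compilation walk the importing package's @ISA and warn when an export shadows an inherited method.

package Test::Most::CollisionCheck;    # hypothetical module name
use strict;
use warnings;

my @pending;    # [ importing package, [ exported names ] ]

sub remember_exports {
    my ( $package, @names ) = @_;
    push @pending, [ $package, \@names ];
}

# Runs once the whole program has finished compiling, so it sees parent
# classes and methods set up after the 'use Test::Most' line.
CHECK {
    no strict 'refs';
    for my $entry (@pending) {
        my ( $package, $names ) = @$entry;
        for my $parent ( @{"${package}::ISA"} ) {
            for my $name (@$names) {
                warn "$package: exported '$name' shadows ${parent}->can('$name')\n"
                    if $parent->can($name);
            }
        }
    }
}

1;

It still wouldn't see methods generated at run time, which is part of why this is hard to get right.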


die_on_fail(&?)

melo on 2008-06-20T14:04:14

Have you considered using

    die_on_fail {
        # your tests go here.
    }

?

Re:die_on_fail(&?)

Ovid on 2008-06-20T14:11:22

Actually, I've been thinking about putting a failure handler there. I think it's a better solution:

use Test::Most
    tests   => 3,
    on_fail => sub {
        # whatever you want
    };

That gives end users maximum flexibility.
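
As a rough illustration only (a sketch, not necessarily how Test::Most handles this internally), such a hook could be wired up by wrapping Test::Builder::ok and invoking the handler whenever an assertion fails:

use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical handler: gets the Test::Builder object and the test name.
my $on_fail = sub {
    my ( $builder, $name ) = @_;
    $builder->diag("on_fail handler called for '$name'");
};

# Wrap Test::Builder::ok so every failing assertion calls the handler.
my $original_ok = \&Test::Builder::ok;
{
    no warnings 'redefine';
    *Test::Builder::ok = sub {
        my ( $self, $test, $name ) = @_;
        my $passed = $original_ok->( $self, $test, $name );
        $on_fail->( $self, defined $name ? $name : 'unnamed test' )
            unless $passed;
        return $passed;
    };
}

ok 1, 'this one passes quietly';
ok 0, 'this one triggers the handler';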

Right now, though, I've discovered that the latest release, which incorporates Test::Deep, doesn't play well with Test::Class.

Re:die_on_fail(&?)

melo on 2008-06-20T15:32:33

But that only gives you a global handler, with little knowledge of the test that failed.

The other suggestion is more local. And even lexical.

Re:die_on_fail(&?)

Ovid on 2008-06-20T15:42:00

Actually, I'd be passing in the same information the current die handler gets, so you would have complete knowledge of the test which failed (well, as much as Test::Harness has). And it would be file scoped, not global.

Plans? We don't need no stinking plans!

djberg96 on 2008-06-20T19:47:13

> ...and now it makes plans almost irrelevant.

This is one of those bits of yak shaving that Perl programmers seem to think is necessary. It always baffled me, even when I was a full-fledged Perl guy, that I had to specify my plan up front. Just run the tests I give you!

Re:Plans? We don't need no stinking plans!

Ovid on 2008-06-20T20:28:18

Plans haven't saved me often, but when they have, they really have: particularly on those ridiculously obscure occasions when tests exit prematurely but with a 0 status code. Without a plan, there's no way to know anything is wrong.

Re:Plans? We don't need no stinking plans!

Aristotle on 2008-06-20T22:48:25

So how do you know you didn’t run fewer tests than you meant to?

Yes, OK, that was easy. Now tell me how you know you didn’t run more tests than you meant to.

Re:Plans? We don't need no stinking plans!

bart on 2008-06-21T03:36:20

Just how are you supposed to know how many tests you planned?

Anyway... how about an end marker, a "yes, I'm done" sub you call at the end of your test file? All it has to do is set a flag in the test module.

done;

If the tests are interrupted, it won't get called, and then you'll know, as the test module will check the status of that flag in its END block.

Something like that.
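
A minimal sketch of that idea (the module name is made up; Test::Most's all_done() serves much the same purpose):

package Test::EndMarker;    # hypothetical name
use strict;
use warnings;

my $done = 0;

sub import {
    my $caller = caller;
    no strict 'refs';
    # Export a done() that simply flips the flag.
    *{"${caller}::done"} = sub { $done = 1 };
}

END {
    unless ($done) {
        warn "# done() was never called; the test file probably exited early\n";
        $? = 1;    # make sure the harness sees a failure
    }
}

1;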

Re:Plans? We don't need no stinking plans!

Ovid on 2008-06-21T08:16:56

As noted, Test::Most does this.

Re:Plans? We don't need no stinking plans!

Aristotle on 2008-06-21T10:11:01

> Just how are you supposed to know how many tests you planned?

What am I supposed to say to that?

> If the tests are interrupted, it won’t get called, and then you’ll know

Right, that was the easy part I talked about.

Re:Plans? We don't need no stinking plans!

bart on 2008-06-21T12:38:05

> What am I supposed to say to that?

What I was aiming at was that with the old school test modules, you have to state how many tests you plan on executing. But how can you know? Counting them manually? Running the tests, toping they all get executed, and check how many were done in the error message?

That is not ideal. I'd even prefer a dry run to this: using ok/nok just to count the tests without actually running them, and then maybe doing the real tests in a live run immediately afterwards.

Re:Plans? We don't need no stinking plans!

chromatic on 2008-06-23T06:22:21

> But how can you know?

How much confidence do you have in a test if you don't know exactly what you expect it to do?

Re:Plans? We don't need no stinking plans!

Ovid on 2008-06-23T07:34:02

When you read this, the number one thing to remember is to take it with a grain of salt! Whether or not you choose to adopt a plan depends on whether or not you feel it brings enough added value to offset your dislike of it. If it doesn't, don't use a plan. I'm not dogmatic :)

> What I was aiming at was that with the old school test modules, you have to state how many tests you plan on executing. But how can you know? Counting them manually? Running the tests, toping they all get executed, and check how many were done in the error message?

I've been trying to understand what you meant by this because I think that there's a miscommunication here. I could be wrong.

If I'm presented with a 300 line test program, you're absolutely right that I can't just know that it's supposed to run 113 tests. I have to trust that the module author did the right thing.

That being said, if I am the module author, then there's no real problem. I don't sit down and write a 300 line test program. I write the tests incrementally and every step of the way I know what's going on. So here are two ways that plans will save you in this scenario:

use Test::More tests => 3;

ok this_function();
ok that_function();
ok the_other_function();   # calls exit(0) internally

Because the exit() gets called before the ok(), Test::More realizes we've not run enough tests. I've had this happen before where I'm using some module from the CPAN which thinks calling exit is a good idea. Don't believe me? Try running this on your installed modules sometime:

ack '^\s*exit\b' lib/perl5/

Admittedly, many of those are false positives (documentation), but many of them are not.

Another way a plan can save you:

foreach my $attr ( $inspect->attributes($foo) ) {
    can_ok $foo, $attr;
    lives_ok { $foo->$attr } "... and we should be able to call '$attr'";
}

Now what happens if someone adds an attribute and forgets to update the plan? Some people say "yeah, just assert the number of attributes first". There's nothing wrong with that approach, but it means I always have to remember to do it whenever I have a potentially varying number of tests. Or I can always assert a plan and never have to worry about it.
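
For comparison, the "assert the count first" variant looks something like this ($inspect and $foo are the same hypothetical objects as above, and the count of three is made up):

my @attributes = $inspect->attributes($foo);
is scalar @attributes, 3, 'we should have exactly 3 attributes';

foreach my $attr (@attributes) {
    can_ok $foo, $attr;
    lives_ok { $foo->$attr } "... and we should be able to call '$attr'";
}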

Re:Plans? We don't need no stinking plans!

bart on 2008-06-23T19:00:40

> Running the tests, toping they all get executed, and check how many were done in the error message?

> I've been trying to understand what you meant by this because I think that there's a miscommunication here. I could be wrong.

There was a typo that I noticed after I pressed "submit". What I wanted to say was this:

> Running the tests, hoping they all get executed, and check how many were done in the error message?

The error message is, of course, Test::* complaining "It looks like you planned to run 20 tests, but you actually ran 23", so you can fix the plan number to 23.

> Or I can always assert a plan and never have to worry about it.

My problem is that when I'm writing tests for a module, I may be adding a test every few minutes, and every time I have to update the plan. And that is an annoying chore. I constantly have to worry about it.

Re:Plans? We don't need no stinking plans!

Aristotle on 2008-06-23T21:45:07

You can get around that too.

use Test::More;
plan tests => my $num_tests;

BEGIN { $num_tests+=2 }
is( 1, 1, 'identity' );
isnt( 1, 2, 'identity' );

BEGIN { $num_tests++ }
is( 1 + 1, 2, 'addition' );

# etc

Re:Plans? We don't need no stinking plans!

bart on 2008-06-24T03:03:56

That's clever! And it is possible because plan is a runtime function call, instead of the usual parameter to the use statement.

I must say, plan is well hidden in the docs. It doesn't even get its own entry in the module's list of functions.

Test::Deep's "kinda" bugs with default exports

kappa on 2008-06-21T21:51:33

See Test::Deep's queue on rt.cpan.org. The incompatible isa() and blessed() functions lead to bugs that are very hard to catch.
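
A small illustration of the kind of bug this causes, assuming Test::Deep's default exports are in effect:

use strict;
use warnings;
use Test::Deep;    # exports its own blessed() and isa() by default

my $not_an_object = {};    # a plain, unblessed hash reference

# Scalar::Util::blessed() would return undef here, but Test::Deep's
# blessed() builds a comparator object instead, which is always true,
# so this guard quietly does the wrong thing:
print "looks blessed\n" if blessed($not_an_object);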

AUTOLOAD

AndyArmstrong on 2008-06-22T16:06:08

Instead of exporting all those subs, you could export an AUTOLOAD into main and have it import them on demand ;)