As I mentioned before, I'm maintaining some programs that generate test scripts to check the output of a dynamic website (comparing a baseline implementation to a prototype). Thanks to TAP (the Test::More / Test::Harness protocol), I get a very nice summary when things succeed, and highlights of the failures as they occur.
However, because these tests are auto-generated, knowing that I failed test #510 is meaningless. The test that was 510 of 789 yesterday may be test 432 of 608 today. Sometimes the received/expected output is enough to figure out what's wrong (as in "November 12, 2004" ne "November 30, 2004", a common false negative).
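One way to soften the shifting-number problem is to give every generated test a description, since Test::More's comparison functions all take a name as their final argument; a failure then reads "not ok N - name", which survives renumbering between runs. A minimal sketch (the page path and field names here are made up for illustration, not from my actual suite):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical baseline/prototype results; in the real suite these
# would come from fetching the same page from both implementations.
my %baseline  = ( 'archive date' => 'November 30, 2004' );
my %prototype = ( 'archive date' => 'November 30, 2004' );

# Naming each test means a failure identifies itself by description,
# not by a test number that may change on the next run.
for my $field ( sort keys %baseline ) {
    is( $prototype{$field}, $baseline{$field},
        "$field (page: /archive/2004/11)" );
}
is( scalar keys %prototype, scalar keys %baseline, 'same number of fields' );
```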
When I need to focus on a single test script with failing tests, I use this hack:
$ perl my_test_suite/000.t 2>&1 | vim -

Works like a charm. Looking at the diagnostics of the surrounding tests (both passing and failing), I can zoom in on the failing tests and isolate the cause. From there, I can switch back to make test to prove I didn't break anything else, and find the next bug to fix.
I had second thoughts when I read petdance's journal, and now that I see the name being used like this, even more...
Man... I usually fly with TAP
Tappity tap tap tap
ziggy on 2004-11-30T16:49:52
To me, TAP means something completely different. ;-) How much more black could this test protocol be? None. None more black. It's like a testing mirror.

Re:Tappity tap tap tap
cog on 2004-11-30T17:01:54
Yeah, but... how would you like flying in an airplane from a company called "Perl"?
I suspect the "which test was 527 this run?" problem could be helped if the protocol included a mechanism for comments on the ok/nok lines that let you self-identify each test. The code could skip the comment when the test passed, to reduce noise.
nok 572 # file: t/t25.t; section: filetest; test file: muchwhitespace.tst; sub test: 3 blank, 2 tab, CR
Perhaps having start and end comments that the test analysis routines collected and discarded would let all of the nested-testing issues be handled. So the "file", "section", and "test file" fields in the above comment could be issued as start and end comments, and the ones currently in scope would be displayed when a NOK test failure came along.
These could also be used by the test-running routines to select specific tests to run, so you could take the nok output, cut and paste it into a command line, and rerun the test process, going directly to the specific test that failed but providing extra detail.
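The scoped start/end comments sketched above aren't part of TAP as it stands, but you can approximate the idea today with Test::More's diag(), which emits "#"-prefixed comment lines that the harness passes through (to STDERR, so they're visible even under a summarizing harness). A hedged sketch, reusing the field names from the hypothetical nok line above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;

# Announce the current "scope" as comments before a group of tests;
# a failure that follows can be matched back to these lines.
# (The section and test-file names are illustrative, not a real suite.)
diag("section: filetest");
diag("test file: muchwhitespace.tst");

my $got      = "3 blank, 2 tab, CR";
my $expected = "3 blank, 2 tab, CR";
is( $got, $expected, 'sub test: whitespace normalization' );
```

This only gets you the display half of the proposal; selecting and rerunning an individual test from its scope comments would still need support in the test-running routines.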
Re:TAP protocol definition?
jdavidb on 2004-11-30T17:54:46
If the protocol is not documented by anything other than implementation, I have a hunch it soon will be. Michael Schwern did a lot recently to enhance the protocol and the related modules, including adding the comment feature you mention. Several others have produced tutorials on testing in Perl, including Andy Lester, who is now driving the Phalanx project to increase test coverage and quality for 100 key Perl modules. Since Andy drove the process to pick the TAP name for the protocol, I suspect he won't rest until the protocol is adequately documented, if it isn't already. (Schwern might have taken care of that.)