Ran Devel::Cover over our code last night. Took about 70 minutes. Not too shabby, considering a normal test run (i.e., without Devel::Cover's performance hit) used to take 80 minutes. Was also happy to see that our coverage was at 90.1%. I would be happier if I trusted that number. Here's the summary from ./Build testcover:
    Files=51, Tests=11221, 4093 wallclock secs ( 2.20 usr 0.46 sys + 3686.52 cusr 64.89 csys = 3754.07 CPU)
    Result: FAIL
    Failed 2/51 test programs. 0/11221 subtests failed.
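(For anyone following along at home: testcover isn't anything exotic here, it's the stock Module::Build action. A minimal Build.PL that gives you it looks roughly like the sketch below; the module name is a placeholder, not our real distribution.)

    # Build.PL -- minimal Module::Build setup; the module name is a placeholder.
    use strict;
    use warnings;
    use Module::Build;

    Module::Build->new(
        module_name => 'Our::App',
        license     => 'perl',
    )->create_build_script;

    # Then, from the shell:
    #   perl Build.PL
    #   ./Build testcover      # run the whole suite under Devel::Cover
    #   cover -report html     # turn the resulting cover_db into a report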
What? Two programs failed but no tests did? I copied all of the results from the buffer into an editor to clean them up. It seems that running under Devel::Cover causes a lot of strange warnings to show up; editing them out so I could see what was left revealed nothing. I can do a binary search through the test runs to see what's going on, but given that a test-suite run takes over an hour, that seems painful. I'm guessing that some test programs exited prematurely, but I can't tell. I'm also going to run this with Test::Harness 2.64 to find out if this is a regression.
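If I do end up bisecting, the plan is roughly the throwaway sketch below. It assumes the tests live under t/, that prove's exit status is enough to spot a program blowing up, and that I'm chasing one suspect at a time:

    #!/usr/bin/perl
    # bisect-cover.pl -- crude bisection: run half of the test programs under
    # Devel::Cover, keep whichever half reproduces the failure, repeat until
    # one suspect remains. Throwaway sketch; it ignores the cover_db it keeps
    # appending to, since this is about exit status, not coverage numbers.
    use strict;
    use warnings;

    # Load Devel::Cover into every test the harness spawns.
    $ENV{HARNESS_PERL_SWITCHES} = '-MDevel::Cover';

    my @tests = sort glob 't/*.t';

    while ( @tests > 1 ) {
        my $mid  = int( @tests / 2 );
        my @half = @tests[ 0 .. $mid - 1 ];

        # prove exits non-zero if any program in this half fails or dies early
        my $failed = system( 'prove', @half ) != 0;

        @tests = $failed ? @half : @tests[ $mid .. $#tests ];
    }

    print "suspect: $tests[0]\n" if @tests;

Each pass only runs half of what the previous one did, so it's not as bad as it sounds, but it's still time I'd rather not spend.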
What's worse, here's a normal test run:
    All tests successful.
    Files=54, Tests=12274, 893 wallclock secs ( 2.11 usr 0.36 sys + 682.85 cusr 32.02 csys = 717.34 CPU)
    Result: PASS
Hey, did the normal run really have an extra 1000 tests, and three extra test programs, compared to the coverage run? My spidey sense is tingling.
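One way to get an independent number to check both runs against is to add up the plans the test programs themselves declare. A quick sketch, assuming every program uses a numeric Test::More-style plan (it just complains about any file where it can't find one):

    #!/usr/bin/perl
    # sum-plans.pl -- add up the "tests => N" plans declared in each test
    # program, to compare against the Tests= numbers the harness reports.
    use strict;
    use warnings;

    my $total = 0;
    for my $file ( sort glob 't/*.t' ) {
        open my $fh, '<', $file or die "Can't read $file: $!";
        my $plan;
        while (<$fh>) {
            if (/\btests\s*=>\s*(\d+)/) {   # "plan tests => 42" or
                $plan = $1;                 # "use Test::More tests => 42"
                last;
            }
        }
        close $fh;
        if ( defined $plan ) { $total += $plan }
        else                 { warn "no numeric plan found in $file\n" }
    }
    print "declared tests: $total\n";

If that total lines up with the normal run's 12274 but not the coverage run's 11221, then some programs really are bailing out early under Devel::Cover (or, given 54 files versus 51, never running at all).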