In my CPAN Testing talk, I mentioned some thoughts I had for improving CPAN testing. The first two are already part of the test reports, but the third requires a biggish change.
[More METADATA]
When reports are submitted, aside from the status, distribution and platform for the test, the Perl and operating system versions are also included. Having submitted many Win32 reports over the last few months, I've noticed it isn't obvious from a report that I'm testing on Windows 2000 Professional with ActivePerl 5.6.1. Does this mean the same test on Windows 98 with ActivePerl 5.8.2 will have the same result? In most instances the answer is probably yes, but there are a number of key differences between the two OSs and Perls. The same is true of other OSs too. Aside from just checking the platform tests, it would be nice firstly to see the OS and Perl version appear more prominently on the distribution test pages (html/yaml) and in the database, and secondly for CPANPLUS to be a bit more thorough when verifying whether a distribution has been tested on the current setup (it currently just checks the platform and the number of FAILs and UNKNOWNs).
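For what it's worth, the extra metadata is easy enough to gather on the tester's side. Here's a minimal sketch using the standard Config module; the field names are just illustrative, not the actual report format:

    use strict;
    use warnings;
    use Config;

    # Gather the setup details a fuller report could carry up front.
    my %report_meta = (
        perl_version => $Config{version},    # e.g. "5.6.1"
        osname       => $Config{osname},     # e.g. "MSWin32"
        osvers       => $Config{osvers},     # e.g. "5.0" for Windows 2000
        archname     => $Config{archname},   # e.g. "MSWin32-x86-multi-thread"
    );

    printf "%-12s: %s\n", $_, $report_meta{$_} for sort keys %report_meta;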
[WITH WARNINGS]
One thing I have been doing manually for quite some time is submitting reports to the testers list and to authors when distributions PASS but produce several warnings. So far I haven't received any negative feedback for doing this, but I have had a couple of emails thanking me for highlighting potential problems. I've always tried to fix warnings in my code (I was quite a dab hand with lint in my C programming days), and would like others to let me know of warnings should they arise when testing my code. However, to do this with CPAN Testing would require quite a bit of change. I had a brief look at how it all works in CPANPLUS and it's quite complex. I haven't got it working yet, so I still do it by hand, but I hope to have something figured out eventually. Unless of course the CPANPLUS team think it's a good idea and implement it themselves.
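To give an idea of what I do by hand, here's a rough stand-alone sketch; it is not how CPANPLUS runs tests, just an illustration that captures the output of "make test" and keeps anything that looks like a Perl warning:

    use strict;
    use warnings;

    # Capture everything the test run prints (stdout and stderr merged).
    my @output = qx{make test 2>&1};    # "nmake test" on many Win32 setups

    # Keep lines matching Perl's usual warning format: "... at FILE line N."
    my @warnings = grep { /\bat \S+ line \d+\.?\s*$/ } @output;

    if (@warnings) {
        print "Tests passed but produced warnings:\n";
        print "  $_" for @warnings;
    }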
Anyone with thoughts about this? Are they good ideas, worth doing, worth prompting Leon, Autrijus and the CPANPLUS team to take a look?
Re:Metadata
barbie on 2004-01-19T15:36:38
Will try to look into it further when I have more time (and submit the odd patch or two to help out). The testers database side of things is the relatively easy bit. It's the CPANPLUS bit that got me bogged down in trying to figure out where everything gets called and parsed.
Re:Warnings
barbie on 2004-01-20T10:44:42
Some authors are aware of potential warnings and print messages explaining why they may occur. If that's documented, then at least when a Warning Report is generated the potential user can make a judgement as to whether the warnings are relevant. However, I would think any user who is considering a module for production code may be concerned if warnings exist. In most cases warnings are due to (slightly) broken tests, but there are some that highlight problems on specific platforms (typically Windows ;)). When you say an author "*wants* to spew warnings", I assume you mean the additional test messages, which many modules print, as opposed to the kind such as:
Scalar value @arr[4] better written as $arr[4] at t\06_********.t line 21.
Scalar value @arr[3] better written as $arr[3] at t\06_********.t line 22.
Name "main::err" used only once: possible typo at t\06_********.t line 20.
Use of uninitialized value in numeric eq (==) at C:\.cpanplus\5.6.1\build\**********\blib\lib/****/*******.pm line 256.

Extra test messages, such as listing available DBD drivers or current test settings, aren't picked up as warnings. Having a Warning Report might also prompt authors to remedy the situation.
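To illustrate the distinction, a pattern that only matches Perl's own warning format ("... at FILE line N.") leaves the extra chatter alone. The lines below are made up for the example; this is a sketch of the idea, not the actual report mechanism:

    use strict;
    use warnings;

    my @lines = (
        'Scalar value @arr[4] better written as $arr[4] at t/06_example.t line 21.',
        'Name "main::err" used only once: possible typo at t/06_example.t line 20.',
        'Available DBD drivers: CSV, SQLite',    # informational, not a warning
    );

    for my $line (@lines) {
        # Only real Perl warnings carry the trailing "at FILE line N." marker.
        my $kind = $line =~ /\bat \S+ line \d+\.$/ ? 'WARNING' : 'info';
        printf "%-7s %s\n", $kind, $line;
    }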
If a distribution keeps generating warnings, I would like to think that an author could approach a tester, before uploading to CPAN, to test whether their distribution still has warnings. I've done this on several occasions already for authors who don't have a Win32 box to test on.
I couldn't agree more with your comments. I've had a few test failures recently, but alas the failure reports don't tell me enough to have any idea why the module failed on the tester's system when it passes all tests on mine!
After some digging I think I've figured out my most recent failure; the test reports only told me it passed on 5.6.x systems and failed on 5.8.x systems. It turned out to be an artifact of the change in hash ordering between 5.6.x and 5.8.x: the bug itself has nothing to do with hash order, but the problem is masked in the test suite on 5.6 because of it...
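For anyone curious, the general class of problem looks roughly like the sketch below (my own minimal reconstruction, not the actual test suite): a test whose outcome quietly depends on the order in which keys() returns hash keys, which differs between 5.6.x and 5.8.x.

    use strict;
    use warnings;
    use Test::More tests => 1;

    my %config = ( alpha => 1, beta => 2, gamma => 3 );

    # Fragile: relies on the incidental order in which keys() returns them,
    # which can differ between perl versions.
    # is( join(',', keys %config), 'alpha,beta,gamma', 'keys in expected order' );

    # Robust: impose an explicit order before comparing.
    is( join(',', sort keys %config), 'alpha,beta,gamma',
        'keys compared in a defined order' );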
Without wishing to sound ungrateful, I do thank everyone who sends in test reports; they are a lot better than nothing, and do help to some extent.