The growing problem of core modules with non-trivial FAILs

Alias on 2007-12-11T03:09:47

For some reason I don't quite understand yet, I've noticed an increasing number of core, dual-life, or massively-depended-on modules move from having few or no CPAN Testers FAIL results to having substantial numbers.

The typical rate seems to be about 10% failures, and the list of modules is quite scary.

Scalar::Util, File::Temp, File::Spec, Exporter, base, Test::More, Clone, version, Path::Class, YAML and more...

I know in at least a few of these cases they were largely FAIL-free until recently.

This problem would seem to be compounded by the situation most of the authors are in: they are extremely busy, or have maintained these modules for a long time, and don't necessarily have time to push them to 100% PASS any more.

Because of recursion, failures in these modules have a HUGE impact on the userbase.

I have a small thread of work, for which I've been trying to find time-slices, aimed at measuring this more accurately: generating weightings for modules based on dependencies and such.

Hopefully we can then apply these weights to things like CPAN Testers results to find the "worst" bugs and modules from the perspective of the entire CPAN.
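As a rough illustration of the kind of weighting this could produce, here is a minimal sketch. The dependency data and the shape of the calculation are made up for the example (real CPAN metadata would come from the indexes), but the idea is the one above: weight each module by its count of transitive dependents, so a FAIL in a high-weight module is visibly worse for the whole CPAN.

```perl
use strict;
use warnings;

# Toy dependency map: distribution => modules it depends on.
# Illustrative data only, not real CPAN metadata.
my %depends_on = (
    'Path::Class'  => [ 'File::Spec', 'Scalar::Util' ],
    'Test::More'   => [ 'Exporter' ],
    'File::Spec'   => [],
    'Scalar::Util' => [],
    'Exporter'     => [],
);

# Invert the map: for each module, collect every distribution that
# depends on it, directly or through intermediate dependencies.
my %dependents;
for my $dist ( keys %depends_on ) {
    my @queue = @{ $depends_on{$dist} };
    my %seen;
    while ( defined( my $dep = shift @queue ) ) {
        next if $seen{$dep}++;
        $dependents{$dep}{$dist} = 1;
        push @queue, @{ $depends_on{$dep} || [] };
    }
}

# A module's weight is its transitive dependent count: a FAIL in a
# high-weight module hurts that many downstream distributions.
my %weight = map { $_ => scalar keys %{ $dependents{$_} || {} } }
             keys %depends_on;

for my $module ( sort { $weight{$b} <=> $weight{$a} || $a cmp $b } keys %weight ) {
    printf "%-14s weight %d\n", $module, $weight{$module};
}
```

Fed with real dependency data, the same weights could then be multiplied against CPAN Testers FAIL counts to rank the "worst" bugs CPAN-wide.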

At the same time, the next phase of my own module maintenance is to ignore RT for a while and focus on CPAN Testers, to get everything up to 100% PASS.

So while there may still be bugs in the code, at least all the bugs that are tested for will be confirmed fixed across the board.


Exporter?

ferreira on 2007-12-11T03:22:35

When I look at

http://cpantesters.perl.org/show/Exporter.html

I see FAILs that were corrected in 5.61, and NAs because the current Exporter code does not support pre-5.6 perls out of the box.

Re:Exporter?

Alias on 2007-12-11T04:08:13

Just noticed that as well, and yet the CPAN dependencies site says there are FAILs...

Buggy testing environments and 5.005

autarch on 2007-12-11T05:25:08

Those two things have been the cause for a bunch of my modules recently. I don't bother trying to make anything work with 5.005 any more, so that's one problem. I've also gotten a rash of weird failures from testers where the failure was clearly a problem with their build environment. It'd be cool if testers could delete failure reports somehow.

Re:Buggy testing environments and 5.005

Alias on 2007-12-11T06:18:46

If your module was tested on 5.005, it's only because your Makefile.PL said (by omission) that it was compatible with 5.005.

If your module DOESN'T support 5.005, then you should be reporting that.

Then CPAN Testers won't test the module, and you won't accumulate FAIL reports.
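For illustration, a minimal Makefile.PL along these lines declares the minimum perl right at the top, so old perls die during perl Makefile.PL and the result is reported as NA rather than FAIL (the module name and file path here are placeholders):

```perl
# Makefile.PL -- minimal sketch; Your::Module and its path are placeholders.
use 5.006;    # dies immediately on pre-5.6 perls, giving NA instead of FAIL
use strict;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'Your::Module',
    VERSION_FROM => 'lib/Your/Module.pm',
);
```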

Re:Buggy testing environments and 5.005

bart on 2007-12-11T11:07:48

A little bit of handholding would be much appreciated...

Can you tell exactly what line we would need to add in Makefile.PL to achieve this?

Re:Buggy testing environments and 5.005

ferreira on 2007-12-11T11:57:15

Can you tell exactly what line we would need to add in Makefile.PL to achieve this?

Just include use 5.006; and the toolchain will understand the distribution is not for pre-5.6 perls. Someone has said that this use was a 5.6 thing, but I've seen it work OK with 5.5. I don't know if even older perls would understand it as well, or if they would need a more baroque thing like:

BEGIN { require 5.006; }

Re:Buggy testing environments and 5.005

mw487 on 2007-12-11T14:32:24

If a dependency of a module is declared, within the dependency, to be incompatible with the perl version installing the module, does the installer get a warning during the typical "perl Makefile.PL; make; make test; make install"?

Re:Buggy testing environments and 5.005

ferreira on 2007-12-11T15:35:29

As far as I know, the installer will try to proceed without a successful installation of the dependency, hoping for the best. If the best doesn't happen, the failed test run will be reported as UNKNOWN in that case, so it correctly won't be added to the list of FAILs.

Re:Buggy testing environments and 5.005

srezic on 2007-12-11T19:28:19

Actually, I also thought that "use 5.xxx" was a new thing, but it works at least with 5.004, maybe also with older perl versions. And there are surely no testers around using anything older than 5.005.

Re:Buggy testing environments and 5.005

bart on 2007-12-27T12:52:07

I could find it in the perldelta for 5.004, but it appears to be supported in the only pre-5.004 perl on CPAN, 5.003_07, too.

Re:Buggy testing environments and 5.005

bart on 2007-12-27T12:57:33

Thanks, a Google search reveals that use VERSION; is indeed quite common in Makefile.PL files on CPAN.

I wasn't sure it was the right way to do it.

Not so bad

srezic on 2007-12-11T19:52:04

I think you should take a closer look at the reasons for the fails, e.g. by looking at the CPAN Testers Matrix to find patterns in the failures.

For the mentioned distributions it looks like:

  • Scalar-List-Utils: FAILs only with devel perl
  • YAML: mostly only devel and old perls have FAILs
  • File-Spec, Exporter, base: looks OK
  • Test-Simple: granted, there are some unexpected red spots
  • File-Temp: 0.18 completely OK, new problems with 0.19

I also don't think you should aim for 100% PASS. There are always problems you cannot control, like bugs in the toolchain modules causing FAILs in the tested module (I've seen such cases with some ExtUtils::MakeMaker, Module::Build and Test::Harness versions), problems with some perl versions (this was often the case with the development perl 5.9.x), bad tester setups...

It would be nice to have a way to retract invalid test reports, but in the current testers infrastructure it's not possible.

Re:Not so bad

Alias on 2007-12-11T21:47:17

I'm not sure I like the idea of retracting reports, because who is to say what is invalid?

I know some situations where authors have said reports are invalid for things like not working with Perl 5.005...

Re:Not so bad

srezic on 2007-12-11T22:18:02

If you look at the Tk-804.027 reports, then you see a lot of FAIL reports which are sort-of invalid: testers who don't have a running X server, hence almost all tests fail. Sure, the test suite could check first if there's a running X server (in fact, this is done for Tk-804.028-tobe). Well, now it's more work for me to find the legitimate reports.
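A guard along these lines would turn those reports into SKIPs rather than FAILs. This is a hypothetical sketch using only core Test::More; the actual check in Tk-804.028 may well differ:

```perl
# t/00-display.t -- hypothetical guard: skip the whole GUI test suite
# when no X display is available, instead of letting every test FAIL.
use strict;
use warnings;
use Test::More;

plan skip_all => 'No X server available (DISPLAY is not set)'
    unless $ENV{DISPLAY};

# A real suite would also verify it can actually connect, e.g. by
# creating a Tk MainWindow inside an eval and skipping on failure,
# since DISPLAY can be set but point at a dead server.
plan tests => 1;
ok( 1, 'X display appears to be available; real Tk tests would follow' );
```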

Maybe not retracting reports, but have some means of commenting them would be enough?

I have to disagree

speters on 2007-12-13T02:49:33

Over the past year, I've tested hundreds of modules with bleadperl, and I have to say that quality has actually improved dramatically over that time. I agree that I had dozens of modules failing when I started. But opening bug reports, having Andreas find the root-cause change when a failure was new, and having developers who care about their modules really made a huge difference in improving overall module quality. Are there still modules that fail? Yes. There always will be, but I can say that I don't think it's quite as much of a problem as it was.