Lately, I've been finding David Cantrell's brilliant CPAN Dependencies website easier and more useful than the CPAN Testers website itself.
The CPAN Dependencies summaries are just so darned concise, while the CPAN Testers pages for an author are much harder to read (and the dozen or so authors with dozens of modules tend to see these sites become unworkable far earlier than everyone else).
In particular, the CPAN Dependencies summaries make it far easier to zero in on the most troublesome modules at a glance.
What I'd REALLY love to see on the CPAN Dependencies site would be some sort of "virtual distribution" for each author, showing ALL of that author's modules (INCLUDING every distribution in the index, but EXCLUDING distributions that have been deprecated and dropped from the index, even though CPAN Testers results for them remain recorded).
That way I could just look at my virtual distro (let's notionally call it Author::ADAMK) and see not only the testing results for all my distributions, but also all the results for all the modules by other authors that I depend on.
So, on to the improvements in this release automation cycle.
I've removed the Module::Install Build.PL compatibility pass-through script. With the advent of configure_depends:, I see Module::Install eventually becoming Makefile.PL-only, and I don't plan to keep the very buggy Module::Build backend.
I've also, to my regret, given up my fight to use multiple POD-checking and similar testing modules in one big 99_author.t test script.
Test::Pod had the short-sightedness to try and declare its own plan automatically, even if a plan already existed, with no option to avoid it. And so the other modules just cargo-culted from that (although RJBS was nice enough to add a workaround to Test::MinimumVersion for me).
So, unfortunately, now I have to have TWO test scripts (I'm doing Test::Pod and Test::MinimumVersion on all my modules when AUTOMATED_TESTING is enabled).
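Roughly speaking, the pair now looks like this (a sketch rather than my exact scripts; the AUTOMATED_TESTING guard and the 5.005 target are just illustrative):

# 98_pod.t
use strict;
use Test::More;
plan skip_all => 'Author tests, run only under AUTOMATED_TESTING'
    unless $ENV{AUTOMATED_TESTING};
eval "use Test::Pod";
plan skip_all => 'Test::Pod required for testing POD' if $@;
all_pod_files_ok();

# 99_pmv.t
use strict;
use Test::More;
plan skip_all => 'Author tests, run only under AUTOMATED_TESTING'
    unless $ENV{AUTOMATED_TESTING};
eval "use Test::MinimumVersion";
plan skip_all => 'Test::MinimumVersion required' if $@;
all_minimum_version_ok('5.005');   # illustrative target version only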
Having separate 98_pod.t and 99_pmv.t tests also has the benefit of fixing all the FAIL results I was getting. These didn't impact real end users (since they only FAILed on CPAN Testers) but they were adversely impacting the APPEARANCE of each module's success rate, particularly on the CPAN Dependencies site.
This also probably means I need to do a ton (maybe three dozen) of incremental CPAN releases to flush out all the broken ones.
So maybe it's time to re-investigate finishing my "automated module incrementing doohicky" module again (using a combination of PPI and Module::Inspector).
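The core of the doohicky would presumably be something along these lines (a purely illustrative PPI sketch, not working code; bump_version is a hypothetical helper, and it assumes $VERSION is assigned a simple quoted literal):

use strict;
use warnings;
use PPI;

# Illustrative only: rewrite the quoted $VERSION string in one .pm file
sub bump_version {
    my ( $file, $new_version ) = @_;
    my $doc = PPI::Document->new( $file )
        or die "Failed to parse $file";

    # Find quoted literals inside statements that assign to $VERSION
    my $quotes = $doc->find( sub {
        my ( undef, $elem ) = @_;
        return 0 unless $elem->isa('PPI::Token::Quote');
        my $statement = $elem->statement or return 0;
        return $statement->content =~ /\$VERSION\s*=/ ? 1 : 0;
    } ) or return;

    $_->set_content( "'$new_version'" ) for @$quotes;
    $doc->save( $file );
}

bump_version( 'lib/Foo/Bar.pm', '0.02' );   # hypothetical module and version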
What is this short-sightedness of Test::Pod to try and declare its own plan automatically of which you speak? Maybe I'm missing some cool functionality by doing it the following way, but it seems to work well enough for me.
use strict;
use warnings;
use Test::More;

if ( !$ENV{PERL_AUTHOR_TESTING} ) {
    plan skip_all => 'PERL_AUTHOR_TESTING environment variable not set (or zero)';
    exit;
}

# Each POD-testing module contributes a fixed count of 4 tests if it loads
eval qq{use Test::Pod};
my $has_test_pod = $@ ? 0 : 4;
eval qq{use Test::Pod::Coverage};
my $has_test_pod_coverage = $@ ? 0 : 4;

my $pod_tests = $has_test_pod + $has_test_pod_coverage;
if ( $pod_tests > 0 ) {
    plan tests => $pod_tests;
}
else {
    plan skip_all => 'POD testing modules not installed';
}
Or are you thinking of Test::Kwalitee? That sets up its own plan, and as such I think it forces you to give it its own test file.
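(The standalone file in question is typically just the usual boilerplate, something like this, from memory, so treat it as approximate:)

use Test::More;
eval { require Test::Kwalitee; Test::Kwalitee->import() };
plan skip_all => 'Test::Kwalitee not installed; skipping' if $@;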
I have been toying with the idea of patching it to make it coexist with other tests in a file, but I'll wait until CPANTS, the backend CPANTS modules and Test::Kwalitee are all back in sync.
Re:A man, a plan, a canal
Alias on 2007-10-22T10:06:03
It's all fine at THAT point... then when you call all_pod_tests_ok, or whatever the function is, Test::Pod will declare ANOTHER plan, and boom: multiple plans, explosion, tests fail.
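To spell out the failure mode (a contrived illustration, not anything from my actual test suite):

use strict;
use Test::More;
use Test::Pod;

plan tests => 10;        # the big combined author test has already declared a plan
all_pod_files_ok();      # Test::Pod declares another plan, and Test::Builder croaks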
Doctor! Doctor! It hurts when I do this!
grinder on 2007-10-22T20:40:46
Don't do that, then.
Use pod_file_ok and spell it out longhand. My latest thinking on the matter looks something like this, which I think is on the right track, although I'm still not happy with the redundancies in, for instance, $test_pod_coverage_tests and @coverage.
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-22T22:54:38
There's no way in hell I'm going to maintain 156 versions of the same pod testing script, written out longhand.
The entire point of the short versions is to have one simple script I can have my release automation automatically copy into place as it's building the release distribution.
Manually maintaining it all would be a massive waste of my time.
Re:Doctor! Doctor! It hurts when I do this!
Aristotle on 2007-10-23T00:27:50
Do what I do, use the testpod and testpodcoverage or equivalent targets of your favourite installer module, and include an inert sham to fool CPANTS into thinking you have test files for these things.
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-23T05:33:41
Use the targets?
I have no control over that either... CPAN Testers run what they want to run...
Also, as far as I'm aware, there is as yet no target for "testperlminimumversion".
Plus I have a hacked POD tester that includes support for the upcoming "begin block titles" that I wrote for Test::Inline, and that Allison has been saying for a year now she'll add to Pod::Simple.
And I refuse to do sham things just to shamelessly exploit CPANTS.
Re:Doctor! Doctor! It hurts when I do this!
Aristotle on 2007-10-23T05:48:16
Ah. My approach is for authors who only want to run their author tests themselves. If you want the CPAN Testers to run your author tests, then you will have to write them as test files.
I have no issue gaming CPANTS, because I do run POD and POD coverage tests as part of the release process, which I believe makes me entirely eligible for those Kwalitee points.
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-23T06:08:09
I never run author tests.
They get run by my release automation, and they get run by CPAN Testers, but it would be a waste of my time to run them myself.
Ruthless automation and intentional ignorance of non-critical issues is the only way to deal with large numbers of modules.
Re:Doctor! Doctor! It hurts when I do this!
Aristotle on 2007-10-23T08:15:22
Irrelevant arguing about semantics. What you do on your own machine is completely inconsequential to my point. Your release automation could run the targets just as well as it runs the author tests.
The question is whether you want CPAN Testers to run POD and POD coverage tests. If you do, they need to be test files. If you do not, you can use the targets instead.
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-23T12:13:10
Indeed I do want them to run them.
It's useful for uncovering unexpected weird cases.
Additionally, having them as test scripts makes sure that the next person who takes over my modules (presumably starting from a tarball) CONTINUES to run the tests.
Re:Doctor! Doctor! It hurts when I do this!
grinder on 2007-10-23T08:35:30
Manually maintaining it all would be a massive waste of my time.
Oh yeah, I forgot that you do have quite a number of modules, don't you :) I guess I'm not playing in the same league. Still, could this not be driven by extracting candidates from MANIFEST? Looking for all .pm files under lib or the base directory of the unpacked distribution?
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-23T09:26:45
I also auto-generate the MANIFEST :)
An Example Module in my SVN Repository
The only two files that I need in the root of my CPAN modules in the repository are Makefile.PL and Changes, which hold actual canonical information.
The rest can be derived (yes I know I shouldn't use pod2man for README files, but I don't have anything better yet).
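For reference, the canonical Makefile.PL in that layout is roughly this shape (an illustrative sketch with placeholder names, not any specific module of mine):

use inc::Module::Install;

name           'Foo-Bar';            # placeholder distribution name
all_from       'lib/Foo/Bar.pm';     # version, abstract, author and license derived from the module
requires       'Carp'       => 0;
build_requires 'Test::More' => '0.47';

WriteAll;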
Re:Doctor! Doctor! It hurts when I do this!
Alias on 2007-10-23T09:28:16
Oh wait, you mean that in a different way.
That's what the all_pod_tests_ok type methods in Test::Pod and such do: they scan for files, create a plan based on the number of files, then test them.
Re:Doctor! Doctor! It hurts when I do this!
grinder on 2007-10-25T08:00:14
Yes, precisely that. Something like:
my @file;
if ( open my $MAN, '<', 'MANIFEST' ) {
    while (<$MAN>) {
        chomp;
        push @file, $_ if /\.pm$/;
    }
    close $MAN;
}
else {
    diag "failed to read MANIFEST: $!";
}
...
SKIP: {
    skip( 'Test::Pod not installed on this system', scalar(@file) )
        unless $test_pod_tests;
    pod_file_ok($_) for @file;
}
But I don't know how to automate the extraction of the Perl package names, to do the equivalent POD public function coverage:
SKIP: {
    skip( 'Test::Pod::Coverage not installed on this system', scalar(@coverage) )
        unless $test_pod_coverage_tests;
    pod_coverage_ok( $_, "$_ POD coverage is go!" ) for @coverage;
}
At the moment it's a hard-coded list.
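I suppose the package names could be derived from the same MANIFEST paths, with something like this (an untried sketch):

# Untried: map lib/Foo/Bar.pm style paths to Foo::Bar package names
my @coverage = map {
    my $pkg = $_;
    $pkg =~ s{^lib/}{};
    $pkg =~ s{\.pm$}{};
    $pkg =~ s{/}{::}g;
    $pkg;
} grep { m{^lib/.*\.pm$} } @file;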