Thanks to yi.org and the incredibly helpful Tyler MacDonald, I proudly present the new CPANTS site!
While things look mostly the same on the surface, a lot has changed beneath, and even more will change. I'm now using DBIx::Class as an ORM (and still using Catalyst), Module::CPANTS::Analyse, and the still unreleased Module::CPANTS::ProcessCPAN, which is built to allow incremental testing (i.e. testing only the dists released since the last run).
It will still take a bit of work to get incremental testing ready. For example, I want to save some condensed stats of old results so that I can plot the kwalitee evolution of dists over time.
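The incremental idea above can be sketched very simply: remember when the last run happened, and only re-test dists released after that point. The function name and data layout below are made up for illustration; Module::CPANTS::ProcessCPAN's real API may look quite different.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical incremental selection: given the epoch of the last run
# and a list of [dist_name, release_epoch] pairs, keep only the dists
# released after the last run.
sub dists_to_test {
    my ($last_run_epoch, @dists) = @_;
    return grep { $_->[1] > $last_run_epoch } @dists;
}

my @new = dists_to_test(
    100,
    [ 'Old-Dist-0.01' => 50  ],
    [ 'New-Dist-1.23' => 150 ],
);
print "$_->[0]\n" for @new;    # only New-Dist-1.23
```

The real version would presumably pull the last-run timestamp and release dates from the DBIx::Class schema instead of an in-memory list.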
Another open issue (and I'd appreciate any ideas) is how to handle the ranking of authors in the CPANTS game. Currently it's based on average kwalitee. I would like to factor the number of dists into the rank, because it's harder to maintain high kwalitee if you have a lot of dists. Any ideas?
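One possibility (purely a sketch of an idea, not anything CPANTS actually does) would be to add a bonus that grows with the logarithm of the dist count, so prolific authors get some credit without a single dist count dominating the score:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical ranking score: average kwalitee plus a small bonus that
# grows with log(dist count). The 0.5 weight is an arbitrary tuning
# knob for illustration, not a real CPANTS value.
sub ranking_score {
    my ($avg_kwalitee, $dist_count) = @_;
    return $avg_kwalitee + 0.5 * log($dist_count);
}

# At equal average kwalitee, an author with 20 dists outranks one with 2.
printf "2 dists:  %.2f\n", ranking_score(20, 2);
printf "20 dists: %.2f\n", ranking_score(20, 20);
```

With log-scaling, going from 2 dists to 20 gives the same bonus as going from 20 to 200, which keeps the ranking mostly about kwalitee.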
Just one question -- What's the "bad permissions" bit mean? I noticed it for my XML-Genx and I'd like to fix it...
Thanks,
-Dom
Re:Thanks
domm on 2006-05-17T13:11:28
What's the "bad permissions" bit mean?
There once was a metric called 'no_bad_permissons' which checked whether a dist contains only files writable by the user, because I hate ending up, after a manual install, with a dir I cannot remove (without doing 'sudo rm -r dir'). The metric was dropped after several people objected. The metadata remains...
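For the curious, a check along those lines is easy to sketch: walk an unpacked dist and collect anything the current user cannot write to. This is an illustration of the idea only, not the removed metric's actual code.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Collect files and dirs under $dir that the current user cannot
# write to -- the ones that would force a 'sudo rm -r' later.
sub unwritable_files {
    my ($dir) = @_;
    my @bad;
    find( sub { push @bad, $File::Find::name unless -w $_ }, $dir );
    return @bad;
}

my $dir = tempdir( CLEANUP => 1 );
my @bad = unwritable_files($dir);
print @bad ? "bad permissions:\n" . join( "\n", @bad ) . "\n"
           : "all files writable\n";
```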
One minor issue: at the bottom of the pages you have the text:
CPANTS data generated with Module::CPANTS::Analyse
but Module::CPANTS::Analyse actually links to Module::CPANTS::Generator on search.cpan.org.
From memory, previous versions gave details (somehow) on the exact causes of errors; for example, in earlier versions of Imager I could see it list lib/Imager/Cookbook.pm as not having use strict. Unfortunately I don't remember exactly where that was linked from - is the detailed scan information available now without installing M::C::Analyse?
This would be handy in tracking down kwalitee failures. For example, Imager fails the no_pod_errors test, but the test suite includes a Test::Pod test script and doesn't seem to be failing it.
Re:Tracking down kwalitee errors
Alias on 2006-05-18T07:48:53
I have a few modules that use Test::Inline 2 that also get caught up in this.
In my case, it's due to not recognizing the (not yet common) "extended begin" syntax:
=begin testing foo after bar...
=end testing