Ever since Test::Pod and friends introduced the idea of Author Tests (also sometimes known as Extended Tests), they've been adopted with enthusiasm by many CPAN authors. Particularly authors with zealot-ish tendencies.
But the use of these tests has been plagued by the problems that accompany ideas driven by theory rather than engineering. Most commonly, these are problems caused by a failure to make compromises, or by my favourite enemy, the Conflict of Interest.
The problems started with authors treating these extended testing dependencies like regular test dependencies, and having the tests run unconditionally.
This isn't a big surprise. These were tests, tests mean dependencies, and tests should be run. This line of thinking makes perfect sense. But only in the short term.
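For illustration, the unconditional form looks something like this sketch (Test::Pod is just the example here, pinned as a hard dependency):

    #!/usr/bin/perl
    # t/pod.t -- a sketch of the problematic unconditional author test.
    # Test::Pod is a hard dependency; if it is missing, broken, or too old,
    # this test dies and the entire installation is blocked for the user.
    use strict;
    use Test::Pod 1.26;

    all_pod_files_ok();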
Dependencies rot in the same way code rots.
Bugs are uncovered, authors make mistakes, bad releases sometimes get out the door. Sometimes these mistakes don't get fixed for a long time.
All tests are considered critical by CPAN clients. The judgement call they make is that any bug important enough to make a test fail is important enough to prevent installation, so that users aren't tricked into using something that will fail for them in the future, after they've put a lot of effort into integrating it.
In pretty much every case, this judgement call doesn't apply to the extended tests.
A POD bug, a misformatted META.yml, or a bad "use 5.005" declaration almost always has zero functional impact on the user once the module passes the regular tests and installs correctly.
The attitude that the user somehow owes the author a debt, and should run the author's tests as a favour, isn't valid, because it ignores proportionality. Helping the author fix a POD bug at the cost of not being able to use the module at all is an unfair exchange. Fortunately, this attitude is one that few people now hold.
A related but more subtle problem is the running of tests under AUTOMATED_TESTING. This is generally a more positive thing to do, because a failure of the tests on CPAN Testers doesn't hurt anyone directly.
But even under automated testing, you probably shouldn't be declaring the extended test modules as dependencies. If you insist on forcing those dependencies onto automated testing platforms, you can actually decrease the amount of testing your module receives.
Unusual and interesting platforms provide the most valuable test runs you will get, and they are also the most likely to have one of the extended dependencies fail.
The trade-off you are really making when you install extended dependencies under AUTOMATED_TESTING is to increase the intensity of testing at the cost of its diversity.
The final mistake is that in some cases people are still specifying the dependencies when running under RELEASE_TESTING (the equivalent of the AUTOMATED_TESTING flag for authors).
I assume this is being done either as a convenience for the author, or to enhance documentation. It isn't as big a deal as in the other two environments, but putting RELEASE_TESTING dependencies into your Makefile.PL can result in those dependencies ending up in the public META.yml file for the module.
This has no functional impact, but for software that processes the dependency graph of the entire CPAN, it creates false edges that don't reflect the real test dependencies. By leaving these dependencies out of the META.yml file, analysis algorithms that run on top of the dependency graph can reach better conclusions.
If the META.yml specification supported some form of release_requires dependency, then they would belong there. But since we don't have that, it's better to leave the dependencies out completely.
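For example, a hypothetical Module::Install-style Makefile.PL fragment that makes this mistake might look like the sketch below (the module name and version numbers are illustrative):

    # If the author happens to generate the release while RELEASE_TESTING
    # is set, the extra modules are written into the public META.yml as
    # build/test dependencies, creating false edges in the CPAN graph.
    use inc::Module::Install;

    name     'My-Module';
    all_from 'lib/My/Module.pm';

    requires      'File::Spec' => '0.80';
    test_requires 'Test::More' => '0.47';

    if ( $ENV{RELEASE_TESTING} ) {
        test_requires 'Test::Pod'        => '1.26';
        test_requires 'Test::CPAN::Meta' => '0.12';
    }

    WriteAll;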
To deal with these problems comprehensively, we can apply the following rules.
1. Extended tests should always check the version of the test module they are using and only run if the module is new enough.
2. Never specify extended testing dependencies in Makefile.PL or META.yml.
3. When running in an end-user installation environment, never run extended tests.
4. When running in an automated testing environment, only run the extended tests if the test modules are already installed and are a new enough version.
5. When running in an author/release environment, always run the tests but don't specify dependencies and allow the tests to crash/fail if the dependencies are not installed.
Pending the creation of some kind of release_testing dependency in META.yml or other changes in the CPAN testing architecture, these rules provide the best compromise between installability, coverage, and diversity across all three testing environments.
You can see an example of these test rules in action here:
http://cpansearch.perl.org/src/ADAMK/Algorithm-Dependency-1.110/t/98_pod.t
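For reference, a minimal sketch of a test file that follows these rules looks roughly like this (the module name and version are illustrative):

    #!/usr/bin/perl
    # Author test that is safe to ship in t/ (sketch).
    use strict;
    use Test::More;

    # Rule 3: never run during an end-user install.
    unless ( $ENV{AUTOMATED_TESTING} or $ENV{RELEASE_TESTING} ) {
        plan( skip_all => 'Author tests not required for installation' );
    }

    # Rules 1, 4 and 5: require a minimum version of the test module.
    # Under RELEASE_TESTING a missing module is a hard failure; under
    # AUTOMATED_TESTING we simply skip.
    my $MODULE = 'Test::Pod 1.26';
    eval "use $MODULE";
    if ( $@ ) {
        $ENV{RELEASE_TESTING}
            ? die( "Failed to load required release-testing module $MODULE" )
            : plan( skip_all => "$MODULE not available for testing" );
    }

    all_pod_files_ok();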
To help authors, toolchain modules, and code generators produce better author tests, I've created Test::XT.
Test::XT generates test files that follow the pattern described above.
It can be used to generate tests for arbitrary extended testing modules, and it also provides prebuilt patterns for the three extended testing modules that I use myself.
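Generating the prebuilt tests might look roughly like this (a sketch assuming the WriteXT interface; the module list and output paths shown are illustrative):

    # Sketch of generating author tests with Test::XT. The exact
    # module-to-file mapping here is an assumption, not a recipe.
    use Test::XT 'WriteXT';

    WriteXT(
        'Test::Pod'            => 't/pod.t',
        'Test::CPAN::Meta'     => 't/meta.t',
        'Test::MinimumVersion' => 't/minimumversion.t',
    );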
I hope to expand this set to cover all of the main extended testing modules (you are welcome to commit code for additional extended testing modules yourself).