There is a growing amount of material available about automated testing in the Perl community. I have noticed that when engineers read an article or listen to a talk on this topic, they often respond either by giving a reason that they can't implement it in their workplace, or by asking how they can implement it in their specific workplace given some set of reasons that it's tough.
In the past 10 months, I have transformed my huge mod_perl application from one with no unit tests into one with about 50% test coverage. Each of the 7 engineers who have been on the team has used the tests and added to them. The application and its testing environment are not perfect, but they're better than they were before, and they'll soon be better than they are now. I did it without issuing an edict requiring unit testing, without having management require it, and without management even knowing about it.
Looking back on the progress, I don't really think it was that hard. In fact, I think that there were three main methods that I used to grow this testing system into what it is now. I've essentially tricked my peers (and myself) into becoming better programmers with these methods, and I'll bet that you can too.
Boiling a Frog
They say that you can put a frog in a pan of water, set it on the stove, and bring the water to a boil. This will cook the frog without it jumping out. If you drop the frog into water that's already hot, it'll jump right out. Apparently, it's the sudden change in environment that scares the frog. The claim is probably not true, but I don't really care to try it. Nonetheless, it makes a good analogy.
When I started adding unit tests to my code, I didn't see any need to convince all of the other members of my team to fully test their code. I knew that if I tested all of the code that I touched or added, we would eventually have dozens and then hundreds of tests covering parts of the whole application.
As I wandered around the application fixing bugs and adding new features in the normal course of my work, I would add unit tests. I knew that each time I checked in a test case, the rest of the team would see the new code in the svn commit emails. They knew that unit tests were being added even if they didn't contribute to them. (You do have Subversion set to send out emails with patches each time someone commits code, don't you?)
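The tests themselves don't need to be anything elaborate. Here is a rough sketch of the kind of test I would check in alongside a bug fix; MyApp::Invoice and its methods are invented for the example, and only Test::More is assumed:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Hypothetical regression test checked in alongside a bug fix.
    # MyApp::Invoice and its methods are made-up names for this example.
    use MyApp::Invoice;

    my $invoice = MyApp::Invoice->new( items => [ 10, 20, 12 ] );

    is( $invoice->item_count, 3,  'item_count() sees every line item' );
    is( $invoice->total,      42, 'total() adds the line items up correctly' );

A test like that takes only a few minutes to write, but it stays behind in t/ long after the bug fix ships.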
Occasionally, someone checked in a change that broke a few tests when the nightly smokebot ran. I fixed the code or the tests, checked in the patch, and sent a mail to the developer who broke the test cases. I let him know that I had made a small change to preserve backwards compatibility, or something like that, and moved on. Eventually, the water boiled, so to speak, and a few other developers started running the test suite themselves and adding tests to try out their new code. This meant that I was no longer the only one writing unit tests. The test suite was growing more quickly. That's powerful.
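If you don't have a smokebot yet, it doesn't have to be fancy. Here's a minimal sketch of a nightly smoke script, assuming a t/ directory of test scripts, the TAP::Harness module, and a cron entry that mails any output to the team; the details of your setup will almost certainly differ:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use TAP::Harness;

    # Run every test script under t/ without any console chatter.
    my @tests = sort glob 't/*.t';

    my $harness    = TAP::Harness->new( { verbosity => -3, lib => ['lib'] } );
    my $aggregator = $harness->runtests(@tests);

    # cron mails anything printed to STDOUT, so stay quiet unless something broke.
    if ( $aggregator->has_problems ) {
        print "Nightly smoke run: ", $aggregator->get_status, "\n";
        print "Run 'prove -lr t/' to see which tests failed.\n";
        exit 1;
    }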
The Ping-Pong Method
The "ping-pong" method is another trick I used to build up our test suite. After I noticed another member of my team check in a new method for an object, I'll try to use it. I write a test case that instantiates the object and tries to call that method. Typically, I find a place where the documentation doesn't really match the behavior. So, I'll send over a mail to the author:
"Hey, Carl. I'm trying to use the new 'bark' method I saw that you just added to the Dog object, and I can't quite figure it out. The docs say that I can optionally pass a volume paramteter, but if I leave it out, the dog doesn't bark. Is it required, or should there be a default?"
Typically, they'll write back and say that there's something missing from the documentation or the code, and often they'll even check in a patch or mail one to me. I'll update the test cases that I'm working on and then check them in. Through this back-and-forth exchange, I can make sure that I understand most of the new code that goes into the application, that the code does what the original author expects it to do, and that there are test cases to prove it. Knowing that I would eventually be writing test cases for their code encouraged the other team members to write well-documented, usable code, and eventually even their own test cases. After a few weeks of "ping-pong", I'll bet most developers will start writing their own unit tests.
The Ratchet Method
Once the members of the team agreed, for the most part, that we should be writing test cases for our code and adhering to some kind of coding standard, I checked in a few tests that touch all of the files in the application. These test for POD syntax errors, for gaps in POD coverage, and for violations reported by Perl::Critic.
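The POD checks are essentially the standard boilerplate from Test::Pod and Test::Pod::Coverage, split across two small test files, along these lines:

    # t/pod.t -- every file in the application must have valid POD syntax
    use strict;
    use warnings;
    use Test::Pod;
    all_pod_files_ok();

    # t/pod-coverage.t -- every public subroutine must be documented
    use strict;
    use warnings;
    use Test::Pod::Coverage;
    all_pod_coverage_ok();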
When I got started, of course, not all of the files could pass the POD checks or Perl::Critic, so I listed the ones that failed and made them TODO tests. If a file passed Perl::Critic at severity level 5 but not at level 4, I made sure that it was tested at 5 and added a TODO test for it at 4, or whatever level we were shooting for. When a file is modified, if the new code causes it to fail a POD syntax or coverage test or a Perl::Critic test, the developer knows that he needs to clean up his code to match the level of quality that was already there. If a new file is added, since it won't be in the list of current exceptions, it will be required to pass all of the tests at the desired level.
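As a sketch of that ratchet applied to Perl::Critic (the file names in the TODO list are made up; you would seed the list with whatever fails in your own code base), the test might look something like this:

    use strict;
    use warnings;
    use Test::More;
    use Perl::Critic;
    use File::Find;

    # Files that don't yet pass at the stricter severity; the goal is to
    # shrink this list over time. (These names are hypothetical.)
    my %todo_at_severity_4 = map { $_ => 1 } qw(
        lib/MyApp/Legacy.pm
        lib/MyApp/OldReports.pm
    );

    my @files;
    find( sub { push @files, $File::Find::name if /\.pm\z/ }, 'lib' );

    plan tests => 2 * scalar(@files);

    my $critic5 = Perl::Critic->new( -severity => 5 );
    my $critic4 = Perl::Critic->new( -severity => 4 );

    for my $file (@files) {
        my @harsh = $critic5->critique($file);
        ok( !@harsh, "$file is clean at severity 5" );

        TODO: {
            local $TODO = $todo_at_severity_4{$file}
                ? 'not yet clean at severity 4'
                : undef;
            my @gentle = $critic4->critique($file);
            ok( !@gentle, "$file is clean at severity 4" );
        }
    }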
By creating tests like these, which capture the current level of some measure of code quality, I can ensure that new code will not be worse than what's already there. The tests act as a ratchet, constantly ensuring that new code measures as good as or better than the current code.
I think that these three methods are pretty good ways to inject unit testing into your application. At the very least, they're pretty good responses for the next time someone claims that they can't start writing unit tests because not all members of the team have "drunk the Kool-Aid". Now go start writing better code.
Thanks!