Unit tests aren't much good if you don't run them. Duh. Unit tests get in the way if they take a long time to run. Developers will avoid running them, even when social pressure is applied.
On my current project, we've tried to achieve a balance. The standard "build" that happens after normal editing runs the in-memory subset of our unit tests. (On a fast laptop, it adds less than 7 seconds to the build.) And I've arranged to fail the build if any test fails. Nobody complains, and large parts of our application stay unbroken.
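Failing the build on any test failure can be arranged in Ant (which, judging from the replies below, is what this project uses) with the junit task's haltonfailure attribute. This is a rough sketch; the target name, classpath reference, and test-class naming convention are assumptions, not the project's actual setup:

```xml
<!-- Sketch: make the standard build die if any in-memory test fails.
     Names and paths here are placeholders. -->
<target name="test" depends="compile">
  <junit haltonfailure="yes" fork="yes">
    <classpath refid="build.classpath"/>
    <formatter type="brief" usefile="false"/>
    <batchtest>
      <fileset dir="build/classes" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>
```

With haltonfailure="yes", a single red test stops the build cold, so nobody can quietly check in on top of a failure.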
The "live data" tests run separately, since they require creating a test schema and populating it with test data (and this requires access to a server, which isn't guaranteed for various semi-legitimate reasons). This separation of tests works fairly well, except that I can't get people to run the damn database tests on a regular basis. Waiting for the overnight build to catch problems takes too long; problems can multiply unseen in the interim.
There are technical aspects of this problem (e.g., we could, for a small expenditure of time, set up a continuous integration and test environment), but I think the bigger problem here is social.
Social pressure works well in colocated projects. One need only walk a few steps before applying a bitchslap. And "Whoever checks in broken code has to wear the funny hat to the morning meeting" can be very effective. I'm looking for techniques for applying pressure in a distributed team.
Re:How about this...
dws on 2004-02-20T20:53:37
It's a bit hard to get at their desktops, but the project home page... Oh Yeah!
More than once I've seen folks sign off on tests when there was no way they could have actually run them: the test suite they were supposed to use had, by observation, been broken for months. There was also no real way of taking them to task over it. Needless to say, I quite like working on smaller projects at the moment.
Unfortunately, I think the only thing that can be done for stuff like this is to have everyone agree on the procedure, and then verify that it's followed by automating it (the carrot) and auditing the tests (the stick).
Re:How difficult is it?
dws on 2004-02-21T02:01:10
The procedure for compiling the system, running the in-memory unit tests, and building a .war file for deployment is to type

    ant

To do the same, and install in a Tomcat server, the command is

    ant install

To compile the system, run the in-memory tests, and then run the database tests, typing

    ant dbtest

does the trick, assuming that you have access to a live database server. Each developer gets their own sandbox, so there's no risk of A stepping on B. All of this is incredibly simple, but people who should know better are skipping the last step.
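Under the hood, those three commands presumably map onto Ant targets along these lines. This is only a sketch: the compile details, paths, test-class naming, and the Tomcat webapps location are all assumptions.

```xml
<project name="app" default="war">

  <target name="compile">
    <javac srcdir="src" destdir="build/classes" classpathref="build.classpath"/>
  </target>

  <!-- in-memory unit tests; the fast subset run on every build -->
  <target name="test" depends="compile">
    <junit haltonfailure="yes">
      <classpath refid="build.classpath"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*Test.class"
                 excludes="**/*DbTest.class"/>
      </batchtest>
    </junit>
  </target>

  <!-- "ant": compile, run in-memory tests, build the .war -->
  <target name="war" depends="test">
    <war destfile="build/app.war" webxml="web/WEB-INF/web.xml">
      <classes dir="build/classes"/>
    </war>
  </target>

  <!-- "ant install": same, then drop the .war into Tomcat -->
  <target name="install" depends="war">
    <copy file="build/app.war" todir="${tomcat.home}/webapps"/>
  </target>

  <!-- "ant dbtest": in-memory tests plus the live-database tests;
       needs a reachable database server and a per-developer schema -->
  <target name="dbtest" depends="test">
    <junit haltonfailure="yes">
      <classpath refid="build.classpath"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*DbTest.class"/>
      </batchtest>
    </junit>
  </target>

</project>
```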
On my current project we have an automated background process running the full test suite every couple of hours. If the tests pass, the code is tagged and checked out onto the staging server, which is the first time the client gets to see/play with it. If anyone breaks the build, no code gets promoted to staging. There's always at least one of us with some new code we want in front of the client, so if the stick needs to be waved, we take it in turns to wave it.