Satisfaction

Ovid on 2003-03-04T01:48:10

Some tests depend on others, so we number them. If tests break, we fix the lower-numbered ones first, and that frequently takes care of the higher-numbered ones.

I think we have finally started really getting the "testing" mindset.

$ perl test_all.pl
01class..............ok
01dbi................ok
02account............ok
02category...........ok
02company............ok
02companyboth........ok
02contact............ok
02country............ok
02customer...........ok
02manf...............ok
02pricelevel.........ok
02pricetypes.........ok
02retailer...........ok
02roletype...........ok
02saleitem...........ok
02saletender.........ok
02session............ok
02states.............ok
02status.............ok
02taxrate............ok
02tendertype.........ok
02term...............ok
02uom................ok
02vendor.............ok
03handlerbase........ok
03pluginhtmltable....ok
03product............ok
03sales..............ok
04completesale.......ok
04handleraccount.....ok
04handlercompany.....ok
04handlercontact.....ok
04handlercustomer....ok
04handlerlogin.......ok
04handlerproduct.....ok
04handlerretailer....ok
04handlertaxrate.....ok
04handlerterms.......ok
All tests successful.
Files=38, Tests=1547, 125 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU)

I'm still not sure why the individual times in the Test::Harness output are zeroes. Plus, when I run the tests on my Linux box at home, I frequently get extra blank lines between lines of output. I was also getting this in Cygwin at work, so it's not just my home 'puter.
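For the curious, a driver like test_all.pl doesn't need to be much more than a thin wrapper around Test::Harness. Here's a minimal sketch (not our actual script, just the general idea) that assumes the numbered test files live in a t/ directory; sorting the filenames lexically runs the tiers in order:

  #!/usr/bin/perl
  # Minimal numbered-test driver: run the test files in ascending
  # numeric order so the lower tiers are exercised first.
  use strict;
  use warnings;
  use Test::Harness;

  my @tests = sort glob 't/*.t';   # 01*, 02*, 03*, 04* sort in tier order
  runtests(@tests);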


Non-granularity

petdance on 2003-03-04T15:04:38

Personally, I don't like having tests that rely on one another. I think that tests should stand on their own, so that they can be run on their own during development.

See my thoughts on this at http://petdance.com/perl/automated-testing/

Re: Non-granularity

Ovid on 2003-03-04T18:24:00

I guess I wasn't clear. All of the tests do run on their own. Any of those test programs can be separated from the others and will work just fine. However, rather than do a bunch of white-box unit testing (which is what we were doing), I have found that black-box integration testing, while sacrificing some fine-grained control, gets me the results I need.

The higher-numbered tests don't depend on the lower-numbered tests per se, but if a higher-level test fails, I immediately look for a failure of a lower-level test and fix that first. While that does mean my tests appear more tightly coupled than they should be, I can still run tests individually and everything should be fine.

I've started doing this because we were running a lot of white-box unit tests against systems that have multiple tiers before you drill down to the database. I might have something like this:

presentation -> dispatch -> handlers -> business objects -> persistence layer -> database

We might make a small tweak in the database or persistence layer, but because the unit tests were isolated from those layers, they would pass like a charm and we'd miss the bugs that propagated through the system. But what happens when an integration test fails at the dispatch layer? Maybe the bug is in that layer, maybe it's in one of the four layers below it. That drives up debugging time considerably! However, by numbering the tests by layer and fixing failures in the lower tiers first, we get part of the unit-test benefit of narrowing down where a bug really is, but we also get the integration-test benefit of knowing whether we have API errors between tiers. It's been a very useful compromise.
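To make the numbering concrete, here's a rough sketch of what a lower-tier test and its higher-tier counterpart might look like. The class and method names below are made up for illustration (they're not our real modules); the point is that the 02-level test hits the business object and persistence layer directly, while the 04-level test pushes the same kind of data through a handler, so when the 04 test breaks I check the 02 test first:

  # 02customer.t -- hypothetical lower-tier test: exercise the
  # business object straight through the persistence layer.
  use strict;
  use warnings;
  use Test::More tests => 2;
  use My::Customer;    # made-up class name

  my $cust = My::Customer->new( name => 'Alice' );
  ok( $cust->save, 'customer saves through the persistence layer' );
  is( My::Customer->lookup( $cust->id )->name, 'Alice',
      'and can be read back' );

  # 04handlercustomer.t -- hypothetical higher-tier test: push the
  # same sort of data through the handler layer instead.  If this
  # fails and 02customer.t fails too, fix the lower tier first.
  use strict;
  use warnings;
  use Test::More tests => 1;
  use My::Handler::Customer;    # also made up

  my $handler  = My::Handler::Customer->new;
  my $response = $handler->create( { name => 'Alice' } );
  ok( $response->{success}, 'handler creates a customer end to end' );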

If you have a rebuttal, I'm all ears :)