Our test suite now takes half an hour to run, and that's when no one else is here. If I'm fighting them for resources, it's forty minutes. We need to get this back under control.
There are several interesting ideas about how to do this. Most of our time is spent in XML::XPath and in DBIx::Class, particularly in how the latter interacts with the database. We need to fix the namespaces in our XML before we can switch to XML::LibXML, but right now we have two promising ideas for the database problems.
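To show why the namespaces block the switch: XML::LibXML's XPath engine is namespace-strict, so every prefix used in a query must be registered up front, unlike the looser matching we get away with today. Here's a minimal sketch; the namespace URI and element names are illustrative, not our real schema:

    use strict;
    use warnings;
    use XML::LibXML;

    my $doc = XML::LibXML->load_xml( string => <<'XML' );
    <programmes xmlns="http://example.com/pips">
      <brand pid="brwtd"/>
    </programmes>
    XML

    # Without a registered prefix, '//brand' matches nothing because
    # the elements live in a default namespace.
    my $xpc = XML::LibXML::XPathContext->new($doc);
    $xpc->registerNs( p => 'http://example.com/pips' );

    for my $brand ( $xpc->findnodes('//p:brand') ) {
        print $brand->getAttribute('pid'), "\n";    # brwtd
    }

Until our documents declare their namespaces consistently, every query has to go through a context like this, which is why the cleanup has to happen first.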
The first is creating a pool of test databases. We already have per-user, per-branch test databases: if I'm working on our 'segments' branch, a pips3_test_poec01_segments database is created just for me. What I want, as a first pass, is to have pips3_test_poec01_segments_01 and pips3_test_poec01_segments_02. While a test run uses one, the other is rebuilt in the background, so the tests never have to wait for a rebuild. Some runs might finish before the rebuild does, but because they're database tests, they usually won't.
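A rough sketch of the two-database rotation. All the names here (the package, the rebuild script) are hypothetical; this just shows the shape of the idea, with the rebuild forked off so it doesn't block the next test run:

    package Test::DBPool;    # hypothetical name
    use strict;
    use warnings;

    sub new {
        my ( $class, %args ) = @_;
        # e.g. basename => 'pips3_test_poec01_segments'
        return bless { basename => $args{basename}, current => 0 }, $class;
    }

    sub current_database {
        my $self = shift;
        return sprintf '%s_%02d', $self->{basename}, $self->{current} + 1;
    }

    # Hand back the freshly rebuilt sibling and start rebuilding
    # the database we just dirtied.
    sub next_database {
        my $self  = shift;
        my $dirty = $self->current_database;
        $self->{current} = ( $self->{current} + 1 ) % 2;
        $self->_rebuild_in_background($dirty);
        return $self->current_database;
    }

    sub _rebuild_in_background {
        my ( $self, $dbname ) = @_;
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ( !$pid ) {
            exec 'rebuild_test_db', $dbname;    # hypothetical rebuild script
            die "exec failed: $!";
        }
    }

    1;

The interesting open questions are what happens when both copies are dirty at once (a third database? block?) and how we know the background rebuild finished cleanly before handing the database out again.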
The second idea is interesting. We have test fixtures where the code can look something like this:
    my $ce = $class->change_event_builder($schema);

    my $service = $schema->resultset('Service')->find({
        api_public_name => 'bbc_one_london',
    });
    my $ondemand_service = $schema->resultset('Service')->find({
        api_public_name => 'iplayer_streaming',
    });
    my $pip_rs = $schema->resultset('Pip');

    # Brand: Waking the Dead
    my $wtd = $pip_rs->create_brand({
        title => 'Waking the Dead',
        pid   => 'brwtd',
        crid  => 'crid://bbc.co.uk/b/10366',
    });
    $ce->add_change_event($wtd);

    # Series: Series 5
    my $s5 = $pip_rs->create_series({
        title => 'Series 5',
        pid   => 'seri5',
        crid  => 'crid://bbc.co.uk/b/10360',
    });

    # lots more stuff adding episodes, versions, credits, and so on ...
A test can load a fixture with something like (fudging here):
    $fixture->load($fixture_name);
And then the test can proceed on its merry way.
What if we cache the SQL for that? We could store an MD5 hash of each fixture file: if the hash changes, we rerun the fixture and re-cache the SQL it generates; otherwise, we replay the cached SQL directly to load the fixture data. This raises the obvious question of "how do we capture this SQL?"
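One possible answer: DBIx::Class already has a debug hook. The storage object's debug and debugfh methods are real; everything else below (the cache layout, the replay, the run_fixture helper) is an assumption about how we might wire it up. One big caveat: the debug trace logs bind values separately rather than producing literally replayable SQL, so a real implementation would probably need a custom DBIx::Class::Storage::Statistics subclass rather than raw debugfh output.

    use strict;
    use warnings;
    use Digest::MD5 ();

    sub load_fixture {
        my ( $schema, $fixture_file ) = @_;

        my $digest     = _md5_of_file($fixture_file);
        my $cache_file = "$fixture_file.$digest.sql";    # assumed layout

        if ( -e $cache_file ) {
            # Cache hit: replay the captured SQL directly
            my $sql = do { local ( @ARGV, $/ ) = $cache_file; <> };
            $schema->storage->dbh_do( sub {
                my ( undef, $dbh ) = @_;
                $dbh->do($_) for split /;\n/, $sql;
            });
            return;
        }

        # Cache miss: run the fixture while capturing the SQL it issues
        open my $capture, '>', $cache_file
            or die "Cannot open $cache_file: $!";
        $schema->storage->debug(1);
        $schema->storage->debugfh($capture);

        run_fixture( $schema, $fixture_file );    # hypothetical fixture runner

        $schema->storage->debug(0);
        close $capture or die "Cannot close $cache_file: $!";
    }

    sub _md5_of_file {
        my $file = shift;
        open my $fh, '<', $file or die "Cannot open $file: $!";
        return Digest::MD5->new->addfile($fh)->hexdigest;
    }

Note that the hash has to cover not just the fixture file but anything it depends on (the builder code, the schema version), or a stale cache will quietly load the wrong data.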