Google Test Automation Conference

brian_d_foy on 2006-09-10T18:02:00

This week I participated in the Test Automation Conference hosted by Google in London. They are based just by Victoria Station and have pretty shiny offices indeed, with a pool table, table football and lots of little perks like fridges full of Innocent smoothies. This two-day conference appealed to me because I'm really interested in large-scale testing - as the t-shirt we were given pointed out: "because life is too short for manual testing".

It's not easy organising a conference for 230 people, but this one was wonderfully pulled off: the schedule was full, with a good mix of academic talks, Google staff talks and real-world experience, and we were very well fed. I found the Google staff talks particularly interesting. However, I think the talks could easily have been 30 minutes instead of an hour. They are planning to stick videos of all the talks on Google Video, but I thought I'd share some highlights with you in the meantime.

It all kicked off with the SmartFrog crew asserting that "any application without adequate system tests does not exist". SmartFrog seemed a little like a research project: it can automatically deploy and test entire systems across many machines - or perhaps onto virtual machines. This talk, like many others, included a demo. However, watching characters scroll up the screen isn't very exciting (and that's if the demo doesn't crash and burn): please show me pretty charts and statistics instead. It looked interesting, and it was good to see the idea of full system tests pushed hard right at the start of the conference.

On the other side, there were a few talks which amounted to "FIT is interesting". I do like the idea of letting the user define the functional tests, but at the moment I rarely have users who know what they want, so it's hard for me to put into practice.

Some of the talks mentioned coding little domain-specific languages in Java. I'm a fan of DSLs, but in Java you need so many braces that I'm not sure it's worth it: in more dynamic languages you can get away with much more.
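For example, in Perl a block-taking subroutine with a prototype reads almost like a new keyword, so a tiny test DSL costs very little. Here's a minimal sketch (the scenario helper is made up purely for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The (&$) prototype lets callers pass a bare block followed by a
    # name, with no "sub" keyword and no comma - just like map or grep.
    sub scenario (&$) {
        my ( $check, $name ) = @_;
        my $status = $check->() ? 'ok' : 'not ok';
        print "$status - $name\n";
    }

    # Reads almost like a little language:
    scenario { 1 + 1 == 2 } 'addition works';
    scenario { lc('DSL') eq 'dsl' } 'lc() lowercases';

Try doing that in Java without drowning in anonymous inner classes.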

I particularly liked AutoTest, which extracts contracts from Eiffel applications and generates random tests, with clever optimisations like reducing the test state space and producing minimal failing test cases using static program slicing. Something similar for Perl is LectroTest.
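LectroTest (Test::LectroTest on CPAN) does the same kind of property-based random testing for Perl: you state a property over generated inputs and it hunts for counterexamples. A minimal sketch, with a made-up my_abs function standing in for the code under test:

    #!/usr/bin/perl -w
    use Test::LectroTest;

    # A made-up function under test.
    sub my_abs { my $x = shift; $x < 0 ? -$x : $x }

    # LectroTest feeds the property random integers and reports any
    # counterexample it finds.
    Property {
        ##[ x <- Int ]##
        my_abs( $x ) >= 0;
    }, name "my_abs never returns a negative number";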

The highlight of the first day was the last talk: a great double act on building testable AJAX applications. You don't need to squash model, view and controller together in your JavaScript: if you split them out sensibly, the JavaScript becomes much easier to test. We had to wait until the last talk of the day to hear a joke!

The highlight of the second day was Goranka Bjedov explaining how she uses open source tools for performance testing (mostly JMeter). She shared a great depth of knowledge with us all, covering performance, stress, load (don't stress Linux beyond 80% load or memory use), benchmark, scalability, reliability (because at any point in time a thousand systems are failing) and availability testing. She liked open source tools ("Why do we build tools from subatomic particles when we have bricks?") such as JMeter ("free as in puppy") and shared some pretty stats (with memory and CPU both shown as percentages) demonstrating that "developers are totally delusional about software". A totally wonderful talk.

There was a Selenium talk, which should have been very interesting. Instead of talking about Selenium, however, the speaker tried to show us a demo of using Selenium to record screencasts of testing web apps in virtual machines, so that you can see what went wrong. It crashed and burned. Never do live demos; they're not worth it: always fake it with a screencast.

I also liked "Testing Metro WiFi": throwing cheap Linksys routers and palmtops around Mountain View and testing the network with iperf. Very nice.

The shorter the talk, the better it is. So we finished up the conference with lightning talks: all were wonderful, particularly Ovid introducing the testing world to TAP, the Test Anything Protocol.
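TAP itself is disarmingly simple: a plan line followed by one "ok" or "not ok" line per test, which is why harnesses for it are so easy to write. A wee Test::More script:

    use strict;
    use warnings;
    use Test::More tests => 2;

    ok( 1 + 1 == 2, 'addition works' );
    is( uc('tap'), 'TAP', 'uc() upcases' );

produces plain TAP on standard output:

    1..2
    ok 1 - addition works
    ok 2 - uc() upcases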

Many thanks to Google and all the speakers and attendees. I learnt a lot from you all, especially the hallway track ;-)


Handy Perl testing tools

bryce on 2006-09-13T01:41:58

I didn't get a chance to attend this test automation conference, but I'd like to share a few testing tools we've been developing at OSDL that are available on CPAN:

Linux::Bootloader is a general interface for updating different bootloader config files (lilo, grub, elilo, yaboot). Quite handy if you're doing kernel testing.
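As a rough sketch of how this fits into kernel testing: read the bootloader config, add an entry for the freshly built kernel, and make it the default for the next boot. The method names below follow the module's object interface, but the exact parameters are my assumptions, so check the CPAN docs:

    use strict;
    use warnings;
    use Linux::Bootloader;

    # Sketch only: the parameter names here are assumptions.
    my $bootloader = Linux::Bootloader->new();
    $bootloader->read('/boot/grub/menu.lst');

    # Add an entry for the test kernel and boot into it next.
    $bootloader->add(
        title  => 'test-kernel',
        kernel => '/boot/vmlinuz-test',
    );
    $bootloader->set_default('test-kernel');
    $bootloader->write('/boot/grub/menu.lst');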

WWW::PkgFind is a software package spider. You point it at an FTP or web site and specify regexps matching the names of packages to pull. It'll even pull from SourceForge's mirror system. It flags new downloads so your test harness can queue up whatever tests are needed. Great for situations where you have several open source projects scattered around the net that you want to follow.
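To make the idea concrete, here's a from-scratch illustration of what such a spider does - note this is not WWW::PkgFind's actual interface, and the URL and pattern are made up:

    use strict;
    use warnings;
    use LWP::Simple qw(get getstore);

    # Made-up project to follow: fetch the index page, find package
    # links matching a pattern, and download anything we don't have.
    my $url     = 'http://example.com/releases/';
    my $pattern = qr/(myproject-[\d.]+\.tar\.gz)/;

    my $index = get($url) or die "can't fetch $url\n";
    for my $pkg ( $index =~ /$pattern/g ) {
        next if -e $pkg;                  # seen it already
        getstore( "$url$pkg", $pkg );     # new package arrived
        print "new: $pkg\n";              # queue it for testing
    }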

Test::Parser and Test::Reporter are a pair of tools for generating pretty reports from various test log files. These are pretty new and still a work in progress.

Crucible is a set of daemon processes for operating a testing facility. It takes advantage of all the above tools and can perform client/server testing (it was originally developed for testing NFSv4).

I've been posting some data from our testing at http://crucible.osdl.org/ if you want to see the visuals...

Anyone interested in this...

DAxelrod on 2006-09-16T19:09:37

Make sure to see acme's follow-up post, with pointers to videos of this conference.