Databases are whacky things. Programming for them is very different to hacking Perl. Well, it's not *so* different, but where the performance differences turn up is always, umm, odd.
Now, after my earlier annoyance, the PostgreSQL team have documented this temp tables bug in their FAQ, and also put in a workaround, which is to use EXECUTE (which executes a string as SQL) so that the plan referencing the temp table isn't cached.
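Here's roughly what that workaround looks like; the connection details, table and function names are invented for the example, it's just a sketch of the idea:

```perl
use strict;
use warnings;
use DBI;

# plpgsql normally caches query plans, and a cached plan holds the OID of
# the temp table it was built against, so it breaks once that table is
# dropped and recreated. Feeding the statements to EXECUTE as strings
# makes plpgsql plan them afresh on every call.
my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                       { RaiseError => 1 });

$dbh->do(<<'SQL');
CREATE OR REPLACE FUNCTION refresh_scratch() RETURNS integer AS '
BEGIN
    EXECUTE ''CREATE TEMP TABLE scratch (id integer, score integer)'';
    EXECUTE ''INSERT INTO scratch SELECT id, score FROM source_data'';
    RETURN 1;
END;
' LANGUAGE 'plpgsql';
SQL

# The temp table lives for the rest of the session, so ordinary statements
# issued from Perl can query it after the function has run.
$dbh->do('SELECT refresh_scratch()');
```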
After all that I refactored some of it. It turned out that my inclination to move everything into a stored procedure was premature. Pg's stored procedure language (plpgsql) isn't all that fast, so things were actually quicker done in perl-space, via a pre-prepared statement.
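The perl-space version is roughly this shape (the table, columns and categories are made up): prepare the statement once, then just re-execute the handle with new bind values inside the loop.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                       { RaiseError => 1 });

# Prepare once, outside the loop, rather than pushing the whole job into
# a plpgsql function.
my $sth = $dbh->prepare(
    'SELECT id, score FROM source_data WHERE category = ?'
);

my %scores;
for my $category (qw(news sport tech)) {
    $sth->execute($category);
    while (my ($id, $score) = $sth->fetchrow_array) {
        $scores{$category}{$id} = $score;   # accumulate in perl-space
    }
}
```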
Then I discovered something I should have figured out a while ago: what was taking so long was the inserts into the temporary table. So I got cunning. I figured out a way to implement it using a couple of views and a simple join. Now I can make everything happen in a single Perl loop, a dash of sort (using the Schwartzian Transform), and a slice of umm... slice.
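Something along these lines, anyway; the view and column names are guesses. One SELECT over a join of two views stands in for the slow temp-table inserts, then the rows get ranked with a Schwartzian Transform and trimmed with an array slice.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                       { RaiseError => 1 });

# Fetch everything in one go as an array of hashrefs.
my $rows = $dbh->selectall_arrayref(
    'SELECT a.id, a.name, b.hits
       FROM view_items a JOIN view_stats b USING (id)',
    { Slice => {} }
);

# Schwartzian Transform: build [key, row] pairs, sort on the precomputed
# key, then strip the key back off.
my @ranked = map  { $_->[1] }
             sort { $b->[0] <=> $a->[0] }
             map  { [ $_->{hits}, $_ ] } @$rows;

# ...and a slice of umm... slice: keep the top ten (or all, if fewer).
my @top = @ranked > 10 ? @ranked[0 .. 9] : @ranked;
```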
The other thing I did was find a great short article about optimising Pg on Linux systems. It amounts to increasing the kernel's shared memory limits and setting a few extra Pg parameters. Doing that pretty much tripled the speed of the thing.
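The gist is along these lines; the numbers here are purely illustrative, not the article's figures, and the parameter names are the 7.x-era ones (sort_mem later became work_mem):

```
# postgresql.conf
shared_buffers = 8192          # number of 8kB buffers, i.e. 64MB
sort_mem = 8192                # per-sort memory, in kB
effective_cache_size = 65536   # hint about the OS cache, in 8kB pages

# /etc/sysctl.conf -- raise the kernel shared memory ceiling so the
# bigger shared_buffers segment can actually be allocated
kernel.shmmax = 134217728
```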
So now what was taking between 600 and 1000ms takes about 60ms. Which means I can actually deploy this now (if we decide to do so). Schweet.
Maybe one day I can actually talk about what the hell it is I'm doing with this database ;-)
Update: The article in question was this one at phpbuilder.
Re:Article
Matts on 2002-08-28T14:22:05
I updated the journal with a link.