I stuck a couple of counters and such into the code so that I could try to determine where it slows down, and found that it gets slower as time goes by. I suspect the size of the hash it's inserting into is the problem, not the basic processing time.
To that end, I'm considering using a database to hold the results, in case this thing will actually be fast enough that way. I'm not 100% sure this will work -- it adds the overhead of DBI to the mix -- but it's better than giving up.
If I'm overlooking something obvious, I'd love to know it.
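For the database idea, here's a minimal sketch of what I have in mind, assuming SQLite via DBD::SQLite -- the table and column names are invented for illustration, and the `ON CONFLICT ... DO UPDATE` upsert needs SQLite 3.24 or later:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical schema: one row per prefix, with a running count,
# instead of one trie node per prefix in memory.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=counts.db', '', '',
    { RaiseError => 1, AutoCommit => 0 } );

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS prefix_counts (
        prefix TEXT PRIMARY KEY,
        count  INTEGER NOT NULL DEFAULT 0
    )
});

# Prepare once, execute many times -- one upsert per prefix.
my $sth = $dbh->prepare(q{
    INSERT INTO prefix_counts (prefix, count) VALUES (?, 1)
    ON CONFLICT(prefix) DO UPDATE SET count = count + 1
});

$sth->execute($_) for qw(c ca cat);   # the prefixes of "cat"
$dbh->commit;
```

Keeping AutoCommit off and committing in batches should matter a lot for speed here; one transaction per insert would likely be slower than the hash ever was.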
my ( @head, @tail, @middle );   # without the parens, my() would declare only @head
sub add_counts {
    my $ref = shift;
    while ( @_ ) {
        my $charindex = shift;
        # descend into the child node, creating it if necessary
        $ref = defined $ref->[$charindex]
            ? $ref->[$charindex]
            : ( $ref->[$charindex] = [] );
        ++$ref->[0];    # slot 0 of each node holds the count
    }
}
my %mapchar = (
    a => 1,
    b => 2,
    # ...
    z => 26,
);
sub charlist {
    my $word = shift;
    # map each character of the word to its numeric index
    map { $mapchar{$_} } ( $word =~ m/(.)/g );
}
sub mapword {
    my @chars = charlist shift;
    add_counts( \@head, @chars );            # count every prefix
    add_counts( \@tail, reverse @chars );    # count every suffix
    pop @chars;
    shift @chars;
    # count substrings starting at each interior position
    while ( @chars ) {
        add_counts( \@middle, @chars );
        shift @chars;
    }
}
sub maxtrav {
    my $ref = shift;
    my $count = 0;
    while ( my $char = shift ) {
        last unless defined $ref->[$char];
        $ref = $ref->[$char];    # descend, or we'd test the same node forever
        ++$count;
    }
    $count;
}
sub reversible {
    my @chars = charlist shift;
    my $hlen = maxtrav( \@head, @chars );
    my $tlen = maxtrav( \@tail, reverse @chars );
    my $mlen = maxtrav( \@middle, @chars );
    ( $hlen + $tlen ) >= @chars || $mlen >= @chars;
}
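For what it's worth, here's a self-contained sanity check of the array-trie idea, stripped down to one trie. Note that the traversal has to descend into each child node as it matches, not just test children of the root. The word list is made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# slot 0 of each node holds the count; slots 1..26 are child nodes
my %mapchar = map { $_ => ord($_) - ord('a') + 1 } ( 'a' .. 'z' );
my @head;

sub add_counts {
    my $ref = shift;
    for my $charindex (@_) {
        $ref = $ref->[$charindex] //= [];   # vivify the child, then descend
        ++$ref->[0];
    }
}

sub maxtrav {
    my $ref = shift;
    my $count = 0;
    for my $char (@_) {
        last unless defined $ref->[$char];
        $ref = $ref->[$char];               # descend as we match
        ++$count;
    }
    $count;
}

# load a couple of toy words, then see how far "cart" matches
add_counts( \@head, map { $mapchar{$_} } split //, $_ ) for qw(cat car);
print maxtrav( \@head, map { $mapchar{$_} } split //, 'cart' ), "\n";   # 3: "car" is stored
```

If the traversal doesn't descend, this prints 4 for any word starting with a stored letter, which is how I'd notice the bug.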