One of the biggest complaints about Moose has always been its compile-time cost. With the newest releases of Moose (0.33) and Class::MOP (0.49) we have made a major step toward reducing that. With judicious use of XS (konobi++ for that) and some caching in key parts of Class::MOP, we have managed to cut Moose load time almost in half. Here are the stats from my machine (MacBook Pro w/ 2.4 GHz Intel Core 2 Duo):
Moose 0.32 & Class::MOP 0.48 => 0.46 real 0.43 user 0.02 sys
Moose 0.33 & Class::MOP 0.49 => 0.27 real 0.22 user 0.02 sys
This change also allowed us to take advantage of some 5.10-specific improvements. On 5.8.* we use the PL_sub_generation interpreter global to determine when to invalidate our method cache, which was one of the big parts of the speed win. However, on 5.8.* that counter is incremented every time any package is changed, which means we were probably invalidating our cache even when we didn't need to. On 5.10 we instead use the mro::get_pkg_gen function, which provides the same feature but increments on a per-package basis, which means less spurious cache invalidation.
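To see the per-package counter in action, here is a minimal sketch (requires Perl 5.10+; the Foo package and its subs are just examples, not anything from Moose itself):

```perl
use strict;
use warnings;
use mro;    # exposes mro::get_pkg_gen on Perl 5.10+

package Foo;
sub bar { 'bar' }

package main;

# Read Foo's current generation number.
my $gen = mro::get_pkg_gen('Foo');

# Installing a new sub into Foo bumps *Foo's* counter only.
# On 5.8.* the global PL_sub_generation is bumped instead, which
# invalidates method caches for every package in the interpreter.
{
    no strict 'refs';
    *{'Foo::baz'} = sub { 'baz' };
}

printf "Foo's generation went from %d to %d\n",
    $gen, mro::get_pkg_gen('Foo');
```

A method cache keyed on this value only needs to be rebuilt when the generation for that one package has changed.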
All in all, not bad for a few hours of work by the folks on #moose.
- Stevan
NOTE: I got my stats wrong in the ChangeLogs, I called it a ~45% speed increase, but it is really 45% less slow.
Re:Correct use of ratio percentages
educated_foo on 2007-12-15T00:31:30
I think that would be either "63% more fasting" or "50% less gluttony."
Re:Cool
Stevan on 2007-12-15T18:37:05
A major reason for these recent efforts to cut startup time is actually that we are working with Max Kanat-Alexander (of Bugzilla fame) to try and get Moose fast enough to be used in the refactoring/rewriting of Bugzilla. Max is already a fan of Moose and has used it in his (very cool) "All your VCS are belong to us" module, VCI. I suspect that by mid-spring you will be able to deploy Moose apps under vanilla CGI without a worry.
- Stevan
Re:Cool
sigzero on 2007-12-17T20:02:54
You make me drool!
Re:Keep it going :)
Stevan on 2007-12-15T18:29:38
Well, we will never beat you on memory usage, but I feel I must remind you that Moose::Tiny is 8% shorter to type than Object::Tiny
;) Also, we are really breathing down your neck with our accessors (from the Moose::Tiny POD):

    Benchmarking accessors...
            Rate moose  tiny
    moose  485/s    --  -19%
    tiny   599/s   23%    --

And really, the difference in speed here is simply because Moose::Tiny checks to make sure you don't try and use the read-only accessors to assign a value, whereas Object::Tiny simply swallows the value silently without even a simple warn "RTFM!".
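For anyone who hasn't seen the difference in question, here is a minimal hand-rolled sketch of the two accessor styles (the package names are made up; this is not the actual Moose::Tiny or Object::Tiny code):

```perl
use strict;
use warnings;

# Object::Tiny-style accessor: extra arguments are silently ignored.
package My::Silent;
sub new  { my ( $class, %args ) = @_; bless {%args}, $class }
sub name { $_[0]{name} }

# Moose::Tiny-style accessor: assigning through a read-only
# accessor is an error -- this check is where the extra cycles go.
package My::Strict;
sub new  { my ( $class, %args ) = @_; bless {%args}, $class }
sub name {
    die "Cannot assign a value to a read-only accessor\n" if @_ > 1;
    $_[0]{name};
}

package main;

my $silent = My::Silent->new( name => 'foo' );
$silent->name('bar');           # swallowed without complaint
print $silent->name, "\n";      # still 'foo'

my $strict = My::Strict->new( name => 'foo' );
eval { $strict->name('bar') };
print $@;                       # Cannot assign a value ...
```

The silent version wins the benchmark because it does strictly less work per call.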
On a more serious side, we have also been experimenting with a module we are calling MooseX::Compile. It does two things, really: 1) it caches the generated meta-objects with Storable and only loads them if you ask for them, and 2) it writes out the accessors, constructors, and destructors that we 'eval' in Moose into a .pm and/or .pmc file. The proof-of-concept prototype that we put together actually loaded as fast as our hand-coded, plain-old-vanilla-Perl control file (something like 0.01s), and was actually pretty good in terms of memory usage (not on the scale of Object::Tiny of course, but hey, you can't win them all). This module will also offer (at least) two options for compilation: a first-time-compile penalty which writes out the .pmc for you, or an install-time penalty which will allow you to deploy your module to CPAN in its compiled form. And I saved the best for last: since Moose is a meta-circular system, it should be possible to actually turn the MooseX::Compile goodness on Moose itself! So, as the saying goes, it's not the size that matters but how you use it
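The Storable half of that idea is roughly the following (a sketch only -- the file names, cache extension, and build_meta sub are all hypothetical, not the actual MooseX::Compile code):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);
use File::Temp qw(tempdir);

# Pretend this is the expensive part: building a class's meta-object.
sub build_meta {
    return {
        class      => 'My::Class',        # hypothetical class name
        attributes => [qw(name age)],
        methods    => [qw(new name age)],
    };
}

my $dir   = tempdir( CLEANUP => 1 );
my $cache = "$dir/My-Class.mopc";         # made-up cache extension

# First load: build the meta-object and freeze it to disk.
# Later loads: thaw the frozen copy instead of rebuilding it.
my $meta = -e $cache
    ? retrieve($cache)
    : do { my $m = build_meta(); nstore( $m, $cache ); $m };

print "loaded meta for $meta->{class}\n";
```

The .pmc half works with perl's existing behavior of preferring a compiled Foo.pmc over Foo.pm when loading a module, so the cached form needs no special loader.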
;) - Stevan