Curiously in the past 24 hours two different entities have effectively asked the same question - "given development funding, can the Perl core be made faster?"
Having spent a couple of years trying to benchmark, profile and accelerate the core, generally with no measurable impact, I'm coming to the general conclusion "no". Of course, there are specifics - there are still some algorithmic wins to be made in the regexp engine, and Yves' recent work on these is very welcome. In turn, there are improvements that can be made to the regexp engine's optimiser (the code that attempts to avoid even calling into the engine itself). But apart from that, my considered opinion after banging my head against the problem for so long is "no".
I reserve the right to be proved wrong. Only working proofs will be accepted. :-)
I can describe a working method for making the perl core faster, but it involves upgrading the processor you are using. And all of us will probably be doing it at least once in the next 3-7 years or so.
Re:OK I'll bite...
chromatic on 2006-06-28T18:23:58
The trick is refactoring (or reimplementing) Perl 5 so that it runs on Parrot. It ought to make things faster though.
Re:OK I'll bite...
Matts on 2006-06-28T18:28:17
Right. That's what I figured. I seem to distantly remember a project to do just that :-)
Re:OK I'll bite...
davorg on 2006-06-29T07:46:21
I seem to distantly remember a project to do just that :-)
I think you mean PONIE. Shame that none of the links on that page seem to go anywhere useful.
Did we ever get an official announcement on what was happening with PONIE? There was a lot of action three years ago, but it all seems to have died out now?
send out the search parties
grinder on 2006-07-02T20:35:08
I believe Ponie is dead. Nicholas stepped down as pumpking some time back, and a call went out for a replacement, and exactly zero candidates came forward.
Arthur Bergman might still be working on Ponie, I'm not sure, but in any event I haven't heard any noises of activity coming out of the lab (stable?) for some time. Certainly nothing so far as p5p is concerned, nor any other of the various perl fora and lists I pay attention to.
But that's ok, we all know Perl is a volunteer-based operation.
Re:send out the search parties
nicholas on 2006-08-24T19:36:08
As it's not my project, I've never been in a position to make any official statements. However, TPF have made an official statement on Ponie.
Re:OK, another stab...
nicholas on 2006-07-02T21:48:09
This sounds like a lot of work. If someone wants to volunteer to write such a thing, then great, patches welcome - although they should be warned that there can be no guarantee in advance that code will be accepted.
But I don't think that any of the current maintainers have a sufficient itch to scratch personally, so I cannot see such a project happening "by itself".
Re:OK, another stab...
Matts on 2006-07-02T22:50:54
Sure. But you did say "given development funding". So I took that to mean "given infinite funding and infinite available monkeys". Hey, I can dream, can't I? ;-)
Re:OK, another stab...
nicholas on 2006-07-04T01:29:08
Ah, right. I didn't pick up on this as an approach to solving the "make perl faster" problem. I was reading the requestor's desire as being to make their existing perl code faster (at least, without substantial re-writes). And this seems more like a way to make new code faster.
In turn, such an approach might have the same "speedup" as pseudo-hashes. Offhand I don't know the URL to the analysis, but someone [Schwern, IIRC] demonstrated that the real 15% speedup that pseudo-hashes provided for
use fields
over regular hashes came with the side effect of slowing regular hashes down by about 15%. So best case was break even, and the general case was lose. Likewise, I fear that attribute access might require tentacles all over the existing method call or hash lookup code (particularly if duck typing is to work), so doing this sort of thing right might actually slow the current approach down further.
Finally, the timescale for anything as major as this would be 5.12, not 5.10, and even if major releases got back to something like once every 2 years, that would still see it as being 3 years out. By which time I hope we have non-prototype Perl 6.
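For readers who haven't met it, here is a minimal sketch of the use fields approach under discussion (the class name and field names are invented for illustration; on modern perls this is backed by restricted hashes rather than pseudo-hashes):

```perl
use strict;
use warnings;

package Point;
use fields qw(x y);   # declare the legal fields for this class

sub new {
    my $class = shift;
    my Point $self = fields::new($class);   # blessed, field-restricted hash
    @{$self}{qw(x y)} = @_;
    return $self;
}

package main;

# The typed lexical lets perl check field names at compile time.
my Point $p = Point->new(3, 4);
print $p->{x} + $p->{y}, "\n";   # 7

# A typo such as $p->{z} on the typed lexical above is rejected at
# compile time ("No such class field") - that checking, plus a faster
# lookup, is what pseudo-hashes originally promised.
```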
Re:OK, another stab...
Matts on 2006-07-04T02:25:11
Thinking of Perl 6 being ready in 3 years is dreaming. Sorry, but someone has to say it.
Classes need not be accessed through the hash API. I think of a class (or object) as another xV type, rather than another extension on SVs. But I know zip about the perl core - I'm purely speculating based on the little work I have done in XS.
I remember Schwern's analysis, but I also remember it being rather flawed, because there's a lot in later perls that slows things down (most notably unicode) that wasn't taken into consideration. But my memory may be hazy on that.
Re:OK, another stab...
audreyt on 2006-07-06T08:38:48
I am sorry, but v6.pm works today; the underlying support modules (Moose, Pugs::Compiler::Rule, Module::Compile) are useful in production; Perl 6 is just another CPAN module, and we are working to make it useful for production by this Christmas. But you can use Moose or any other support modules without the v6.pm syntax sugar; if so, then it's ready even sooner.
Re:OK, another stab...
audreyt on 2006-07-06T08:44:12
Also, by "CPAN module" I mean "pure perl 5 CPAN module with some XS dependencies like PadWalker", so you do not need to install GHC or Parrot or any other runtime to make Perl 6 work. Which makes it far easier to deploy, too...
Re:OK, another stab...
Matts on 2006-07-06T13:36:03
We were talking about different things here (perl 6 meaning a fully finished and working perl 6 on its own interpreter). But saying v6.pm is working today is a long way from saying it's a finished perl 6. The perl 6 schedules have been way off from the start, and I think my expectation of us being ready to transition people to perl 6 in 3 years is probably about right, given what I know of the current development stage (unless there will be a Blue Peter "and here's one I made earlier" phase ;-)).
In no way do I mean to insult the excellent work you've done, Audrey - nor anyone else on the perl6 project. I think it's an important direction and one that takes a lot of time and energy, and I salute you for putting that effort in - I certainly can't say the same for myself.
Re:OK, another stab...
audreyt on 2006-07-06T14:30:44
Frankly, I think the "Perl 6 on its own interpreter" idea doesn't work at all. It's true that Perl 6 will run on multiple interpreters (including Parrot, JavaScript and more), but it's the perl5 interpreter that will get us to an incrementally-deployed production soonest.
That is, the transition to Perl 6 will be no different to, say, the transition to DateTime.pm; modules start using it, or part of it, when it makes sense, but it doesn't need to be an all-or-nothing process.
:-)
Re:OK, another stab...
Matts on 2006-07-06T14:41:39
Interesting. Probably very wise.
Re:and option #3
chromatic on 2006-06-29T08:43:42
Machines are getting faster...
Only the new ones.
Re:and option #3
nicholas on 2006-06-29T10:14:12
Also, for large systems there comes a tipping point where developer time actually becomes cheaper than the costs of machines, maintenance and rack space. It's these sorts of entities who were asking the question.
Re:and option #3
hfb on 2006-06-30T02:34:27
As someone who works in a joint with a Sun 25k, I realise that there are slower, cheaper options. :-) However, at the point where the cost of a new machine exceeds the cost of labour, does the speed of the archaic code matter that much and, if it does, doesn't it make more sense to just rewrite it using something else... like C?
Re:and option #3
nicholas on 2006-07-01T09:46:07
It was firms with over 50,000 lines of perl code that were thinking about it, one of which I know is running code across several hundred servers. I doubt that re-writing the whole thing in C (or anything) is easy, as I'm guessing that the cost of validating that the behaviour is the same is prohibitive, but re-writing parts might well be, if those parts can be identified. But I think that both were thinking that it still might be easier to concentrate resources on the core, as that could speed up all code, and the core does have very good regression tests. So I believe that their question is worth asking, to allow them to make a decision. But from this side of the fence, I can't see any big wins to be made in the core.
Re:and option #3
hfb on 2006-07-02T08:07:08
Well, with something like that you have to consider so many other possible bottlenecks, especially since much of the clustering software, if that's what they're using, does exact a speed toll. I'd call them outright awful, but a few of them do actually manage to work on occasion.
And, given the pain and suffering involved with changing most anything in the perl core, *ahem*, I'd guess that rewriting large parts or the whole thing would be faster and far less hassle with a greater performance gain. Anyone with 50k+ lines of perl expecting zippy performance gets what they deserve.
:) The question was worth asking, but the answer may not be the one they like.
I know this comment is late, but I was wondering: would it be feasible to inline function calls to avoid the function call overhead? I remember chromatic mentioning that he looked into this and saw that the pads were going to slow it down regardless (I could be misremembering), and I suspect that the difficulty in distinguishing between methods and subroutines would make this idea pretty much dead. But if it was feasible, would it speed up Perl?
Re:Inline functions?
nicholas on 2006-08-09T21:29:44
Well, you have to assume that the function isn't going to get dynamically redefined (or even memoized). So inlining would be more like macros than true functions. I think someone was mentioning this on p5p at some point, but I've no idea how evolved (or involved) the implementation would be.
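For what it's worth, perl already does one narrow form of this: a sub declared with an empty prototype that returns a single constant is inlined at compile time, while ordinary subs stay late-bound precisely because they can be redefined at runtime. A small sketch (the sub names here are made up):

```perl
use strict;
use warnings;

# An empty-prototype sub returning a constant is inlinable: the
# compiler folds ANSWER + 1 to the literal 43 at compile time.
sub ANSWER () { 42 }
print ANSWER + 1, "\n";   # 43

# A normal sub can be replaced at any time, so a general inliner
# would silently break code like this:
sub greet { "hello" }
print greet(), "\n";      # hello
{
    no warnings 'redefine';
    *greet = sub { "goodbye" };
}
print greet(), "\n";      # goodbye
```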
Re:Opcode fiddling presents an opportunity?
Aristotle on 2006-11-12T19:33:37
Problem with that is identifying the right places to build tailor-made code so you don’t waste your time optimising things no one uses.
Also, TMTOWTDI means you have to make a reasonable guess at other ways of asking for the same thing. F.ex., off the top of my head, I can think of
@list = $str =~ m//g;
to do the same. But sure, if you can come up with good common optimisations, you can possibly do a lot. An area where this has already been done, and that IMHO still merits attention, is sort blocks.
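As a concrete illustration of the TMTOWTDI point above (the pattern and sample string are made up for illustration), the same list of words can be requested in more than one spelling, and an optimiser would need to recognise both:

```perl
use strict;
use warnings;

my $str = "foo bar baz";

# One spelling: a global match in list context returns every capture.
my @via_match = $str =~ /(\w+)/g;

# Another spelling of the same request: split on runs of non-word
# characters.
my @via_split = split /\W+/, $str;

print "@via_match\n";   # foo bar baz
print "@via_split\n";   # foo bar baz
```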