go faster stripes

nicholas on 2009-04-11T14:38:41

As Matt noted, there was recent talk on p5p about LLVM. A key part of what Yuval said was:

However, in the long run this is still a micro optimization. Ruby is reportedly about twice as slow as perl for most things. That doesn't keep anyone from writing successful applications in ruby.
Java is probably many times faster than Perl. None of us seem to be switching to it because you can crunch numbers faster in it.
A JIT can't make dramatic high level performance improvements.
I suspect the main benefit would actually be in marketing (look how excited we're getting about python being "5x faster"), and in demonstrating that the runtime is able to adapt and be modernized, perhaps paving the path to more ambitious language and runtime improvements.
Furthermore, IMHO most apps are not even bound by those limitations, but rather by programmer laziness/sloppiness. If better performance is a low hanging fruit that remains unpicked, JIT is really not going to make much of a difference.
Even if it runs 2x faster, it doesn't mean it'll be 2x cheaper to run. This is the "you are not google" axiom of scalability. Even if people want this performance, for most of them it won't actually help their business.

It's interesting reading what the Twitter folks say about migrating from Ruby to Scala:

Steve Jenson: One of the things that I’ve found throughout my career is the need to have long-lived processes. And Ruby, like many scripting languages, has trouble being an environment for long lived processes. But the JVM is very good at that, because it’s been optimized for that over the last ten years. So Scala provides a basis for writing long-lived servers, and that’s primarily what we use it for at Twitter right now. Another thing we really like about Scala is static typing that’s not painful. Sometimes it would be really nice in Ruby to say things like, here’s an optional type annotation. This is the type we really expect to see here. And we find that really useful in Scala, to be able to specify the type information.
Robey Pointer: Also, Ruby doesn’t really have good thread support yet. It’s getting better, but when we were writing these servers, green threads were the only thing available. Green threads don't use the actual operating system’s kernel threads. They sort of emulate threads by periodically stopping what they are doing and checking whether another “thread” wants to run. So Ruby is emulating threads within a single core or a processor. We wanted to run on multi-core servers that don’t have an infinite amount of memory. And if you don’t have good threading support, you really need multiple processes. And because Ruby’s garbage collector is not quite as good as Java’s, each process uses up a lot of memory. We can’t really run very many Ruby daemon processes on a single machine without consuming large amounts of memory. Whereas with running things on the JVM we can run many threads in the same heap, and let that one process take all the machine’s memory for its playground.
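
To make those two points concrete, here's a minimal Scala sketch of my own (the object and worker names are invented, and it assumes Scala 2.12 or later for the function-literal-as-Runnable shorthand). The annotation on the second list is optional, and each Thread below is backed by a kernel thread, so the JVM can run them in parallel across cores within a single shared heap - exactly what green threads can't give you:

object JvmThreadsSketch {
  def main(args: Array[String]): Unit = {
    val inferred = List(1, 2, 3)              // Scala infers List[Int]
    val annotated: List[Int] = List(1, 2, 3)  // ...or you can state it where that helps

    // Each Thread is backed by a kernel thread, so these busy loops can run
    // in parallel on separate cores while sharing one heap.
    val workers = (1 to 4).map { id =>
      new Thread(() => {
        var sum = 0L
        var i = 0
        while (i < 50000000) { sum += i; i += 1 }
        println(s"worker $id finished with sum $sum")
      })
    }
    workers.foreach(_.start())
    workers.foreach(_.join())
    println(inferred == annotated)  // true: the annotation changes nothing at runtime
  }
}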

(Straight-line) speed is not their problem; it's memory and concurrency. Which, again, is why Unladen Swallow's plans for the Python GIL are interesting. Oh, and did I mention memory before? :-)

Also interesting was a blog post, "How JRuby Makes Ruby Fast". It starts with a wise and eloquent grumble:

At least once a year there's a maelstrom of posts about a new Ruby implementation with stellar numbers. These numbers are usually based on very early experimental code, and they are rarely accompanied by information on compatibility. And of course we love to see crazy performance numbers, so many of us eat this stuff up.
Posting numbers too early is a real disservice to any project, since they almost certainly don't represent the eventual real-world performance people will see. It encourages folks to look to the future, but it also marginalizes implementations that already provide both compatibility and performance, and ignores how much work it has taken to get there. Given how much we like to see numbers, and how thirsty the Ruby community is for "a fastest Ruby", I don't know whether this will ever change.

It then carries on, as billed, with a clearly explained guide to various techniques a Java implementation of Ruby can use to get progressively more speed, at some cost to compatibility with the canonical C Ruby implementation. There are quite big trade-offs to be made between completeness, correctness, and speed, which I don't think I've seen described this clearly before. But before you start to think that JRuby, or some other "alternative" implementation, will be the saviour of Ruby, a last word from the Twitter folks:

Bill Venners: Did you consider JRuby?
Alex Payne: We did. At the time we looked into it, we simply couldn't boot our Rails app on JRuby. Too many of the Ruby Gems we make use of require C extensions, and haven't been ported to JVM-friendly versions. The performance of JRuby was also not even on par with MRI (the C implementation of Ruby), much less a language like Scala. We're open to trying out JRuby again in the future, but we're also hoping that some Ruby patches will help in the meantime.

R is a letter not entirely unlike P. I suspect that the lessons transfer.


Necessary but not Sufficient

chromatic on 2009-04-11T17:03:56

JIT can help, if you have a tracing scheme (or a sufficiently simple language). One of the most important realizations is that you must distinguish between heap and stack allocations, making allocation as cheap as possible.

If you have a good tracing scheme, and if you can inline calls and branches cheaply, and if you have a parameter passing scheme which has as few memory copies as possible (go registers!), and if you have a sufficiently clever register allocation scheme which works across basic blocks, a JIT can go very, very fast.

Without those, a JIT can sometimes go faster.
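
As a rough illustration of the heap-versus-stack point, here is a hypothetical Scala/JVM sketch (the names and numbers are invented, and it leans on HotSpot rather than any of the implementations discussed above). HotSpot's JIT can use escape analysis to keep an object that never leaves its method off the heap entirely, while anything stored into a longer-lived structure must be heap allocated and later collected:

object AllocationSketch {
  final case class Point(x: Double, y: Double)

  // The Point never escapes this method, so a JIT with escape analysis can
  // replace it with two doubles in registers or on the stack: no GC work at all.
  def nonEscaping(n: Int): Double = {
    var sum = 0.0
    var i = 0
    while (i < n) {
      val p = Point(i, i + 1)
      sum += p.x * p.y
      i += 1
    }
    sum
  }

  // Storing every Point in a buffer makes each one escape, so all n of them
  // must live on the heap and keep the garbage collector busy.
  def escaping(n: Int): Double = {
    val buf = scala.collection.mutable.ArrayBuffer.empty[Point]
    var i = 0
    while (i < n) {
      buf += Point(i, i + 1)
      i += 1
    }
    var sum = 0.0
    buf.foreach(p => sum += p.x * p.y)
    sum
  }

  def main(args: Array[String]): Unit = {
    val n = 5000000
    // Warm up first so the JIT has actually compiled and optimised both methods.
    for (_ <- 1 to 5) { nonEscaping(n); escaping(n) }
    for (label <- Seq("non-escaping", "escaping")) {
      val t0 = System.nanoTime()
      if (label == "non-escaping") nonEscaping(n) else escaping(n)
      printf("%s: %.1f ms%n", label, (System.nanoTime() - t0) / 1e6)
    }
  }
}

Any difference you see is allocation and collection cost, not arithmetic: the two loops do the same multiplications.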