Retries in Net::Amazon::S3

acme on 2008-02-28T16:46:57

One of the slight issues with Amazon's Simple Storage Service is that, by design, the service sometimes returns an HTTP 500 Internal Server Error response and the client should retry the request. Previously, applications that use my Net::Amazon::S3 module, such as Brackup, had to handle the retries themselves - but no more! With the magic that is LWP::UserAgent::Determined, you can now pass a retry option to version 0.42 of Net::Amazon::S3 and it will handle the retries with exponential backoff for you.
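As a sketch of how this looks in practice (the credentials and bucket name below are placeholders, and I'm assuming the 0.42 constructor interface):

```perl
use strict;
use warnings;
use Net::Amazon::S3;

# Passing retry => 1 makes Net::Amazon::S3 use LWP::UserAgent::Determined
# as its user agent, which retries failed requests with exponentially
# increasing pauses between attempts instead of failing on the first 500.
my $s3 = Net::Amazon::S3->new(
    {   aws_access_key_id     => 'YOUR_ACCESS_KEY',    # placeholder
        aws_secret_access_key => 'YOUR_SECRET_KEY',    # placeholder
        retry                 => 1,
    }
);

# Requests made through $s3 now survive transient server errors
# without the application having to code its own retry loop:
my $bucket = $s3->bucket('my-bucket');                 # hypothetical bucket
$bucket->add_key( 'greeting.txt', 'hello' )
    or die $s3->err . ': ' . $s3->errstr;
```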


Thank you

hoggardb on 2008-02-28T19:32:15

Thanks a lot for this, Leon. I'm so happy I don't have to implement this on my own now.

Exponential?

jrockway on 2008-02-28T22:10:32

Why Exponential and not Fibonacci? It probably won't matter in real life (since it will only take a single retry), but I think Fibonacci is the generally preferred algorithm. Exponential gets unreasonable after just a few iterations. (16 seconds, then 32 seconds?)
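Purely for illustration of the trade-off being discussed (this is not code from either module), here are the first six waits under each schedule:

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Exponential backoff doubles the wait before each retry ...
my @exponential = map { 2**$_ } 0 .. 5;    # 1 2 4 8 16 32

# ... while a Fibonacci schedule grows much more gently.
my @fibonacci = ( 1, 1 );
push @fibonacci, $fibonacci[-1] + $fibonacci[-2] for 1 .. 4;    # 1 1 2 3 5 8

# A client giving up after six attempts would have spent 63 seconds
# waiting under exponential backoff, but only 20 under Fibonacci.
printf "exponential: @exponential (%d seconds total)\n", sum(@exponential);
printf "fibonacci:   @fibonacci (%d seconds total)\n",   sum(@fibonacci);
```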

Re:Exponential?

acme on 2008-02-29T07:57:12

Mostly because it's the one Amazon recommend. Multiple retries are rarely necessary in practice; Amazon S3 for Science Grids: a Viable Solution (PDF) mentions:

We have observed an availability rate of 99.03% after the original download attempt, 99.55% after the first retry and a full 100% availability after two retries. Additional retries were never needed during the observed period. These results are convergent with Amazon's stated 99.99% availability target.