a fix for slashdotted web sites

jmm on 2005-01-12T17:57:03

Yesterday, I once again went to a site that had been slashdotted: the news reference that sent me there had caused a flash mob (read Larry Niven) of other people to visit the same site, overloading its capacity.

This time around, a possible solution came to mind: use a BitTorrent-style mechanism.

Suppose there were an extension to the HTTP protocol. This extension would allow a browser to set a flag when it requested a page. The flag would say that the browser was willing and capable of acting as an auxiliary server.
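The post doesn't specify a wire format, but one natural way to carry such a flag is a request header. Here is a minimal sketch; the header name "X-Peer-Serve" and the port field are invented for illustration:

```python
# Sketch of the proposed flag as a hypothetical request header.
# "X-Peer-Serve" is an invented name, not part of any real standard.

def build_request_headers(willing_to_help, listen_port=None):
    """Build HTTP headers for a page request, optionally advertising
    that this client can act as an auxiliary server."""
    headers = {"User-Agent": "helper-capable-browser/0.1"}
    if willing_to_help:
        # Advertise willingness, plus the port peers could reach us on.
        headers["X-Peer-Serve"] = f"willing; port={listen_port or 8080}"
    return headers

print(build_request_headers(True, 9090))
```

A browser that can't (or won't) serve pages simply omits the header, and an unmodified server would ignore it, so the extension stays backward compatible.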

Most of the time, this bit would be ignored and the server would just send back the page. However, if the request load suddenly spiked, the server would start delegating some requests to recent visitors who had volunteered to act as helpers: it would send each helper a bundle of requests along with a list of additional helpers, and the helper would then work with those extra helpers to provide its cached copy of the page to the listed requesters.
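That delegation policy could be sketched as follows. This is only an illustration under assumed names (the class, its thresholds, and the bundle format are all invented here, since the post describes behaviour, not an implementation):

```python
import collections

class DelegatingServer:
    """Sketch of the delegation policy described above: under normal
    load, serve requests directly; past a capacity threshold, hand
    bundles of overflow requests to recently seen volunteer helpers."""

    def __init__(self, capacity=100):
        self.capacity = capacity                      # requests served directly per tick
        self.helpers = collections.deque(maxlen=50)   # recent volunteers

    def note_helper(self, addr):
        """Remember a client that set the helper flag on its request."""
        self.helpers.append(addr)

    def route(self, requests):
        """Return (served_directly, delegations). Delegations map a
        helper address to its bundle of requests plus a list of the
        other helpers it could recruit."""
        direct = requests[: self.capacity]
        overflow = requests[self.capacity :]
        delegations = {}
        helpers = list(self.helpers)
        if overflow and helpers:
            bundle_size = -(-len(overflow) // len(helpers))  # ceiling division
            for i, helper in enumerate(helpers):
                bundle = overflow[i * bundle_size : (i + 1) * bundle_size]
                if bundle:
                    delegations[helper] = {
                        "requests": bundle,
                        "extra_helpers": helpers[:i] + helpers[i + 1:],
                    }
        return direct, delegations
```

The key design point is that the helper list grows in step with the load, because every helper was itself a recent requester.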

This would mean you wouldn't need to make your host powerful enough for the biggest peak that might come along, only big enough to handle the normal load; on the peaks, it could get help from the very computers that are causing the peak to happen. This is especially useful for private sites that normally have a small audience but might someday get "discovered" for their 15 minutes of fame.

This change would have to be supported both by the server and by browsers capable of acting as servers (or connected to one: a connection proxy that merges the request streams of all your users and connects them to your site's server might do the job).

A heavily loaded server could, like BitTorrent, give preference to requests that have the flag set, and if it were too heavily loaded, drop requests from users who don't offer to offload the work. That would give people an incentive to move to a browser that supported this extension.
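The incentive policy amounts to a two-tier admission queue. A minimal sketch, assuming each request is a (request_id, has_helper_flag) pair (a representation invented here for illustration):

```python
def admit(requests, capacity):
    """Sketch of the incentive policy: when over capacity, admit
    flagged (helper-volunteering) requests first and drop flag-less
    requests before flagged ones."""
    flagged = [r for r in requests if r[1]]
    plain = [r for r in requests if not r[1]]
    ordered = flagged + plain          # flagged requests get priority
    return ordered[:capacity], ordered[capacity:]

admitted, dropped = admit(
    [("a", False), ("b", True), ("c", False), ("d", True)], capacity=2)
# admitted → [("b", True), ("d", True)]; both flag-less requests dropped
```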


nyud.net

jdavidb on 2005-01-12T20:45:43

The current fix for the problem is to append .nyud.net:8090 to the hostname of any request.
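Concretely, that rewriting looks like this (a small sketch; the helper function name is made up, and the :8090 port is taken from the comment above):

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url):
    """Rewrite a URL so it is fetched through nyud.net by appending
    .nyud.net:8090 to the hostname, as the comment suggests."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(netloc=parts.hostname + ".nyud.net:8090"))

print(coralize("http://example.com/news/story.html"))
# → http://example.com.nyud.net:8090/news/story.html
```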

One possible flaw in your idea (which I like) is that my browser may be privy to pages I don't want others to access. Each user's experience on a site may be customized in ways that would render P2P sharing of pages either useless or insecure. Would you want your browser to help distribute the load for use perl when the cached contents are what use perl looks like when you log in? How about for your online bank? ;)

Re:nyud.net

jmm on 2005-01-12T20:54:23

Yes, this is only for the basic "page for the world to see" situation. A bank would not use the extended protocol and would never refer a request on to an earlier viewer.

Someone could not use your browser to get access to a site that you are privy to but they are not: they would not be passed on to you by the original site. (However, the original site would have to check for valid users and only pass on requests that it would actually have honoured, which is probably expensive enough not to bother with this protocol for security-restricted sites.)