Distributed DNS blacklists

MGLEE on 2003-09-24T15:51:20

Now that yet another DNS blacklist (monkeys) has been retired due to a continuing massive denial of service attack, perhaps it's time to rethink DNS blacklists.

Using DNS to rapidly query a server to check whether an IP address is listed is a great idea. It's fast, involves little overhead, and DNS is a well-known and widely supported protocol. Querying against a single server allows the owners of the list to amend it rapidly when needed. BUT it also provides a single point of attack.
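
For reference, a minimal sketch of how such a lookup works (the zone name dnsbl.example.org is a placeholder, not a real list): reverse the octets of the IP address, append the blacklist's DNS zone, and issue an ordinary A-record query. Getting an answer back means the address is listed; NXDOMAIN means it is not.

    import socket

    def is_listed(ip, zone="dnsbl.example.org"):
        """Check an IPv4 address against a DNS blacklist zone.

        The query name is the IP with its octets reversed, prefixed to the
        zone, e.g. 127.0.0.2 -> 2.0.0.127.dnsbl.example.org. A successful
        A-record lookup means "listed"; NXDOMAIN means "not listed".
        """
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    # 127.0.0.2 is the conventional test address most blacklists list on purpose.
    print(is_listed("127.0.0.2"))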

Somebody (maybe a spammer?) has taken it upon themselves to launch continued denial of service attacks on the servers hosting the DNS lists. DNS wasn't designed to withstand such attacks, but surely, with all the knowledge that has gone into designing distributed P2P networks, there must be another way of distributing DNS blacklists.

Napster got taken out because of its client-server architecture; Gnutella continues because it is entirely distributed. You can force a single server down through court action or a DoS attack, but that only affects a small part of the network; the rest continues unaffected.

So how can we apply this architecture to DNS blacklists?

continued tomorrow...


P2P blacklists wouldn't work

gav on 2003-09-24T16:38:08

The problem with a P2P blacklist is that you allow people you don't trust into your network. Spammers would just get smart and use zombies to join the network and un-blacklist everything.

Re:P2P blacklists wouldn't work

MGLEE on 2003-09-25T12:55:07

Yup, P2P networks have problems with concurrency: deciding which is the most up-to-date version of the list.
Plus, as you say, there is no authentication; anyone can post a file labelled sbl-blacklist.txt that looks identical to the official file but contains completely different IP addresses.
But they are very good at surviving denial of service attacks, which is a big problem for the DNS blacklists at the moment.

Re:P2P blacklists wouldn't work

johnseq on 2003-09-25T18:34:08

Doesn't a simple digital signature on the text file ensure its authenticity? I believe the FSF is going to do this for their software after their FTP servers got cracked. This should prevent future rogue code injections, or at least prevent them from going unnoticed. The same should work to authenticate blacklists, no?
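
For illustration, a minimal sketch of what that check might look like, assuming the list is published as blacklist.txt with a detached GnuPG signature blacklist.txt.sig (both file names are hypothetical) and the maintainer's public key has already been imported into the local keyring:

    import subprocess

    def verify_blacklist(data_file="blacklist.txt", sig_file="blacklist.txt.sig"):
        """Return True if the detached GnuPG signature on the blacklist verifies.

        Relies on the maintainer's public key already being in the local
        keyring; gpg exits non-zero if the signature is bad or unknown.
        """
        result = subprocess.run(["gpg", "--verify", sig_file, data_file])
        return result.returncode == 0

    if not verify_blacklist():
        raise SystemExit("refusing to use an unverified blacklist")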

Re:P2P blacklists wouldn't work

gav on 2003-09-26T00:12:55

If I were going to be evil, instead of serving a signed file I'd just join the network with as many machines as possible and run a distributed honeypot. Making the spam check time out would work just as well as returning bad data. You could also run a DDoS against the root nodes.

Re:P2P blacklists wouldn't work

johnseq on 2003-09-26T14:56:07

Wasn't Freenet designed to withstand just such an attack?

I think the idea is to not have a root node, just a single public key you can validate the text file against. The blacklist could be distributed and mirrored using a variety of technologies -- HTTP, FTP, NNTP, IRC, Konspire, email, Jabber, Kazaa, Freenet, etc. It seems not just possible but pretty straightforward to have the original blacklist periodically injected into the 'net, as opposed to having a dependency on a particular server being up and running.
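
As a rough sketch of that idea, assuming a handful of hypothetical mirror URLs and the same detached-signature scheme as above: try each mirror in turn and only accept a copy whose signature verifies against the one pinned public key. Which transport served the file stops mattering.

    import subprocess
    import urllib.request

    # Hypothetical mirrors; any mix of transports would do, as long as the
    # signature checks out against the single trusted public key.
    MIRRORS = [
        "http://mirror1.example.org/blacklist.txt",
        "http://mirror2.example.org/blacklist.txt",
    ]

    def fetch_verified_blacklist():
        """Download the blacklist from the first mirror whose signature verifies."""
        for url in MIRRORS:
            try:
                urllib.request.urlretrieve(url, "blacklist.txt")
                urllib.request.urlretrieve(url + ".sig", "blacklist.txt.sig")
            except OSError:
                continue  # mirror down or unreachable, try the next one
            check = subprocess.run(["gpg", "--verify", "blacklist.txt.sig", "blacklist.txt"])
            if check.returncode == 0:
                return "blacklist.txt"
        raise RuntimeError("no mirror served a correctly signed blacklist")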

Re:P2P blacklists wouldn't work

MGLEE on 2003-09-29T12:58:20

Blacklists are dynamic; new IP addresses get added, some get delisted. A single authoritative source is required, otherwise you have big problems with version control.
Signed incremental updates are the way to go, but I'm still unsure about how you authenticate the identity of the signer (without resorting to blindly believing Verisign, which would then leave Verisign as a single point of failure in the system).
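
One possible shape for those incremental updates, purely as a sketch (the field names and layout are made up for illustration): each update carries a serial number plus add/delete lines, is signed in the same way as the full list, and is applied only if its serial follows directly on from the local copy, falling back to a full refetch otherwise.

    def apply_update(listed, update_text, current_serial):
        """Apply one incremental update to an in-memory set of listed IPs.

        update_text is assumed to have already had its signature verified.
        Hypothetical format:
            serial: 42
            add: 192.0.2.1
            del: 198.51.100.7
        Returns the new serial, or raises if the update is out of sequence.
        """
        lines = [l.strip() for l in update_text.splitlines() if l.strip()]
        serial = int(lines[0].split(":", 1)[1])
        if serial != current_serial + 1:
            raise ValueError("update out of sequence; refetch the full list")
        for line in lines[1:]:
            action, ip = (part.strip() for part in line.split(":", 1))
            if action == "add":
                listed.add(ip)
            elif action == "del":
                listed.discard(ip)
        return serial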