I attended the New Scientist salon on spam last night (also attended by Gregor). It was hosted by Simson Garfinkel and Paul Graham. Simson claimed that only about 200 people account for the world's supply of spam. His (yes, facetious) theory was that only extrajudicial means would solve the spam problem -- meaning hunting down and killing enough spammers to deter the remainder, like John Travolta at the end of Swordfish. Since spammers have both teamed up with and provided a profit motive for previously harmless crackers, we now have armies of compromised machines that will make future attempts at micro-payments and digital signatures (and other end-user-dependent schemes) pointless.
I do not think they're pointless, but they probably won't fly on their own. I remember reading about a simulation of an internet super-worm -- a virus that spreads via several vectors at once and aggressively scans for and propagates itself to other machines. The authors of the study determined that it could spread to all vulnerable net-connected hosts in 15 minutes, BUT if machines had an extremely simple limit on outbound IP connections, it could not even spread fast enough to be a threat.
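To make the "extremely simple limit" concrete, here's a rough sketch of what such a throttle might look like. The class name, rate, and window size are all illustrative, not from the study; the idea is just that connections to recently contacted hosts pass immediately, while connections to *new* hosts are rationed to about one per second -- invisible to normal browsing, fatal to a worm scanning hundreds of addresses per second.

```python
import time
from collections import deque

class ConnectionThrottle:
    """Cap the rate of new outbound connections to previously unseen hosts.

    Familiar hosts pass immediately; new hosts are admitted at most once
    per `min_interval` seconds. A scanning worm stalls almost at once.
    """

    def __init__(self, rate_per_sec=1.0, recent_size=8):
        self.min_interval = 1.0 / rate_per_sec
        self.recent = deque(maxlen=recent_size)  # recently contacted hosts
        self.last_new = float("-inf")            # time of last new-host connect

    def allow(self, host, now=None):
        now = time.monotonic() if now is None else now
        if host in self.recent:
            return True                          # familiar host: no delay
        if now - self.last_new >= self.min_interval:
            self.last_new = now
            self.recent.append(host)
            return True
        return False                             # too many new hosts too fast
```

A worm trying to probe 100 addresses in the same instant gets exactly one connection through; the user re-loading the same handful of sites never notices the throttle at all.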
Generalizing this super-simple virus-fighting behavior a bit, I think our machines should establish baselines for things like outbound IP connections and the amount of email we send out. For the average user on a machine with a consistent usage profile, it should require some form of user intervention to perform network scans outside the baseline. This is the equivalent of the credit card fraud division calling you up when they notice your recent purchase of Snoop Dogg albums at a Tijuana record store. Is this fantasy technology that we're years away from having available? Well, I talked to a company named Okena that was writing this software for Windows and Linux a couple of years ago. They instrumented desktop applications and rolled their behavior up to a central server, so that they could define deviant behavior by comparing a machine with its peers. They could then stop deviant behavior as it emerged, instead of retroactively looking for infected file signatures.
Microsoft recently floated a trial balloon about enabling firewalls by default and implementing some sort of behavior profiling in the OS. While I'm realistic that this is more about escalation than an end-game, it will be interesting to see what kind of traction it gets with MS's money (and, at this point, desperation) behind it.
Re:Happy Fun Big Brother!
johnseq on 2003-11-12T16:49:35
I don't think the data comprising a valid behavior baseline needs to be rolled up to a central internet presence to be useful. I happily use MyNetWatchman on my gateway, a Perl IDS which does just that, and I don't have real privacy problems sending Windows virus attack data to someone who might help do something about it. My desktop behavior is different.
Let's look at the home user's desktop system. Emily checks email, sends a few messages a day, sometimes with photos, and surfs a bit. Emily does not run apps that scan for nearby IP addresses at hundreds of connections per second. Nor does she run outbound SMTP services, send lots of Windows Messaging messages, host FTP servers, or run P2P apps. All these behaviors could be distinguished historically from normal ones without comparing them to a central source. No AI or Big Brother needed.
One way to think of it is greylisting at the OS behavior level. This type of system will work differently for the lone user than it will on a huge company LAN (Okena's market), where rolling behavior up and doing metrics on aggregate behavior is no worse, from a Big Brother perspective, than what they're already doing for IDS, virus, and spam fighting.
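The greylisting analogy can be sketched directly. In SMTP greylisting, a first-time sender is told "try again later," and legitimate mail servers do while one-shot spamware doesn't. Applied to behavior (again, a toy of mine -- the names and the five-minute delay are invented for illustration): the first time a program exhibits a new behavior, defer it, and only allow it if the same request recurs after the waiting period.

```python
import time

class BehaviorGreylist:
    """Greylisting applied to OS-level behaviors: a program's first request
    for a new behavior (e.g. 'outbound SMTP') is deferred; it's only allowed
    if the same request recurs after a waiting period."""

    def __init__(self, delay=300.0):
        self.delay = delay      # seconds a new behavior must wait
        self.first_seen = {}    # (program, behavior) -> first request time

    def allow(self, program, behavior, now=None):
        now = time.monotonic() if now is None else now
        key = (program, behavior)
        if key not in self.first_seen:
            self.first_seen[key] = now
            return False        # greylisted: "try again later"
        return now - self.first_seen[key] >= self.delay
```

A mail client that keeps retrying its legitimate SMTP traffic sails through after the delay; a worm that fires once and moves on never gets whitelisted.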
As far as the legacy problem goes -- yep, that's a problem I don't know the answer to. I suspect liability or simple cost issues prevent ISPs from detecting and unplugging infected computers. But while the cost of dealing with that problem is fixed, the cost of inaction continues to grow.