Spam Conference notes

babbage on 2003-01-19T10:04:58

Update: A condensed version of this writeup appeared in the June 2003 issue of ;login:, the magazine of Usenix & SAGE. I'm a happy camper :-) A PDF of the article has been made available.

I was waiting for the review to show up on Slashdot, as the conference was really good. The audio proceedings have been put online, but I'm not sure if they can take a Slashdotting, so please be gentle :) If you have 8 hours to spare, the whole day was pretty good & worth listening to, but the schedule as planned isn't exactly the sequence people spoke in, so you may have to jump around the RealAudio stream a little bit.

Turning my notes for the day into something vaguely coherent, here are some highlights from the proceedings. There are a couple of speakers that I didn't write anything down for, but from mid-morning on this should be pretty comprehensive. Apologies in advance if my notes lead me to attribute certain comments to the wrong speaker -- if anyone notices any mistakes please feel free to add corrections:

  • Bill Yerazunis - CRM114 & MailFilter

    Because Perl "freaks him out", Yerazunis came up with the CRM114 minilanguage (points for anyone who gets the joke in the name without googling for it :), then wrote MailFilter in CRM114 as an implementation of a filter that can be used with Procmail or SpamAssassin or what have you. The basic idea is to decompose a message into a set of "features" composed of various permutations of single words, consecutive words, words appearing within a certain distance of one another, etc, so that the number of features is far larger than the number of distinct words (a rough sketch of the idea appears at the end of this item). You then analyze the features in various ways, and if the resulting score exceeds a chosen threshold you flag the message as spam & handle it accordingly.

    He claimed that with this software he could get better than 99.9% accuracy in nailing spam, and a similar accuracy in avoiding false positives on "ham" (the term everyone was using for legitimate mail that might be wrongly flagged as spam). One of Yerazunis' observations is that the best way to defeat the spam problem is to disrupt the economics: if a 99.9% or better filter rate were to become the norm, then the cost of delivering spam could be pushed higher than the cost of traditional mail, and the problem would naturally go away without requiring legislation (which would be nice anyway, but we can't count on it).

    The drawback of CRM114/MailFilter is that it can only handle about 20KB of text per second, so it's not appropriate for large scale use yet. Still an interesting project to watch, though: crm114.sourceforge.net
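
    Not Yerazunis' actual code, but a minimal sketch of the feature-decomposition idea described above, assuming simple whitespace tokenization (CRM114's real Sparse Binary Polynomial Hashing is considerably more involved):

        import re

        def features(text, window=5):
            # Decompose a message into single-word features plus ordered
            # word-pair features for nearby words (inside a sliding window),
            # so the feature count greatly exceeds the word count.
            words = re.findall(r"\S+", text.lower())
            feats = set(words)                       # single words
            for i, w in enumerate(words):
                for j in range(i + 1, min(i + window, len(words))):
                    feats.add((w, words[j]))         # pair within the window
            return feats

        def spamminess(text, spam_features):
            # Toy score: fraction of this message's features seen in known spam.
            f = features(text)
            return len(f & spam_features) / max(len(f), 1)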

  • John Graham-Cumming - POPfile

    Most of his very entertaining talk was about the ingenious tricks that spammers resort to in order to obfuscate spam against filters -- most diabolically, one example that split each column of monospaced text in the message across HTML table columns, so that the average HTML-capable mail client would render the message properly, but it would be absolute gibberish to most mail filters. The ultimate lesson was that any good filter has to focus not on "ascii space" (the literal bytes as transmitted) but on "eye space" (the rendered text as seen by the user), which by extension may mean that any full scale spam parser/filter will eventually have to include a full-scale HTML & Javascript engine. Yikes! (A toy illustration of the eye-space idea appears at the end of this item.)

    As for Graham-Cumming's software, it's a Perl application, available for all platforms (Windows, Mac, & of course Linux), that lets users filter POP3 mail. Interesting stuff if you're a POP user: popfile.sourceforge.net
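
    Here is that toy illustration of the "eye space" idea: rendering HTML down to its visible text before tokenizing defeats the comment-splitting and invisible-markup tricks. This is my own sketch using Python's standard html.parser, not POPfile's code, and a real filter would also have to cope with the table-column trick and much more:

        from html.parser import HTMLParser

        class VisibleText(HTMLParser):
            # Collect only the character data a mail client would render;
            # tags and comments are silently ignored.
            def __init__(self):
                super().__init__()
                self.chunks = []
            def handle_data(self, data):
                self.chunks.append(data)

        def eye_space(html):
            parser = VisibleText()
            parser.feed(html)
            return "".join(parser.chunks)

        # The comment-splitting trick disappears once we look at "eye space":
        print(eye_space("V<!-- junk -->ia<b>gr</b>a"))   # prints "Viagra"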

  • John Draper - ShopIP

    Most of Draper's work seemed to be focused on profiling spammers, as opposed to profiling spam itself, by throwing out a series of honeypot addresses & using data collected to hunt down spammers. spambayes.sourceforge.net

  • Paul Judge, CipherTrust

    Judge's big argument, which no one really disagrees with, is that spam has become not just a nuisance, but an actual information security issue. To that end, he is advocating much more collaborative effort to address the problem than we have seen to date: conferences like this, mailing list discussions, better tools, and public data repositories of known spam [and ham]. To that last point, one of his observations (which others made as well) was that there are no universally agreed on standards for what qualifies as spam, so repositories for spam will not be accurate for all users (spam for your programmers will be the bread & butter of your marketing department, etc). Plus, there are obvious privacy issues in publishing your spam & ham for public scrutiny. And to add another wrinkle, one danger of public spam/ham databases is that spammers can poison them with false data, screwing things up for everyone. That said, he encouraged users to help out with building spamarchive.org.

  • Paul Graham

    Graham is the man who organized the conference, and who kicked everything off with his landmark paper from last fall, A Plan for Spam. Graham's spam filtering technique famously makes use of Bayesian statistics, an approach popular with nearly all of the speakers. The nice thing about a statistical approach, as opposed to heuristics, simple phrase matching, RBLs, etc, is that it can be very robust & accurate; the down sides are that it has to be trained against a sufficiently large "corpus" of spam (most techniques have this property though) and it has to be continually retrained over time (again, this is common). Graham was too modest to produce numbers, but subjectively his error rates seemed to be even lower than what Yerazunis gets with MailFilter, perhaps by an order of magnitude or more. (The core of the approach is sketched at the end of this item.)

    Like other speakers, he predicted that spammers are going to make their messages appear more & more like "normal" mail, so we're always going to have to be persistent about this -- as one example, he showed us an email he received IN ALL CAPS from a non-English speaker asking for programming help, and although it was legit, the filters insisted otherwise. "That message is the one that keeps me up at night."

    Everyone interested in the spam issue should go read Graham's paper immediately.
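
    For the curious, the heart of the technique in A Plan for Spam boils down to something like the following. This is my paraphrase of the paper, not Graham's actual code, and it assumes the per-corpus token counts already exist:

        def token_prob(token, good, bad, ngood, nbad):
            # Per-token spam probability, roughly as in "A Plan for Spam".
            # `good`/`bad` map tokens to counts in the ham/spam corpora;
            # ngood/nbad are the (nonzero) number of messages in each corpus.
            g = 2 * good.get(token, 0)     # ham counts doubled to bias against false positives
            b = bad.get(token, 0)
            if g + b < 5:
                return 0.4                 # rarely-seen tokens get a mildly innocent default
            return max(0.01, min(0.99,
                   min(1.0, b / nbad) / (min(1.0, g / ngood) + min(1.0, b / nbad))))

        def score(tokens, good, bad, ngood, nbad, n_interesting=15):
            probs = [token_prob(t, good, bad, ngood, nbad) for t in set(tokens)]
            # Keep only the tokens whose probabilities deviate most from a neutral 0.5.
            probs.sort(key=lambda p: abs(p - 0.5), reverse=True)
            prod = inv = 1.0
            for p in probs[:n_interesting]:
                prod *= p
                inv *= 1.0 - p
            return prod / (prod + inv)     # somewhere above ~0.9 means spam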

  • Robert Rothe, eXpurgate

    Rothe works for eleven GmbH, an ASP company from Berlin selling a spam management service/application called eXpurgate. His talk was short on details about how the tool works (mainly that it looks for bulk mail), focusing instead on the high level functionality it provides to users -- basically, they classify mail as safe, questionable, or dangerous, and let the users handle it accordingly. Rothe is another speaker who sees spam as a network security issue, so they built their system accordingly, with the privacy of the client's mail content in mind, etc.

    Like many speakers, he warned about the dangers of an anti-spam "monoculture": that Bayesian techniques might be great, but if that's all anyone uses then spammers will catch on and adjust their messages to look more like normal mail, to the point that Bayesian filters won't work anymore. As a result, we're going to need to attack the problem from several angles, using different techniques, to keep the spammers off balance as much as possible.

  • Matt Sergeant, SpamAssassin

    SA is a well known Perl application that heuristically profiles messages as spam, adding headers to the message saying, for example, "I am 72% sure this is spam because it has X Y Z", and passing the message off to procmail or whatever to be handled accordingly (a toy version of this rules-plus-score idea is sketched below). SpamAssassin can handle a message throughput great enough that it can be deployed at the network level (whereas some of the others, which might have somewhat better hit rates, are still too inefficient for that at this point). Deployed this way, the difference in effectiveness between single and multiple users becomes very apparent, as 99% hit rates fall into the 80-95% range. This happens because, again, different users define different things as spam, so mapping one fingerprint to all users can never work quite right. For a tool that your company can deploy right now & get fast, decent results, SA looks like a good choice; but in the long run it looks like a Bayesian technique is going to get better results, and SA is adding a statistical component to its toolkit. Good talk.
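
    Not SpamAssassin itself (which is Perl and vastly more sophisticated), but a toy sketch of that idea -- pattern rules with weights, a summed score, and an explanatory header -- using made-up rule names, weights, and threshold:

        import re

        # Hypothetical rules: (name, pattern, weight) -- purely illustrative.
        RULES = [
            ("SUBJECT_ALL_CAPS", re.compile(r"^Subject: [A-Z !]+$", re.M), 1.5),
            ("MENTIONS_VIAGRA",  re.compile(r"viagra", re.I),              2.0),
            ("CLICK_HERE",       re.compile(r"click here", re.I),          1.0),
        ]
        THRESHOLD = 3.0

        def tag(message):
            # Score a raw message and prepend headers explaining the verdict.
            hits = [(name, w) for name, pat, w in RULES if pat.search(message)]
            score = sum(w for _, w in hits)
            verdict = "Yes" if score >= THRESHOLD else "No"
            report = ",".join(name for name, _ in hits) or "none"
            return ("X-Spam-Flag: %s\nX-Spam-Report: score=%.1f tests=%s\n%s"
                    % (verdict, score, report, message))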

  • Barry Warsaw, Python Labs

    This was another example of the "monocultures are dangerous" philosophy, as Warsaw explained how he is helping to combine a variety of anti-spam techniques -- from clever Exim MTA configuration to good use of SpamAssassin & Procmail to fine tuning of the Mailman mailing list engine -- to manage the spam problem for all things Python (Python.org, Zope, many mailing lists, a few employees, etc).

    He pointed out that some very simple filters can be surprisingly effective: run a sanity check on the message's date; look for obviously forged headers; make sure the recipients are legit; scan for missing Message-Id headers; etc (a few of these checks are sketched at the end of this item). In response to the person who originally posted the article, yes, he did mention blocking outgoing SMTP as an effective element of a many-tiered spam management approach.

    Among other tricks for getting the different filtering tiers to play nice together, they make heavy use of the X-Warning header so that if an alarm goes off in one tier of their mail architecture, other components can respond appropriately. Cited projects included ElSpy and SpamBayes.
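
    A few of those simple checks are easy to approximate in a handful of lines. This is my own Python sketch using the standard email module, not the actual Exim/Mailman setup Warsaw described:

        import email, email.utils, time

        def simple_checks(raw, local_addresses):
            # Return a list of cheap warning signs of the kind described above.
            msg = email.message_from_string(raw)
            warnings = []

            # Sanity-check the Date header (e.g. more than ~2 days in the future).
            date = email.utils.parsedate_tz(msg.get("Date", ""))
            if date is None:
                warnings.append("missing or unparseable Date")
            elif email.utils.mktime_tz(date) > time.time() + 2 * 86400:
                warnings.append("Date is implausibly far in the future")

            # Legitimate mail almost always carries a Message-Id.
            if msg.get("Message-Id") is None:
                warnings.append("no Message-Id header")

            # Make sure at least one recipient is actually one of ours.
            rcpts = [addr.lower() for _, addr in email.utils.getaddresses(
                         msg.get_all("To", []) + msg.get_all("Cc", []))]
            if not any(r in local_addresses for r in rcpts):
                warnings.append("none of the recipients are local addresses")

            return warnings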

  • Barry Shein, founder & CEO of The World -- or as he laughingly put it, "President of the World". Har har har

    This talk was mostly a letdown for me -- Shein has made his views very well known already, and his ranting, rambling talk didn't really introduce any new ideas for anyone who had read those earlier remarks (some good jokes & quotes though).

    His core argument is that spam is "the rise of organized crime on the internet", that filters are nice but the mail architecture itself is fundamentally flawed, and that ISPs like his -- in 1989, The World was the world's first dialup ISP -- are being killed by the problem. Shein was very annoyed that all these talented people have to clean up a mess like this when we should be out working on more interesting things. His big hope seemed to be that legislation will someday come to the rescue, but he sounded very pessimistic. (Others in the room seemed to feel that this was a very interesting machine learning problem, and weren't really fazed by his pessimism -- but then most of the people in the room don't run ISPs.)

    He also suggested that we need to find a way to make spammers pay for the bandwidth they are consuming (rather than having users & ISPs shoulder the burden) but didn't seem to know how we might go about implementing this. At all.

    Fun rant to cheer along to, but for me it wasn't very constructive in the end.

  • Jean-David Ruvini, eLabs SmartLook

    This was an interesting product. Ruvini's company is developing an extension to Outlook 2000 & XP that will watch the way users categorize messages into folders, come up with a profile for what kinds of messages end up in which folders, and then try to offer similar categorization on an automatic basis (a toy sketch of the idea appears at the end of this item). Think of it as Procmail for Outlook, without having to mess with (or even be aware of!) all the nasty recipes.

    Obviously if you have a spam folder, then spam will be one of the categories it looks for, but more broadly it will try to categorize all your mail as you would ordinarily categorize it. This makes SmartLook a broader tool than "just" a spam manager.

    SmartLook is another statistical filter, though it uses non-Bayesian algorithms to get results. eLabs' tests suggest that the product is able to properly categorize messages about 96% of the time, with no false positives, and (for their tests, mind you) that it performed better than Bayes filters over three months of usage.

    One nice property of this tool is that it works well with different [human] languages -- some strategies fall apart &/or need retraining when you switch from English to some other language. For certain markets (eLabs seems to be a European company, perhaps French?) this is a crucial feature, and having a tool that works with one of the biggest mail clients out there (most people don't use Mutt or Pine, sadly enough) can be very valuable. Very clever -- watch for the inevitable embrace & extend three years from now.
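
    eLabs didn't spell out their (non-Bayesian) algorithm, so purely as an illustration of the "learn from how the user files things" idea, here is a trivially simple word-overlap folder suggester -- my sketch, not SmartLook:

        from collections import Counter, defaultdict

        class FolderSuggester:
            # Learn word counts per folder from how the user files mail, then
            # suggest the folder whose vocabulary best overlaps a new message.
            def __init__(self):
                self.folders = defaultdict(Counter)

            def learn(self, folder, text):
                self.folders[folder].update(text.lower().split())

            def suggest(self, text):
                words = text.lower().split()
                def overlap(folder):
                    counts = self.folders[folder]
                    total = sum(counts.values()) or 1
                    return sum(counts[w] for w in words) / total
                return max(self.folders, key=overlap) if self.folders else None

        # s = FolderSuggester()
        # s.learn("spam", "viagra offer");  s.learn("work", "meeting agenda")
        # s.suggest("special viagra deal")  ->  "spam"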

  • Eric Raymond

    He didn't say anything about guns, but he did try to correct one of the other speakers for misusing the term "hacker."

    Like Graham, ESR is a Lisp fan, but he knows that the vast majority of people aren't, and he also knows that the vast majority of people need to be using something like Graham's spam software. So on a lark, he came up with a clean version in C, named it BogoFilter, and put it on Sourceforge, where a community sprung up to, well, embrace & extend it.

    As good as Graham's Bayesian algorithm is, ESR felt -- as did many of the other speakers -- that the nature of your spam/ham corpus is much more significant than the relative difference among any handful of reasonably good algorithms. (Back to the often repeated point about how corpus effectiveness falls apart when used for a group of users, as opposed to individuals.) To that end, he strongly feels that the best way to deal with the spam problem is to get good tools into the hands of as many people as possible, and to make them as easy to use as possible (ahh, the old "open source UIs always suck" argument :). As an example, one of the first things he did was to patch the Mutt mail agent so that it had two delete keys: one for general deletion, one for "get rid of this because it's spam." That second key, and interface touches like it, seem like the way to get average people to start using filters on a regular basis.

  • Joshua Goodman, Microsoft Research

    Unlike ESR, Goodman felt that algorithm selection does make a big difference, but this being Microsoft he refused to disclose what algorithms his team is working with -- except to say that, when delivered, they will be more accessible for average users than SpamAssassin, Procmail recipes, or Mutt :)

    Microsoft has been working on the spam problem since 1997, but because of how big they are they've had unique problems in bringing solutions to market. As a case in point, they tried to introduce spam filters to a 1999 Outlook Express release, but were immediately sued by email greeting card company Blue Mountain because their messages were being inaccurately categorized as spam. With that in mind, they have been very reluctant to bring new anti-spam software out since then because they would like to see legislation protecting "good faith spam prevention efforts."

    As a very large player, Microsoft faced certain difficulties in developing useful filters -- it may make sense for you as an individual to filter all mail from Korea, but this doesn't work so well if you are trying to attract customers *from* Korea :). This has forced them to put a lot of work into thoroughly testing different strategies before offering them to the public.

    In spite of what millions of webmail users may have expected, Hotmail & MSN are currently being filtered by Brightmail's service, and plans are underway to reintroduce spam management features to client side software. (Just imagine how bad it would be if they weren't paying someone to filter for them! Unfortunately, no hecklers piped up to ask whether they are really selling Hotmail's user database to spammers, and whether that is a source of annoyance for his team.)

    An interesting barrier his group has had to grapple with was what he called the "Chinese menu" or "madlibs" spam generation strategy: it's easy to come up with a template for spam -- "[a very special offer] [to make your penis bigger] [and please your special lady friend all night!]" vs. "[an exclusive deal] [for genital enlargement] [that will boost your sex life!]" etc -- and a small handful of options for each 'bucket' multiplies into a huge variety of individual messages that are easy for a human to group together but almost impossible for software to identify.
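
    The combinatorics of the trick are easy to see in a few lines (a made-up template, obviously -- real ones have many more buckets and options):

        from itertools import product

        openers  = ["A very special offer", "An exclusive deal", "Act now"]
        payloads = ["to make your penis bigger", "for genital enlargement",
                    "for amazing growth"]
        closers  = ["and please your special lady friend all night!",
                    "that will boost your sex life!",
                    "satisfaction guaranteed!"]

        # 3 options per bucket -> 3 * 3 * 3 = 27 distinct messages that a human
        # groups together instantly but that share no single fingerprint.
        variants = [" ".join(parts) for parts in product(openers, payloads, closers)]
        print(len(variants))   # 27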

  • Michael Salib, extremely funny MIT student

    Unlike nearly all the other filter writers of the day, Salib took a heuristic approach: find a handful of reasonable spam discriminators, throw them all against his mail, and see how much he can identify that way. "It's sketchy, but this is a class project. I don't have to be realistic. [...] These results may be completely wrong."

    Much to his surprise, he's trapping a lot of spam. He pulls in a little bit of RBL data ("the first two or three links from Google, whatever"), looks for some patterns and so on, and then churns it all through LMMSE (linear minimum mean square error estimation), an electrical engineering technique that as far as he can tell isn't well known in other fields. Basically this involves running the messages through a series of scary-but-fast-to-calculate linear equations (a sketch of the flavor appears at the end of this item). It turns out that he can process mail this way much faster than a Bayes filter can, to the point that customizing his approach for each user in a network would actually be feasible.

    For a small spam corpus, he got results better than SpamAssassin did, though for a large corpus his results were worse; he couldn't really account for why this would be the case, or predict how things would scale as the corpus continued to grow.

    When a member of the audience [who was apparently familiar to Salib -- I don't know who it was] questioned the RBL tactic and asked whether authenticating remote users might be the answer, Salib's response was "yes, I agree, but then you *do* work for Verisign, who is in the verification business, so you would say that."

    Right on, Salib -- his talk was easily the funniest & breeziest of the day :)
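
    I won't pretend to reproduce Salib's math, but the flavor of "a series of fast-to-calculate linear equations" is roughly this: solve one least-squares problem over the training corpus, after which scoring a message is just a dot product. A generic sketch assuming numpy and a prebuilt 0/1 feature matrix, not Salib's actual code:

        import numpy as np

        def train(X, y):
            # X: (n_messages, n_features) matrix of heuristic outputs (RBL hit,
            # suspicious headers, etc.); y: 1 for spam, 0 for ham.  Returns the
            # weights w minimizing the mean squared error of X @ w against y.
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            return w

        def is_spam(features, w, threshold=0.5):
            # Scoring is a single dot product -- much cheaper than a full Bayes pass.
            return float(features @ w) > threshold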

  • David Lewis, general researcher

    The core of Lewis' argument, echoing what ESR said earlier in the day, is that for any machine learning technique the quality of the training corpus is much more important than the algorithm used. Bayes is one such algorithm, but there are many other good ones in the literature. In a dig at Goodman's refusal to disclose algorithms, Lewis pointed out that all of this has been publicly discussed since the first machine learning paper on text classification was published in 1961.

    Observations: "lots of task inspecific stuff works badly, but task specific stuff helps a lot." It is important to use different corpora for training and for testing, so that you don't train your machine to focus too much on certain types of input (this is a point that Microsoft's Goodman made as well); a tiny illustration of the split appears at the end of this item.

    As Graham did, Lewis emphasized that spam is going to slowly start looking more like natural text, and we're going to have to deal with this as time goes on. www.daviddlewis.com/events/
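
    Lewis' train-versus-test point in one tiny, generic sketch (mine, not his): hold some labeled mail back, train only on the rest, and measure accuracy on mail the filter has never seen.

        import random

        def split_corpus(labeled_messages, holdout=0.2, seed=42):
            # labeled_messages: list of (text, is_spam) pairs.  Returns (train, test).
            msgs = list(labeled_messages)
            random.Random(seed).shuffle(msgs)
            cut = int(len(msgs) * (1 - holdout))
            return msgs[:cut], msgs[cut:]

        def accuracy(classify, test_set):
            # Evaluate only on the held-out set, never on the training data.
            hits = sum(1 for text, is_spam in test_set if classify(text) == is_spam)
            return hits / len(test_set) if test_set else 0.0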

  • Jon Praed, Internet Law Group

    To a burst of tremendous applause, this talk began with the sentence "my name is Jon Praed, and I sue spammers."

    He brought a legal take on the "not everything is spam to everybody" angle, emphasizing that we need a precise definition of what qualifies as Unsolicited Commercial Email (UCE). In particular, it has been difficult to pin down whether the mail was really unsolicited, as this is where the spammers have the most wiggle room. In practice, however, the spammers he has tracked down have rarely been able to show that the recipient actually asked for the mail, and Praed has won several cases on exactly this point. He doesn't expect this to work forever though.

    According to Praed, "laws against spam exist in every state, and more are pending", but he doubts that a legal solution will ever be completely effective as long as spam is lucrative. By analogy, he pointed out that people still rob banks and that has never been legal.

    Praed informed the audience that there are several ways to get back at spammers -- injunctions, bankruptcy, contempt -- and that these can be very effective. He pointed out that, to be blunt, a lot of these people are desperate low-lifes, and spam has been their biggest success in life; after these legal responses, their lives all get much worse. It hadn't occurred to me to see spammers as pitiful before, but I can now. Most importantly, Praed strongly warned against taking vigilante action: it is almost always worse than the spam itself, and it only serves to get you in even deeper trouble than the spammer.

    As for the sources of spam, most of it comes from offshore spam houses, abused free mail accounts (Hotmail & Yahoo, free signups at ISPs, etc), and bulk mailing software (which may apparently soon become illegal in certain areas, provided that a law can be written that bans spam software while allowing things like Mailman or Majordomo). Interestingly, he questioned the idea that header spoofing is a big problem, and claimed that in every case he has dealt with he has been able to track the messages down to a legit source sooner or later.

    Suggestion: if you get a spam citing a trademarked product [e.g. Viagra], forward it to the trademark holder and they will almost always follow up on it. Suggestion: be fast in trying to track down spammers, as some of them have gotten in the habit of leaving sites up long enough for mail recipients to visit, but taking them down before investigators get a chance to take a look. Legal observation: spam is almost always fraud, and can be prosecuted accordingly.

    Praed wrapped up his talk by citing the encouraging precedent set by the famous Verizon Online vs. Ralsky case: [a] the court is interested in where the harm occurs, not where the person doing the harm was when causing it (so if you send spam to someone in Alaska and spam is a capital offense in Alaska, you can be tried under Alaska's laws even though you caused the harm from somewhere else), and [b] you are assumed to be familiar with a remote ISP's acceptable use policy, and ignorance is no defense -- just as you can't say "I didn't know it was illegal to shoot someone", Ralsky couldn't claim he didn't know Verizon prohibits spam; he had to have known the AUP wouldn't allow what he was doing, so he deliberately didn't read it. That precedent makes future lawsuits against spammers much more encouraging. While, again, legal solutions may never eliminate the spam problem, a precedent like this can be an important supplement to filtering efforts (the stick to the filter's carrot, or something -- my lousy analogy, not Praed's).

  • David Berlind, ZDNet executive editor

    His talk was primarily about how he receives a huge quantity of email from ZDNet readers, and he can't afford to use any spam filtering strategy that would allow *any* false positives. As one of the speakers said -- sorry, I forget who (Microsoft's Goodman?) -- getting a 0% false positive rate is easy: just classify nothing as spam. Getting a 100% hit rate is also easy: just classify everything as spam. Any solution in between is always going to have some degree of error in each direction, and deciding how much of which kind of error you will accept is up to you (the tradeoff is sketched at the end of this item). Most users will tolerate a moderate false negative rate (some spam gets through) if it means that the false positive rate (legit mail is deleted) is very low. In Berlind's case, the false positive rate has to be vanishingly small, because reading all customer mail is a critical sign of respect for him.

    Further, his business is also a legitimate mass emailer, sending out millions of free newsletters to users every day, and if Shein's proposal to bill bulk mailers were to catch on then even a very low rate would quickly put his company in the red. One obvious solution, which wasn't mentioned: start charging a subscription for these mailings, and make them profitable. I don't want to see this happen but if it did then the economics would tilt back toward making things feasible again.

    Berlind is appreciative of the anti-spam work that is being done, but at the same time is skeptical of how pragmatic most of what is being proposed can really be. He feels we need a massive effort to rework the way mail is handled [Y2K anyone? It could get IT people back to work...], and to that end hopes ZDNet can help promote such a cooperative effort between the parties working on this. They don't want to be involved -- they are journalists & publishers, not standards developers -- but they are eager to get things going & want to cover the story as it progresses.

    Like Shein said, he feels it's a waste for all these talented people to be working on combating penis enlargement offers, and hopes that we can find a way to get past this and work on real problems, "like world peace." This comment got a chuckle from the audience, but he seemed like the kind of guy that really meant that, and more importantly, he was right. A smart guy like Paul Graham or Bill Yerazunis shouldn't have to waste time tinkering with how many Viagra offers he can automagically delete when there are more fun things to be doing.
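
    To make that false-positive/false-negative tradeoff concrete (a generic sketch, nobody's product): both error rates fall out of the same counts, and tightening one usually loosens the other.

        def error_rates(predictions, truths):
            # predictions/truths: parallel lists of booleans (True = spam).
            # Returns (false_positive_rate, false_negative_rate).
            fp = sum(1 for p, t in zip(predictions, truths) if p and not t)   # ham flagged as spam
            fn = sum(1 for p, t in zip(predictions, truths) if not p and t)   # spam let through
            ham = sum(1 for t in truths if not t)
            spam = sum(1 for t in truths if t)
            return (fp / ham if ham else 0.0, fn / spam if spam else 0.0)

        # The two degenerate "solutions" from the talk:
        #   classify nothing as spam    ->  FP rate 0, FN rate 1
        #   classify everything as spam ->  FP rate 1, FN rate 0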

  • Ken Schneider, Brightmail

    As mentioned earlier, Brightmail provides an ASP service for real time filtering of both incoming & outgoing mail. As would perhaps be expected, bigger ISPs and networks attract larger amounts of spam: 50% of mail coming into big ISPs and 40% coming into big companies is now spam. Brightmail offers the Probe Network, a <slashdot-killfile-term>patented</slashdot-killfile-term> system of decoy honeypot addresses that gather data for analysis at their logistics center, which in turn distributes spam filtering rules to their clients, where a plugin for the client's MTA of choice (open source or proprietary) can act on the database.

    An interesting property of their system is that they have a mechanism both for aging out dormant rules and for reactivating retired ones, so that the currently active ruleset can be kept as lean & efficient as possible. A big source of difficulty for them is legitimate commercial opt-in lists, because things have gotten more shady & blurry over time and it's now hard to tell this mail from much of the spam out there. Whitelists help here, but the problem is still difficult.
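
    Brightmail's actual mechanism wasn't described in detail, but the age-out/reactivate idea might look something like this (my invention, not their system):

        import time

        class RuleSet:
            # Keep only recently useful rules active; dormant rules are parked
            # and brought back the moment they match again.
            def __init__(self, max_idle_days=30):
                self.max_idle = max_idle_days * 86400
                self.active = {}    # rule_id -> timestamp of last hit
                self.retired = {}

            def record_hit(self, rule_id, now=None):
                now = now or time.time()
                self.retired.pop(rule_id, None)        # reactivate if it was parked
                self.active[rule_id] = now

            def age_out(self, now=None):
                now = now or time.time()
                for rule_id, last in list(self.active.items()):
                    if now - last > self.max_idle:
                        self.retired[rule_id] = self.active.pop(rule_id)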

After each speaker had his turn, there was a panel discussion, but not much really happened there, and the moderator cut things short after only a couple of minutes. The original plan was for everyone to go out for Chinese food afterwards and continue the discussions over dinner, but when 580 people signed up that plan obviously fell apart. :) And so end the notes...


Real video w/time-code indices

crysflame on 2003-01-19T15:14:52

To make it easier to jump to a specific presentation, I have created a timetable. It was missing from the original web site. Enjoy these highly interesting talks.

Oliver Schmelzle, in Webcast Timetable for Spam Conference Sessions, provided the following timetable for the webcasts found at spamconference.org:

Session 1

0:00:30, Teodor Zlatanov, spam.el Maintainer, "Gnus vs. Spam"
0:10:00, Bill Yerazunis, MERL, "Sparse Binary Polynomial Hash Message Filtering and The CRM114 Discriminator"
0:32:30, Jason Rennie, MIT AI Lab, "Adaptive Spam Filtering"
0:52:00, John Graham-Cumming, POPFile, "The Spammers' Compendium"

Session 2

0:00:00, John Draper, ShopIP, "Following Their Patterns"
0:14:00, Paul Judge, CipherTrust, "The Case for Spam Research Infrastructures"
0:37:00, Paul Graham, Arc Project, "Better Bayesian Spam Filtering"
0:56:00, Robert Rothe, eleven GmbH, "eXpurgate: a different approach in filtering E-Mail and detecting SPAM"

Session 3

0:01:30, Matt Sergeant, MessageLabs, "Spam Filtering at the Network Level"
0:21:30, Barry Warsaw, Pythonlabs at Zope Corporation, "Anti-Spam Techniques at Python.org"
0:41:00, Barry Shein, The World, "Spam: Threat or Menace? An ISP's View"
1:05:00, Jean-David Ruvini, e-lab Bouygues SA, "Smartlook: An E-Mail Classifier Assistant for Outlook"
1:23:00, Eric Raymond, Open Source Initiative, "Lessons from Bogofilter"
1:44:30, Joshua Goodman, Microsoft Research, "Spam Filtering: From the Lab to the Real World"

Session 4

0:00:00, Michael Salib, MIT, "Integrating Heuristics with n-grams using Bayes and LMMSE"
0:22:00, David Lewis, Independent Consultant, "Forty Years of Machine Learning for Text Classification"
0:34:00, Jon Praed, Internet Law Group, "How Lawsuits Against Spammers Can Aid Spam-Filtering Technology: A Spam Litigator's View From the Front Lines"
1:01:30, David Berlind, CNET, "Desperately Seeking: An Anti-Spam Consortium"
1:26:30, Ken Schneider, Brightmail, "Fighting Spam in Real Time"
1:47:00, Panel Discussion

Thanks, Oliver!

Thank-you

Damian on 2003-01-19T16:11:36

No additional content here. Just wanted to thank you for an excellent report.

CRM114

Matts on 2003-01-19T18:10:49

One thing worth pointing out is that the testing for CRM114 isn't very rigorous - the figures are simply real-time results for his personal email. As well and good as that is, the other projects' test data was based on split training/validation corpora under lab conditions, and that produces different results.

So while everyone seems pretty happy with the CRM114 results, most of the other Bayesian projects are getting equivalent numbers. That, plus CRM114's slowness (16 times slower than every other Bayesian project), will in my opinion mean that it won't find all that much success.

PS: Please don't take this as bias because I'm affiliated with spamassassin - I'm all for any solution that works better, but in my personal testing CRM114 wasn't any better and was significantly slower than everything else.

MS is afraid of Blue Mountain?

runrig on 2003-01-19T20:33:58

but were immediately sued by email greeting card company Blue Mountain because their messages were being inaccurately categorized as spam.

You would think something like this would be configurable as an exception anyway. I delete 'em as soon as I get them. My mother-in-law sends these things, and when she asks "did you get my card?", I say "oh sure I did"...

I'm not opening them until there's some possibility that one of 'em is a job offer...

Addendum

babbage on 2003-01-21T19:33:44

The material above was originally posted as a comment on Slashdot, before being pasted into journal entries on Slashdot and use.perl.org. Each version of the writeup has attracted comments & emails, for which I thank you. A couple of corrections have come up, and I don't want the eventual archived versions of this not to reflect those contributions (hello, future Google spelunkers!), so here's a general cross-linked addendum:
  • http://use.perl.org/user/babbage/journal/10069:

    crysflame posted a detailed timetable for the proceedings, as pasted from Oliver Schmelzle's TechBlog. Readers may find it useful to cross-check my notes against those times when looking for talks they would like to listen to.

    Matt Sergeant politely replied as well, noting that the impressive claims about CRM114's accuracy were yet to be thoroughly tested, that in other tests CRM114 had not been significantly more accurate than other Bayesian strategies, and that the current performance of CRM114 is so much slower than many of the alternatives that any gains it may have to offer are more than offset by the low volume it can currently handle. Grain of salt taken :)

  • http://slashdot.org/user/babbage/journal/21771/:

    No comments as of this writing.

  • http://slashdot.org/comments.pl?sid=51208&cid=5112383:

    An anonymous coward added a couple of corrections which are worth noting:

    • Jon Praed was questioning IP spoofing, not message header spoofing. It is relatively easy to fake at least some of the headers on an email, but when tracked down & brought before a judge, no spammer has ever been able to explain a credible technique for spoofing IP data in any trial Praed was aware of. When this comment was made to the audience, ESR spoke up saying that he could show Praed how to do it, but I don't know what if anything came of any conversation they had after the talk.

    • The AC also expanded on Michael Salib's talk & how much mileage Salib was getting out of a comically non-buzzword-compliant filtering strategy, but came back to the point that his results were "probably unrepeatable and it would probably be best if we all just treated them as outright lies." As the AC noted, Salib seems to have played a big role in organizing the conference -- I think I read somewhere that when the attendee list swelled to 500+ people, he helped to find a last minute venue big enough to accommodate everyone. So not only do we have to thank Salib for an entertaining spiel of quackery, but also for bringing everyone together in the first place. :)

    I never said my notes were perfect :)

  • Emails sent to me directly:
    • Brad Spencer wrote to me asking if anyone had mentioned relay spam honeypots, citing http://jackpot.uk.net/ as an example and claiming that they are "100% accurate and can be devastating." Respectfully, Brad, I'm not sure that the speakers gathered together last week would agree that any approach is "100% accurate" unless you have a very generous definition of "accurate" (as in, "delete everything as spam" is 100% accurate, but 100% useless :). More fairly though, Brad claims that "if you deal with spam at the relay level you can be dumb -- it is the spammers who are forced to be smart. If they make an incremental move towards being smart you move beyond them." I won't argue with that; it sounds like a fine idea. I suggest taking ideas like this to Barry Shein et al, who would probably love to discuss them & implement anything that works well.

      In his email, Spencer went on to expand on the value of honeypots, and how they seem like a very promising tactic for handling the spam problem. I agree, and maybe my writeup didn't give this enough attention, but I think many or all of the conference speakers would have agreed as well. Ken Schneider made it clear that Brightmail in particular seems to make heavy use of honeypot addresses: it sounded like when they set up service for an organization, they plant one or more dummy addresses at that organization as data points for spam collection efforts, and have mechanisms in place to gather & analyze this data in real time. Spencer suggests that honeypot addresses would be very hard for spammers to detect if they resemble legit MTAs as much as possible, and I have the impression that this is exactly what Brightmail is doing. I'm sure that others are using tactics like this as well, but Schneider was the most vocal user of the tactic that I noticed.

    • John Hanna wrote to me saying that he runs an anti-spam project at http://assp.sf.net, and noticed a surge in traffic after the conference. To answer John's question, I did not notice anyone mentioning ASSP [caps?] during any of the talks, but it could well be that people were discussing it amongst themselves off stage. *shrug*

    • Ashley Pomeroy wrote to a mailing list where I posted my notes, asking: It may have been raised before, but does the specific use of 'ham' to mean 'good' and 'spam' to mean 'bad' leave all these good people open to abuse from the people who make Spam, the nutritious meat-based food?

      I assume that Spam(r) is cool about the use of the term 'spam' to mean junk e-mail, but adding a converse makes it explicitly clear that 'spam=bad'.

      And what do the pigs think about all this? Its their flesh we're talking about. The ultimate expression of love is to consume the flesh of another being; we are sending out a mixed message as to whether we love pigs or not, which will surely effect the quality of the eggs they lay.

      By this token eating one's fingernails/bogies/earwax is a form of self-love, which is perfectly natural.

      To which I have no comment :)

If I get any other material relevant to the conference, I may add it to the Slashdot or use.perl journals, but in any case I wanted to get this up while the pages are still getting traffic, so readers of one variation of the page are not missing out on what may be added to other variations. Thanks all for the feedback! :)

my take on why spam happens

mary.poppins on 2003-02-14T09:13:44

Spam is an abuse of a public-writable area of your attention. It appears visually as billboards, unwanted email, and junk snail mail; aurally as phone salespeople; and in full video as TV adverts.

Individualist solutions such as spam filtering, whether for email spam or visual spam, obviously have limits: they put the onus on each and every person to do the filtering, and they do nothing about all the effort that went into creating and distributing the spam in the first place. As the people at the conference observed, all these smart people should be working on something other than spam.

These are all facets of the same problem: that large parts of our society are controlled in an undemocratic way. Surely people would not choose to erect a giant Toyota(TM) or NewPort(TM) billboard in their neighborhood for their own amusement. Nor would the workers of an ISP tolerate someone using their network to send fraudulent Viagra(TM) adverts, unless their boss told them to.

The solution to the spam problem is to fix our society so that the workers actually have bottom-up control of the productive capacity of society. That would free up not just the workers currently stuck on anti-spam solutions, but also all those smart people at Lockheed(TM) etc. who spend their days figuring out really neat and clever ways to hideously mangle people by the score.

Anarchism (democratic, grassroots control by the workers) has worked very well for software (this *is* a Perl site, after all :). Let's extend it to the rest of society.