VBSpam comparative review July 2011

2011-07-15

Martijn Grooten

Virus Bulletin, UK
Editor: Helen Martin

Abstract

The 14th VBSpam test showed both a number of excellent performances as well as some that leave room for improvement. Martijn Grooten has all the details.


Introduction

Those who follow the security news will have seen reports (such as http://www.virusbtn.com/virusbulletin/archive/2011/07/vb201107-news1) of a significant drop in spam levels over recent months. This, of course, is very good news for users and system administrators alike, but in the VBSpam tests we look at a different aspect of spam, and consider not the quantity but, if you like, the ‘quality’ of spam: what percentage of it makes it through to users’ inboxes and which anti-spam solutions do the best job of blocking those spam messages, ideally without blocking legitimate ones too.

The corpus for this month’s test was actually several times larger than that used in previous months. The reason for this was not that we received more spam but that we solved some network issues that had been troubling us for several months (the issues proved to have been caused by a router that was not able to deal with the amount of traffic used in the tests). With the issues resolved we could finally open the tap and send almost 20,000 spam messages a day through each of the products.

To give an idea how much traffic that is, it amounts to more than a dozen messages per minute – far too many to deal with manually, in case anyone still thinks that it is possible to deal with spam in that way. Of course, many organizations receive far greater volumes of email than this.

Apart from the increased volume of spam, we also intended to introduce a new stream in this test: legitimate newsletters. We subscribed to a large number of these, aiming to report on products’ performance against this corpus as additional information alongside the main test. Unfortunately, while subscribing to newsletters turned out to be easy (in many cases probably too easy, as the majority did not check whether the subscription request came from the actual user of the address), it was difficult to obtain a large corpus of newsletters without having it dominated by a handful of ‘daily newsletters’.

In the end it was decided that the number of newsletters successfully subscribed to (154, if we restrict the count to those that required confirmed opt-in and include no more than five newsletters of each kind) was too small to draw meaningful conclusions from.

What we did look at was whether, for example, using confirmed opt-in and/or DKIM increased the likelihood of a newsletter being delivered. While results suggested that this was the case, it could not be shown with sufficient statistical significance. Hopefully, in the next test (by which time we intend to have increased the number of newsletters subscribed to) we will have a clearer picture.

We also hope to be able to report on individual products’ performance on filtering newsletters (although this will not count towards the final score or VBSpam certification).

This month’s test – the 14th VBSpam test – included 17 full solutions (two of which were new to the VB test bench) and two DNS blacklists. All of the full solutions achieved a VBSpam award, but only a handful of products did so without blocking any legitimate email.

The test set-up

The VBSpam test methodology can be found at http://www.virusbtn.com/vbspam/methodology/. As usual, email was sent to the products in parallel and in real time, and products were given the option to block email pre-DATA. Three products chose to make use of this option.
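For readers unfamiliar with the term, blocking ‘pre-DATA’ means rejecting a message during the SMTP envelope stage – typically on the basis of the connecting IP address or the envelope sender – before the sending server has issued the DATA command and transmitted the message content. A schematic transaction (purely illustrative; the hostnames and response text are invented) might look like this:

    S: 220 mx.example.com ESMTP
    C: HELO mail.example.net
    S: 250 mx.example.com
    C: MAIL FROM:<sender@example.net>
    S: 250 OK
    C: RCPT TO:<user@example.com>
    S: 550 5.7.1 Rejected: connecting IP address is listed on a DNS blacklist

Because the message body is never transferred, pre-DATA blocking saves bandwidth and processing, and the sending server knows immediately that the message was refused.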

As in previous tests, the products that needed to be installed on a server were installed on a Dell PowerEdge R200, with a 3.0GHz dual core processor and 4GB of RAM. The Linux products ran on SuSE Linux Enterprise Server 11; the Windows Server products ran on either the 2003 or the 2008 version, depending on which was recommended by the vendor.

To compare the products, we calculate a ‘final score’, which is defined as the spam catch (SC) rate minus five times the false positive (FP) rate. Products earn VBSpam certification if this value is at least 97:

SC - (5 x FP) ≥ 97
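Put differently, every 0.2% of legitimate email blocked costs a product a full point of final score. A minimal sketch of the calculation (the function names are our own and purely illustrative):

    def final_score(sc_rate, fp_rate):
        # Both rates are percentages, e.g. sc_rate=99.50 and fp_rate=0.10
        return sc_rate - 5 * fp_rate

    def earns_vbspam_award(sc_rate, fp_rate, threshold=97.0):
        # A product is certified if SC - (5 x FP) is at least 97
        return final_score(sc_rate, fp_rate) >= threshold

    print(final_score(99.50, 0.10))         # 99.0
    print(earns_vbspam_award(99.50, 0.10))  # True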

The email corpus

The test ran for 16 consecutive days, from 12am GMT on Saturday 18 June 2011 until 12am GMT on Monday 4 July 2011.

The corpus contained 293,757 emails, 291,304 of which were spam. Of these, 186,284 were provided by Project Honey Pot and 105,020 were provided by Abusix; in both cases, the messages were relayed in real time, as were the 2,453 legitimate emails.

Figure 1 shows the average catch rate of all full solutions throughout the test. To avoid the average being skewed by poorly performing products, we excluded the highest and lowest catch rate for each hour.


Figure 1. Average catch rate of all full solutions throughout the test.
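As an illustration of how such an hourly average can be computed – a minimal sketch for illustration only, not the actual test harness code, with an arbitrary function name:

    def hourly_average(catch_rates):
        # catch_rates: one catch-rate percentage per full solution for a given hour.
        # Discard the single highest and single lowest value so that one outlier
        # does not skew the result, then average what remains.
        if len(catch_rates) <= 2:
            return sum(catch_rates) / len(catch_rates)
        trimmed = sorted(catch_rates)[1:-1]
        return sum(trimmed) / len(trimmed)

    # One poorly performing product does not drag the average down:
    print(hourly_average([99.9, 99.8, 99.7, 99.6, 82.0]))  # approximately 99.7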

Those comparing this month’s results with those of previous tests will notice an increase in the number of false positives seen this month, with just four full solutions avoiding them altogether. This is largely because of one sender in the ham corpus that for several days found itself listed on a number of blacklists – allegedly because the sender’s server had been abused and used for sending spam.

No one will deny that blacklisting an IP address that is also used for sending legitimate email will cause problems for the latter. The anti-spam community is divided over when (and whether) blocking is justified in such a case. We take the conservative approach and count legitimate emails missed in this manner as false positives.

Of course, we also understand the view that by blacklisting an IP, the system administrator is forced to fix their problems and therefore the total damage incurred may actually be less than if the IP were left unblocked. To avoid results being excessively skewed by a single blacklisted sender, we count no more than four false positives per IP address per product.

Results

BitDefender Security for Mail Servers 3.0.2

SC rate: 99.88%

FP rate: 0.00%

Final score: 99.88

Project Honey Pot SC rate: 99.88%

Abusix SC rate: 99.90%

BitDefender’s anti-spam solution continues to be the only product to have won a VBSpam award in all 14 of our tests. This month’s results show that this is more than just a matter of the developers being brave enough to submit their product every time – thanks in part to a total lack of false positives, the product achieved this month’s second highest final score.

Fortinet FortiMail

SC rate: 99.79%

FP rate: 0.00%

Final score: 99.79

Project Honey Pot SC rate: 99.74%

Abusix SC rate: 99.88%

It seems that 13 is no unlucky number for Fortinet, as this month the company’s FortiMail appliance achieved its 13th VBSpam award in as many entries – and with the third highest final score. Moreover, it was one of just four products that did not block any legitimate emails.

GFI MailEssentials

SC rate: 99.61%

FP rate: 0.16%

Final score: 98.79

Project Honey Pot SC rate: 99.49%

Abusix SC rate: 99.82%

On several occasions we have seen a product suffer from a number of false positives in its debut test, but significantly reduce that number in the next test – probably because its developers found settings that were more suited to our test set-up. This was certainly the case with GFI’s MailEssentials, which on this occasion only misclassified legitimate emails from the aforementioned heavily blacklisted IP address. Thus, with four false positives and a good spam catch rate, MailEssentials easily wins its second VBSpam award.

Halon Mail Security

SC rate: 99.57%

FP rate: 0.04%

Final score: 99.36

Project Honey Pot SC rate: 99.73%

Abusix SC rate: 99.28%

In its third VBSpam test, Halon almost equalled its spam catch rate from the last test, but did so while incurring a single false positive – the product’s first since it joined the tests. That is still fewer than the average product incurred, though, and with a decent final score the virtual appliance achieves its third VBSpam award.

Kaspersky Anti-Spam 3.0

SC rate: 99.48%

FP rate: 0.00%

Final score: 99.48

Project Honey Pot SC rate: 99.41%

Abusix SC rate: 99.60%

During the running of this test, Kaspersky Lab celebrated its 14th birthday – no small achievement in an industry where things seem to change daily. The Russian company receives another VBSpam award as a belated birthday present, albeit one that was worked hard for. A slightly improved spam catch rate and the fact that this was one of only a small number of products without false positives should give the developers something to be pleased about.

Libra Esva 2.0

SC rate: 99.97%

FP rate: 0.16%

Final score: 99.15

Project Honey Pot SC rate: 99.95%

Abusix SC rate: 99.99%

SC rate pre-DATA: 97.95%

For the second time in a row, Libra Esva caught more spam than all but one other product, missing fewer than 100 out of more than 290,000 spam messages. Unfortunately for the Italian product’s developers, it blocked the same legitimate emails as many other products, which saw it drop to a slightly lower position in the league table. The company’s eighth VBSpam award in as many tests should keep the developers motivated to improve on this next time though.

McAfee Email Gateway (formerly IronMail)

SC rate: 99.88%

FP rate: 0.12%

Final score: 99.27

Project Honey Pot SC rate: 99.88%

Abusix SC rate: 99.88%

As in previous tests, McAfee’s Email Gateway appliance demonstrated an excellent spam catch rate, missing fewer than one in 800 spam messages. There were three false positives this time – all from the same IP address (though not ‘that’ IP address) – but that didn’t stop the product from achieving its 12th consecutive VBSpam award.

McAfee SaaS Email Protection

SC rate: 99.91%

FP rate: 0.33%

Final score: 98.28

Project Honey Pot SC rate: 99.90%

Abusix SC rate: 99.93%

The fact that email security is not a one-size-fits-all business is demonstrated by the different kinds of products in our tests, and the fact that even a single company such as McAfee offers a variety of solutions to protect the inboxes of various types and sizes of organizations.

McAfee SaaS Email Protection is the third product from the security giant to participate in the VBSpam tests. As the name suggests, this is a hosted solution that receives email ‘in the cloud’ and then relays only the filtered email to the customer. If done well, this can save the customer valuable time and resources. Many customers will also be interested in the product’s numerous additional features, varying from malware scanning to email encryption and other kinds of policy-based controls on outbound email.

In our tests, we only looked at the product’s inbound spam-filtering capabilities. The spam catch rate was certainly impressive, with fewer than one in 1,000 spam messages missed. There were eight false positives (from two different email addresses), which means that users may occasionally have to search the product’s quarantine for legitimate email, but this may partly be because it was the product’s first VBSpam test. In any case, the final score was high enough to win the product a VBSpam award.

OnlyMyEmail’s Corporate MX-Defender

SC rate: 99.999%

FP rate: 0.00%

Final score: 100.00

Project Honey Pot SC rate: 99.999%

Abusix SC rate: 100.00%

With only two missed spam messages and no false positives in the last test, OnlyMyEmail’s MX-Defender had set the bar extremely high for this time around. Despite this, the hosted solution managed to match its last performance – once again avoiding false positives altogether and missing just two spam emails, making this the fifth time in a row that the product has achieved the highest spam catch rate.

With such a stunning result and a final score rounded to 100.00, MX-Defender ensures that the bar remains very high not just for itself, but for all products in all tests to come.

Sophos Email Appliance

SC rate: 99.90%

FP rate: 0.24%

Final score: 98.67

Project Honey Pot SC rate: 99.87%

Abusix SC rate: 99.94%

Despite a significantly improved spam catch rate, six false positives caused the final score for Sophos’s Email Appliance to drop to a mid-table position. No doubt this will cause some disappointment among the product’s developers, but it was not enough to deny the product its 10th VBSpam award, and it should motivate the developers to work on improving the final score next time.

SPAMfighter Mail Gateway

SC rate: 99.78%

FP rate: 0.08%

Final score: 99.38

Project Honey Pot SC rate: 99.78%

Abusix SC rate: 99.79%

For the third time in a row SPAMfighter sees both its false positive rate decrease (just two legitimate emails were missed this time) and its spam catch rate improve. This shows that the product’s developers are working hard and an 11th consecutive VBSpam award is their reward.

SpamTitan

SC rate: 99.94%

FP rate: 0.04%

Final score: 99.73

Project Honey Pot SC rate: 99.94%

Abusix SC rate: 99.93%

Unlike many other products in this test, SpamTitan saw its false positive rate reduced – on this occasion the virtual appliance missed just one legitimate email. The product’s spam catch rate equalled that of the previous test and was outdone by just two other products on the test bench. As a result, the Irish product easily achieves another VBSpam award – its 11th in as many tests.

Spider Antispam

SC rate: 99.63%

FP rate: 0.33%

Final score: 98.00

Project Honey Pot SC rate: 99.75%

Abusix SC rate: 99.43%

The Czech Republic is home to a number of security companies and Amenit, the developer of Spider Antispam, is one of them. The company offers a number of email security solutions. We tested its standard anti-spam solution Spider Antispam, but the company also offers Spider Mail Protection, which combines spam filtering with two anti-malware engines.

Both products are hosted solutions, where the bad messages are filtered out remotely, away from the customer’s premises – in this case the filtering is performed in a number of different data centres spread throughout Europe. Like most software-as-a-service solutions, Spider Antispam stores email for delivery even when the customer’s mail server is down, and it can also be used for outbound spam filtering, as well as for adding DKIM signatures to outgoing email. With the growing importance of sender reputation, this could certainly help the delivery rates of outgoing email.

In our test, we only looked at the product’s inbound spam-filtering capabilities. These were certainly good, with 99.63% of all spam caught by the product. There were eight false positives, which lowered the final score a little (and gives the developers something to work on), but this was not enough to prevent the product from winning a VBSpam award in its first test.

Symantec Messaging Gateway 9.5 powered by Brightmail

SC rate: 99.87%

FP rate: 0.04%

Final score: 99.67

Project Honey Pot SC rate: 99.86%

Abusix SC rate: 99.90%

Symantec Messaging Gateway is one of those products to have combined a high spam catch rate with almost no false positives. It missed one legitimate email this time (fewer than most), which gave it the fifth highest final score and a VBSpam award to add to its collection.

The Email Laundry

SC rate: 99.80%

FP rate: 0.24%

Final score: 98.58

Project Honey Pot SC rate: 99.73%

Abusix SC rate: 99.94%

SC rate pre-DATA: 98.73%

I continue to be amazed by the fact that The Email Laundry manages to block close to 99% of all spam during the SMTP transaction, before it has even seen the content of the email. This occasionally comes at a price though, as three of the six legitimate emails the hosted solution missed in this test were blocked at this stage of the transaction as well (although it is fair to say that blocking at this stage would also have increased the likelihood of the sender receiving a bounce message). After content filtering, fewer than one in 500 spam messages was missed – meaning that the product is worthy of its eighth consecutive VBSpam award.

Vade Retro Center

SC rate: 99.03%

FP rate: 0.20%

Final score: 98.01

Project Honey Pot SC rate: 98.91%

Abusix SC rate: 99.23%

In the previous review, we reported that Vade Retro Center had some problems filtering spam from the Abusix feed. The developers must have taken that feedback to heart as the product performed significantly better on that feed and, as a result, almost halved the percentage of missed spam. Hopefully, in the next test the product will see its false positive rate reduced (it missed five legitimate emails this time), but for now it wins its eighth VBSpam award with a slightly improved final score.

Vamsoft ORF

SC rate: 98.70%

FP rate: 0.16%

Final score: 97.89

Project Honey Pot SC rate: 98.84%

Abusix SC rate: 98.46%

ORF had not missed a single legitimate email in the last three tests, so no doubt the product’s developers will be a little disappointed to find that it missed four legitimate emails on this occasion, even if they came from the same heavily blacklisted sender that pestered many other products. Thankfully, this did not get in the way of the product earning an eighth consecutive VBSpam award.

Spamhaus ZEN+DBL

SC rate: 98.63%

FP rate: 0.16%

Final score: 97.82

Project Honey Pot SC rate: 98.59%

Abusix SC rate: 98.71%

SC rate pre-DATA: 97.49%

The ten previous VBSpam tests have shown that Spamhaus makes a concerted effort to keep its reputation blacklists from causing false positives. This, the product’s eleventh test, marked only the second time that legitimate senders in our ham corpus were blacklisted. However, it may well be that blacklisting the senders lessened the overall damage – and this should act as a warning to users of DNS blacklists that using them is no guarantee that only spam will be blocked.

SURBL

SC rate: 57.27%

FP rate: 0.00%

Final score: 57.27

Project Honey Pot SC rate: 42.50%

Abusix SC rate: 83.48%

SURBL, which stands for ‘Spam URI Realtime Blocklist’, is a blacklist of domains used (solely) for spamming purposes. Like most blacklists, it uses the DNS protocol to allow for quick look-ups of domain names against the blacklist and thus can be used for real-time scanning of spam messages.

SURBL was included in a few VBSpam tests last year as part of the MXTools suite but now returns on its own. As before, a slightly modified version of the ‘uribl’ plug-in for qpsmtpd was used to detect URLs in emails and to perform the look-ups. Users implementing SURBL may improve its performance by, for instance, following redirects in URLs and looking for URLs that are encoded in the email in some way or another. Of course, this will require more processor and/or network resources.
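As an illustration, a domain extracted from a message body can be checked against SURBL with a single DNS query. The following is a minimal sketch, assuming the publicly documented multi.surbl.org look-up zone; the function and domain names are our own:

    import socket

    def surbl_listed(domain, zone="multi.surbl.org"):
        # A listed domain resolves to an address in 127.0.0.0/8;
        # an unlisted domain produces NXDOMAIN, i.e. a lookup failure.
        try:
            return socket.gethostbyname("%s.%s" % (domain, zone)).startswith("127.")
        except socket.gaierror:
            return False

    # e.g. score or reject a message if any URL in it points to a listed domain
    # (the domain below is a hypothetical example):
    # if surbl_listed("spamvertised-site.example"):
    #     add_spam_score(message)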

It should be noted that SURBL is a partial solution that ought to be used as part of a full solution rather than on its own. Its performance should not be compared with that of any of the full solutions in this test or with that of Spamhaus, another partial solution, but one that acts on different parts of the email. While the nature of the product means its final score did not come close to the VBSpam award threshold, it certainly did not ‘fail’ the test.

With more than 57% of messages blocked, SURBL does a good job of getting rid of a lot of unwanted email, and no legitimate email was blocked. More telling perhaps, and certainly more impressive, is the fact that of all the spam messages in which our plug-in detected at least one URL (not necessarily a malicious one), 83.60% were blocked.

Tables

Conclusion

Having spent a great many hours trying to solve increasing network problems – which, in the end, proved to be caused by an incompetent router – and then having spent a lot of time subscribing to what turned out to be an insufficient number of newsletters, from the tester’s point of view this felt like a test of wasted time.

Of course, that was not the case, and the results show both a number of excellent performances (of which OnlyMyEmail’s final score of 100.00 is worth repeating) and a number of others that leave some room for improvement. The large number of products with false positives was a bit of a disappointment – even if this was largely caused by a small number of senders – but it is good to see products notching up a sufficiently high spam catch rate to make up for such glitches.

Having been unable to include results on newsletter filtering in this report, we will work hard to ensure that we have enough data available for the next test to enable us to present some interesting results on filtering, both for individual products and more generically.

The next VBSpam test will run in August 2011, with the results scheduled for publication in September. Developers interested in submitting products should email [email protected].
