VB100 Comparative review on Ubuntu Linux 10.04 LTS

2011-02-01

John Hawes

Virus Bulletin
Editor: Helen Martin

Abstract

This month's VB100 test on Ubuntu Linux saw a considerably more modest field of entrants than the Windows-based tests of late, and a strong batch of performances. John Hawes has the details.


Introduction

2010 saw some setting – and breaking – of records in the VB test lab, with several tests proving to be of truly epic proportions. 2011 promises no let-up in the steady onslaught of new solutions participating in our tests, and in the coming months we hope to see more of the diversity and innovation spotted in some of 2010’s products. We also hope to see less of the fragility, instability, lack of clarity and general bad design we saw in many others.

For now, however, we leave the cluttered Windows space behind to make our annual probe into the murkier, less cuddly but generally more robust world of Linux. In a market space that is much less crowded with small-time niche players, our Linux tests tend to be less over-subscribed than many others, and generally feature only the most committed, comprehensive security providers (most of whom make up our hardest core of regular entrants). For only the second time, we decided to run the test on the explosively popular Ubuntu distribution, which was first seen on the VB100 test bench almost three years ago (see VB, June 2008, p.16).

Despite being later than usual, the product submission deadline of 5 January caused problems for developers in some regions thanks to varying holiday times. For at least a couple of major vendors there was no submission this month, either due to lack of support for the platform chosen or due to a lack of resources to prepare a submission so close to the New Year. Other vendors chose to skip this month’s test for other reasons. Nevertheless, a strong field of 14 entrants arrived on deadline day, covering the bulk of our regulars.

Platform and test sets

Having last looked at Ubuntu version 8.04 a few years ago, we expected a few improvements in the current Long Term Support version 10.04, released in mid-2010. However, the installation process showed little sign of such improvement, with a fairly rudimentary command-line-driven set-up system, which nevertheless did the job adequately. The most fiddly part was the software selection system, which seemed far from intuitive, but fortunately we required little beyond the basics of a fileserver, intending to add in any additional dependencies on a per-product basis. Once the vagaries of the interface had been conquered, the actual work of installing was rapid and relatively undemanding, and with the system up and running, standard controls enabled implementation of all the settings we required in short order.

As in the previous test on this platform, the installer provided no graphical desktop by default, which seems a sensible approach for a server platform; graphical interfaces are generally unnecessary in the day-to-day running of services, and can be both a performance drain and a security risk. It seems likely that many if not most machines running the platform under test would operate like this, and indeed even in a setting as small as the VB test lab we run a number of Linux machines with no windowing system, including some with older versions of Ubuntu. Nevertheless, some of the vendors taking part indicated that their solutions were geared towards graphical operation, so we had to hope that traditional command-line methods would also be supported.

The main issue we expected to see was with on-access scanning, which is always slightly fiddly on Linux. In the past there have been three main approaches: protecting Samba shares only via Samba vfs objects, which usually entails little more than an added line or two in the Samba configuration file; the open-source dazuko system, which allows more granular control of protection over different areas of the system; and proprietary methods, which can vary greatly from provider to provider. Dazuko has been somewhat awkward to set up in the past, involving compilation from source and often requiring special flags depending on the platform, but in some quick trials this month there were no problems in getting it up and running. A new and improved version, dazukofs, is also available and looked likely to be used by some products.
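
As a rough illustration of the first two approaches (the module name, share name and version number below are invented for the example, not taken from any product in this test), enabling a vendor-supplied Samba vfs object typically amounts to an extra line or two in smb.conf, while dazuko is compiled from source and inserted as a kernel module:

    # Excerpt from /etc/samba/smb.conf: a hypothetical vendor vfs module
    # protecting a single share (restart or reload smbd after editing)
    [shared]
       path = /srv/samba/shared
       vfs objects = vscan-example

    # Typical dazuko build-and-load sequence (version illustrative);
    # some platforms require extra configure flags
    tar xzf dazuko-2.3.9.tar.gz
    cd dazuko-2.3.9 && ./configure && make
    sudo insmod dazuko.ko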

Building this month’s test sets proved something of a challenge, however, after problems with hardware, software and the human factor set things back several weeks. The imposed delays allowed time to integrate several new malware feeds into our collection processes, which added considerably to the number of samples included in the raw sets. With time pressing, test sets were built with minimal initial filtering – the verification and classification process continued while the tests were run. With these issues having eaten heavily into our already shortened month, and well aware that the large test set sizes would mean longer test times for all, we were in some hurry to get things moving.

Fortunately, little work was required in building the core certification sets. The latest WildList available on the deadline date (the November list, released in late December) featured mainly standard worms and online gaming password-stealers, and none of the major new file infectors of the sort that have been causing problems for many products of late. As replicating such viruses in large numbers takes considerably longer than verifying less complex items, the set was compiled quickly. With several older file infectors removed from the list it shrank to a little over 5,000 samples, the bulk of which came from the two remaining file infectors, a single strain of W32/Virut and the venerable W32/Polip.

The clean set was expanded with the usual batch of new items, focusing mainly on business-related tools this month to reflect the typical user base of the platform. The speed sets were augmented as usual by a selection of Linux files, this time taken from the core directories of one of our standard server systems. Some doctoring of our test automation processes was required to fit with the different platform – the on-access tests and performance measures were all run from a Windows XP Professional SP3 client system, to emulate normal usage for a Samba file server protecting a network of Windows machines. To ensure fairness, speed tests were run one at a time with other network activity kept to a minimum.

With everything set up, all that remained was to get to grips with the solutions themselves. From past experience, we expected to see some nice, simple designs in between more challenging approaches, with ease of use depending greatly on the clarity of documentation as well as use of standard Linux practice.

Results

Avast Software avast! for Linux 3.2.1

Version information: VPS 110105-1.

Avast started things off nicely, with a compact 37MB install bundle in tar.gz format, containing three .DEB packages. Instructions were short and simple, running through the steps of installing the .DEBs, making a few tweaks to the system and getting the dazuko modules compiled and installed. With concise and comprehensive advice, it took only a minute or two to get everything set up just as we wanted.
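
As a hedged sketch of what such a set-up involves (the bundle and package names below are hypothetical stand-ins, not Avast's actual filenames), the work boils down to unpacking the archive, handing the .DEB packages to dpkg and then building dazuko as in the earlier example:

    tar xzf avast-fileserver-bundle.tar.gz      # hypothetical bundle name
    cd avast-fileserver-bundle
    sudo dpkg -i *.deb                          # installs the three packages
    # ...then apply the few system tweaks described in the instructions
    # and compile and insert dazuko as shown earlier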

With ample man pages and all the required executables easy to find, setting up and automating the full test suite was a simple process, and running through it was as rapid as usual. Scanning speeds on demand were pretty decent, and although on-access overheads were perhaps somewhat higher than expected, they remained perfectly respectable.

On checking the results we found solid scores in all sets, as expected, but in the RAP sets there was a bit of a surprise in that scores for the last few weeks were completely absent. We re-ran the tests keeping a close eye on the console output, and quickly diagnosed that the scanner had crashed on a malformed sample in the extended set, with a segmentation fault error. The test was repeated, skipping the offending section of the set. No further problems emerged, and even with this doubling of effort the hugely impressive speed of scanning infected files meant that all tests were completed in well under 24 hours. The core sets were handled effortlessly, and Avast notches up another successful VB100 pass.

Please refer to the PDF for test data

AVG 8.5.863

Version information: Virus database version 271.1.1/3356; Scanner version 8.5.850.

AVG’s Linux solution is a little more bulky, arriving as a 93MB .DEB package along with a licence key to activate it. The set-up process was simple enough to start with, but once the package was installed considerably more work was required to decipher a rather fiddly configuration system. This involved passing configuration changes into the product as long and easily mistyped strings, rather than making changes to human-readable, self-explanatory configuration files, as is generally the case for Linux software. The layout, with multiple binaries with overlapping and bewildering names and functions, was also less than helpful, and the man pages proved pretty eye-watering too, but in the end we got things working just well enough to get through the tests. The product offers several methods of providing on-access protection, but we opted for the dazuko approach as the simplest to operate.

Once the configuration had been adjusted to our needs and the syntax of the scanner tool figured out, running the tests was much less of a headache. Speeds and overheads were good and detection rates splendid; with no issues in the core sets, AVG comfortably earns a VB100 award.

Please refer to the PDF for test data

Avira AntiVir Server 3.1.3.4

Version information: SAVAPI-Version 3.1.1.8; AVE-Version 8.2.4.136; VDF-Version 7.11.1.20 created 20110104.

Avira’s Linux server solution was provided as a 55MB tar.gz archive bundle, along with an extra 37MB of updates. Inside the main bundle was a folder structure containing an install script, which ran through the set-up process clearly and simply, including compilation and insertion of dazuko. Some additional options included a GUI for the Gnome desktop and a centralized management system, and the installation even informs you where the main control binaries are located, to avoid the scrabbling around often experienced with less helpful products. Despite its clarity and simplicity, the set-up still ends by urging the user to read the product manual for more detailed information.

After this exemplary install, using the product proved similarly unfussy and user-friendly, adhering to standard Linux practices and thus making all the required controls both easy to locate and simple to operate. Documentation was also clear and comprehensive. Speeds were super-fast and super-light, and detection rates were as excellent as ever. With no problems in the core sets, Avira easily earns another VB100 award.

Please refer to the PDF for test data

BitDefender Security for Samba File Servers 3.1.2

BitDefender’s product was a little different from most, with its 100MB submission provided as a .RUN file. When run as the filename suggested, this installed the packages and set things up as required. Part of the set-up involved compiling components (the Samba vfs object code required for the on-access component), and several other dependencies also had to be met prior to installation, but it was not too much effort and completed in reasonable time. Once again, configuration was geared towards complexity rather than user-friendliness, with lengthy and fiddly commands required to bring about any change in settings, but it wasn’t too horrible once the esoteric formulae for generating adjustments had been worked out.

Running through the tests proved smooth and stable. On-demand scans were somewhat slower, and on-access overheads somewhat heavier, than might be expected, but detection rates were impeccable and the core sets were stomped through without a problem, earning BitDefender a VB100 award.

Please refer to the PDF for test data

Central Command Vexira Professional 6.3.14

Version information: Virus database version: 13.6.130.0.

Central Command has recently become a fixture in our comparatives, with a run of successes under its belt. This month the product was presented as a pair of .tgz archive files, the main product measuring 57MB and the additional update bundle 65MB. Unpacking the main bundle revealed a handful of .DEB files and a Perl install script. This ran through tidily, getting everything set up in good order. Some additional instructions were kindly provided by the developers with details of updating and adjusting settings. A secondary set-up script was also provided to change the settings of the Samba configuration, enabling on-access protection.

With everything set up, testing proved a breeze, although configuration of the on-access scanner was somewhat limited – at least as far as could be judged from the sparse documentation. Nevertheless, the default settings did well and it tripped along at a good pace. Scanning speeds were not bad and overheads were light, with the usual fairly decent level of detections. With no problems in the clean or WildList sets, Central Command earns another VB100 award for its growing collection.

ItW: 100.00%
ItW (o/a): 100.00%
Polymorphic: 90.52%

Please refer to the PDF for test data

eScan for Linux File Servers 5.0-2

The Linux version of eScan comes as a handful of .DEB packages, installation of which required resolving a few dependencies (for one package, several components of the X desktop system) – clearly this was one of those products leaning towards graphical rather than command-line usage. This was not a problem: although there was no evidence of configuration for some aspects (notably the on-access protection) at the local console level, it was easily accessible through a browser-based web administration tool. Checking this out from another machine, we found it fairly clear, but in places a little prone to flakiness, resetting our changes on a number of occasions as soon as ‘apply’ was clicked. Local console documentation also seemed a little sparse, but we soon figured things out and got the test moving along.

Speeds held up well against the rest of the field, and detection was solid. All looked to be going swimmingly until a single item went undetected in the WildList set on access – with a default setting to ignore files larger than 13MB (a reasonably sensible level), eScan was extremely unlucky in that this month’s WildList contained a larger sample than this (25MB). This bad luck denies eScan a VB100 award this month, despite a generally decent performance.

Please refer to the PDF for test data

ESET File Security 3.0.20

ESET’s Linux edition was provided as a single 41MB .DEB package, and installed easily with minimal fuss. Clear instructions showed how to set up protection of Samba shares using a vfs object (dazuko-style protection was also available), and the commands and configuration were properly laid out, conforming to expected norms.

Running the test was fairly painless, although a couple of files in our extended sample sets did cause segmentation faults and required the restarting of scans. Speeds were pretty zippy and overheads nice and light, particularly in the binaries section. Detection rates were solid across the sets. The clean set threw up a few warnings of potentially unwanted items (most identified precisely and accurately) and a couple of packer warnings, but nothing could stop ESET’s inexorable progress towards yet another VB100 award.

Please refer to the PDF for test data

Frisk F-PROT Antivirus for Linux File Servers 6.3.3.5015

Version information: Engine version: 4.5.1.85; virus signatures 2011010407446e8837db11f3f34f0bfe050aa91a01a9.

Frisk’s Linux product came as a 24MB .tgz archive, with an accompanying 26MB of updates and a small patch file. Installation was basic and rudimentary, with a little install script creating symlinks to the main components without moving them from where they had originally been unpacked – a nice, unobtrusive approach as long as it is expected. Getting things up and running proved a breeze, with both dazuko and Samba vfs objects supported (dazuko was used for all our on-access tests), and configuration and operation were made easy thanks to conformance with the expected behaviour for Linux solutions.
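
As an illustration of this unobtrusive style of installation (all paths and names below are hypothetical, not Frisk's actual layout), the components stay where they were unpacked and only symlinks are placed in the standard locations:

    tar xzf fp-bundle.tgz -C /opt                         # hypothetical archive name
    ln -s /opt/fp-scanner/scanner /usr/local/bin/scanner  # command-line scanner
    ln -s /opt/fp-scanner/scanner.8 /usr/local/man/man8/scanner.8
    # removal is equally clean: delete the symlinks and the unpacked tree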

With good scanning speeds and no stability problems, tests completed in excellent time. Scores were decent, with a dip in the later parts of the RAP sets but a strong resurgence in the proactive week. No problems were noted in the core certification sets and a VB100 award is duly earned by Frisk.

Please refer to the PDF for test data

Kaspersky Anti-Virus for Linux File Servers 8.0.0.136

Despite the fact that our test deadline clashed with the Russian Christmas holidays, Kaspersky managed to submit two products this month – both from a new and heavily re-engineered Linux range, and with the slightly worrying assertion that the developers had intended them to be operated via a GUI. Installing the first – which seemed to be slightly more business-focused – proved fairly simple at the outset, with a handful of installer packages provided in different formats and a readme file for instructions. Sadly this proved not to be displayable, let alone legible, and after initially running through the set-up steps of the .DEB package and finding more help was needed, we resorted to consulting the PDF documentation provided on the company’s website.

This showed a horrendously complex layout for operating the product from the command line, which was eventually mastered and rendered reasonably usable with some practice and much perusal of the 215-page manual, but left us hankering for some nice simple, readable configuration files. We tried some work using a web admin GUI, but found this equally fiddly, clumsy and unresponsive. Logging was also a major problem, with detection events dumped from logs after they reached a certain size – despite having set limits in the product’s controls to a considerably higher level than they ever reached.

Eventually we got things moving along though, and scanning speeds proved to be very good, with excellent overheads on access with the default settings; turning the settings up to include archive formats and the like added to the overheads considerably, of course. Detection scores were very good, with no problems in the WildList set, but a single false positive in the clean sets was enough to deny Kaspersky’s business product a VB100 award.

Please refer to the PDF for test data

Kaspersky Endpoint Security for Linux Workstations 8.0.0.24

The second Kaspersky product seemed just about the same as the first, only with different names for some components and no sign of the web admin tool. Once again, we had to consult the manual and follow its advice to create a 40-odd-line configuration file to tweak the update settings, then enter a >50-character command to get it read in by the product, but once this was done things were all ready for us. There seemed to be some proprietary on-access system in use alongside Samba vfs objects, but it looked similar enough to dazuko to make little difference. Once again, on-access speeds were excellent, hinting that some nifty improvements had been made in this area.

Once again logging proved problematic, with the default cap set even lower this time – an initial run produced suspect results despite backing up the log database file every 30 seconds. Retrying once this had been spotted showed that the cap was removed after several restarts of the main service, but with time pressing some of the potentially suspect data still remained in the final results (which may thus be slightly inexact). Nevertheless, scores seemed close to those of Kaspersky’s first product – a fraction higher in most sets – but the same false positive, on a highly popular IM client, was enough to spoil Kaspersky’s chances of any VB100 awards this month despite a generally strong showing and solid coverage of the WildList set.

Please refer to the PDF for test data

Norman Endpoint Protection 7.20

Norman’s product proved one of the most problematic at submission time, thanks to the requirement that it be installed with a live web connection. This was hastily performed on the deadline day – a little too hastily as it turned out, as the install process announces itself complete and returns control to the command line well before it has actually finished running. Our first attempt – in which the network was reset to internal-only as soon as the install seemed to be done – was missing large portions of the product, and a second attempt was needed. This time all went OK, but we found that most of the components refused to function without an X Window System in place. We eventually managed to get some on-demand work done, but found that configuration of the on-access component was not possible without a graphical set-up (there was some confusion over whether or not a web-based GUI was expected to be fully functional – either way, we had no luck trying to use it).

In the end, we went ahead and installed the Ubuntu desktop system on one of the test machines – which was something of a mammoth task as it was not included with the standard install media and took some two hours to download, prepare and set up. With this done we finally got to see the interface, which closely resembled those of Norman’s Windows products, and was plagued with the same wobbliness, time lags and occasional freakouts. All we used in the end was the option not to automatically clean files spotted on access, and the desktop was then shut down for the speed measures.

These showed the usual fairly slow times on demand, as the Sandbox system carefully picks each file apart. Much the same was observed on access, for the first run at least, but in the ‘warm’ measures, where files were checked for the second and subsequent time, an impressive improvement was observed.

Scanning of the infected sets was extremely slow – in part thanks to the deep Sandbox analysis – and occasionally flaky, with several runs failing to complete or stopping output to logs part-way through. Several re-runs, over two full weeks and on several systems, were still not quite complete several days after the deadline for this report, and as a result some of the data presented relies in part on on-access scores, which may be a fraction lower than the product’s full capability on demand. Detection rates were less than staggering, but not too disappointing, and with the WildList and clean sets causing no problems, a VB100 award is just about earned after all our efforts.

Please refer to the PDF for test data

Quick Heal Anti-Virus for Linux 12.00

Version information: Virus database: 04 January 2011.

Back to something much simpler and more user-friendly: Quick Heal’s 141MB zip archive unpacked to reveal several folders and a nice install script, which took us through the steps of getting everything up and running. After resolving a single dependency, all went smoothly, including the set-up of dazuko – Quick Heal was one of only a few products to do this itself rather than dumping the work on the sysadmin.

Configuration and documentation were clear, although man pages were lacking, and with simple, intuitive controls, testing went ahead without problems. Speeds were not brilliant, and overheads perhaps a little on the heavy side, but detection rates were impressive throughout. With no problems in the core sets, Quick Heal comfortably makes the grade for VB100 certification this month.

Please refer to the PDF for test data

Sophos Anti-Virus for Linux 7.2.3

Version information: Engine version 3.15.0; Virus data version 4.61; User interface version 2.07.298.

Sophos was another product that took most of the load off the installer’s shoulders, with its 232MB .tgz bundle containing a comprehensive installer utility. Detection of platform, compilation and insertion of required modules and so on was all carried out smoothly and automatically. A proprietary on-access hooking module is included.

Configuration was again via several control utilities, which were perhaps less than clear in their usage instructions and difficult to operate from a purely command-line setting. A web interface was also provided, but we never got it working; since the default settings got us through most of the jobs we needed to carry out without much trouble, we did not pursue it further.

Scanning speeds were excellent (especially using the default settings, where no archive types are analysed), and on-access overheads were among the very lightest. Detection rates were not bad, with RAP scores a little below what we have come to expect from this product, but fairly strong nevertheless. The core sets presented no issues, and Sophos easily earns its VB100 award this month.

Please refer to the PDF for test data

VirusBuster for Samba Servers 1.2.3_3-1.1_1

Version information: Scanner 1.6.0.29; virus database version 13.6.130.0; engine 5.2.0.28.

VirusBuster’s product proved one of the simplest to set up and test, thanks to a very similar process having already been performed with the Central Command solution. Running the installer scripts and following instructions to set up Samba settings took just a few minutes. The on-demand scanner has a slightly quirky syntax but is soon rendered familiar and friendly. However, trawling through the several configuration files in /etc in the vain hope of finding some settings for the on-access scanner was abandoned quickly.

Scanning speeds proved very good indeed, with similarly impressive on-access lags, and detection rates were pretty decent too. With just a single item in the clean sets warned about – flagged as being protected with the Themida packer – VirusBuster has no problems claiming its latest VB100 award.

Please refer to the PDF for test data

Results tables

Conclusions

As is usually the case with our Linux tests, it was something of a roller-coaster month, with moments of joy and comfort intermingled unpredictably with moments of bafflement and horror. For the most part, the products lived or died by the clarity of their documentation and the simplicity of their approach; the usability of a tool is usually significantly greater if it runs along the same lines as others of a similar ilk, rather than attempting a radical new approach. For those wishing to try something new, demanding that the user read carefully through several hundred pages of documentation – which cannot even be displayed on the machine on which they are trying to use the product – may be a little much.

Thankfully, stability has been no more than a minor problem here – as one would perhaps expect from a platform which tends to need far fewer restarts than some others. Nevertheless, we did see a few problems – notably with GUIs and with those command-line tools which try to hijack and do overly funky things with the console display, returning it to its owner bedraggled, battered and occasionally broken. All in all, we saw a strong batch of performances, with a high percentage of passes; an unlucky maximum file size setting and a single clean sample (a popular product, but a fairly old version with limited usage) caused the only issues in the certification sets. Part of this is doubtless down to the solid field of regular high‑achievers, but part may also be thanks to the absence of any new complex viruses.

We expect to see a tougher task next time around, when we revisit Windows XP and see just how many other products there are out there.

Technical details

Test environment. All products were tested on identical machines with AMD Phenom II X2 550 processors, 4GB RAM, dual 80GB and 1TB hard drives, running Ubuntu Linux Server Edition 10.04.1 LTS i386. On-access tests were performed from a client system running Windows XP Professional SP3, on the same hardware.

Any developers interested in submitting products for VB's comparative reviews should contact [email protected]. The current schedule for the publication of VB comparative reviews can be found at http://www.virusbtn.com/vb100/about/schedule.xml.

Appendix – test methodology

The following is a brief précis of how our tests are conducted. More detail is available at http://www.virusbtn.com/vb100/about/100procedure.xml.

Core goals

The purpose of the VB100 comparative is to provide insight into the relative performance of the solutions taking part in our tests, covering as wide a range of areas as possible within the limitations of time and available resources. The results of our tests should not be taken as a definitive indicator of the potential of any product reviewed, as all solutions may contain additional features not covered by our tests and may offer more or less protection depending on the configuration and operation of a specific setting and implementation.

VB100 certification is designed to be an indicator of general quality and should be monitored over a period of time. Achieving certification in a single comparative can only show that the solution in question has met the certification requirements in that specific test. A pattern of regular certification and few or no failed attempts should be understood to indicate that the solution’s developers have strong quality control processes and strong ties to industry-wide sample sharing initiatives – ensuring constant access to and coverage of the most prevalent threats.

Alongside the pass/fail data, we recommend taking into account the additional information provided in each report, and also suggest consultation of other reputable independent testing and certification organizations.

Malware detection measures

In all cases, details of malware detection rates recorded in this report cover only static detection of inactive malware present on the hard drive of the test system, not active infections or infection vectors.

For on-demand tests, products are directed to scan sample sets using the standard on-demand scan from the product interface. Where no option to scan a single folder is provided, a context-menu or ‘right-click’ scan is used; if this is not possible either, any available command-line scanning tool is used as a last resort.

In all cases the default settings are used, with the exception of automatic cleaning/quarantining/removal, which is disabled where possible, and logging options, which are adjusted where applicable to ensure the full details of scan results are kept for later processing.

In on-access measures, sample sets are accessed using bespoke tools which prompt products with on-read protection capabilities to check, and where necessary block access to, malicious files. Again, automatic cleaning and removal is disabled where possible. In solutions which provide on-write but not on-read detection, sample sets are copied from one partition of the test system to another, or written to the test system from a remote machine. In the case of solutions which offer on-read detection but default to other methods only, settings may be changed to enable on-read scanning of the malicious test sets to facilitate testing.

It is important in this setting to understand the difference between detection and protection. The results we report show only the core detection capabilities of traditional malware technology. Many of the products under test may include additional protective layers to supplement this, including but not limited to: firewalls, spam filters, web and email content filters and parental controls, software and device whitelisting, URL and file reputation filtering including online lookup systems, behavioural/dynamic monitoring, HIPS, integrity checking, sandboxing, virtualization systems, backup facilities, encryption tools, data leak prevention and vulnerability scanning. The additional protection offered by these diverse components is not measured in our tests. Users may also obtain more or less protection than we observe by adjusting product settings to fit their specific requirements.

Performance measures

The performance data included in our tests is intended as a guide only, and should not be taken as an indicator of the exact speeds and resource consumptions a user can expect to observe on their own systems. Much of the data is presented in the form of relative values compared to baselines recorded while performing identical activities on identical hardware, and is thus not appropriate for inferring specific performances in other settings; it should instead be used to provide insight into how products perform compared to other solutions available.

On-demand speed figures are provided as a simple throughput rate, taken by measuring the length of time taken to scan a standard set of clean sample files using the standard on-demand scan from the product interface. The size of the sample set is divided by the time taken to give a value in megabytes of data processed per second. On-access speeds are gathered by running a file-opening tool over the same sets; speeds are recorded by the tool and compared with the time taken to perform the same action on an unprotected system (these baselines are taken several times and an average baseline time is used for all calculations). The difference in the times is divided by the size of the sample set, to give the additional time taken to open the samples in seconds per megabyte of data.
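
As a minimal worked example of that arithmetic (all numbers invented purely for illustration), the two measures reduce to a throughput figure in MB/s on demand and an overhead figure in seconds per MB on access:

    SET_MB=2000        # size of the clean sample set, in MB (illustrative)
    SCAN_S=250         # time taken by the on-demand scan, in seconds
    ACCESS_S=300       # time to open the whole set with protection enabled
    BASELINE_S=60      # average time to open the set on an unprotected system

    # on-demand throughput: megabytes of data processed per second
    echo "$SET_MB $SCAN_S" | awk '{printf "%.2f MB/s\n", $1/$2}'

    # on-access overhead: extra seconds per megabyte of data opened
    echo "$ACCESS_S $BASELINE_S $SET_MB" | awk '{printf "%.4f s/MB\n", ($1-$2)/$3}'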

Both on-demand and on-access measures are made with the default settings, with an initial ‘cold’ measure showing performance on first sight of the sample sets and ‘warm’ measures showing the average of several subsequent scans over the same sets. This indicates whether products are using smart caching techniques to avoid re-scanning items that have already been checked.

An additional run is performed with the settings adjusted, where possible, to include all types of files and to scan inside archive files. This is done to allow closer comparison between products with more or less thorough settings by default. The level of settings used by default and available is shown in the archive type table. These results are based on scanning and accessing a set of archives in which the Eicar test file is embedded at different depths. An uncompressed copy of the file is also included in the archives with its file extension changed to a random one not used by any executable file type, to show whether solutions rely on file extensions to determine whether or not to check them.
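
A minimal sketch of how such a nested set might be built (assuming a copy of the standard eicar.com test file is already to hand, and using zip purely as an example of one archive format among the many tested):

    cp eicar.com eicar.qzxv               # same content, random non-executable extension
    zip depth1.zip eicar.com eicar.qzxv   # test file at archive depth 1
    zip depth2.zip depth1.zip             # ...at depth 2
    zip depth3.zip depth2.zip             # ...at depth 3, and so on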

System resource usage figures are recorded using the Windows performance monitor tool. Levels of memory and CPU usage are recorded every five seconds during each of several tasks. The on-access speed test periods plus an additional on-access run over the system partition are used for the ‘heavy file access’ measures, and periods of inactivity for the ‘idle system’ measures. During all these measures the solution’s main interface, a single instance of Windows Explorer and a single command prompt window are open on the system, as well as any additional windows required by the testing tools. The results are compared with baseline figures obtained during the same baseline test runs used for the on-access speed calculations, to produce the final results showing the percentage increase in resource usage during the various activities covered.

Sample selection and validation

The sample sets for the speed tests are built by harvesting all available files from a selection of clean systems and dividing them into categories of file types, as described in the test results. They should thus represent a reasonable approximation of the ratios of different types of files on a normal system. The remaining portion of the false positive sample set is made up of a selection of items from a wide range of sources, including popular software download sites, the download areas of major software development houses, software included on pre-installed computers, and CDs and DVDs provided with hardware and magazines.

In all cases packages used in the clean sets are installed on test systems to check for obvious signs of malware infiltration, and false positives are confirmed by solution developers prior to publication wherever possible. Samples used are rated for significance in terms of user base, and any item adjudged too obscure or rare is discarded from the set. The set is also regularly cleaned of items considered too old to remain significant.

Samples used in the infected test set also come from a range of sources. The WildList samples used for the core certification set stem from the master samples maintained by the WildList Organization. These are validated in our own lab, and in the case of true viruses, only fresh replications generated by us are included in the test sets (rather than the original samples themselves). The polymorphic virus set includes a range of complex viruses, selected either for their current or recent prevalence or for their interest value as presenting particular difficulties in detection; again all samples are replicated and verified in our own lab.

For the other sets, including the RAP sets, any sample gathered by our labs in the appropriate time period and confirmed as malicious by us is considered fair game for inclusion. Sources include the sharing systems of malware labs and other testing bodies, independent organizations and corporations, and individual contributors as well as our own direct gathering systems. All samples are marked with the date on which they are first seen by our lab. The RAP collection period begins three weeks prior to the product submission deadline for each test, and runs until one week after that deadline; the deadline date itself is considered the last day of ‘week -1’.

The sets of trojans and ‘worms and bots’ are rebuilt for each test using samples gathered by our labs in the period from the closing of the previous RAP set until the start of the current one. An exception to this rule is in the ‘worms and bots’ set, which also includes a number of samples which have appeared on WildLists in the past 18 months.

All samples are verified and classified in our own labs using both in-house and commercially available tools. To be included in our test sets all samples must satisfy our requirements for malicious behaviour; adware and other ‘grey’ items of potentially unwanted nature are excluded from both the malicious and clean sets as far as possible.

Reviews and comments

The product descriptions, test reports and conclusions included in the comparative review aim to be as accurate as possible to the experiences of the test team in running the tests. Of necessity, some degree of subjective opinion is included in these comments, and readers may find that their own feelings towards and opinions of certain aspects of the solutions tested differ from those of the lab test team. We recommend reading the comments, conclusions and additional information in full wherever possible, and congratulate those whose diligence has brought them this far.
