Testing times ahead?

2007-04-01

Richard Ford

Florida Institute of Technology, USA

Attila Ondi

Florida Institute of Technology, USA
Editor: Helen Martin

Abstract

What lies ahead for anti-virus testing programmes with the introduction of new protection schemes that move away from scanner-based detection? Richard Ford and Attila Ondi look to the future of AV testing.


Introduction

Product reviewing has a long and sometimes contentious history in the anti-virus world. Furthermore, unlike the user of a word processor or a video game, the user of an anti-malware product is ill-placed to measure its utility: although usability and performance can readily be determined, the crucial question of how much protection a product provides is elusive.

As if this were not enough, new developments in anti-malware research mean that this problem could worsen considerably, leaving not only users but reviewers confounded when attempting to evaluate new product developments. As forewarned is forearmed, this article outlines some of the history of anti-virus product reviews and certification, and highlights some of the challenges new technology could bring with it.

The goal of the tester

The goals of the product tester vary dramatically depending on the audience of the tests. Despite these variations, testing an array of products usually involves either putting them through some sort of ‘pass/fail’ test (as in product certification schemes), or ranking them in an ordinal way – usually by deciding which is best when measured against a particular set of criteria.

Tests of anti-virus software have evolved greatly from their fairly simple beginnings. Whereas initially products were ranked based only on raw detection scores, added emphasis was soon placed on the detection of viruses ‘in the wild’ (though the concept of ‘in the wild’ has become increasingly dated, and is no longer entirely representative of the threats facing users), on response time, and on the overhead the product imposes on the host operating system.

Clearly, how one balances the importance of different areas is somewhat subjective – for example, is missing one zoo virus 100 times less important than missing one wild virus? Most people are fairly comfortable reading these kinds of tests, simply because they are so common.

However, it is important to realize that current tests are straightforward because the information provided by the products has always essentially been binary: products either detect an object or they don’t. Thus, it has been easy to count false negative and false positive rates, and to provide readers with good, useful data.

The binary nature of traditional anti-virus products (detect or don’t detect) meant that there has never been much meaning in any middle ground. Files detected using heuristics or other ‘softer’ techniques have generally counted toward detection, so reviewers have not had to deal with degrees of detection in any great way.

In addition, tests can usually be performed using just one or two machines; complex networks are seldom required. The benefit of the anti-virus software is measurable at the level of the individual machine: the better protected a single machine is, the better for all. Thus, our review criteria are geared toward single-unit-centric classifiers.

Coming to a machine near you...

It would be wonderful if reviewers could continue these reviewing practices for the foreseeable future, but the truth is that as new threats evolve, new protection paradigms also emerge.

For example, in the anti-spam world, several products have explored the use of throttling techniques to limit the number of spam messages a user receives. Such a product never actually determines whether or not an individual message is spam. Instead, as mail from a particular endpoint exceeds some threshold or other metric, a delay is injected into the processing of subsequent messages.
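To make the idea concrete, here is a minimal sketch of such a throttle in Python. Everything in it is illustrative rather than drawn from any particular product: the window, threshold and delay parameters are hypothetical values that a real product would tune empirically.

    import time
    from collections import defaultdict, deque

    # Illustrative parameters only: a real product would tune these empirically.
    WINDOW_SECONDS = 60     # sliding window over which a sender's rate is measured
    RATE_THRESHOLD = 10     # messages per window before throttling begins
    DELAY_PER_EXCESS = 2.0  # seconds of delay added per message over the threshold

    history = defaultdict(deque)  # sender endpoint -> timestamps of recent messages

    def throttle_delay(endpoint, now=None):
        """Return the delay (in seconds) to inject before processing a message.

        Note that no message is ever classified as spam or non-spam: a sender
        that exceeds the rate threshold simply sees ever-increasing delays.
        """
        now = time.time() if now is None else now
        window = history[endpoint]
        window.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while window and window[0] < now - WINDOW_SECONDS:
            window.popleft()
        excess = len(window) - RATE_THRESHOLD
        return max(0, excess) * DELAY_PER_EXCESS

A legitimate sender rarely crosses the threshold and never notices the mechanism; a bulk mailer sees its delivery rate collapse, even though no individual message has ever been judged.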

This technique can be extremely effective, but its local impact is hard to measure without some understanding of the global picture. If a customer implements the system, what levels of spam reduction might (s)he expect? Testing the system against a corpus of spam/non-spam messages is meaningless; only by understanding the dynamics of the whole system can one determine the expected outcome.

Similarly, a product that limits worm spread by throttling network traffic is difficult to evaluate: from the perspective of a single computer, no protection is provided. Holistically, however, the speed (and therefore, ultimately, the magnitude) of an outbreak can be shifted radically. This is in keeping with biologically inspired systems, which are relatively accepting of sacrificing single units or cells for the good of the whole.
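Again, a sketch may help. The fragment below is our illustration of the connection-rate throttles proposed in the research literature, not any shipping product’s implementation; the working-set size and release rate are hypothetical. Connections to recently contacted hosts pass immediately, while connections to new hosts are released at a fixed rate.

    from collections import deque

    class WormThrottle:
        """Illustrative per-host connection throttle (parameters hypothetical).

        Normal traffic revisits a small working set of hosts and is unaffected.
        A scanning worm contacts many new hosts per second and is queued, which
        slows the outbreak across the population even though no single machine
        is 'protected' in the classifier sense.
        """

        def __init__(self, working_set_size=5, releases_per_second=1):
            self.working_set = deque(maxlen=working_set_size)
            self.releases_per_second = releases_per_second
            self.pending = deque()  # connections to unfamiliar hosts

        def request(self, dest_host):
            if dest_host in self.working_set:
                return 'allow'    # familiar host: no delay
            self.pending.append(dest_host)
            return 'queued'       # new host: wait for a release tick

        def tick(self):
            """Called once per second: release a fixed number of queued hosts."""
            released = []
            for _ in range(min(self.releases_per_second, len(self.pending))):
                host = self.pending.popleft()
                self.working_set.append(host)
                released.append(host)
            return released

Note that the throttle’s benefit is invisible in a single-machine test: it shows up only in how quickly a worm can move through a population of such machines.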

Other new technologies in the anti-virus world are equally challenging for reviewers, as they break away from single-machine solutions. Essentially, anything that doesn’t operate at the level of a classifier (‘run this, don’t run that’) is well outside the reach of current reviewing methodologies.

Looking at the big picture

While it would be nice to ignore these issues, doing so would only reinforce the anti-virus industry’s legendary resistance to new ways of doing things. New ideas that emerge need to be compared meaningfully with current best practice.

However, if we need to examine the system as a whole, testing becomes considerably more difficult, as it is not practical to do this for real. Instead, we need some way of creating our own reality for the purposes of experimentation. That means an analytical model, a testbed, or a simulation; unfortunately, there is no obvious right answer, as each approach has its own problems.

At first glance, a testbed seems the most attractive approach, as it is quite close to what reviewers do already: a test environment is created and the products are tested under ‘real world’ conditions. Unfortunately, for products that act systemically, that is a tall order. Many thousands of nodes might be required to understand how a product would fare under typical scenarios, and generating realistic conditions (for example, a realistic simulation of a user browsing the web) is non-trivial.

Analytical models are attractive in that they are elegant and provide a clean way of calculating how a product might fare. Unfortunately, anything but the simplest scenario leads rapidly to intractable mathematics. Furthermore, most systems incorporate a large amount of randomness. Consider, for example, an email worm that by chance hits a large mailing list early in its spread: it would likely become far more widespread than one that was initially ‘unlucky’.

Capturing such variability is difficult with analytical models, yet it is precisely what those interested in a product’s effectiveness need to know. It is much more useful, for example, to know that the solution works fine 90% of the time but that in the remaining 10% of cases all machines become infected, than to know that 30% of machines become infected on average.

The last approach is simulation. Up front, we should disclose a certain amount of bias: Florida Tech has spent a lot of time developing Monte-Carlo-based simulators of virus/worm spread. However, this choice of platform was based upon a careful assessment of the requirements of the problem. Simulation provides a relatively cost-effective way of determining the likely range of results from a prevention technique, and can be reconfigured rapidly to explore the effect of different parameters.
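The toy Monte Carlo trial below shows the flavour of the approach; it is vastly simpler than a real simulator, and every parameter in it is hypothetical. Crucially, it reports the distribution of outcomes across trials rather than a single average: exactly the information that an analytical mean obscures.

    import random

    def run_outbreak(n_hosts=1000, contacts_per_step=3, p_infect=0.5,
                     p_protected=0.3, steps=30):
        """One Monte Carlo trial of a toy worm outbreak.

        A fraction p_protected of hosts is immune (patched, or running some
        preventative); each infected host contacts random hosts every time
        step. Returns the final number of infected hosts.
        """
        protected = set(random.sample(range(n_hosts), int(p_protected * n_hosts)))
        patient_zero = random.choice([h for h in range(n_hosts) if h not in protected])
        infected = {patient_zero}
        for _ in range(steps):
            newly = set()
            for _ in range(len(infected) * contacts_per_step):
                target = random.randrange(n_hosts)
                if target not in protected and random.random() < p_infect:
                    newly.add(target)
            infected |= newly
        return len(infected)

    # Repeat many trials and examine the *distribution*, not just the mean.
    results = sorted(run_outbreak() for _ in range(100))
    mean = sum(results) / len(results)
    print(f'mean infected: {mean:.0f}')
    print(f'median: {results[50]}, worst decile: {results[90]}')

Because each trial is cheap, the whole experiment can be rerun with, say, a different p_protected or contact rate in seconds, illustrating the rapid reconfiguration mentioned above.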

We need help!

Regardless of what technique is chosen, products that affect the population dynamics of spread must be evaluated in terms of those dynamics. Here, a new issue emerges: there is no real agreement on how to compare different spread dynamics.

Virus researchers will be very familiar with the ‘S’ curve shown in Figure 1, which illustrates the spread of a ‘typical’ worm over time. Essentially, such curves differ in terms of speed of spread and overall magnitude. However, real-world viruses and worms don’t necessarily follow these curves at all: different spread methods and preventative measures all affect the shape of the spread. How can one compare spread shapes meaningfully? Figure 2, for example, shows two different worm outbreaks plotted as a function of time. Which shape is more desirable from the perspective of a defender? Are both bad, or is one indicative that a particular product is better than another?
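For reference, the idealized ‘S’ curve of Figure 1 is the shape produced by simple logistic growth, a standard epidemic model in which the infection rate is proportional both to the number of hosts already infected and to the number still susceptible (it is only a model: real outbreaks deviate from it, as noted above):

    \frac{dI}{dt} = \beta I\left(1 - \frac{I}{N}\right),
    \qquad
    I(t) = \frac{N}{1 + \frac{N - I_0}{I_0}\, e^{-\beta t}}

where N is the population size, I_0 the number initially infected, and \beta the effective contact rate. A preventative that reduces \beta both delays and flattens the curve, which is precisely the kind of systemic effect discussed above.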

Figure 1. The ‘S’ curve often seen by virus researchers illustrating the spread of a ‘typical’ worm.

Figure 2. Two different worm outbreaks plotted as a function of time. Which curve is more desirable?
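One pragmatic (though admittedly incomplete) way to begin comparing curves like those in Figure 2 is to reduce each to a handful of summary metrics. The three chosen below are our illustration, not an accepted standard; indeed, part of the point of this article is that no such standard yet exists.

    def spread_metrics(curve, dt=1.0):
        """Summarize a worm-spread curve (cumulative infected count per step).

        Returns the final outbreak size, the time taken to reach half of that
        size, and the peak infection rate: three of many possible ways to
        characterize a curve's shape.
        """
        final_size = curve[-1]
        time_to_half = next(t for t, v in enumerate(curve)
                            if v >= final_size / 2) * dt
        peak_rate = max(b - a for a, b in zip(curve, curve[1:])) / dt
        return {'final_size': final_size,
                'time_to_half': time_to_half,
                'peak_rate': peak_rate}

Two outbreaks with the same final size can then still be distinguished: a defender buying reaction time will prefer a larger time_to_half, while one worried about network saturation will prefer a lower peak_rate.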

Part of the challenge is that no protection scheme can be evaluated in vacuo. A scheme that slows down a virus/worm outbreak is only meaningful if the slowdown is sufficient to allow other defences to be deployed: slowing the global spread of a worm by one minute will have relatively little impact, but slowing it by three days matters greatly if users are able to react in that timeframe. Below a certain threshold, a slowdown makes no real difference to the eventual size of the outbreak.

Part of the challenge for reviewers in the future will be quantifying ‘how much is good enough’. Furthermore, if reviewers are to avoid the pitfalls of subjectivity, there will have to be some globally accepted models of reaction times and modification times. How does the public change its behaviour in the face of a particularly nasty worm? These and other difficult questions will have to be explored before products that are not simple virus detection engines can really be compared.

Looking to the future

New protection schemes will cause fairly significant problems for those involved in product reviews and certification. In particular, products that move away from binary ‘good/bad’ classification, and which benefit the system macroscopically rather than microscopically, are well beyond the current experience of both reviewers and certification bodies.

This is not ideal for anyone; at best, it leaves users with no information regarding product effectiveness. At worst, it discourages early adopters and could blunt the penetration of innovations which really do benefit the community as a whole. That’s bad for everyone, because at the end of the day the security of the Internet is literally becoming a matter of life and death.

As researchers, as users, and as developers, we need to start thinking about how and why we test products, and how we can prepare for the next wave of innovation. Not doing so is simply shortsighted.
