In search of a secure operating system

2013-10-10

Mark Fioravanti

Florida Institute of Technology, USA

Richard Ford

Florida Institute of Technology, USA
Editor: Helen Martin

Abstract

Over the last decade or so, security has steadily become more of an issue for OS vendors due to the changing threat environment. Mark Fioravanti and Richard Ford look to the past in search of a secure operating system.


Modern operating systems (OSs) are designed to allow multiple users (and their associated services, processes and accounts) to share and utilize system resources efficiently and safely. An important concept in achieving this requirement is isolation; that is, isolating data and programs from each other in a way that attackers should not be able to abuse while allowing authorized persons to utilize resources as needed.

Over the last decade or so, security has steadily become more of an issue for OS vendors due to the changing threat environment. For example, Microsoft’s popular MS-DOS OS essentially had no security: any executing program was free to use the entire system and its resources however it wished. As threats have increased and network connectivity has become ubiquitous, end-users have been provided with an ever-increasing array of security features, ranging from hardware enhancements (such as Supervisor Mode Execution Prevention, or SMEP) to system-wide software features (such as Microsoft’s Mandatory Integrity Control). Despite the inclusion of these advanced security features, the threats are increasing rapidly and continuing to adapt in order to counter these defences. A simple glance at any current malware prevalence table makes it clear that we have much further to go.

In this article, however, we look not towards the future, but back at the past. While the current generation of computer users would be forgiven for thinking we are only now discovering how to build systems more securely, it turns out that many of the ‘innovations’ we see today have their roots planted firmly in the research of yesteryear.

Where we have been

At present, computing is composed of a large number of different OSs: Microsoft Windows, Apple OS X (including the iOS version implemented on mobile devices such as the iPhone, iPod and iPad), the more common GNU/Linux distributions (such as Red Hat Linux, Canonical’s Ubuntu and Google’s Android), and the various Berkeley Software Distributions (BSD), including OpenBSD, FreeBSD and NetBSD. While these are some of the more commonly encountered OSs, there is in fact a raft of other modern OSs. Many of these trace their origins back to a much earlier OS, the ‘Multiplexed Information and Computing Service’ or, as it is now known, ‘Multics’. The others were created independently, but almost universally they rely on concepts introduced or developed within the Multics environment. What is interesting is that Multics had many outstanding security features and dramatically better security than many of the OSs that succeeded it, including the ones we see today. We will take a closer look at that history and discuss why these security enhancements are only now being rediscovered.

The Multics project was started in 1964, with the plan for the system to be delivered in 1965. Despite a design that is almost half a century old, its security architecture and functionality would allow it to deal effectively with some of the security issues that plague today’s computers. Subsequent to the original system design, the Honeywell SCOMP project attempted to move beyond what Multics had accomplished, working entirely within a Multilevel Security (MLS) environment [1].

From the outset, Multics was designed with security as a critical requirement [2]. It was created as a mainframe system and supported multiple concurrent users, allowing them to share and utilize resources on the system efficiently. Multics featured the following design principles:

  • By default, Multics was implemented to deny access to all resources. Unless positive permissions had been explicitly associated with a subject, access was denied (a minimal sketch of such a default-deny check appears after this list).

  • Authorizations were revalidated as new accesses were attempted on the system. As the system was a time-sharing system, it was recognized that a user’s permissions could change between tasks. This made it necessary for the system to periodically revalidate permissions and authorizations.

  • Multics avoided the use of ‘security by obscurity’; it was designed to be open in nature. The architecture attempted to rely on as few secrets as possible; only those secrets that were necessary, such as passwords and keys, were kept.

  • Least privilege was used extensively throughout the system. This design was evident in the call rings and access rings that the system used to control process execution. When a higher level process performed a task which only required a few privileges, the surplus privileges were dropped.

  • Multics utilized a simple user interface. During the design, it was recognized that the more difficult a user interface is to use, the less likely users would be to take advantage of the security features offered.
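
To make the first of these principles concrete, here is a minimal sketch (our illustration, not Multics code) of a default-deny access check: unless an explicit positive grant exists for the subject/object pair, the request is refused.

```c
/* Default-deny sketch: access is granted only if an explicit positive
 * grant covering the requested rights is found; everything else is
 * denied. The ACL entries here are illustrative. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct grant { const char *subject; const char *object; unsigned rights; };

#define RIGHT_READ  0x1u
#define RIGHT_WRITE 0x2u

/* The only explicit grants in the system. */
static const struct grant acl[] = {
    { "alice", "payroll.dat", RIGHT_READ | RIGHT_WRITE },
    { "bob",   "payroll.dat", RIGHT_READ },
};

static bool access_allowed(const char *subj, const char *obj, unsigned want)
{
    for (size_t i = 0; i < sizeof acl / sizeof acl[0]; i++)
        if (!strcmp(acl[i].subject, subj) && !strcmp(acl[i].object, obj))
            return (acl[i].rights & want) == want;
    return false;   /* no matching grant: deny by default */
}

int main(void)
{
    printf("bob write:   %d\n", access_allowed("bob", "payroll.dat", RIGHT_WRITE));   /* 0 */
    printf("alice read:  %d\n", access_allowed("alice", "payroll.dat", RIGHT_READ));  /* 1 */
    printf("mallory any: %d\n", access_allowed("mallory", "payroll.dat", RIGHT_READ));/* 0 */
    return 0;
}
```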

Beyond those design principles, Multics also made use of a number of other technologies, including a supervisor and a gatekeeper. The code in the supervisor was small compared to modern kernels, which allowed for code reviews and inspections. The gatekeeper attempted to validate the parameters of any call that involved a transition between rings. This validation was intended to avoid problems which could result in vulnerabilities such as the exploitation of a ‘confused deputy’.
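
The gatekeeper’s role is easy to illustrate. The sketch below (a hedged illustration in C, not Multics source; the names struct caller and gate_read_file are ours) validates a pointer argument crossing a ring boundary against the caller’s authority, not the kernel’s, before any privileged work is done. Skipping this check is precisely what creates a confused deputy.

```c
/* Gatekeeper sketch: every argument crossing a ring boundary is checked
 * against the *caller's* authority before the privileged routine runs. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct caller {
    int ring;                   /* ring the request originated in */
    const void *seg_base;       /* memory segment the caller may touch */
    size_t seg_len;
};

/* Would the caller itself be allowed to write this buffer? The
 * privileged side must not substitute its own (greater) authority. */
static bool caller_may_write(const struct caller *c, const void *p, size_t n)
{
    const char *base = c->seg_base;
    const char *q = p;
    return q >= base && n <= c->seg_len && q + n <= base + c->seg_len;
}

static int gate_read_file(const struct caller *c, void *out, size_t n)
{
    if (!caller_may_write(c, out, n)) {
        fprintf(stderr, "gatekeeper: rejected cross-ring pointer\n");
        return -1;              /* deny before any privileged work */
    }
    /* ... privileged copy into 'out' would happen here ... */
    return 0;
}

int main(void)
{
    char user_buf[64];
    struct caller c = { 3, user_buf, sizeof user_buf };

    gate_read_file(&c, user_buf, sizeof user_buf);   /* allowed */
    gate_read_file(&c, (void *)0x1000, 16);          /* simulated bad pointer: rejected */
    return 0;
}
```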

In many ways, the SCOMP Trusted Operating Program was built on the same design principles as Multics and can be thought of as its successor. SCOMP was designed by Honeywell and built upon the secure architecture of Multics. While Multics made use of the Access Isolation Module (AIM), which attempted to implement a Mandatory Access Control (MAC) model for system accesses [3], SCOMP attempted to implement this more fully by including MLS controls in the file system, inter-process communication (IPC), operating commands/processes and isolation/creation of a security administrator.

The security goals of any system are defined as ensuring that the confidentiality, integrity and availability objectives of the system are met [4]. To determine whether a system satisfies these requirements, a variety of different approaches can be used, based on concepts either proposed or already in practice. While security can be included in the software development lifecycle (SDL or SDLC), it is not common for it to be included until either an incident has occurred or there is a business case. Multics was one of the few OSs to be designed from the outset with security as a critical goal [2]. Some systems, such as SCOMP, have deemed security such a critical factor that it should be formally verified that the system has been designed and implemented correctly. Most OSs have some level of review, but very few are subjected to formal verification. A more common method for determining the level of trust to be associated with a system is through security testing. Security testing is a widely known and well-used method for determining the security of a system, but its limits are often poorly understood or misrepresented.

Each of these methods has its own strengths and weaknesses. Integrating security into the SDLC requires continual upper management support and approval as it typically increases the time and/or cost required for products to be released into the marketplace. Furthermore, it requires that the development and software testing staff be provided with the necessary training and tools to implement security properly.

In order to formally verify the security of an OS, formal methods must be used. These work by constructing a formal mathematical model of the system and utilizing theorem provers to prove that the system meets a particular requirement. This approach is limited in its applicability, as there are difficulties associated with demonstrating that large code bases (and all of the supporting hardware) are provably secure. In order to attempt validation via formal methods, a complete and unambiguous description of the OS and operational hardware is required. Consequently, the application cannot be provably secure if the specification is incomplete or inconsistent, and the proof no longer holds if the application is operating on hardware other than that which was modelled.
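
Stated abstractly (a textbook formulation, not the model of any specific OS), the proof obligation usually reduces to exhibiting an inductive invariant over the system’s state machine: model the system as a set of states S with initial state s_0 and transition relation →, and find an invariant I that implies the security property P:

```latex
\begin{align*}
  &\text{(base)}   & &I(s_0) \\
  &\text{(step)}   & &\forall s, s' \in S:\; I(s) \land (s \to s') \Rightarrow I(s') \\
  &\text{(safety)} & &\forall s \in S:\; I(s) \Rightarrow P(s)
\end{align*}
```

If the specification omits a transition that the real hardware permits, the induction simply never examines it; this is why an incomplete or inconsistent specification, or a change of hardware, voids the proof.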

Relying on testing as a method for demonstrating security has difficulties as it is infeasible to test all of the states that a system can achieve. In addition, it is dependent upon the tester’s skill level, the amount of time the tester has to validate the system, and the validation objectives. Testing to validate conformance to a standard such as the Trusted Computer System Evaluation Criteria’s (TCSEC) ‘Orange Book’ [5], Information Technology Security Evaluation Criteria (ITSEC), the Common Criteria for Information Technology Security Evaluation (CC) [6] or Federal Information Security Management Act (FISMA) requires different testing methodologies from penetration testing or ethical hacking. By and large, these schemes have focused on requirement and specification testing.
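
Some illustrative arithmetic (our numbers, chosen only for scale) shows why exhaustive testing is out of reach: a system with just 256 bits of security-relevant mutable state has up to $2^{256} \approx 1.2 \times 10^{77}$ reachable states, so even at $10^{12}$ tests per second, enumerating them would take on the order of:

```latex
\[
  \frac{2^{256}\ \text{states}}
       {10^{12}\ \text{tests/s} \times 3.15 \times 10^{7}\ \text{s/yr}}
  \approx 3.7 \times 10^{57}\ \text{years}.
\]
```

Testing can therefore only ever sample the state space, which is why its limits are so easily misrepresented.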

Where we are

Unlike Multics and SCOMP, most modern OSs have a strong focus on performance and usability. Security may be a factor taken into consideration during development, but rarely is it the primary design goal. Furthermore, security is often seen as being in conflict with performance and usability design principles. As a result, security is only included when it is an explicit requirement or when enough weaknesses have been exposed to the public for the brand to suffer – one could argue that the Microsoft Windows family of OSs falls into this category. Microsoft OSs and server services were successfully exploited by a significant number of worm attacks beginning in mid-2001 and, partly in response, the company introduced the Trustworthy Computing (TwC) initiative. Part of TwC implemented the Security Development Lifecycle (SDL) at Microsoft in an attempt to reduce the attack surface of the Microsoft OSs.

Modern OSs have traditionally relied on security controls such as Discretionary Access Controls (DAC) to ensure the confidentiality and integrity objectives of a system are met. Although the implementation of DAC is important, it has done little to prevent interconnected systems from being compromised or information from being exfiltrated. Some OSs have implemented stronger confidentiality controls such as MAC: access controls which are based on organizational policy rather than the discretion of individual users. Multics implemented MAC through the AIM, and SCOMP was designed to include MAC through its support of MLS. MAC is a requirement for the higher security levels of TCSEC. A number of more modern OSs have attempted to implement MAC, most notably Linux with the Security Enhanced Linux (SELinux) project and Solaris with the Trusted Solaris (TSOL) extensions. SELinux is available for all of the major Linux distributions, yet this defence is often not enabled, as most system administrators either disable the mechanism or remove it entirely. Despite the potential security benefit associated with MAC, it is commonly removed because it increases the administrative overhead associated with the system. The latest iterations of the Microsoft Windows family of OSs have attempted to implement an integrity model based on Biba’s Integrity Model [7] under the name of the Windows Integrity Mechanism or Mandatory Integrity Control.
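
A minimal sketch (ours, not Windows’ actual implementation) of the ‘no write-up’ rule at the heart of such an integrity mechanism: even after DAC has passed, a subject may not modify an object labelled with higher integrity than its own.

```c
/* Biba-style "no write-up" check in the spirit of Mandatory Integrity
 * Control: a subject may not modify an object whose integrity label is
 * higher than its own, regardless of what the DAC entries say. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { LOW = 1, MEDIUM = 2, HIGH = 3, SYSTEM = 4 } integrity_t;

/* DAC check omitted; assume it has already passed. */
static bool mac_allows_write(integrity_t subject, integrity_t object)
{
    return subject >= object;   /* deny write-up */
}

int main(void)
{
    /* A Medium-integrity process (a typical user application)
     * attempting to write a High-integrity object. */
    printf("write allowed: %s\n",
           mac_allows_write(MEDIUM, HIGH) ? "yes" : "no");   /* no */
    return 0;
}
```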

Most modern OSs are required to support a wide variety of hardware configurations; practically anything that a consumer might purchase. In contrast, more secure OSs such as Multics and SCOMP were designed to function on a specific and limited hardware set. In the case of SCOMP, the hardware and software were architected together in a way that increased both the security and the performance of the system. Memory access controls were initially mediated by the OS, and were then off-loaded to and enforced by the hardware. Modern OSs attempt to support as many different hardware configurations as possible; this dramatically increases the complexity of the OS when interfacing with the underlying hardware. None of this means that Multics prevented users from using the system freely: Multics was designed as a general-purpose computing system and provided the functionality that allowed developers to create applications as needed.

The hardware supporting modern OSs appears to be providing the tools to allow a fundamental shift in architecture. Computing is mostly performed on von Neumann architectures, in which data and instructions are stored in the same memory. Although von Neumann architectures are useful (and prevalent), the mixture of data and instructions allows stack-based buffer overflow attacks to facilitate code injection. With the recent addition of No-Execute (NX)/Data Execution Prevention (DEP) hardware extensions, OS developers have additional options to start migrating away from a pure von Neumann architecture. NX/DEP was an effort to make stack-based buffer overflow exploitation more difficult by marking data memory (such as the stack and heap) as non-executable; it attempts to force the system towards a more Harvard-like architecture (within the Harvard architecture, instructions and data are strictly isolated). Multics had already implemented this isolation through the separation of procedure and data segments.
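
On POSIX systems the same policy is visible through the page-protection API. The sketch below (assuming Linux/BSD mmap semantics) shows the NX/DEP idea in miniature: memory holding data is not executable, and must be explicitly and deliberately remapped before the CPU will run it.

```c
/* NX/DEP in miniature: a freshly mapped data page is readable and
 * writable but not executable; only an explicit policy change via
 * mprotect() makes it executable. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Request a page of readable/writable -- but not executable -- memory. */
    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 0xC3;  /* x86 'ret' -- data that merely looks like code */

    /* Jumping to buf now would fault on NX-capable hardware. The page
     * becomes executable only through an explicit transition: */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");  /* may be refused under strict W^X policies */
        return 1;
    }
    puts("page is now executable (explicit W^X transition performed)");
    return 0;
}
```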

Although OSs supported by different architectures would help to alleviate some issues in computing, there are classes of attacks that would not be mitigated. Attackers would still be able to perform privilege escalation attacks and abuse a ‘confused deputy’ to reuse legitimate services to accomplish their objectives. Recently, Microsoft incorporated the functionality supplied by Intel’s Supervisor Mode Execution Prevention (SMEP) CPU feature into the Windows family of OSs. SMEP, which causes a fault if the processor attempts to execute code from a user-mode page while running in kernel mode, attempts to help mitigate privilege escalation attacks and the confused deputy problem. Multics utilized the gatekeeper as a parameter validation mechanism to protect against confused deputy attacks, and the call gate structure to automatically reduce privileges when they were not needed. The potential advantages of changing from a ring structure to a lattice structure [8] have been discussed previously. SMEP can almost be seen as a very limited first step towards implementing a lattice that would allow ‘Ring 0 to be protected from Ring 0’, preventing an adversary from compromising the kernel and leveraging that foothold to pivot into other privileged functions.

Not all OS defences rely on forms of access or integrity controls to prevent adversaries from exploiting a system. Some defences work by reducing the accuracy of the critical information available to an adversary. One such defence is Address Space Layout Randomization (ASLR). ASLR attempts to mitigate some attacks by randomizing the locations of the stack, the heap and the loaded system and application libraries. This forces an adversary to guess or brute-force the memory location of a vulnerable library or of their own injected shellcode. There have been flaws in the amount of entropy associated with early implementations of ASLR, and the newest version of Microsoft Windows introduces High Entropy ASLR (HE-ASLR). HE-ASLR increases the difficulty of guessing the location in memory of specific data by increasing the randomness associated with the set of possible addresses. Hiding information is helpful, but unless other techniques are used in conjunction with it, an adversary can cause the system to leak information which can be used to reduce the number of required guesses.
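
The randomization itself is easy to observe. Compile the following sketch as a position-independent executable (e.g. with gcc -pie -fPIE) and run it twice on a system with ASLR enabled: the stack, heap and code addresses change between runs, which is exactly the information an exploit writer must now guess.

```c
/* A small demonstration (not a defence) of what ASLR randomizes:
 * the addresses printed below differ between runs when ASLR is on. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack = 0;
    void *on_heap = malloc(16);

    printf("stack: %p\n", (void *)&on_stack);
    printf("heap:  %p\n", on_heap);
    printf("code:  %p\n", (void *)&main);  /* cast is non-portable but conventional */

    free(on_heap);
    return 0;
}
```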

Lately, significant effort has been invested in utilizing virtualization as a security mechanism rather than simply as a resource-sharing and hardware-consolidation mechanism. There are serious issues with this approach:

  • Isolation is not complete. Information must be exchanged between the guest and the host; otherwise the guest would not be able to communicate with outside resources [9].

  • Management is often handled with remote management tools which provide web-server level access into the hypervisor [10].

  • Increased management costs. Previously there was a single set of hardware and systems supporting the enterprise; now there is the same level of resources plus the additional infrastructure for the implementation and management of the hypervisors [10].

  • Merger of the guest and host APIs. In order to increase the performance of the VM guest, some of the functions that the guest would normally handle are instead handled by the hypervisor. This blurs the lines of isolation between the guest and the host even more than the first issue.

  • Resource provider versus reference monitor. The hypervisor is expected to perform two essential functions if it is being used as a security mechanism: it provides access to resources and monitors access to resources. This leads to confusion between duties and, since performance and security are typically in conflict, security will usually lose to performance [10].

  • Use of the hypervisor as a reference monitor is also difficult. A reference monitor (1) should always be invoked, (2) cannot be tampered with, and (3) should be small enough to be verified [11]. The code base of a hypervisor is sufficiently large that it is unlikely that it can be verified at all, let alone formally.

Modern OSs feature a number of security countermeasures as defences against weaknesses. Some of these weaknesses are introduced during the design phase while others are introduced during the implementation phase.

Where we are going

Research into secure OSs and their defensive mechanisms will continue apace as we become increasingly aware of the insecurity of most modern OSs. Historically, Multics and SCOMP demonstrate that secure OSs can be constructed and can be user-friendly, at least to some extent. While no system is perfect (for example, the development and purchase costs for these systems were high), these older systems can be considered to be more secure than any of today’s consumer OSs in many important ways.

An interesting aspect of modern security research is that significant effort appears to be spent mitigating the exploitation techniques used by attackers. NX/DEP was developed to mitigate stack-based buffer overflows. In response, attackers developed return-to-libc and eventually Return-Oriented Programming (ROP) [12]. NX/DEP, combined with ASLR, attempts to mitigate these techniques. Attackers have adapted by employing heap-spraying techniques to land in a portion of memory that they control, or simply by disabling ASLR before attempting to execute the remainder of their payload. Recently, Address Space Re-Randomization (ASRR) was proposed as a method for defending against return-to-kernel-text attacks [13]. This escalatory arms race between attacker and defender will continue with no real end in sight.

Furthermore, significant research and development time will continue to be spent on identifying specific attack techniques and applying countermeasures to prevent those attacks on deployed systems. Although this will protect existing and future systems, it does not apply much evolutionary or selective pressure to force the attacker to change their techniques. More effort should be placed on ensuring that application and system programmers are not only able to write secure code, but that it is also difficult for them to write insecure code. Alternatively, more time and effort could (and perhaps should) be spent on researching and developing more systems like Multics, which was not only designed to be tolerant of poorly written applications, but which actively tried to defend against malicious programs.

Typically, innovations in OS defences are rolled out over extended periods of time. NX/DEP was first introduced into Microsoft Windows via Service Pack 2 for Windows XP (August 2004). It was optional and not enabled by default. NX/DEP was only turned on by default in Windows 7 (July 2009), and applications are able to opt out of participating in it. Almost five years passed between the initial release of NX/DEP for Windows and it becoming the default option. The deployment of ASLR for Microsoft Windows followed a similar delay: it was optionally introduced in Visual Studio for Windows Server 2003 and Windows Vista targets. Applications are able to opt out of participating, and if any library within an application opts out of participating in ASLR, the entire application is loaded without ASLR enabled. Microsoft Windows 8 includes functionality to force an application to participate in ASLR even if it attempts to opt out. Unfortunately, these delays, which are required to allow the software ‘ecosystem’ time to adapt, provide attackers with ample opportunity to respond with new exploitation techniques.

If countermeasures are considered from the perspective of being selection agents which influence a population’s strategies, slowly applying a countermeasure can cause more problems in the long run. There are cases in which the application of small amounts of pesticides (countermeasures) has facilitated rapid mutations which allowed the pests (attackers) to become resistant to the pesticides more quickly [14]. Similarly, it is becoming increasingly common for bacteria that have never been in contact with antibiotics to gain resistance, tolerance and even immunity to them through horizontal gene transfer.

Another aspect is that all of these defences are constitutive; they are present all of the time [15]. Every countermeasure applied to the system increases its overhead and costs. In some situations, systems and applications are chasing every possible optimization, and security will slow them down. These countermeasures have the effect of increasing the tension between the system’s performance and security goals. Some security countermeasures and defences impose a large cost while others impose small costs, but all of these costs are cumulative and work against the availability requirements of the system.

Induced defences are another type of defensive strategy; unlike constitutive defences, they are employed only when they are needed. An organism utilizing an induced defence does not pay its cost until the defence is needed, as opposed to a constitutive defence, which is active all of the time, so its cost must continually be paid. There are multiple reasons for maintaining induced defences, and there are some restrictions: (1) there need to be reliable cues, (2) the induced defence needs to be effective, and (3) there must be benefits to not utilizing the induced defence all of the time (otherwise it would become a constitutive defence) [16]. Researching induced defence strategies could offer benefits and help reduce the tension between performance and security goals. It may be possible to convert an expensive constitutive defence into an induced defence, or to develop new countermeasures which act as induced defences from the outset.
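
As a thought experiment (all names hypothetical, not from any real OS), an induced defence in software might look like the following: an expensive check is switched on only when a cue fires, so its cost is paid only under suspected attack rather than on every operation.

```c
/* Sketch of an 'induced' defence: an expensive integrity check that is
 * enabled only when an anomaly cue crosses a threshold, instead of
 * being paid for constitutively on every access. */
#include <stdbool.h>
#include <stdio.h>

static int anomaly_score = 0;              /* cue: fed by hypothetical sensors */
static const int INDUCTION_THRESHOLD = 3;  /* when the defence switches on */

static bool expensive_integrity_check(const char *object)
{
    /* Stand-in for a costly check (e.g. full revalidation of labels). */
    printf("deep-checking %s\n", object);
    return true;
}

static bool access_object(const char *object)
{
    /* Cheap, always-on (constitutive) checks would run here. */
    if (anomaly_score >= INDUCTION_THRESHOLD)   /* defence is induced */
        return expensive_integrity_check(object);
    return true;
}

int main(void)
{
    access_object("report.txt");   /* cheap path: defence dormant */
    anomaly_score = 5;             /* cue fires */
    access_object("report.txt");   /* induced deep check now runs */
    return 0;
}
```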

Conclusion

The design and implementation of a highly secure OS is difficult but not impossible. Based on our view of history, it is also something of a lost art. In the 70s and 80s, we had very secure platforms in use. The broad adoption of computers changed the ecosystem, and the needs of consumers created considerable evolutionary pressure that moved us away from the solid design principles of Multics and SCOMP. These OSs had significant defences, but there were significant costs associated with those defences, such as the time and resources required for development and the costs associated with purchase.

From an evolutionary perspective, these OSs are similar to Dunkleosteus terrelli, a now-extinct placoderm or ‘armoured fish’ which existed during the late Devonian, between 360 and 380 million years ago. These were large apex predators which featured an armoured head and a body covered with smaller scales. There are many possible explanations for their extinction, but there were no other predators alive at the time which could have preyed upon them, so they cannot have been driven to extinction by predation. One possibility is that they became extinct through interspecies competition and the resources required to create their ‘armour plating’. Other, smaller ‘bony’ fish, which were more vulnerable to predators, were eventually more successful due to the smaller construction cost and overhead of their defences.

When reviewing the possible security functionality that can be designed and/or included in the construction of an OS, it is evident that security is not always included from the outset in modern OSs, and sometimes it only becomes a concern when an incident or business case arises. When considering the causes of this lack of security features, several questions surface. Is there a reason why these security controls are not being included? Is it because the knowledge or skill has been lost? Is it because the knowledge only exists in specialized fields? Or is it because the costs associated with building highly secure systems are too high? The National Security Agency spent time and resources on developing SCOMP, but in the end, when the product was ready, it purchased only a small number of units and procured a large number of consumer-grade OSs. This outcome is reminiscent of one of the possible causes of D. terrelli’s extinction: it was not that it could not compete in the environment, but rather that the resources required to grow and maintain its armour were too expensive. Given time, it was replaced by a population of smaller and individually more vulnerable organisms. Only now that we live in an infinitely more dangerous world has the value of armour become clearer.

Bibliography

[1] Fraim, L. J. (1983). SCOMP: A solution to the multilevel security problem. Computer, 26–34. doi:10.1109/MC.1983.1654440.

[2] Saltzer, J. H. (1974). Protection and the control of information sharing in Multics. Communications of the ACM, 17(7), 388–402. DOI=10.1145/361011.361067. http://doi.acm.org/10.1145/361011.361067.

[3] Green, P. (2005). Multics virtual memory – tutorial and reflections. Retrieved from ftp://ftp.stratus.com/pub/vos/multics/pg/mvm.html.

[4] Stoneburner, G.; Hayden, C.; Feringa, A. (2004). NIST Special Publication 800-27 Rev A, Engineering Principles for Information Technology Security (A Baseline for Achieving Security), Revision A.

[5] Department of Defense (DOD), TCSEC. (1985). Trusted computer system evaluation criteria. DoD 5200.28-STD, 83.

[6] Common Criteria (2012, September). Common Criteria for Information Technology Security Evaluation. Version 3.1, Revision 4. Retrieved from http://www.commoncriteriaportal.org/cc/.

[7] Biba, K. J. United States Air Force, Electronic Systems Division, Air Force Systems Command. (1977). Integrity considerations for secure computer systems (ESD-TR-76-372). Retrieved from http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA039324.

[8] Bratus, S.; Locasto, M. E.; Ramaswamy, A.; Smith, S. W. (2010). VM-based security overkill: a lament for applied systems security research. In Proceedings of the 2010 Workshop on New Security Paradigms (NSPW ‘10). ACM, New York, NY, USA, 51–60. DOI=10.1145/1900546.1900554. http://doi.acm.org/10.1145/1900546.1900554.

[9] Bellovin, S. M. (2006). Virtual machines, virtual security? Communications of the ACM, 49(10), 104.

[10] Bratus, S.; Johnson, P. C.; Ramaswamy, A.; Smith, S. W.; Locasto, M. E. (2009). The cake is a lie: privilege rings as a policy resource. In Proceedings of the 1st ACM Workshop on Virtual Machine Security (pp.33–38). ACM. DOI=10.1145/1655148.1655154. http://doi.acm.org/10.1145/1655148.1655154.

[11] Anderson, J. P. (1972). Computer Security Technology Planning Study. Volume 2. Anderson, J.P. and Co. Fort Washington, PA.

[12] Shacham, H. (2007). The geometry of innocent flesh on the bone: Return-into-libc without function calls (on the x86). In Proceedings of the 14th ACM Conference on Computer and Communications Security (pp.552–561). ACM. DOI=10.1145/1315245.1315313. http://doi.acm.org/10.1145/1315245.1315313.

[13] Giuffrida, C.; Kuijsten, A.; Tanenbaum, A. S. (2012). Enhanced operating system security through efficient and fine-grained address space randomization. In Proceedings of the 21st USENIX Conference on Security.

[14] Gressel, J. (2011). Low pesticide rates may hasten the evolution of resistance by increasing mutation frequencies. Pest Management Science, 67(3), 253–257.

[15] Tollrian, R.; Harvell, C. D. (Eds.). (1998). The Ecology and Evolution of Inducible Defenses. Princeton University Press.

[16] Harvell, C. D. (1990). The ecology and evolution of inducible defenses. Quarterly Review of Biology, 323–340. http://www.jstor.org/stable/2832369.

[17] Karger, P. A.; Schell, R. R. (2002). Thirty years later: lessons from the Multics security evaluation. Computer Security Applications Conference, 2002. http://dx.doi.org/10.1109/CSAC.2002.1176285. Retrieved from http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?reload=true&arnumber=1176285.

[18] Lampson, B. W. (1974). Protection. SIGOPS Operating Systems Review. 8, 1, 18–24. DOI=10.1145/775265.775268. http://doi.acm.org/10.1145/775265.775268.

[19] Spinellis, D. (2008). A tale of four kernels. In Proceedings of the 30th International Conference on Software Engineering (pp.381–390). ACM. DOI=10.1145/1368088.1368140. http://doi.acm.org/10.1145/1368088.1368140.

[20] Harrison, M. A.; Ruzzo, W. L.; Ullman, J. D. (1976). Protection in operating systems. Communications of the ACM, 19(8), 461–471. DOI=10.1145/360303.360333. http://doi.acm.org/10.1145/360303.360333.
