Are Good Virus Simulators Still a Bad Idea?
By Sarah Gordon, [email protected]
© 1995 Elsevier Press. Elsevier Advanced Technology, Oxford, UK. This document may not be reproduced in whole or in part, stored on any electronic information system, or otherwise transmitted without prior express written consent.
Introduction
A Brief History of Simulators
Between October 1991 and April 1992, a new program entered the scene. This program, Virlab [2], was also a simulator, albeit a more advanced implementation. Rather than a collection of visual virus trigger effects, it was promoted as simulating an entire computing environment on-screen. An examination of this program showed it to be functional, although it appeared to have compatibility problems with a number of computing systems.
About this time, a new type of virus simulator made its appearance. This simulator, creatively titled "Virus Simulator" [3], is still being used by some people today; it claimed to provide the user with a way to test anti-virus products without having to use real viruses. Since the introduction of these simulators, a number of companies and individuals have offered various types of visual and aural simulators for educational purposes. However, Virus Simulator is the only one of which we are aware that offered simulated viruses for testing purposes. Although Virus Simulator seems to have made something of a reappearance in recent months in a newer archive, the main executable appears unchanged from that circulated in 1991.
Not much has changed since those early days of simulator programs. If anything, simulation of virus trigger effects has become more common, with some vendors - most notably KAMI - including the more infamous or aesthetic displays in virus encyclopedias. The reasons cited by advocates for the use of these different types of virus simulators range from wanting to see what viruses "do" to wanting to carry out tests of the efficacy of detection software. As we examine these programs and their usage in more detail, we will focus first on the category of "educational simulators".
Educational Simulators
Benefits and Disadvantages
Having described what the simulators do, it is now time to turn our attention to the pros and cons of using virus simulators. First, we will examine the roles played by visual and aural demonstrations, before turning our attention to the somewhat thornier issue of simulators for testing purposes.
The argument most commonly put forward for the use of simulators is that they can be an excellent aid to raising the awareness of a generally lackadaisical user population. However, there is a flipside to this: users can learn to associate the concept of viruses with letters falling off the screen, or with large, highly visible scrolling displays; in other words, with the expectation that an infected computer will announce its presence in clear and concise terms. This type of awareness is flawed. Many types of viruses have no visual or aural effect whatsoever. Of those that do provide the user with some indication, many do so only after the damage is done... an infected computer looks pretty much exactly like the one which you have on your desktop (though hopefully not identical...).
[Illustration: "Flawed Awareness" - "No letters are falling off my screen"]
Thus, while simulators can be a valuable tool for educating users, it is vital to get the message across that the way to prevent viruses is by using anti-virus software, not by waiting patiently for an obvious disaster to strike.
We will now focus on tests carried out using simulated viruses; that is, programs which are supposed to simulate viral activity in some way. As we will show, there are a number of potential ethical and technical concerns. On the technical level, we will first examine whether or not such tests provide a valid measurement of a product's ability, by discussing two tests which were conducted using both simulated and real viruses.
According to Luca Sambucci of the Italian Computer Virus Research Institute, the results of an anti-virus test done with simulated viruses are "misleading and in some cases harmful". In his paper "Virus Simulator Test" [5], Sambucci states that files created by Virus Simulator "aren't viruses at all; they're only parts of viruses. Not all AV programs will detect them as viruses, so the detection rates can be very different compared with the rates of a test on real viruses".
Sambucci notes that Virus Simulator 2c was programmed in 1991 (judging by the file date on the executable portion of the product); at that time there were far fewer viruses than there are now, with PC viruses numbering about 150. Currently we know of over 9500 PC viruses. Thus, even if the simulated viruses were suitable for testing purposes, the test collection would be woefully dated and small. Another problem with the program is its use of non-standard names, which makes calculating variants difficult.
Sambucci first conducted a test with some of the better-known anti-virus products. His test results are reproduced here with his permission:
Name | Version | Date (MM/DD/YY) | Producer |
AntiVir IV (AVScan) | 1.64 | 08/03/94 | H+BEDV GmbH |
AV Toolkit Pro (-V) | 2.00e | 07/13/94 | KAMI Ltd. |
AVTK (Findviru) | 6.64 | 05/11/94 | S&S Int. Plc. |
F-Prot | 2.13a | 07/27/94 | Frisk Soft. Int. |
IBM Antivirus/DOS | 1.06 | 07/11/94 | IBM Corp. |
Integrity Master | 2.22a | 05/25/94 | Stiller Research |
Sweep | 2.64 | 08/01/94 | Sophos Plc. |
TBAV (TbScan) | 6.22 | 07/11/94 | ESaSS B.V. |
Virex PC (VPCScan) | 2.94 | 07/05/94 | Datawatch Corp. |
VirusScan | 2.1.0 | 07/18/94 | McAfee Inc. |
Antivirus Product | % of simulations detected as infected |
AVScan 1.64 | 98 % |
AVP 2.00e | 0 % |
Findviru 6.6 | 0 % |
F-Prot 2.13a | 71 % |
IBMAV 1.06 | 55 % |
I-Master 2.22a | 100 % |
Sweep 2.64 | 60 % |
TbScan 6.22 | 42 % |
VPCScan 2.94 | 100 % |
VirusScan 2.1.0 | 45 % |
Rank | Product |
1. | Integrity Master / Virex |
2. | AntiVir IV |
3. | F-Prot |
4. | Sweep |
5. | IBM AntiVirus |
6. | VirusScan |
7. | TbScan |
8. | AntiViral Toolkit Pro / Dr. Solomon's AVTK |
Antivirus Product | % of infected files correctly detected as infected |
AVScan 1.64 | 100 % |
AVP 2.00e | 100 % |
Findviru 6.6 | 100 % |
F-Prot 2.13a | 100 % |
IBMAV 1.06 | 100 % |
I-Master 2.22a | 99 % |
Sweep 2.64 | 100 % |
TbScan 6.22 | 100 % |
VPCScan 2.94 | 99 % |
VirusScan 2.1.0 | 96 % |
Why this startling difference between 'real' and 'simulated' viruses? The answer lies in how anti-virus software works, and how it has developed over time. When there were few viruses, and the concept of polymorphism had yet to become widespread, most virus scanners worked in much the same way, searching a file for a simple pattern of bytes located anywhere in the file. When scanners worked in this way, the idea of a simulator made a certain (limited) amount of sense. By taking a chunk of virus code (preferably the piece which the scanner was looking for) and inserting it anywhere into a dummy test file (say, one which simply prints a message to the screen and exits), one had a rudimentary way to 'test' anti-virus software. This appears to be the concept (at least in part) behind Virus Simulator.
However, as viruses became more complex, so did virus scanners. In order to increase speed, or simply to provide more accurate virus detection, scanners began looking only where the viruses were likely to be found in the file, or even tracing the path of execution through the file. Thus, modern scanners are unlikely to find any virus in 'simulated' test files; exceptions to this rule may be legacy code still extant within the scanner, or code added explicitly to detect these simulated viruses.
As it has been several years since Sambucci performed his tests, we decided to re-examine the performance of several of the scanners previously tested: Command Software's F-PROT Professional, McAfee Scan, Dr Solomon's Anti-Virus ToolKit and Symantec's Norton Anti-Virus. Although we attempted to duplicate the 'simulation' files used by Sambucci, this was difficult, for the same reasons as noted in his original test: the names of the virus test strings used by Virus Simulator are inexact, making it difficult to tell which viruses are actually supposed to be represented.
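The shift from "search anywhere" to "search where the virus would actually gain control" can be illustrated with a toy sketch. Everything in it is invented for illustration - the 'signature' bytes, the dummy file contents and the 16-byte entry-point window are not taken from any real product - but it shows why a fragment of virus code pasted at an arbitrary offset satisfies an anywhere-in-the-file scanner while leaving a location-aware scanner unmoved:

```python
# Illustrative sketch only: a toy signature scanner of the kind common
# circa 1991, which searches for a byte pattern ANYWHERE in a file.
# The signature below is an invented placeholder, not real virus code.

TOY_SIGNATURE = b"\xde\xad\xbe\xef\x13\x37"  # hypothetical pattern

def naive_scan(data: bytes, signature: bytes = TOY_SIGNATURE) -> bool:
    """Flag the file as 'infected' if the signature occurs at any offset."""
    return signature in data

# A simulator can fool such a scanner simply by pasting the signature
# into an otherwise harmless dummy file:
dummy_file = b"print-a-message-and-exit" + TOY_SIGNATURE

# A (slightly) more modern approach looks only where a parasitic virus
# would actually take control, e.g. near the file's entry point:
def entry_point_scan(data: bytes, signature: bytes = TOY_SIGNATURE,
                     window: int = 16) -> bool:
    """Check only the first `window` bytes, where execution begins."""
    return signature in data[:window]

print(naive_scan(dummy_file))        # True  - the simulated 'virus' is found
print(entry_point_scan(dummy_file))  # False - the fragment is never executed
```

Real scanners are, of course, vastly more sophisticated than either function, but the contrast captures why the simulated files scored so differently between the older and newer products.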
Thus, 70 samples which most closely represented those used by Sambucci were chosen, and the scanners were tested against these files. Finally, we tested these same scanners against real viruses, again matching the viruses used in the original tests as closely as possible. The virus samples used were all second-generation samples, replicated onto standard goat files. The versions tested were F-PROT Professional 2.21, NAV 95.0.a, AVTK 7.55 and SCAN 2.01/2.2.9. The results of the tests are shown below:
Product | Simulated Viruses | Real Viruses |
F-PROT Professional | 0% | 100 % |
TBAV | 0% | 100 % |
NAV | 0% | 100 % |
SCAN | 20% | 100 % |
It is interesting to note that while all four scanners detected all of the real viruses, three of them detected none of the simulated "virus" files produced by the Virus Simulator. Only one package (SCAN) detected any (20%) of the simulated viruses.
At this point, it becomes important to step back somewhat from the issue. How can we objectively measure a virus simulator's performance? The answer depends on what exactly one is measuring. A virus simulator should simulate viruses in the ways its author designs. In the case of the Rosenthal simulator, it is designed to create files which contain non-executed fragments of supposedly viral code, and it does this. The problem enters when we take the philosophical and practical jump from "does it perform as intended" (which it does) to "can it be used to produce a test which measures product detection capability". We can use it to determine how many different files a product identifies as infected, but this number does not represent how well a product will perform against viruses in a real situation; it is a measurement of how well a product detects dummy files created for "testing". The nature of the test defines the fitness of the simulator.
While it is easy to show that the simulated viruses are of little use in product testing, as they are not in fact viruses, the Virus Simulator also offers a mutation engine virus supplement (for a fee). Can these MtE viruses be used to test the accuracy of a scanner's polymorphic detection? Once again, we believe not. Further, we believe the sale of viruses for 'testing' purposes to be detrimental to the long-term health of the anti-virus industry, and of the computing community in general. The reason these viruses are unsuitable for testing is simple: although they do use a genuine polymorphic engine, the Mutation Engine, detection of these samples is no guarantee of detection of any other viruses which employ the MtE.
Even if this were not the case, polymorphic detection involves much more than just the MtE. Testers are well advised to concentrate their efforts on those viruses which are known to be spreading in the wild, not on so-called 'zoo' viruses. A good argument for this approach is given in [6].
In addition to the technical concerns, many industry experts feel that creating and selling viruses is irresponsible. IFIP, the International Federation for Information Processing, has issued a strong positional statement regarding the writing and distribution of computer viruses. Formulated by the then-chairman of IFIP's Technical Committee TC11 "Computer Security", Professor William J. Caelli of Queensland University, Brisbane, Australia, and the then chairman-elect of IFIP's TC-9 "Computers and Society", Professor Klaus Brunnstein of Hamburg University, the resolution called for three actions. First, it called for all computer professionals to recognise the disastrous potential of computer viruses. Second, it called for computer educators to impress upon their students the dangers of virus programs. Finally, it called for publishers to refrain from publishing the details of virus programs.
Unfortunately, merely calling for responsible behaviour does not cause people to act responsibly. Viruses are not only published but, as we have shown, intentionally created for "test" purposes. Some developers do detect these simulations, for varied reasons. The most common reason is that the viruses are trivial to make unsafe, and therefore represent a latent threat to users. Conversely, it can be argued that since all good scanners can detect MtE reliably, non-detection of a virus that is not in the wild, wrapped in the MtE, is unimportant, except to the virus author. Once again, then, we are left with a situation where the detection or non-detection of a test virus tells us little about a product's abilities (except, of course, its ability to detect test viruses!).
The purchase of viruses for testing is clearly something which requires a great deal of informed thought before making the decision. Whether or not you consider detection of such viruses to be an important or appropriate part of a test of anti-virus software appears to be a matter of personal choice. Alternatives to purchasing viruses for testing include using a responsible, technically competent, independent test body to perform the tests. A good test of anti-virus products requires stringent criteria and a defined methodology, and should provide some form of measurement which is meaningful to the user. Such tests are in their infancy in the anti-virus industry. Work being done by Joe Wells, SECURE Computing (in the form of the CHECKMARK scheme), ITSEC, and Virus Bulletin contributes to the formation of such test standards on an on-going basis.
The viability of using simulated viruses simply to test that a product is correctly installed is, however, worth mentioning. As 'simulated' viruses seem to have an ever-decreasing chance of being detected, many of the companies in the anti-virus industry agreed upon a particular file which they would all identify as infected. The following information on this test file (known as the EICAR test file) is based on information taken from www.commandcom.com/html/virus/eicar.html [7]. The purpose of the EICAR test file is to provide an industry-standard way for a user to verify, safely, that an anti-virus product is installed and working.
The idea is that anti-virus programs detect the file exactly as they detect viruses, and treat it as one - for example, as a 68-byte overwriting virus (when disinfecting). The same purpose would of course be served by a custom test file for each program, but a common file simplifies the testing process, in particular when multiple products are being tested and evaluated. The EICAR test file looks like this:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
and should be saved to a file with a .COM extension (EICAR.COM being the most obvious choice). When run, it will display a string indicating that it is the EICAR standard anti-virus test file. This string may vary from product to product; it will appear to be something like "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!". You should consult your vendor for the actual string your anti-virus product will display. (Note: not all vendors comply with this standard.)
As a side note, the file consists entirely of printable characters so that it can easily be printed in a manual, included in the documentation, dictated over the phone or sent by fax. It is not recommended that the file be included "stand alone" in the anti-virus package in binary form, as users might run the anti-virus program on the package before realizing what the file is for.
The EICAR test file is the result of a cooperative effort between various anti-virus researchers. It is not vendor-specific and is being made available free of charge. The existence of the EICAR test file, we feel, obviates the need for simulated viruses used for testing purposes.
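As a minimal sketch of how the test file is created in practice (assuming a Python environment; the filename EICAR.COM is simply the conventional choice), the following writes the published EICAR string to disk, after which any resident anti-virus scanner should flag the file as infected:

```python
# Minimal sketch: create the standard EICAR test file on disk.
# The string is published by EICAR for exactly this purpose and is
# harmless, but anti-virus software should treat the file as infected.

EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

# The test string is exactly 68 printable ASCII characters, which is
# why it can be printed in a manual, faxed, or dictated over the phone.
assert len(EICAR) == 68

# newline="" prevents any end-of-line translation, so the file on disk
# is byte-for-byte the 68-character string.
with open("EICAR.COM", "w", newline="") as f:
    f.write(EICAR)
```

On a machine with an on-access scanner installed, the write above would typically be intercepted immediately, which is precisely the installation check the file was designed to provide.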
Conclusion
The question arises again and again: "Is a good virus simulator still a bad idea?" For demonstrating viruses, simulators can provide a useful tool, provided the instructor has stressed that viruses are not primarily visual or aural agents. Simulators can be an excellent tool for increasing general awareness; they are relatively safe and relatively inexpensive -- in some cases, free. However, the usefulness of this type of simulator is directly related to the instructor's knowledge about viruses and his or her ability to reinforce accurate perceptions.
The use of simulators to create files to 'test' a scanner's performance has been shown to be questionable at best. A test of anti-virus software should include a test for protection against viruses which are known to be in the wild. This subset of viruses is documented in an industry-wide accepted list called "The WildList", which is publicly available and whose contents consist of virus reports from industry and academic researchers.
A scientifically valid test additionally requires that its results be reproducible by other people. Test results produced using virus simulators are very different from the results obtained by university researchers, specialist agencies, and individuals with expertise and credibility in product testing and analysis. The explanation for this is that these testers use clearly defined and documented testing criteria and methodology, allowing scientifically reproducible tests which measure a meaningful quantity. For this reason, we would discourage the use of simulated viruses for testing, and encourage reliance on technically competent testers using real, well-maintained collections of viruses, regardless of their company affiliation.
Bibliography
Sarah Gordon's work in various areas of IT Security can be found profiled in
various publications including the New York Times, Computer Security Journal
and Virus Bulletin. She is a frequent speaker at such diverse conferences
as those sponsored by NSA/NIST/NCSC and DEFCON. Recently appointed to the
Wildlist Board of Directors, she is actively involved in the development
of anti-virus software test criteria and methods. She may be reached as
[email protected]