Whole Product Dynamic
“Real-World” Protection Test
March-June 2013
Language: English
July 2013
Last revision: 25th July 2013
www.av-comparatives.org
Content
Test Procedure ..........................................................4 
Settings....................................................................5 
Preparation for every testing day ...............................5 
Testing Cycle for each malicious URL..........................5 
Test Set....................................................................6 
Tested products.........................................................7 
Test Cases.................................................................7 
Results.....................................................................8 
Summary Results (March-June) ..................................9 
Award levels reached in this test................................12
Introduction
The threat posed by malicious software is growing day by day. Not only is the number of malware programs increasing; the very nature of the threats is also changing rapidly. The way in which harmful code gets onto computers is changing from simple file-based methods to distribution via the Internet. Malware increasingly infects PCs when users are deceived into visiting infected web pages, installing rogue/malicious software or opening emails with malicious attachments.
The scope of protection offered by antivirus programs is extended by the inclusion of e.g. URL-blockers, content filtering, anti-phishing measures and user-friendly behaviour-blockers. If these features are perfectly coordinated with the signature-based and heuristic detection, the protection provided against threats increases.
In spite of these new technologies, it remains very important that the signature-based and heuristic detection abilities of antivirus programs continue to be tested. It is precisely because of the new threats that signature/heuristic detection methods are becoming ever more important. The growing frequency of zero-day attacks means that there is an increasing risk of malware infection. If this is not intercepted by “conventional” or “non-conventional” methods, the computer will be compromised, and it is only by using an on-demand scan with signature and heuristic-based detection that the malware can be found (and hopefully removed). The additional protection technologies also offer no means of checking existing data stores for already-infected files, which can be found on the file servers of many companies. Those new security layers should be understood as an addition to good detection rates, not as a replacement.
In this test, all features of the product contribute to protection, not just one part (such as signature-based/heuristic file scanning). The protection provided should therefore be higher than when testing only parts of the product. We recommend that all components of a product provide a high level of protection, not only single ones (e.g. URL blocking protects only while browsing the web, but not against malware introduced by other means or already present on the system).
The Whole-Product Dynamic “Real-World” Protection test is a joint project of AV-Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It is partially funded by the Republic of Austria.
The methodology of our Real-World Protection Test has received the following awards and certifications:
• Constantinus 2013 – given by the Austrian government
• Cluster Award 2012 – given by the Standortagentur Tirol – Tyrolean government
• eAward 2012 – given by report.at (magazine for Computer Science) and the Office of the Federal Chancellor
Test Procedure
Testing dozens of antivirus products with hundreds of URLs each per day is a great deal of work, which cannot be done manually (as it would involve visiting thousands of websites in parallel), so it is necessary to use some sort of automation.
Lab Setup
Every security program to be tested is installed on its own test computer. All computers are connected to the Internet (details below). The system is updated and then frozen at a particular date, with the operating system and security program installed. The entire test is performed on real workstations; we do not use any kind of virtualization. Each workstation has its own Internet connection with its own external IP address. We have special agreements with several providers (failover clustering and no traffic blocking) to ensure a stable Internet connection for each PC, which is live during the tests. We have taken the necessary precautions (with specially configured firewalls etc.) not to harm other computers (i.e. not to cause outbreaks).
Hardware and Software
For this test we use identical workstations, a control and command server and network attached storage.
|                | Vendor     | Type          | CPU              | RAM   | Hard Disk      |
|----------------|------------|---------------|------------------|-------|----------------|
| Workstations   | Dell       | Optiplex 755  | Intel Core 2 Duo | 4 GB  | 80 GB SSD      |
| Control Server | Supermicro | Microcloud    | Intel Xeon E5    | 32 GB | 4 x 500 GB SSD |
| Storage        | Eurostor   | ES8700-Open-E | Dual Xeon        | 32 GB | 140 TB RAID 6  |
The tests are performed under Microsoft Windows 7 Professional SP1 64-Bit, with updates as at 4th March 2013. Some further installed vulnerable software includes:

| Vendor    | Product              | Version  |
|-----------|----------------------|----------|
| Adobe     | Flash Player ActiveX | 11.5     |
| Adobe     | Flash Player Plug-In | 11.5     |
| Adobe     | Acrobat Reader       | 11.0     |
| Apple     | QuickTime            | 7.7      |
| Microsoft | Internet Explorer    | 9.0      |
| Microsoft | Office Home Premium  | 2010     |
| Microsoft | .NET Framework       | 4.0      |
| Mozilla   | Firefox              | 18.0.1   |
| Oracle    | Java                 | 1.7.0.11 |
| VideoLAN  | VLC Media Player     | 2.0.5    |
The use of more up-to-date third-party software and an updated Windows 7 64-Bit made it much harder to find exploits in-the-field for the test. This should also remind users always to keep their systems and applications up-to-date, in order to minimize the risk of being infected through exploits which use unpatched software vulnerabilities.
Settings
We use every security suite with its default settings. Our Whole-Product Dynamic Protection Test aims to simulate real-world conditions as experienced every day by users. If user interactions are required, we always choose “Allow” or equivalent. If the product protects the system anyway, we count the malware as blocked, even though we allow the program to run when the user is asked to make a decision. If the system is compromised, we count it as user-dependent. We consider “protection” to mean that the system is not compromised. This means that the malware is not running (or is removed/terminated) and there are no significant/malicious system changes. An outbound-firewall alert about a running malware process, which asks whether or not to block traffic from the user’s workstation to the Internet, is too little, too late and is not considered by us to be protection.
Preparation for every testing day
Every morning, any available security software updates are downloaded and installed, and a new base image is made for that day. Before each test case is carried out, the products have some time to download and install newer updates which have just been released, as well as to load their protection modules (which in several cases takes some minutes). In the event that a major signature update for a product is made available during the day, but fails to download/install before each test case starts, the product will at least have the signatures that were available at the start of the day. This replicates the situation of an ordinary user in the real world.
Testing Cycle for each malicious URL
Before browsing to each new malicious URL we update the programs/signatures (as described above).
New major product versions (i.e. the first digit of the build number is different) are installed once at the
beginning of the month, which is why in each monthly report we only give the main product version
number. Our test software monitors the PC, so that any changes made by the malware will be recorded.
Furthermore, the recognition algorithms check whether the antivirus program detects the malware. After
each test case the machine is reset to its clean state.
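To make the cycle above concrete, the sketch below outlines one iteration of the automation loop in Python-style pseudocode. All objects and helper methods (update_signatures, browse, restore_clean_image, etc.) are hypothetical placeholders we use for illustration; they are not part of AV-Comparatives’ actual test framework.

```python
# Minimal sketch of one per-URL testing cycle, assuming hypothetical
# 'product' and 'workstation' helper objects (not the real automation).

def run_test_case(url, product, workstation):
    product.update_signatures()        # update programs/signatures before each case
    workstation.start_monitoring()     # record any changes made by the malware

    workstation.browse(url)            # open the malicious URL
    workstation.wait(minutes=5)        # give behaviour blockers time to react

    compromised = workstation.malicious_changes_found()
    if not compromised:
        outcome = "blocked"            # protected, at whatever stage protection kicked in
    elif product.asked_user_to_decide():
        outcome = "user-dependent"     # we answered "Allow"; the worst user decision compromises the system
    else:
        outcome = "compromised"

    workstation.restore_clean_image()  # reset the machine to its clean state
    return outcome
```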
Protection
Security products should protect the user’s PC. It is not very important at which stage the protection takes place. It could be while browsing to the website (e.g. protection through a URL blocker), while an exploit tries to run, while the file is being downloaded/created, or when the malware is executed (either by the exploit or by the user). After the malware is executed (if not blocked before), we wait several minutes for malicious actions and also to give e.g. behaviour-blockers time to react and remedy the actions performed by the malware. If the malware is not detected and the system is indeed infected/compromised, the case is rated as “System Compromised”. If a user interaction is required and it is up to the user to decide whether something is malicious, and the system would be compromised by the worst possible user decision, we rate the case as “user-dependent”. Because of this, the yellow bars in the results graph can be interpreted either as protected or not protected (it is up to each individual user to decide what he/she would probably do in that situation).
Due to the dynamic nature of the test, i.e. mimicking real-world conditions, and because of the way several different technologies (such as cloud scanners, reputation services, etc.) work, such tests cannot be repeated or replicated in the way that e.g. static detection rate tests can. Nevertheless, we log as much data as reasonably possible to support our findings and results. Vendors are invited to provide useful log functions in their products that can provide the additional data they want in the event of disputes. After each testing month, vendors were given the opportunity to dispute our conclusions about the compromised cases, so that we could recheck whether there were any problems in the automation or in our analysis of the results.
In the case of cloud products, we can only consider the results that the products achieved in our lab at the time of testing. Sometimes the cloud services provided by the security vendors are down due to faults or maintenance, but these cloud outages are often not disclosed to users by the vendors. This is also a reason why products relying too heavily on cloud services (and not making use of local heuristics, behaviour blockers, etc.) can be risky, as in such cases the protection they provide can decrease significantly. Cloud signatures/reputation should be implemented in products to complement the other local/offline protection features, not to replace them completely, as e.g. an unreachable cloud service would leave the PC exposed to higher risk.
Test Set
We aim to use visible and relevant malicious websites/malware that are currently out there and present a risk to ordinary users. We usually try to include about 50% URLs that point directly to malware executables; this causes the malware file to be downloaded, thus replicating a scenario in which the user is tricked by social engineering into following links in spam mails or websites, or installing some Trojan or other malicious software. The rest are drive-by exploits – these are usually well covered by almost all major security products, which may be one reason why the scores look relatively high.
We use our own crawling system to search continuously for malicious sites and extract malicious URLs (including spammed malicious links). We also search manually for malicious URLs. If our in-house crawler does not find enough valid malicious URLs on a given day, we have contracted external researchers who provide additional malicious URLs (initially for the exclusive use of AV-Comparatives) and look for additional (re)sources.
In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples is used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products¹. In fact, even in our tests (with thousands of test cases) we consider products in the same protection cluster to be more or less equally good – provided that they do not wrongly block clean files/sites more than the industry average.
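To illustrate why small test sets cannot reliably separate products, the short calculation below (our own illustration, not taken from the referenced paper) computes a normal-approximation 95% confidence interval for an observed protection rate at two test-set sizes.

```python
import math

def confidence_interval(rate, n, z=1.96):
    """Approximate 95% confidence interval for a protection rate observed on n test cases."""
    margin = z * math.sqrt(rate * (1 - rate) / n)
    return rate - margin, rate + margin

# With 100 cases, an observed 95% rate is compatible with anything from ~91% to ~99%;
# with ~2000 cases the interval narrows to roughly +/- 1 percentage point.
for n in (100, 2000):
    low, high = confidence_interval(0.95, n)
    print(f"n={n}: observed 95.0%, 95% CI ~ {low:.1%} - {high:.1%}")
```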
In total, 3,163 malicious test cases were tested, of which 1,191 exploits were ineffective due to
the patch level – those test cases were therefore not counted in the test.
¹ Read more in the following paper: http://www.av-comparatives.org/images/stories/test/statistics/somestats.pdf
Comments
Microsoft Security Essentials, which provides basic malware protection, can easily be installed from Windows Update, and is used as the basis of comparison for malware protection.

Microsoft Windows 7 includes a firewall and automatic updates, and warns users about executing files downloaded from the Internet. Furthermore, most modern browsers include pop-up blockers, phishing/URL filters, and warn users about downloading files from the Internet. These are just some of the built-in protection features, but despite all of them, systems can become infected anyway. The reason for this in most cases is the ordinary user, who may be tricked by social engineering into visiting malicious websites or installing malicious software. Users do not expect a security product to ask them whether they really want to execute a file etc.; they expect it to protect the system in any case, without them having to think about it, and regardless of what they do (e.g. executing unknown files).
Tested products
Many vendors make a simple antimalware program (usually called “Antivirus”) and a suite (normally
called “Internet Security”), whereby the latter includes some other security-related features such as the
vendor’s own firewall. Some vendors additionally produce a third type of suite with additional features
like backup, which are not directly related to security. For this test we normally use the Internet Security
suite, as any protection features that prevent the system from being compromised can be used. However,
a vendor can choose to enter their Antivirus product instead, if they prefer. The main product versions of
the products tested in each monthly test run are shown below:
| Vendor      | Product                     | Version March | Version April | Version May | Version June |
|-------------|-----------------------------|---------------|---------------|-------------|--------------|
| AhnLab      | V3 Internet Security        | 8.0           | 8.0           | 8.0         | 8.0          |
| Avast       | Free Antivirus              | 8.0           | 8.0           | 8.0         | 8.0          |
| AVG         | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| Avira       | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| Bitdefender | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| BullGuard   | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| Emsisoft    | Anti-Malware                | 7.0           | 7.0           | 7.0         | 7.0          |
| eScan       | Internet Security           | 14.0          | 14.0          | 14.0        | 14.0         |
| ESET        | Smart Security              | 6.0           | 6.0           | 6.0         | 6.0          |
| F-Secure    | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| Fortinet    | FortiClient Lite            | 5.0           | 5.0           | 5.0         | 5.0          |
| G DATA      | Internet Security           | 2013          | 2014          | 2014        | 2014         |
| Kaspersky   | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| McAfee      | Internet Security           | 2013          | 2013          | 2013        | 2013         |
| Microsoft   | Security Essentials         | 4.2           | 4.2           | 4.2         | 4.2          |
| Panda       | Cloud Free Antivirus        | 2.1.1         | 2.1.1         | 2.1.1       | 2.1.1        |
| Sophos      | Endpoint Security           | 10.2          | 10.2          | 10.2        | 10.2         |
| ThreatTrack | Vipre Internet Security     | 2013          | 2013          | 2013        | 2013         |
| Trend Micro | Titanium Internet Security  | 2013          | 2013          | 2013        | 2013         |
Products that are available only in Chinese language are included only in the Chinese report version on
our website (http://www.av-comparatives.org/dynamic-tests/).
Test Cases
| Test period             | Test cases |
|-------------------------|------------|
| 11th to 25th March 2013 | 422        |
| 3rd to 23rd April 2013  | 545        |
| 3rd to 28th May 2013    | 431        |
| 5th to 21st June 2013   | 574        |
| TOTAL                   | 1972       |
Results
Below you can see an overview of the individual testing months².
[Monthly results charts: March 2013 – 422 test cases; April 2013 – 545 test cases; May 2013 – 431 test cases; June 2013 – 574 test cases]
We purposely do not give exact numbers for the individual months in this report, to avoid the minor
differences of a few cases being misused to state that one product is better than the other in a given
month and with a given test-set size. We provide the total numbers in the overall reports, where the size
of the test-set is bigger, and differences that are more significant may be observed.
² Interested users who want to see the exact protection rates (without FP rates) every month can see the monthly updated interactive charts on our website: http://chart.av-comparatives.org/chart1.php
Summary Results (March-June)
Test period: March – June 2013 (1972 Test cases)
|                        | Blocked | User dependent | Compromised | Protection rate [Blocked % + (User dependent %)/2]³ | Cluster⁴ |
|------------------------|---------|----------------|-------------|------------------------------------------------------|----------|
| Bitdefender, Kaspersky | 1970    | -              | 2           | 99.9%                                                | 1        |
| F-Secure               | 1968    | 3              | 1           | 99.9%                                                | 1        |
| Trend Micro            | 1968    | -              | 4           | 99.8%                                                | 1        |
| Emsisoft               | 1954    | 14             | 4           | 99.4%                                                | 1        |
| McAfee                 | 1959    | -              | 13          | 99.3%                                                | 1        |
| eScan                  | 1956    | 5              | 11          | 99.3%                                                | 1        |
| Fortinet               | 1951    | -              | 21          | 98.9%                                                | 1        |
| Avast                  | 1946    | 12             | 14          | 98.9%                                                | 1        |
| G DATA                 | 1937    | 19             | 16          | 98.7%                                                | 1        |
| ESET                   | 1944    | 1              | 27          | 98.6%                                                | 1        |
| BullGuard              | 1922    | 16             | 34          | 97.9%                                                | 2        |
| AVIRA                  | 1914    | 15             | 43          | 97.4%                                                | 2        |
| Panda                  | 1920    | -              | 52          | 97.4%                                                | 2        |
| AVG                    | 1848    | 102            | 22          | 96.3%                                                | 2        |
| Vipre                  | 1897    | -              | 75          | 96.2%                                                | 2        |
| Sophos                 | 1896    | -              | 76          | 96.1%                                                | 2        |
| Microsoft              | 1825    | -              | 147         | 92.5%                                                | 3        |
| AhnLab                 | 1774    | -              | 198         | 90.0%                                                | 4        |
The graph below shows the overall protection rate (all samples), including the minimum and maximum
protection rates for the individual months.
³ User-dependent cases are given half credit. For example, if a program blocks 80% by itself, and another 20% of cases are user-dependent, we give half credit for the 20%, i.e. 10%, so it gets 90% altogether.
⁴ Hierarchical clustering method: defining four clusters using average linkage between groups (Euclidean distance) based on the protection rate (see dendrogram on page 12).
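As a worked example of the protection-rate formula in the table header (and of footnote 3), the snippet below recomputes the rate from the blocked / user-dependent / compromised counts. The figures are taken from the summary table above; small differences to the published percentages can occur due to rounding.

```python
def protection_rate(blocked, user_dependent, compromised):
    """Blocked % plus half credit for user-dependent cases (see footnote 3)."""
    total = blocked + user_dependent + compromised
    return (blocked + 0.5 * user_dependent) / total

# A few rows from the summary table (1972 test cases in total):
for name, b, u, c in [("F-Secure", 1968, 3, 1),
                      ("Emsisoft", 1954, 14, 4),
                      ("AVG", 1848, 102, 22)]:
    print(f"{name}: {protection_rate(b, u, c):.1%}")   # 99.9%, 99.4%, 96.3%
```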
Whole-Product “False Alarm” Test (wrongly blocked domains/files)
The false-alarm test in the Whole-Product Dynamic “Real-World” Protection Test consists of two parts: wrongly blocked domains (while browsing) and wrongly blocked files (while downloading/installing). It is necessary to test both scenarios, because testing only one of the two could penalize products which focus mainly on one type of protection method, either URL filtering or on-access/behaviour/reputation-based file protection.
a) Wrongly blocked domains (while browsing)
We used around one thousand randomly chosen popular domains. Blocked non-malicious domains/URLs
were counted as false positives (FPs). The wrongly blocked domains have been reported to the respective
vendors for review and should now no longer be blocked.
By blocking whole domains, security products not only risk causing a loss of trust in their warnings, but may also cause financial damage (besides the damage to website reputation) to the domain owners, including loss of e.g. advertising revenue. We therefore strongly recommend that vendors block whole domains only where the domain’s sole purpose is to carry/deliver malicious code, and otherwise block just the malicious pages (as long as they are indeed malicious). Products which tend to block URLs based e.g. on reputation may be more prone to this, and may also score higher in protection tests, as they may block many unpopular/new websites.
b) Wrongly blocked files (while downloading/installing)
We used about one hundred different applications listed either as top downloads or as new/recommended
downloads from about a dozen different popular download portals. The applications were downloaded
from the original software developers’ websites (instead of the download portal host), saved to disk and
installed to see if they are blocked at any stage of this procedure. Additionally, we included a few clean
files that were encountered and disputed over the past months of the Real-World Protection Test.
The duty of security products is to protect against malicious sites/files, not to censor or limit access only to well-known popular applications and websites. If the user deliberately chooses a high security setting, which warns that it may block some legitimate sites or files, then this may be considered acceptable. However, we do not regard it as acceptable as a default setting, where the user has not been warned. As the test is done at points in time, and FPs on very popular software/websites are usually noticed and fixed within a few hours, it would be surprising to encounter FPs with very popular applications. Because of this, FP tests which are done e.g. only with very popular applications, or which use only the top 50 files from whitelisted/monitored download portals, would be a waste of time and resources. Users do not care whether they are infected by malware that affects only them, just as they do not care whether an FP affects only them. While it is preferable that FPs do not affect many users, the goal should be to avoid any FPs and to protect against any malicious files, no matter how many users are affected or targeted. The prevalence of FPs based on user-base data is of interest for the internal QA testing of AV vendors, but for ordinary users it is important to know how accurately their product distinguishes between clean and malicious files.
The below table shows the numbers of wrongly blocked domains/files:

|                                      | Wrongly blocked clean domains/files (blocked / user-dependent⁵) | Wrongly blocked score⁶ |
|--------------------------------------|-----------------------------------------------------------------|------------------------|
| AhnLab, AVG, ESET, Microsoft         | - / - (-)                                                       | 0                      |
| Kaspersky, Sophos                    | 1 / - (1)                                                       | 1                      |
| G DATA                               | 1 / 1 (2)                                                       | 1.5                    |
| AVIRA                                | 3 / - (3)                                                       | 3                      |
| Avast                                | - / 10 (10)                                                     | 5                      |
| Emsisoft                             | 5 / - (5)                                                       | 5                      |
| BullGuard, eScan, Panda, Trend Micro | 6 / - (6)                                                       | 6                      |
|                                      | average (8)                                                     | average (7)            |
| Vipre                                | 17 / - (17)                                                     | 17                     |
| Bitdefender                          | 24 / - (24)                                                     | 24                     |
| F-Secure                             | 22 / 13 (35)                                                    | 28.5                   |
| Fortinet, McAfee                     | 31 / - (31)                                                     | 31                     |
To determine which products have to be downgraded in our award scheme due to the rate of wrongly blocked sites/files, we backed up our decision by using statistical methods and by looking at the average scores. The following products with above-average FPs have been downgraded: Bitdefender, Fortinet, F-Secure, McAfee and Vipre.
As an experiment, we looked at which products block URLs or the download/execution of files just because they are unknown (i.e. block also completely innocent files) and/or have no reputation information. When downloading a new self-written innocent/clean program from one of our websites, Avast gave a warning about the unknown reputation of the file (user-dependent). F-Secure, McAfee and Trend Micro wrongly blocked the download/URL completely and handled the file/URL as if it were malicious. Although such protection techniques based on metadata, including e.g. reputation and whitelists, help to produce high scores in protection tests, it has to be kept in mind that this may also lead to detection/blocking of innocent/clean files and URLs just because they are unknown to the cloud and/or because the hosted files are too new.
⁵ Although user-dependent cases are extremely annoying (esp. on clean files) for the user, they were counted only as half for the “wrongly blocked rate” (like for the protection rate).
⁶ Lower is better.
Illustration of how awards were given
The dendrogram (using average linkage between groups) shows the results of the hierarchical cluster analysis. It indicates at what level of similarity the clusters are joined. The red dashed line defines the level of similarity; each intersection with it indicates a group (in this case 4 groups). Products that had above-average FPs (wrongly blocked score) are marked in red (and downgraded according to the ranking system below).
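The clustering described here (and in footnote 4) can be reproduced approximately with standard tooling. The sketch below uses SciPy’s hierarchical clustering with average linkage and Euclidean distance on the overall protection rates; it is our illustration of the method, not the script used for the report, and the numeric cluster labels it prints will not necessarily match the report’s 1–4 ordering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Overall protection rates from the summary table (page 9).
products = ["Bitdefender", "Kaspersky", "F-Secure", "Trend Micro", "Emsisoft",
            "McAfee", "eScan", "Fortinet", "Avast", "G DATA", "ESET",
            "BullGuard", "AVIRA", "Panda", "AVG", "Vipre", "Sophos",
            "Microsoft", "AhnLab"]
rates = np.array([99.9, 99.9, 99.9, 99.8, 99.4, 99.3, 99.3, 98.9, 98.9, 98.7,
                  98.6, 97.9, 97.4, 97.4, 96.3, 96.2, 96.1, 92.5, 90.0])

# Average linkage between groups, Euclidean distance, cut into four clusters.
Z = linkage(rates.reshape(-1, 1), method="average", metric="euclidean")
clusters = fcluster(Z, t=4, criterion="maxclust")

for product, cluster in sorted(zip(products, clusters), key=lambda pc: pc[1]):
    print(f"cluster {cluster}: {product}")
```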
Ranking system
|                   | Protection score Cluster⁷ 4 | Protection score Cluster 3 | Protection score Cluster 2 | Protection score Cluster 1 |
|-------------------|-----------------------------|----------------------------|----------------------------|----------------------------|
| Below-average FPs | Tested                      | Standard                   | Advanced                   | Advanced+                  |
| Above-average FPs | Tested                      | Tested                     | Standard                   | Advanced                   |
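Combined with the wrongly-blocked scores, the ranking table above reduces to a simple lookup. The following sketch (our illustration, not an official AV-Comparatives tool) maps a product’s protection cluster and its FP level to the award:

```python
# Award level as a function of protection cluster (1 = best) and FP behaviour,
# following the ranking-system table above.
AWARDS = {
    # cluster: (award with below-average FPs, award with above-average FPs)
    1: ("ADVANCED+", "ADVANCED"),
    2: ("ADVANCED", "STANDARD"),
    3: ("STANDARD", "TESTED"),
    4: ("TESTED", "TESTED"),
}

def award(cluster: int, above_average_fps: bool) -> str:
    below, above = AWARDS[cluster]
    return above if above_average_fps else below

# Examples consistent with this report:
print(award(1, above_average_fps=False))  # e.g. Kaspersky   -> ADVANCED+
print(award(1, above_average_fps=True))   # e.g. Bitdefender -> ADVANCED
print(award(2, above_average_fps=True))   # e.g. Vipre       -> STANDARD
```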
The graph on page 9 shows the test results compared to the average “out-of-box” malware protection in Microsoft Windows (red line). In Windows 8, this is provided by Windows Defender, which is pre-installed by default with the operating system. The equivalent in Windows 7 is Microsoft Security Essentials, which is not pre-installed, but can easily be added for free as an option via the Windows Update service.
⁷ See protection score clusters on page 9.
Award levels reached in this test
The awards are decided and given by the testers based on the observed test results (after consulting statistical models). The following awards⁸ are for the results reached in the spring 2013 Whole-Product Dynamic “Real-World” Protection Test:
| AWARD LEVELS | PRODUCTS                                                                           |
|--------------|------------------------------------------------------------------------------------|
| ADVANCED+    | Kaspersky, Trend Micro, Emsisoft, eScan, Avast, G DATA, ESET                        |
| ADVANCED     | Bitdefender*, F-Secure*, McAfee*, Fortinet*, BullGuard, AVIRA, Panda, AVG, Sophos   |
| STANDARD     | Vipre*                                                                               |
| TESTED       | AhnLab                                                                               |
* downgraded by one rank due to the score of wrongly blocked sites/files (FPs); see page 12
Expert users who do not care about wrongly blocked files/websites (false alarms) are free to rely on the protection rates on page 9 instead of our awards ranking, which takes FPs into consideration.
⁸ Microsoft security products are not included in the awards page, as their out-of-box protection is (optionally) included in the operating system and is therefore out-of-competition.
Copyright and Disclaimer
This publication is Copyright © 2013 by AV-Comparatives e.V. ®. Any use of the results, etc. in whole or in part, is ONLY permitted with the explicit written agreement of the management board of AV-Comparatives e.V., prior to any publication. AV-Comparatives e.V. and its testers cannot be held liable for any damage or loss which might occur as a result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but liability for the correctness of the test results cannot be taken by any representative of AV-Comparatives e.V. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No-one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use (or inability to use), the services provided by the website, test documents or any related data. AV-Comparatives e.V. is a registered Austrian Non-Profit-Organization.
For more information about AV-Comparatives and the testing methodologies please visit our website.
AV-Comparatives e.V. (July 2013)

Mais conteúdo relacionado

Mais procurados

5 howtomitigate
5 howtomitigate5 howtomitigate
5 howtomitigatericharddxd
 
White Paper - Are antivirus solutions enough to protect industrial plants?
White Paper - Are antivirus solutions enough to protect industrial plants?White Paper - Are antivirus solutions enough to protect industrial plants?
White Paper - Are antivirus solutions enough to protect industrial plants?TI Safe
 
Patch Management - 2013
Patch Management - 2013Patch Management - 2013
Patch Management - 2013Vicky Ames
 
Cq3210191021
Cq3210191021Cq3210191021
Cq3210191021IJMER
 
Sa No Scan Paper
Sa No Scan PaperSa No Scan Paper
Sa No Scan Papertafinley
 
Accuracy and time_costs_of_web_app_scanners
Accuracy and time_costs_of_web_app_scannersAccuracy and time_costs_of_web_app_scanners
Accuracy and time_costs_of_web_app_scannersLarry Suto
 
IRJET- A Study on Penetration Testing using Metasploit Framework
IRJET- A Study on Penetration Testing using Metasploit FrameworkIRJET- A Study on Penetration Testing using Metasploit Framework
IRJET- A Study on Penetration Testing using Metasploit FrameworkIRJET Journal
 
AV-Comparatives’ 2017 business software review
AV-Comparatives’ 2017 business software reviewAV-Comparatives’ 2017 business software review
AV-Comparatives’ 2017 business software reviewJermund Ottermo
 
GrrCon 2014: Security On the Cheap
GrrCon 2014: Security On the CheapGrrCon 2014: Security On the Cheap
GrrCon 2014: Security On the CheapJoel Cardella
 
Alien vault _policymanagement
Alien vault _policymanagementAlien vault _policymanagement
Alien vault _policymanagementMarjo'isme Yoyok
 
Better Security Testing: Using the Cloud and Continuous Delivery
Better Security Testing: Using the Cloud and Continuous DeliveryBetter Security Testing: Using the Cloud and Continuous Delivery
Better Security Testing: Using the Cloud and Continuous DeliveryGene Gotimer
 
Mitigating Rapid Cyberattacks
Mitigating Rapid CyberattacksMitigating Rapid Cyberattacks
Mitigating Rapid CyberattacksErdem Erdogan
 
Open-Source Security Management and Vulnerability Impact Assessment
Open-Source Security Management and Vulnerability Impact AssessmentOpen-Source Security Management and Vulnerability Impact Assessment
Open-Source Security Management and Vulnerability Impact AssessmentPriyanka Aash
 

Mais procurados (19)

5 howtomitigate
5 howtomitigate5 howtomitigate
5 howtomitigate
 
Avc beh 201207_en
Avc beh 201207_enAvc beh 201207_en
Avc beh 201207_en
 
White Paper - Are antivirus solutions enough to protect industrial plants?
White Paper - Are antivirus solutions enough to protect industrial plants?White Paper - Are antivirus solutions enough to protect industrial plants?
White Paper - Are antivirus solutions enough to protect industrial plants?
 
Patch Management - 2013
Patch Management - 2013Patch Management - 2013
Patch Management - 2013
 
Avc fd mar2012_intl_en
Avc fd mar2012_intl_enAvc fd mar2012_intl_en
Avc fd mar2012_intl_en
 
Avc fd mar2012_intl_en
Avc fd mar2012_intl_enAvc fd mar2012_intl_en
Avc fd mar2012_intl_en
 
Cq3210191021
Cq3210191021Cq3210191021
Cq3210191021
 
Avc Report25
Avc Report25Avc Report25
Avc Report25
 
Sa No Scan Paper
Sa No Scan PaperSa No Scan Paper
Sa No Scan Paper
 
Accuracy and time_costs_of_web_app_scanners
Accuracy and time_costs_of_web_app_scannersAccuracy and time_costs_of_web_app_scanners
Accuracy and time_costs_of_web_app_scanners
 
Vulnerability and Patch Management
Vulnerability and Patch ManagementVulnerability and Patch Management
Vulnerability and Patch Management
 
Patch management
Patch managementPatch management
Patch management
 
IRJET- A Study on Penetration Testing using Metasploit Framework
IRJET- A Study on Penetration Testing using Metasploit FrameworkIRJET- A Study on Penetration Testing using Metasploit Framework
IRJET- A Study on Penetration Testing using Metasploit Framework
 
AV-Comparatives’ 2017 business software review
AV-Comparatives’ 2017 business software reviewAV-Comparatives’ 2017 business software review
AV-Comparatives’ 2017 business software review
 
GrrCon 2014: Security On the Cheap
GrrCon 2014: Security On the CheapGrrCon 2014: Security On the Cheap
GrrCon 2014: Security On the Cheap
 
Alien vault _policymanagement
Alien vault _policymanagementAlien vault _policymanagement
Alien vault _policymanagement
 
Better Security Testing: Using the Cloud and Continuous Delivery
Better Security Testing: Using the Cloud and Continuous DeliveryBetter Security Testing: Using the Cloud and Continuous Delivery
Better Security Testing: Using the Cloud and Continuous Delivery
 
Mitigating Rapid Cyberattacks
Mitigating Rapid CyberattacksMitigating Rapid Cyberattacks
Mitigating Rapid Cyberattacks
 
Open-Source Security Management and Vulnerability Impact Assessment
Open-Source Security Management and Vulnerability Impact AssessmentOpen-Source Security Management and Vulnerability Impact Assessment
Open-Source Security Management and Vulnerability Impact Assessment
 

Destaque (17)

Android pcsl 201208_en
Android pcsl 201208_enAndroid pcsl 201208_en
Android pcsl 201208_en
 
Avcomparatives Survey 2011
Avcomparatives Survey 2011Avcomparatives Survey 2011
Avcomparatives Survey 2011
 
Hii assessing the_effectiveness_of_antivirus_solutions
Hii assessing the_effectiveness_of_antivirus_solutionsHii assessing the_effectiveness_of_antivirus_solutions
Hii assessing the_effectiveness_of_antivirus_solutions
 
Kis2010 ru
Kis2010 ruKis2010 ru
Kis2010 ru
 
Drweb curenet-manual-ru
Drweb curenet-manual-ruDrweb curenet-manual-ru
Drweb curenet-manual-ru
 
Avc per 201210_en
Avc per 201210_enAvc per 201210_en
Avc per 201210_en
 
Summary2010
Summary2010Summary2010
Summary2010
 
Security survey2013 en
Security survey2013 enSecurity survey2013 en
Security survey2013 en
 
Avc fdt 201209_en
Avc fdt 201209_enAvc fdt 201209_en
Avc fdt 201209_en
 
Dtl 2013 q1_home.1
Dtl 2013 q1_home.1Dtl 2013 q1_home.1
Dtl 2013 q1_home.1
 
Zero days-hit-users-hard-at-the-start-of-the-year-en
Zero days-hit-users-hard-at-the-start-of-the-year-enZero days-hit-users-hard-at-the-start-of-the-year-en
Zero days-hit-users-hard-at-the-start-of-the-year-en
 
Dtl 2012 q4_home.1
Dtl 2012 q4_home.1Dtl 2012 q4_home.1
Dtl 2012 q4_home.1
 
Rpt repeating-history
Rpt repeating-historyRpt repeating-history
Rpt repeating-history
 
Avc per 201206_en
Avc per 201206_enAvc per 201206_en
Avc per 201206_en
 
Online payments-threats-2
Online payments-threats-2Online payments-threats-2
Online payments-threats-2
 
Avc per 201304_en
Avc per 201304_enAvc per 201304_en
Avc per 201304_en
 
Ncr mobile europe_russia
Ncr mobile europe_russiaNcr mobile europe_russia
Ncr mobile europe_russia
 

Semelhante a Avc prot 2013a_en

AV Comparatives 2013 (Comparación de Antivirus)
AV Comparatives 2013 (Comparación de Antivirus)AV Comparatives 2013 (Comparación de Antivirus)
AV Comparatives 2013 (Comparación de Antivirus)Doryan Mathos
 
ANTIVIRUS
ANTIVIRUSANTIVIRUS
ANTIVIRUSfauscha
 
AV-Comparatives Performance Test
AV-Comparatives Performance TestAV-Comparatives Performance Test
AV-Comparatives Performance TestHerbert Rodriguez
 
AV-Comparatives Real World Protection Test
AV-Comparatives Real World Protection TestAV-Comparatives Real World Protection Test
AV-Comparatives Real World Protection TestHerbert Rodriguez
 
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...Jose Lopez
 
From Code to Customer: How to Make Software Products Secure
From Code to Customer: How to Make Software Products SecureFrom Code to Customer: How to Make Software Products Secure
From Code to Customer: How to Make Software Products SecureKaspersky
 
Outpost Anti-Malware 7.5
Outpost Anti-Malware 7.5Outpost Anti-Malware 7.5
Outpost Anti-Malware 7.5Lubov Putsko
 
Automated Regression Testing for Embedded Systems in Action
Automated Regression Testing for Embedded Systems in ActionAutomated Regression Testing for Embedded Systems in Action
Automated Regression Testing for Embedded Systems in ActionAANDTech
 
Outpost Security Pro 7.5: What's Inside?
Outpost Security Pro 7.5: What's Inside?Outpost Security Pro 7.5: What's Inside?
Outpost Security Pro 7.5: What's Inside?Lubov Putsko
 
Penetration testing dont just leave it to chance
Penetration testing dont just leave it to chancePenetration testing dont just leave it to chance
Penetration testing dont just leave it to chanceDr. Anish Cheriyan (PhD)
 
Configuration testing
Configuration testingConfiguration testing
Configuration testingfarouq umar
 
Window Desktop Application Testing
Window Desktop Application TestingWindow Desktop Application Testing
Window Desktop Application TestingTrupti Jethva
 
Astaro Orange Paper Oss Myths Dispelled
Astaro Orange Paper Oss Myths DispelledAstaro Orange Paper Oss Myths Dispelled
Astaro Orange Paper Oss Myths Dispelledlosalamos
 
Desktop applicationtesting
Desktop applicationtestingDesktop applicationtesting
Desktop applicationtestingAkss004
 

Semelhante a Avc prot 2013a_en (20)

AV Comparatives 2013 (Comparación de Antivirus)
AV Comparatives 2013 (Comparación de Antivirus)AV Comparatives 2013 (Comparación de Antivirus)
AV Comparatives 2013 (Comparación de Antivirus)
 
ANTIVIRUS
ANTIVIRUSANTIVIRUS
ANTIVIRUS
 
AV-Comparatives Performance Test
AV-Comparatives Performance TestAV-Comparatives Performance Test
AV-Comparatives Performance Test
 
Performance dec 2010
Performance dec 2010Performance dec 2010
Performance dec 2010
 
AV-Comparatives Real World Protection Test
AV-Comparatives Real World Protection TestAV-Comparatives Real World Protection Test
AV-Comparatives Real World Protection Test
 
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...
Is Using Off-the-shelf Antimalware Product to Secure Your Medical Device a Go...
 
From Code to Customer: How to Make Software Products Secure
From Code to Customer: How to Make Software Products SecureFrom Code to Customer: How to Make Software Products Secure
From Code to Customer: How to Make Software Products Secure
 
Outpost Anti-Malware 7.5
Outpost Anti-Malware 7.5Outpost Anti-Malware 7.5
Outpost Anti-Malware 7.5
 
smpef
smpefsmpef
smpef
 
Automated Regression Testing for Embedded Systems in Action
Automated Regression Testing for Embedded Systems in ActionAutomated Regression Testing for Embedded Systems in Action
Automated Regression Testing for Embedded Systems in Action
 
Outpost Security Pro 7.5: What's Inside?
Outpost Security Pro 7.5: What's Inside?Outpost Security Pro 7.5: What's Inside?
Outpost Security Pro 7.5: What's Inside?
 
Mac review 2012_en
Mac review 2012_enMac review 2012_en
Mac review 2012_en
 
Penetration testing dont just leave it to chance
Penetration testing dont just leave it to chancePenetration testing dont just leave it to chance
Penetration testing dont just leave it to chance
 
Configuration testing
Configuration testingConfiguration testing
Configuration testing
 
Window Desktop Application Testing
Window Desktop Application TestingWindow Desktop Application Testing
Window Desktop Application Testing
 
Securing the Cloud
Securing the CloudSecuring the Cloud
Securing the Cloud
 
It kamus virus security glossary
It kamus virus   security glossaryIt kamus virus   security glossary
It kamus virus security glossary
 
Astaro Orange Paper Oss Myths Dispelled
Astaro Orange Paper Oss Myths DispelledAstaro Orange Paper Oss Myths Dispelled
Astaro Orange Paper Oss Myths Dispelled
 
Test plan
Test planTest plan
Test plan
 
Desktop applicationtesting
Desktop applicationtestingDesktop applicationtesting
Desktop applicationtesting
 

Último

Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesThousandEyes
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Alkin Tezuysal
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI AgeCprime
 

Último (20)

Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 

Avc prot 2013a_en

  • 1. Whole Product Dynamic “Real‐World” Protection Test – (March‐June 2013)  www.av‐comparatives.org  ‐ 1 ‐  Whole Product Dynamic “Real-World” Protection Test March-June 2013 Language: English July 2013 Last revision: 25th July 2013 www.av-comparatives.org
  • 2. Whole Product Dynamic “Real‐World” Protection Test – (March‐June 2013)  www.av‐comparatives.org  ‐ 2 ‐  Content Test Procedure ..........................................................4  Settings....................................................................5  Preparation for every testing day ...............................5  Testing Cycle for each malicious URL..........................5  Test Set....................................................................6  Tested products.........................................................7  Test Cases.................................................................7  Results.....................................................................8  Summary Results (March-June) ..................................9  Award levels reached in this test................................12
  • 3. Whole Product Dynamic “Real‐World” Protection Test – (March‐June 2013)  www.av‐comparatives.org  ‐ 3 ‐  Introduction The threat posed by malicious software is growing day by day. Not only is the number of malware pro- grams increasing, also the very nature of the threats is changing rapidly. The way in which harmful code gets onto computers is changing from simple file-based methods to distribution via the Internet. Mal- ware is increasingly infecting PCs through e.g. users deceived into visiting infected web pages, installing rogue/malicious software or opening emails with malicious attachments. The scope of protection offered by antivirus programs is extended by the inclusion of e.g. URL-blockers, content filtering, anti-phishing measures and user-friendly behaviour-blockers. If these features are per- fectly coordinated with the signature-based and heuristic detection, the protection provided against threats increases. In spite of these new technologies, it remains very important that the signature-based and heuristic detection abilities of antivirus programs continue to be tested. It is precisely because of the new threats that signature/heuristic detection methods are becoming ever-more important too. The growing frequen- cy of zero-day attacks means that there is an increasing risk of malware infection. If this is not inter- cepted by “conventional” or “non-conventional” methods, the computer will be compromised, and it is only by using an on-demand scan with signature and heuristic-based detection that the malware can be found (and hopefully removed). The additional protection technologies also offer no means of checking existing data stores for already-infected files, which can be found on the file servers of many companies. Those new security layers should be understood as an addition to good detection rates, not as replace- ment. In this test all features of the product contribute protection, not only one part (like signatures/ heuristic file scanning). So the protection provided should be higher than in testing only parts of the product. We would recommend that all parts of a product should be high in protection, not only single components (e.g. URL blocking protects only while browsing the web, but not against malware introduced by other means or already present on the system). The Whole-Product Dynamic “Real-World” Protection test is a joint project of AV- Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It is partially funded by the Republic of Austria. The methodology of our Real-World Protection Test has received the following awards and certifications:  Constantinus 2013 – given by the Austrian government  Cluster Award 2012 – given by the Standortagentur Tirol – Tyrolean government  eAward 2012 – given by report.at (magazine for Computer Science) and the Office of the Federal Chancellor
  • 4. Whole Product Dynamic “Real‐World” Protection Test – (March‐June 2013)  www.av‐comparatives.org  ‐ 4 ‐  Test Procedure Testing dozens of antivirus products with hundreds of URLs each per day is a great deal of work, which cannot be done manually (as it would involve visiting thousands of websites in parallel), so it is neces- sary to use some sort of automation. Lab Setup Every security program to be tested is installed on its own test computer. All computers are connected to the Internet (details below). The system is updated and then frozen at a particular date, with the operat- ing system and security program installed. The entire test is performed on real workstations; we do not use any kind of virtualization. Each workstation has its own Internet connection with its own external IP address. We have special agreements with several providers (failover clustering and no traffic blocking) to ensure a stable Internet connection for each PC, which is live during the tests. We have taken the necessary precautions (with specially configured firewalls etc.) not to harm other computers (i.e. not to cause outbreaks). Hardware and Software For this test we use identical workstations, a control and command server and network attached storage. Vendor Type CPU RAM Hard Disk Workstations Dell Optiplex 755 Intel Core 2 Duo 4 GB 80 GB SSD Control Server Supermicro Microcloud Intel Xeon E5 32 GB 4 x 500 GB SSD Storage Eurostor ES8700-Open-E Dual Xeon 32 GB 140 TB Raid 6 The tests are performed under Microsoft Windows 7 Professional SP1 64-Bit, with updates as at 4th March 2013. Some further installed vulnerable software includes: Vendor Product Version Vendor Product Version Adobe Flash Player ActiveX 11.5 Microsoft Office Home Premium 2010 Adobe Flash Player Plug-In 11.5 Microsoft .NET Framework 4.0 Adobe Acrobat Reader 11.0 Mozilla Firefox 18.0.1 Apple QuickTime 7.7 Oracle Java 1.7.0.11 Microsoft Internet Explorer 9.0 VideoLAN VLC Media Player 2.0.5 The use of more up-to-date third-party software and an updated Windows 7 64-Bit made it much harder to find exploits in-the-field for the test. This should also remind users always to keep their systems and applications up-to-date, in order to minimize the risk of being infected through exploits which use un- patched software vulnerabilities.
  • 5. Whole Product Dynamic “Real‐World” Protection Test – (March‐June 2013)  www.av‐comparatives.org  ‐ 5 ‐  Settings We use every security suite with its default settings. Our Whole-Product Dynamic Protection Test aims to simulate real-world conditions as experienced every day by users. If user interactions are required, we always choose “Allow” or equivalent. If the product protects the system anyway, we count the malware as blocked, even though we allow the program to run when the user is asked to make a decision. If the system is compromised, we count it as user-dependent. We consider “protection” to mean that the sys- tem is not compromised. This means that the malware is not running (or is removed/terminated) and there are no significant/malicious system changes. An outbound-firewall alert about a running malware process, which asks whether or not to block traffic form the users’ workstation to the Internet, is too little, too late and not considered by us to be protection. Preparation for every testing day Every morning, any available security software updates are downloaded and installed, and a new base image is made for that day. Before each test case is carried out, the products have some time to down- load and install newer updates which have just been released, as well as to load their protection modules (which in several cases takes some minutes). In the event that a major signature update for a product is made available during the day, but fails to download/install before each test case starts, the product will at least have the signatures that were available at the start of the day. This replicates the situation of an ordinary user in the real world. Testing Cycle for each malicious URL Before browsing to each new malicious URL we update the programs/signatures (as described above). New major product versions (i.e. the first digit of the build number is different) are installed once at the beginning of the month, which is why in each monthly report we only give the main product version number. Our test software monitors the PC, so that any changes made by the malware will be recorded. Furthermore, the recognition algorithms check whether the antivirus program detects the malware. After each test case the machine is reset to its clean state. Protection Security products should protect the user’s PC. It is not very important at which stage the protection takes place. It could be while browsing to the website (e.g. protection through URL Blocker), while an exploit tries to run, while the file is being downloaded/created or when the malware is executed (either by the exploit or by the user). After the malware is executed (if not blocked before), we wait several minutes for malicious actions and also to give e.g. behaviour-blockers time to react and remedy actions performed by the malware. If the malware is not detected and the system is indeed infect- ed/compromised, the process goes to “System Compromised”. If a user interaction is required and it is up to the user to decide if something is malicious, and in the case of the worst user decision the system gets compromised, we rate this as “user-dependent”. Because of this, the yellow bars in the results graph can be interpreted either as protected or not protected (it’s up to each individual user to decide what he/she would probably do in that situation).
Due to the dynamic nature of the test, i.e. mimicking real-world conditions, and because of the way several different technologies (such as cloud scanners, reputation services, etc.) work, such tests cannot be repeated or replicated in the way that e.g. static detection-rate tests can. Nevertheless, we log as much data as reasonably possible to support our findings and results. Vendors are invited to provide useful log functions in their products that can supply the additional data they want in the event of disputes. After each testing month, vendors were given the opportunity to dispute our conclusions about the compromised cases, so that we could recheck whether there were any problems in the automation or in our analysis of the results.

In the case of cloud products, we can only consider the results that the products achieved in our lab at the time of testing. Sometimes the cloud services provided by the security vendors are down due to faults or maintenance, but these cloud downtimes are often not disclosed to users by the vendors. This is also a reason why products relying too heavily on cloud services (and not making use of local heuristics, behaviour blockers, etc.) can be risky: in such cases the protection provided by the product can decrease significantly. Cloud signatures/reputation should be implemented in products to complement the other local/offline protection features, not to replace them completely, as unreachable cloud services would leave PCs exposed to higher risk.

Test Set

We aim to use visible and relevant malicious websites/malware that are currently out there and present a risk to ordinary users. We usually try to include about 50% URLs that point directly to malware executables; this causes the malware file to be downloaded, replicating a scenario in which the user is tricked by social engineering into following links in spam mails or websites, or into installing some Trojan or other malicious software. The rest are drive-by exploits; these are usually well covered by almost all major security products, which may be one reason why the scores look relatively high.

We use our own crawling system to search continuously for malicious sites and extract malicious URLs (including spammed malicious links). We also search manually for malicious URLs. If our in-house crawler does not find enough valid malicious URLs on one day, we have contracted some external researchers to provide additional malicious URLs (initially for the exclusive use of AV-Comparatives) and to look for additional (re)sources.

In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples is used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products¹. In fact, even in our tests (with thousands of test cases) we consider products in the same protection cluster to be more or less equally good, provided that they do not wrongly block clean files/sites more than the industry average.

In total, 3,163 malicious test cases were tested, of which 1,191 were exploits that were ineffective due to the patch level of the test systems; those test cases were therefore not counted in the test.

¹ Read more in the following paper: http://www.av-comparatives.org/images/stories/test/statistics/somestats.pdf
Comments

Microsoft Security Essentials, which provides basic malware protection, can easily be installed from Windows Update, and is used as the basis of comparison for malware protection.

Microsoft Windows 7 includes a firewall and automatic updates, and warns users about executing files downloaded from the Internet. Furthermore, most modern browsers include pop-up blockers and phishing/URL filters, and warn users about downloading files from the Internet. These are just some of the built-in protection features, but despite all of them, systems can become infected anyway. The reason for this in most cases is the ordinary user, who may be tricked by social engineering into visiting malicious websites or installing malicious software. Users expect a security product not to ask them whether they really want to execute a file etc., but to protect the system in any case, without them having to think about it and regardless of what they do (e.g. executing unknown files).

Tested products

Many vendors make a simple anti-malware program (usually called "Antivirus") and a suite (normally called "Internet Security"), whereby the latter includes some other security-related features such as the vendor's own firewall. Some vendors additionally produce a third type of suite with extra features like backup, which are not directly related to security. For this test we normally use the Internet Security suite, as any protection feature that prevents the system from being compromised may be used. However, a vendor can choose to enter its Antivirus product instead, if it prefers. The main versions of the products tested in each monthly test run are shown below:

Vendor        Product                      March   April   May     June
AhnLab        V3 Internet Security         8.0     8.0     8.0     8.0
Avast         Free Antivirus               8.0     8.0     8.0     8.0
AVG           Internet Security            2013    2013    2013    2013
Avira         Internet Security            2013    2013    2013    2013
Bitdefender   Internet Security            2013    2013    2013    2013
BullGuard     Internet Security            2013    2013    2013    2013
Emsisoft      Anti-Malware                 7.0     7.0     7.0     7.0
eScan         Internet Security            14.0    14.0    14.0    14.0
ESET          Smart Security               6.0     6.0     6.0     6.0
F-Secure      Internet Security            2013    2013    2013    2013
Fortinet      FortiClient Lite             5.0     5.0     5.0     5.0
G DATA        Internet Security            2013    2014    2014    2014
Kaspersky     Internet Security            2013    2013    2013    2013
McAfee        Internet Security            2013    2013    2013    2013
Microsoft     Security Essentials          4.2     4.2     4.2     4.2
Panda         Cloud Free Antivirus         2.1.1   2.1.1   2.1.1   2.1.1
Sophos        Endpoint Security            10.2    10.2    10.2    10.2
ThreatTrack   Vipre Internet Security      2013    2013    2013    2013
Trend Micro   Titanium Internet Security   2013    2013    2013    2013

Products that are available only in Chinese are included only in the Chinese version of the report on our website (http://www.av-comparatives.org/dynamic-tests/).
Test Cases

Test period                   Test cases
11th to 25th March 2013       422
3rd to 23rd April 2013        545
3rd to 28th May 2013          431
5th to 21st June 2013         574
TOTAL                         1972

Results

Below you can see an overview of the individual testing months²:

March 2013 – 422 test cases
April 2013 – 545 test cases
May 2013 – 431 test cases
June 2013 – 574 test cases

We purposely do not give exact numbers for the individual months in this report, to prevent minor differences of a few cases being misused to claim that one product is better than another in a given month and with a given test-set size. We provide the total numbers in the overall reports, where the test set is larger and more significant differences may be observed.

² Interested users who want to see the exact monthly protection rates (without FP rates) can view the interactive charts, updated every month, on our website: http://chart.av-comparatives.org/chart1.php
Summary Results (March-June)

Test period: March – June 2013 (1972 test cases)

                         Blocked   User dependent   Compromised   PROTECTION RATE³                     Cluster⁴
                                                                  [Blocked % + (User dependent %)/2]
Bitdefender, Kaspersky   1970      -                2             99.9%                                1
F-Secure                 1968      3                1             99.9%                                1
Trend Micro              1968      -                4             99.8%                                1
Emsisoft                 1954      14               4             99.4%                                1
McAfee                   1959      -                13            99.3%                                1
eScan                    1956      5                11            99.3%                                1
Fortinet                 1951      -                21            98.9%                                1
Avast                    1946      12               14            98.9%                                1
G DATA                   1937      19               16            98.7%                                1
ESET                     1944      1                27            98.6%                                1
BullGuard                1922      16               34            97.9%                                2
AVIRA                    1914      15               43            97.4%                                2
Panda                    1920      -                52            97.4%                                2
AVG                      1848      102              22            96.3%                                2
Vipre                    1897      -                75            96.2%                                2
Sophos                   1896      -                76            96.1%                                2
Microsoft                1825      -                147           92.5%                                3
AhnLab                   1774      -                198           90.0%                                4

The graph below shows the overall protection rate (all samples), including the minimum and maximum protection rates for the individual months.

³ User-dependent cases are given half credit. For example, if a program blocks 80% by itself, and a further 20% of cases are user-dependent, we give half credit for the 20%, i.e. 10%, so it gets 90% altogether.
⁴ Hierarchical clustering method: defining four clusters using average linkage between groups (Euclidean distance), based on the protection rate (see the dendrogram on page 12).
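As a worked illustration of the protection-rate formula in the table header (see footnote 3), here is a minimal sketch; the helper function name is ours, and the ESET row from the table above is used as the example.

```python
# Minimal sketch of the protection-rate calculation from footnote 3:
# user-dependent cases receive half credit.
def protection_rate(blocked: int, user_dependent: int, compromised: int) -> float:
    total = blocked + user_dependent + compromised   # 1972 test cases in this report
    return 100.0 * (blocked + 0.5 * user_dependent) / total

print(round(protection_rate(1944, 1, 27), 1))  # ESET row above -> 98.6
```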
Whole-Product "False Alarm" Test (wrongly blocked domains/files)

The false-alarm test in the Whole-Product Dynamic "Real-World" Protection Test consists of two parts: wrongly blocked domains (while browsing) and wrongly blocked files (while downloading/installing). It is necessary to test both scenarios, because testing only one of the two could penalize products which focus mainly on one type of protection method, either URL filtering or on-access/behaviour/reputation-based file protection.

a) Wrongly blocked domains (while browsing)

We used around one thousand randomly chosen popular domains. Blocked non-malicious domains/URLs were counted as false positives (FPs). The wrongly blocked domains have been reported to the respective vendors for review and should no longer be blocked.

By blocking whole domains, security products not only risk causing a loss of trust in their warnings, but may also cause financial damage (besides the damage to website reputation) to the domain owners, including loss of e.g. advertising revenue. We therefore strongly recommend that vendors block whole domains only where the domain's sole purpose is to carry/deliver malicious code, and otherwise block just the malicious pages (as long as they are indeed malicious). Products which tend to block URLs based e.g. on reputation may be more prone to such false alarms, and may also score higher in protection tests, as they may block many unpopular/new websites.

b) Wrongly blocked files (while downloading/installing)

We used about one hundred different applications listed either as top downloads or as new/recommended downloads on about a dozen different popular download portals. The applications were downloaded from the original software developers' websites (instead of the download portal hosts), saved to disk and installed, to see whether they were blocked at any stage of this procedure. Additionally, we included a few clean files that were encountered and disputed over the past months of the Real-World Protection Test.

The duty of security products is to protect against malicious sites/files, not to censor or limit access to well-known popular applications and websites only. If the user deliberately chooses a high security setting which warns that it may block some legitimate sites or files, this may be considered acceptable. However, we do not regard it as acceptable as a default setting, where the user has not been warned. As the test is done at specific points in time, and FPs on very popular software/websites are usually noticed and fixed within a few hours, it would be surprising to encounter FPs with very popular applications. For this reason, FP tests which are done e.g. only with very popular applications, or which use only the top 50 files from whitelisted/monitored download portals, would be a waste of time and resources. Users do not care whether malware that infects them affects only them, and likewise they do not care whether a false positive affects only them. While it is preferable that FPs do not affect many users, the goal should be to avoid any FPs and to protect against any malicious files, no matter how many users are affected or targeted. The prevalence of FPs based on user-base data is of interest for the internal QA testing of AV vendors, but for the ordinary user it is important to know how accurately their product distinguishes between clean and malicious files.
The table below shows the numbers of wrongly blocked domains/files:

                                        Wrongly blocked clean domains/files   Wrongly blocked score⁶
                                        (blocked / user-dependent⁵)
AhnLab, AVG, ESET, Microsoft            -  / -   (-)                          0
Kaspersky, Sophos                       1  / -   (1)                          1
G DATA                                  1  / 1   (2)                          1.5
AVIRA                                   3  / -   (3)                          3
Avast                                   -  / 10  (10)                         5
Emsisoft                                5  / -   (5)                          5
BullGuard, eScan, Panda, Trend Micro    6  / -   (6)                          6
average                                          (8)                          average (7)
Vipre                                   17 / -   (17)                         17
Bitdefender                             24 / -   (24)                         24
F-Secure                                22 / 13  (35)                         28.5
Fortinet, McAfee                        31 / -   (31)                         31

To determine which products had to be downgraded in our award scheme due to the rate of wrongly blocked sites/files, we backed up our decision with statistical methods and by looking at the average scores. The following products with above-average FPs have been downgraded: Bitdefender, Fortinet, F-Secure, McAfee and Vipre.

As an experiment, we looked at which products block URLs or the download/execution of files just because they are unknown (i.e. block even completely innocent files) and/or have no reputation information. When downloading a newly self-written, innocent/clean program from one of our websites, Avast gave a warning about the unknown reputation of the file (user-dependent). F-Secure, McAfee and Trend Micro wrongly blocked the download/URL completely and handled the file/URL as if it were malicious. Although protection techniques based on such metadata, including e.g. reputation and whitelists, help to produce high scores in protection tests, it has to be kept in mind that they may also lead to the detection/blocking of innocent/clean files and URLs just because these are unknown to the cloud and/or because the hosted files are too new.

⁵ Although user-dependent cases are extremely annoying for the user (especially on clean files), they were counted only as half for the "wrongly blocked score" (as for the protection rate).
⁶ Lower is better.
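The following minimal sketch illustrates how the "wrongly blocked score" in the table above follows from footnotes 5 and 6, and how an above-average score flags a product as a downgrade candidate. The function name, the product selection and the script itself are ours, used purely to make the arithmetic explicit.

```python
# Minimal sketch, assuming the scoring in footnotes 5-6: user-dependent false
# alarms count half (lower is better); products above the average score of 7
# are candidates for a downgrade in the award scheme.
scores = {
    # product: (wrongly blocked, user-dependent)
    "G DATA":   (1, 1),
    "Avast":    (0, 10),
    "F-Secure": (22, 13),
    "Fortinet": (31, 0),
}

AVERAGE_SCORE = 7  # average "wrongly blocked score" reported in the table above

for product, (blocked, user_dep) in scores.items():
    score = blocked + 0.5 * user_dep
    flag = "downgrade candidate" if score > AVERAGE_SCORE else "ok"
    print(f"{product}: {score} ({flag})")
```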
Illustration of how awards were given

The dendrogram (using average linkage between groups) shows the results of the hierarchical cluster analysis. It indicates at what level of similarity the clusters are joined. The red line marks the chosen level of similarity; each intersection with it indicates a group (in this case, four groups). Products that had above-average FPs (wrongly blocked score) are marked in red (and downgraded according to the ranking system below).

Ranking system

                     Protection score   Protection score   Protection score   Protection score
                     Cluster⁷ 4         Cluster 3          Cluster 2          Cluster 1
Below-average FPs    Tested             Standard           Advanced           Advanced+
Above-average FPs    Tested             Tested             Standard           Advanced

The graph on page 9 shows the test results compared with the average "out-of-the-box" malware protection in Microsoft Windows (red line). In Windows 8, this is provided by Windows Defender, which is pre-installed by default with the operating system. The equivalent in Windows 7 is Microsoft Security Essentials, which is not pre-installed, but can easily be added for free as an option via the Windows Update service.

⁷ See the protection score clusters on page 9.
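As an illustration of the clustering step described above, the following sketch performs average-linkage hierarchical clustering with Euclidean distance on the protection rates from the summary table and cuts the tree into four groups. It assumes SciPy is available; the script is ours, not AV-Comparatives' actual analysis tooling.

```python
# Minimal sketch, assuming SciPy: average-linkage hierarchical clustering on
# the protection rates (Euclidean distance), cut into four groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

products = ["Bitdefender", "Kaspersky", "F-Secure", "Trend Micro", "Emsisoft",
            "McAfee", "eScan", "Fortinet", "Avast", "G DATA", "ESET",
            "BullGuard", "AVIRA", "Panda", "AVG", "Vipre", "Sophos",
            "Microsoft", "AhnLab"]
rates = np.array([99.9, 99.9, 99.9, 99.8, 99.4, 99.3, 99.3, 98.9, 98.9, 98.7,
                  98.6, 97.9, 97.4, 97.4, 96.3, 96.2, 96.1, 92.5, 90.0])

Z = linkage(rates.reshape(-1, 1), method="average", metric="euclidean")
clusters = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 groups

# Note: the numeric labels returned by fcluster are arbitrary; what matters
# is the grouping, which for these rates matches the four clusters above.
for product, cluster in sorted(zip(products, clusters), key=lambda x: x[1]):
    print(f"group {cluster}: {product}")
```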
Award levels reached in this test

The awards are decided and given by the testers based on the observed test results (after consulting statistical models). The following awards⁸ are for the results reached in the spring 2013 Whole-Product Dynamic "Real-World" Protection Test:

AWARD LEVEL   PRODUCTS
ADVANCED+     Kaspersky, Trend Micro, Emsisoft, eScan, Avast, G DATA, ESET
ADVANCED      Bitdefender*, F-Secure*, McAfee*, Fortinet*, BullGuard, AVIRA, Panda, AVG, Sophos
STANDARD      Vipre*
TESTED        AhnLab

* downgraded by one rank due to the score of wrongly blocked sites/files (FPs); see page 12

Expert users who do not care about wrongly blocked files/websites (false alarms) are free to rely on the protection rates on page 9 instead of our award ranking, which takes FPs into consideration.

⁸ Microsoft security products are not included on the awards page, as their out-of-box protection is (optionally) included in the operating system and is therefore out of competition.
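To make the award derivation explicit, here is a minimal sketch that maps a product's protection cluster and its FP status onto an award level, following the ranking-system table on page 12. The function and constant names are ours, used only for illustration.

```python
# Minimal sketch of the ranking system on page 12: the award follows from the
# protection cluster, dropped by one rank if the product had above-average FPs.
RANKS = {1: "ADVANCED+", 2: "ADVANCED", 3: "STANDARD", 4: "TESTED"}

def award(cluster: int, above_average_fps: bool) -> str:
    rank = min(cluster + 1, 4) if above_average_fps else cluster
    return RANKS[rank]

print(award(1, False))  # e.g. Kaspersky    -> ADVANCED+
print(award(1, True))   # e.g. Bitdefender* -> ADVANCED (downgraded)
print(award(2, True))   # e.g. Vipre*       -> STANDARD (downgraded)
```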
Copyright and Disclaimer

This publication is Copyright © 2013 by AV-Comparatives e.V.®. Any use of the results, etc., in whole or in part, is ONLY permitted with the explicit written agreement of the management board of AV-Comparatives e.V., prior to any publication. AV-Comparatives e.V. and its testers cannot be held liable for any damage or loss which might occur as a result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but no representative of AV-Comparatives e.V. can accept liability for the correctness of the test results. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering the test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use (or inability to use) the services provided by the website, test documents or any related data. AV-Comparatives e.V. is a registered Austrian non-profit organization.

For more information about AV-Comparatives and our testing methodologies, please visit our website.

AV-Comparatives e.V. (July 2013)