Security: Advanced Endpoint Protection Comparative Report
MARCH 5, 2019
Authors - Thomas Skybakmoen, Scott Robin
Tested Products
Bitdefender GravityZone Ultra v6.6.7.106
Carbon Black CB Defense 3.2.10105
Check Point Software Technologies Check Point SandBlast Agent Next Generation AV E80.82.1
Cisco Advanced Malware Protection (AMP) for Endpoints 6.2.3.10807
Comodo Client Security 10.8.0.7053
Cylance CylancePROTECT + CylanceOPTICS v2.0.1500
Endgame Endpoint Security v3.3
enSilo Endpoint Security Platform v3.0
F-Secure Computer Protection Premium v18.14
Fortinet FortiClient v6.0.3
Kaspersky Lab Kaspersky Endpoint Security v11.0.1.90
Malwarebytes Endpoint Protection and Response v1.2.0.632
Panda Security Panda Adaptive Defense 360 v3.40.00
Sophos Intercept X Advanced v2.0.10
Symantec Endpoint Protection and Advanced Threat Protection (ATP) v14.2.1023.0100
Trend Micro Smart Protection for Endpoints v12.0.5024
Vendor A
Vendor B
Vendor C
Environment
NSS Labs Advanced Endpoint Protection (AEP) Test Methodology v3.0
NSS Labs Evasions Test Methodology v1.2
Overview
Implementation of advanced endpoint protection (AEP) products can be complex, with multiple factors affecting security
effectiveness. The following factors should be considered when evaluating these products:
• Blocking capabilities
• Detection capabilities
• Resistance to evasions
• Manageability and forensics capabilities
The Security Effectiveness score, which is represented on the y axis of the SVM, does not include the Additional Detection
Rate, since an AEP product's focus is on blocking threats. However, to establish total cost of ownership (TCO), Block Rate and
Additional Detection Rate are included in the Overall Capability Score calculations. The Overall Capability Score
calculations are used to determine the TCO per Protected Agent, which in turn is used to plot a product's value on the x axis
in the NSS Labs Security Value Map™ (SVM). A product's capability to detect threats that were not blocked reduces the
operational burden and cost of remediating infections and incidents (breaches). For additional information on TCO, please see
the Comparative Report on TCO at www.nsslabs.com. The security effectiveness of a product is determined primarily by its
block rate, but calculations also take into consideration the severity of each attack used in the test. Figure 1 presents the
results of Security Effectiveness testing.
Bitdefender 97.2%
Carbon Black 96.9%
Check Point 96.6%
Cisco 94.5%
Comodo 98.5%
Cylance 96.8%
Endgame 98.9%
enSilo 97.4%
Fortinet Technologies 96.7%
F-Secure 98.3%
Kaspersky Lab 96.8%
Malwarebytes 92.8%
Panda Security 98.4%
Sophos 99.1%
Symantec 96.2%
Trend Micro 97.1%
Vendor A 90.8%
Vendor B 87.4%
Vendor C 87.8%
Figure 1 - Security Effectiveness
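The exact severity weighting behind these scores is not published in this report; the text states only that the calculation starts from the block rate and factors in the severity of each attack. As a minimal sketch of that idea, the Python below (the severity values and the weighting scheme are illustrative assumptions, not the NSS Labs formula) computes a severity-weighted share of blocked attacks:

```python
# Minimal sketch of a severity-weighted effectiveness score.
# The severity values and weighting scheme are illustrative assumptions,
# not the published NSS Labs calculation.

def security_effectiveness(attacks):
    """attacks: list of (blocked: bool, severity: float) tuples."""
    total_weight = sum(severity for _, severity in attacks)
    blocked_weight = sum(severity for blocked, severity in attacks if blocked)
    return blocked_weight / total_weight if total_weight else 0.0

# Three attacks of differing severity; the missed attack has severity 2.0.
sample = [(True, 1.0), (True, 3.0), (False, 2.0)]
print(f"{security_effectiveness(sample):.1%}")  # 66.7%
```

Under a weighting like this, missing a high-severity attack costs more than missing a low-severity one, which is why two products with the same raw block rate can end up with different Security Effectiveness scores.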
Table of Contents
Tested Products
Environment
Overview
Analysis
Block Rate
Additional Detection Rate
False Positives
Test Categories
Test Composition
Block Rate
Test Methodology
Contact Information
Table of Figures
Figure 1 - Security Effectiveness
Figure 2 - Test Composition
Figure 3 - Block Rate
Analysis
The threat landscape is evolving constantly; attackers are refining their strategies and increasing both the volume and the
sophistication of their attacks. Additionally, enterprises now must defend against targeted persistent attacks. In the past,
servers were the main target; however, attacks against desktop client applications are now mainstream and present a clear
danger to organizations as an initial infection vector.
As part of the initial AEP group test setup, 96 instances of the endpoint product were deployed on Windows 7 and Windows
10 operating systems. All product configurations were reviewed, validated, and approved by NSS prior to the test.
Block Rate
Block Rate is defined as the percentage of exploits and malware blocked within 15 minutes of attempted execution. Block
Rate measures a product's ability to block malware and exploits during download, on access, and during execution. Products
use various methods, including reputation, application whitelisting, behavior-based blocking, and/or signatures to block
threats.
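A block-within-15-minutes rule lends itself to a simple tally. The sketch below (Python; the result fields and harness are hypothetical, not the NSS Labs test tooling) shows how such a Block Rate could be computed over a set of validated samples:

```python
# Illustrative only: Block Rate as the share of validated samples blocked
# within 15 minutes of attempted execution. Field names are assumptions,
# not the NSS Labs harness.
from dataclasses import dataclass

BLOCK_WINDOW_MINUTES = 15

@dataclass
class SampleResult:
    blocked: bool
    minutes_to_block: float  # time from attempted execution to the block event

def block_rate(results):
    blocked = sum(
        1 for r in results
        if r.blocked and r.minutes_to_block <= BLOCK_WINDOW_MINUTES
    )
    return blocked / len(results) if results else 0.0

results = [SampleResult(True, 0.5), SampleResult(True, 20.0), SampleResult(False, 0.0)]
print(f"{block_rate(results):.1%}")  # 33.3% - the late block does not count
```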
False Positives
The ability of an AEP product to correctly identify and allow benign content is as important as its ability to provide protection
against malicious content. NSS ran various samples of legitimate application files and documents, all of which products were
required to properly identify and allow. If any legitimate files could not be opened or executed immediately, they were
recorded as false positives. For additional information on false positives, please see the individual test reports, available at
www.nsslabs.com.
Test Categories
• Malware Delivered over HTTP: In these web-based attacks, users click on malicious links to download and execute
malware.
• Malware Delivered over Email: In these inbound, email-based attacks, users are deceived into downloading malicious
attachments in emails to execute malware.
• Malware Delivered by Docs and Scripts: In these attacks, malware is delivered via documents and scripts. Such attacks
could be as simple as delivering malware using macros.
• Offline Threats: These attacks are performed on victim machines that are disconnected from the Internet. Attacks are
delivered and executed with no cloud or backend connectivity or support. Victim machines are later reconnected to
the network.
• Unknown Threats: These threats have not previously been seen in the wild. They are either samples created by NSS, or
they are pre-existing samples that have been modified.
• Exploits: These are defined as malicious software that is designed to take advantage of existing deficiencies in hardware
or software systems, such as vulnerabilities or bugs. In some cases, a user merely needs to visit a web page hosting
malicious code in order to be infected via exploits.
• Blended Threats: These threats possess the characteristics of both exploits and socially engineered malware. They
attempt to make it difficult to distinguish between what is malicious and what is legitimate activity.
• Evasions: These techniques include packers, crypters, and other types of evasive techniques used to bypass traditional
antivirus signature detection.
Test Composition
Each product was initially tested against 1,629 unique malicious samples and 1,061 unique false positive samples. Ultimately,
897 unique malicious samples and 1,053 unique false positive samples met NSS' validation criteria and were included as part
of the test. Figure 2 depicts the percentage of samples used in each test category.
Figure 2 - Test Composition
Block Rate
Figure 3 displays each product's block rate for all test categories.
Vendor                  HTTP     Email    Docs & Scripts   Offline Threats   Unknown Threats   Exploits   Blended Threats   Evasions
Carbon Black            99.1%    100.0%   91.9%            100.0%            95.5%             94.1%      46.2%             97.6%
Check Point             100.0%   100.0%   90.3%            100.0%            81.8%             88.2%      50.0%             100.0%
Fortinet Technologies   100.0%   98.5%    91.9%            93.3%             100.0%            100.0%     50.0%             100.0%
Kaspersky Lab           99.7%    98.7%    83.9%            100.0%            100.0%            88.2%      76.9%             100.0%
Panda Security          99.6%    99.4%    96.8%            100.0%            100.0%            76.5%      88.5%             100.0%
Trend Micro             99.5%    99.7%    88.7%            53.3%             95.5%             76.5%      100.0%            90.2%
Figure 3 - Block Rate
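Per-category rates like these do not average directly into the overall figures: the overall scores are computed across all validated samples, with attack severity factored in, so a small category cannot simply be averaged in on equal terms. As a quick, purely illustrative check (Python; the roll-up here is an unweighted mean, not the published calculation):

```python
# Purely illustrative: an unweighted mean of Carbon Black's per-category
# block rates from Figure 3. This is NOT the published roll-up, which is
# computed over all validated samples and factors in attack severity.
carbon_black = {
    "HTTP": 99.1, "Email": 100.0, "Docs & Scripts": 91.9,
    "Offline Threats": 100.0, "Unknown Threats": 95.5,
    "Exploits": 94.1, "Blended Threats": 46.2, "Evasions": 97.6,
}

unweighted_mean = sum(carbon_black.values()) / len(carbon_black)
print(f"{unweighted_mean:.1f}%")  # 90.6%, well below the 96.9% reported for the product in Figure 1
```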
Test Methodology
NSS Labs Advanced Endpoint Protection (AEP) Test Methodology v3.0
NSS Labs Evasions Test Methodology v1.2
Contact Information
NSS Labs, Inc.
3711 South Mopac Expressway, Building 1, Suite 400, Austin, TX 78746
info@nsslabs.com
www.nsslabs.com
This and other related documents are available at: www.nsslabs.com. To receive a licensed copy or report misuse, please contact
NSS Labs.
© 2019 NSS Labs, Inc. All rights reserved. No part of this publication may be reproduced, copied/scanned, stored on a retrieval system, e-mailed or otherwise disseminated or transmitted without the express written consent of NSS Labs, Inc. ("us" or "we"). Please read the disclaimer
in this box because it contains important information that binds you. If you do not agree to these conditions, you should not read the rest of this report but should instead return the report immediately to us. "You" or "your" means the person who accesses this report and any entity
on whose behalf he/she has obtained this report.
1. The information in this report is subject to change by us without notice, and we disclaim any obligation to update it.
2. The information in this report is believed by us to be accurate and reliable at the time of publication, but is not guaranteed. All use of and reliance on this report are at your sole risk. We are not liable or responsible for any damages, losses, or expenses of any nature whatsoever
arising from any error or omission in this report.
3. NO WARRANTIES, EXPRESS OR IMPLIED ARE GIVEN BY US. ALL IMPLIED WARRANTIES, INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT, ARE HEREBY DISCLAIMED AND EXCLUDED BY US. IN NO
EVENT SHALL WE BE LIABLE FOR ANY DIRECT, CONSEQUENTIAL, INCIDENTAL, PUNITIVE, EXEMPLARY, OR INDIRECT DAMAGES, OR FOR ANY LOSS OF PROFIT, REVENUE, DATA, COMPUTER PROGRAMS, OR OTHER ASSETS, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
4. This report does not constitute an endorsement, recommendation, or guarantee of any of the products (hardware or software) tested or the hardware and/or software used in testing the products. The testing does not guarantee that there are no errors or defects in the products
or that the products will meet your expectations, requirements, needs, or specifications, or that they will operate without interruption.
5. This report does not imply any endorsement, sponsorship, affiliation, or verification by or with any organizations mentioned in this report.
6. All trademarks, service marks, and trade names used in this report are the trademarks, service marks, and trade names of their respective owners.
ADVANCED ENDPOINT PROTECTION COMPARATIVE REPORT
MARCH 5, 2019
Authors - Thomas Skybakmoen, Scott Robin
Tested Products
Bitdefender GravityZone Ultra v6.6.7.106
Carbon Black CB Defense 3.2.10105
Check Point Software Technologies Check Point SandBlast Agent Next Generation AV E80.82.1
Cisco Advanced Malware Protection (AMP) for Endpoints 6.2.3.10807
Comodo Client Security 10.8.0.7053
Cylance CylancePROTECT + CylanceOPTICS v2.0.1500
Endgame Endpoint Security v3.3
enSilo Endpoint Security Platform v3.0
F-Secure Computer Protection Premium v18.14
Fortinet FortiClient v6.0.3
Kaspersky Lab Kaspersky Endpoint Security v11.0.1.90
Malwarebytes Endpoint Protection and Response v1.2.0.632
Panda Security Panda Adaptive Defense 360 v3.40.00
Sophos Intercept X Advanced v2.0.10
Symantec Endpoint Protection and Advanced Threat Protection (ATP) v14.2.1023.0100
Trend Micro Smart Protection for Endpoints v12.0.5024
Vendor A
Vendor B
Vendor C
Environment
NSS Labs Advanced Endpoint Protection (AEP) Test Methodology v3.0
NSS Labs Evasions Test Methodology v1.2
NSS Labs was unable to measure the effectiveness and determine the suitability
of advanced endpoint protection products from three noteworthy vendors and
therefore cautions against their deployment without a comprehensive evaluation.
NSS Labs was unable to test the Microsoft Advanced Threat Protection (ATP)
product because a centralized management console that integrates Windows 7
endpoints was not available.
Figure 1 - NSS Labs' 2019 Security Value Map (SVM) for Advanced Endpoint Protection (AEP) Products
Key Findings
• Fourteen products were rated as Recommended and two products were rated as Security Recommended.
• The Security Effectiveness of verified products ranged between 87.4% and 99.1% with fourteen of the
nineteen verified products achieving a rating greater than 95%.
• The average Security Effectiveness rating was 92.2%; sixteen of the verified products received an above-average
Security Effectiveness rating, and three received a below-average Security Effectiveness rating.
• The TCO per Protected Agent for verified products ranged between US$64 and US$520 with most tested
products costing less than US$150 per protected agent.
• The average TCO per Protected Agent (Value) was US$322; fourteen products demonstrated value above the
average, and five demonstrated value below the average.
Product Rating
The Overall Rating in Figure 2 is determined by which section of the SVM the product falls within: Recommended
(top right), Security Recommended (top left), Neutral (bottom right), or Caution (bottom left). For more
information on how the SVM is constructed, see the How to Read the SVM section of this document.
Figure 2 - NSS Labs' 2019 Recommendations for Advanced Endpoint Protection (AEP) Products
Table of Contents
Tested Products
Environment
Overview
Key Findings
Product Rating
Test Methodology
Contact Information
Table of Figures
Figure 1 - NSS Labs' 2019 Security Value Map (SVM) for Advanced Endpoint Protection (AEP) Products
Figure 2 - NSS Labs' 2019 Recommendations for Advanced Endpoint Protection (AEP) Products
Figure 3 - Example SVM
How to Read the SVM
The SVM depicts the value of a typical deployment of 2,500 agents.
This report is part of a series of comparative reports on the security and total cost of ownership (TCO) of tested
advanced endpoint protection (AEP) products. For more information, visit www.nsslabs.com.
Figure 3 - Example SVM
No two security products deliver the same security effectiveness or TCO, making precise comparisons extremely
difficult. In order to enable value-based comparisons of AEP products on the market, NSS has developed a unique
metric: TCO per Protected Agent. For more information, see the TCO Comparative Report at www.nsslabs.com.
The x axis displays the TCO per Protected Agent in US dollars, which decreases from left to right. The Overall Capability
Score calculations are used to determine the TCO per Protected Agent, which in turn is used to plot a product's value on the
x axis in the SVM. A product's capability to detect threats that were not blocked reduces the operational burden and cost of
remediating infections and incidents (breaches). For additional information on TCO, please see the Comparative Report on
TCO at www.nsslabs.com.
The Security Effectiveness score, which is represented on the y axis of the SVM, does not include the Additional
Detection Rate since the focus of an AEP product is on blocking threats.
The y axis displays the Security Effectiveness score as a percentage. Security effectiveness is greater toward the
top of the y axis. Products that are missing critical security capabilities will have a reduced Security Effectiveness
score.
The SVM displays two dotted lines that represent the average Security Effectiveness and TCO per Protected
Agent of all the tested products. These lines divide the SVM into four unequally sized sections. Where a
product's Security Effectiveness and TCO per Protected Agent scores map on the SVM determine which section it
falls into:
• Recommended: Products that map into the upper-right section of the SVM score well for both Security
Effectiveness and TCO per Protected Agent. These products provide a high level of detection and value for
money.
• Security Recommended: Products that map into the upper-left section of the SVM are suitable for
environments requiring a high level of detection, albeit at a higher-than-average cost.
• Neutral: Products that map into the lower-right section of the SVM may be good choices for organizations
where a slightly lower level of detection is acceptable in exchange for a lower TCO.
• Caution: Products that map into the lower-left section of the SVM offer limited value for money given their
three-year TCO and measured Security Effectiveness.
In all cases, the SVM should only be a starting point. Enterprise customers can contact NSS to model their own
SVM in order to better understand which products might be best for them.
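Stated as code, the quadrant rules above reduce to two comparisons against the dotted average lines. The sketch below (Python) is a hedged illustration, not NSS Labs' plotting tool; the averages come from the Key Findings, while the sample product's figures are invented:

```python
# Hedged sketch of the quadrant logic described above. The thresholds are
# the averages reported in the Key Findings; the sample product is invented.

def svm_rating(effectiveness, tco_per_agent, avg_effectiveness, avg_tco):
    high_security = effectiveness >= avg_effectiveness
    good_value = tco_per_agent <= avg_tco        # lower TCO plots further right
    if high_security and good_value:
        return "Recommended"                     # upper right
    if high_security:
        return "Security Recommended"            # upper left
    if good_value:
        return "Neutral"                         # lower right
    return "Caution"                             # lower left

# Averages from the Key Findings: 92.2% effectiveness, US$322 TCO per Protected Agent.
print(svm_rating(97.1, 120.0, avg_effectiveness=92.2, avg_tco=322.0))  # Recommended
```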
Test Methodology
NSS Labs Advanced Endpoint Protection (AEP) Test Methodology v3.0
NSS Labs Evasions Test Methodology v1.2
Copies of the test methodologies are available at www.nsslabs.com.
Contact Information
NSS Labs, Inc.
3711 South Mopac Expressway, Building 1, Suite 400, Austin, TX 78746
info@nsslabs.com www.nsslabs.com
This and other related documents are available at: www.nsslabs.com. To receive a licensed copy or report misuse,
please contact NSS Labs.