Vulnerability Management Service for Product Lifecycle
Andon Nikolov
Thesis supervisor:
Thesis advisor:
Preface
I want to thank Professor Heikki Hämmäinen and my instructor Matti Frisk for their
great guidance and support. Without their help, this thesis would never have been
completed!
Otaniemi, 22.5.2017
Andon Nikolov
Contents
Abstract ii
Preface iii
Contents iv
Abbreviations vi
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Research question and scope . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Research question . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Research methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Thesis structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Literature review 5
2.1 Importance of security . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 State of computer security . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Security threat intelligence . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Software faults and vulnerabilities . . . . . . . . . . . . . . . . . . . . 8
2.5 Security statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.6.1 Focus group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.6.2 Make-or-buy decision . . . . . . . . . . . . . . . . . . . . . . . 12
3 Analysis 14
3.1 Vulnerability management . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.1 ISO standards . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.3 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Product classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Vulnerability management for IT companies . . . . . . . . . . . . . . 20
3.4 Vulnerability management for product life cycle . . . . . . . . . . . . 21
3.5 Requirement specification . . . . . . . . . . . . . . . . . . . . . . . . 25
3.6 Commercial off-the-shelf products . . . . . . . . . . . . . . . . . . . . 26
3.7 Requirements specification details . . . . . . . . . . . . . . . . . . . . 27
3.7.1 Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.7.2 Vulnerability monitoring and analysis . . . . . . . . . . . . . . 28
3.7.3 Vulnerability remediation . . . . . . . . . . . . . . . . . . . . 29
3.7.4 Communication methods . . . . . . . . . . . . . . . . . . . . . 30
3.7.5 Service and Technologies . . . . . . . . . . . . . . . . . . . . . 31
3.8 Make-or-buy decision . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Solution 35
4.1 Service design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Service implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Vulnerability management processes and maturity model . . . . . . . 39
4.3.1 Vulnerability monitoring . . . . . . . . . . . . . . . . . . . . . 40
4.3.2 Security updates management . . . . . . . . . . . . . . . . . . 41
4.3.3 Vulnerability assessment . . . . . . . . . . . . . . . . . . . . . 42
4.3.4 Product release . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.5 Maturity model . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Enterprise Vulnerability Management Service . . . . . . . . . . . . . . 46
4.4.1 Acceptance testing - compliance with requirements specification 46
4.4.2 Exploitation phase . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4.3 Benchmarks and user experience . . . . . . . . . . . . . . . . . 48
4.4.4 Service maturity . . . . . . . . . . . . . . . . . . . . . . . . . 50
5 Conclusion 52
5.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2 Assessment of results . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.3 Exploitation of results . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.4 Future research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
References 55
A Appendix 61
B Appendix 62
B.1 Functional requirements . . . . . . . . . . . . . . . . . . . . . . . . . 62
B.1.1 User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
B.1.2 Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
B.1.3 Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
B.1.4 Alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
B.1.5 Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
B.1.6 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
B.2 Non-functional requirements . . . . . . . . . . . . . . . . . . . . . . . 63
B.2.1 Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
B.2.2 Serviceability . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
B.2.3 Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
B.2.4 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
B.2.5 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
B.2.6 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
B.2.7 Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
B.2.8 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
B.2.9 Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
B.2.10 Confidentiality . . . . . . . . . . . . . . . . . . . . . . . . . . 66
B.2.11 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
B.2.12 Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Abbreviations
2PP Second Party Product (a product or component developed by the same
company that is used as part of the product in question)
3PP Third Party Product
AD Active Directory
API Application Programming Interface
BCM Business Continuity Management
CAPEX Capital Expenditure
CEO Chief Executive Officer
CERT Computer Emergency Response Team
CIS Center for Internet Security
COTS Commercial Off-The-Shelf
CPE Common Platform Enumeration
CVE Common Vulnerabilities and Exposures
CVRF Common Vulnerability Reporting Framework
CVSS Common Vulnerability Scoring System
CWE Common Weakness Enumeration
DDoS Distributed Denial of Service
DNS Domain Name System
DRP Disaster Recovery Planning
EVMS Enterprise Vulnerability Management Service
EU European Union
FIPS Federal Information Processing Standard
FOSS Free and Open Source Software
HBAC Host Based Access Control
HIPAA Health Insurance Portability and Accountability Act
HSM Hardware Security Module
HTTP Hypertext Transfer Protocol (RFC 2616)
IaaS Infrastructure-as-a-Service
ICT Information and Communications Technology
ICASI The Industry Consortium for Advancement of Security on the Internet
IDS Intrusion Detection System
IEC International Electrotechnical Commission
IETF Internet Engineering Task Force
IoT Internet of Things
IP Internet Protocol (RFC 791)
IPR Intellectual Property Rights
IPS Intrusion Prevention System
ISMS Information Security Management System
ISO International Organization for Standardization
ISP Internet Service Provider
IT Information Technology
Mbps Megabits per second
MVP Minimum viable product
1.2.2 Scope
This thesis focuses on the vulnerability management practices in companies that
develop software or products which run software. Some of the practices shown
here may apply to other industries as well; however, that is a topic for separate
research. To be able to deploy a VMS, the organization must start by defining its
vulnerability management vision, devising a vulnerability management strategy, and
creating a vulnerability management policy. If these prerequisites are not met, a
VMS cannot be effectively deployed.
The scope of the study does not cover the definition of vulnerability management
vision, strategy, policy and processes.
As part of this study, the author analyzes vulnerability management policy
requirements, collects user expectations, refines all stakeholder input, and proposes
a solution which can fulfill all user requirements. In addition, the author has
contributed to the definition of the vulnerability management process and has provided
assistance during the service design, implementation, testing and deployment phases.
The goals of this study are:
The most relevant standards for this thesis are ISO/IEC 29147 [25], ISO/IEC
30111 [24], and the following, originating from the National Institute of Standards
and Technology (NIST): Common Platform Enumeration (CPE) [35]; Common
Vulnerabilities and Exposures (CVE) [36]; Common Vulnerability Reporting
Framework (CVRF) [23]; Common Vulnerability Scoring System (CVSS) [12];
Common Weakness Enumeration (CWE) [34]; Security Content Automation Protocol
(SCAP) [42]; and Open Vulnerability and Assessment Language (OVAL) [37].
2 Literature review
2.1 Importance of security
Every year, there is an increasing amount of news on cybersecurity incidents: data
breaches, user data leaks, and confidentiality and privacy violations. For example,
during 2016 we saw [59]:
• Yahoo - 1 billion users affected, breaking its own disclosure record of
500 million accounts set earlier the same year.
• LinkedIn - reported 117 million usernames and passwords stolen back in June
2012.
• Home WiFi routers by Netgear and D-Link found vulnerable, allowing
complete remote access.
• Various devices shipping with backdoors - mostly cheap Android phones, but
some laptops too.
• Ransomware attacks
life support device might even cost a human life. Even cars that depend on
software for features such as autopilot have already caused human casualties [77].
Cases where hackers remotely control a victim's car have moved from science fiction
to scary reality [18]. Such facts should make people aware of the need for better
software security and improvements in the handling of software flaws.
To be able to understand the basis of information security, it is recommended
to start with definitions. An informative summary on the topic has been written
by Andersson [1]. The CIA model - Confidentiality, Integrity and Availability - is
a widely used and easy-to-comprehend framework for information security. As
Andersson suggests, to measure the impact one should measure the elements of CIA,
a measurement which is hard to perform due to the lack of standard metrics.
Most often, the metrics used are the raw number of incidents and the cost of
compromise. With the exception of HIPAA [71], FIPS [40] and PCI DSS [52], the
IT industry has not been regulated. HIPAA, FIPS and PCI DSS compliance are
widely used requirements in the IT world because solutions that handle health,
governmental and payment data are regulated, and compliance with the requirements
can be enforced by governments and regulators.
There are a number of other requirements applicable to specific countries (India,
China) or international entities (EU), yet those requirements are bound to their
localities and can be enforced only within their borders. This further strengthens
Andersson's point about the lack of general metrics for information security.
Furthermore, there is a lack of a widely accepted definition of information security.
The argument continues by describing the difficulties of evaluating risks and their
probabilities, especially when those risks originate in human behavior. The same
argument is made by Hall [19], who reminds us that a system is only as secure
as its weakest link and that employees are rarely motivated to maintain strong
system security. Hall goes on to emphasize another well-known problem: companies
measure their employees' performance based on sales targets, deadline fulfillment
and costs saved.
This industry practice leads to a model where a company will always prefer to
deliver a product or service on time at the expense of security. Having a limited
budget, management will often prioritize feature development (which is easy to
measure) over security controls, which might even disable some wanted product
functionality. In addition, if a new product has not been compromised, it is hard
to tell whether this is due to proper security or due to the fact that nobody has
attacked it yet.
However, as Andersson suggests, this is not a reason to give up and abandon
security. He proposes that information security in an enterprise should be "A
well-informed sense of assurance that information risks and controls are in balance."
This concept could be defined as business-optimal security, which implies that there
should be a balance between the business objectives and the security measures
implemented. Furthermore, such assurance helps to define metrics for information
security, which in turn makes it easier to implement and monitor. Assurance is also
something that can be communicated to customers, thus increasing trust and visibility on
• Security breaches
• Identity theft
• Crypto-Ransomware
• Web attacks
All these trends are further accelerated by growth in IoT. As the market demand
for cheaper connected devices continues to be strong, vendors have an incentive to
fulfill it by cutting costs on non-essential components. Unfortunately, one such
non-essential component happens to be security. With more devices that are able
to run code and are connected to our home networks or the Internet, our risk
exposure is increasing at a high rate.
From Figure 3, it can be observed that there is a clear, increasing trend. Using linear
approximation, the estimated growth rate is 12,883 new entries per year. Considering
this rate, even with improvements in software quality, it is not likely that the overall
vulnerability trend will decrease. [44]
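The linear approximation above can be sketched as a least-squares fit. The yearly entry counts below are hypothetical placeholders, not the actual NVD figures behind Figure 3; with the real data, the same fit would produce an estimate like the 12,883 new entries per year reported above.

```python
# Least-squares linear fit of new CPE dictionary entries per year.
# The data points are hypothetical placeholders for illustration only.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

years = [2012, 2013, 2014, 2015, 2016]               # hypothetical
new_entries = [60000, 72000, 86000, 98000, 111000]   # hypothetical

slope, intercept = linear_fit(years, new_entries)
print(f"Estimated growth: {slope:.0f} new entries per year")
```

A projection for a future year is then simply `slope * year + intercept`.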
To supplement the new CPE entry data, the statistics for new vulnerabilities
published by NVD were also included. Note that there is a larger number of
vulnerabilities with reserved CVE identifiers which have not been published yet.
The data in Figure 4 spans a longer period of time to better visualize the
fluctuations in the number of vulnerabilities published. The figure indicates that
there was an increasing trend until 2006, which then declined to a local minimum in
2011. A second increase then started, peaking in 2014.
2014 was the year of the so-called "mega-vulnerabilities": Shellshock, Heartbleed
and POODLE. Those vulnerabilities had such impact because they were found
in widely used components (Bash, OpenSSL). As Bash and OpenSSL were present
in the majority of Internet-connected servers, the threat and risk of exploiting
these vulnerabilities were high, which in turn explains the previously unrivaled media
coverage.
In 2015 and 2016, there is again a decline. While there is no clear indication of
the reason for this decline, a number of factors could contribute to it. One such
factor is the difference in counting methods for vulnerabilities: some researchers and
companies bundle multiple related flaws under one CVE identifier, while others
break them into multiple CVE identifiers. Also, the statistics only account for
vulnerabilities published in NVD, a number that could differ from the total amount
of vulnerabilities found. That said, 2017 is actually on track to become a new peak,
with a projection of over 10,000 vulnerabilities published. On average, NVD has
recorded 2,816 vulnerabilities per year over the last 30 years. Focusing on the last
10 years, the average is more than double, at 5,801 vulnerabilities per year [45]. In
comparison, commercial VulnDB statistics show even more vulnerabilities registered
in their private database every year.
2.6 Methodology
This part presents the methodological choices and considerations for this
thesis, as well as the decisions related to what has been studied, which approaches
have been used, and how the study was conducted. This is done in order to create
transparency and to provide the reader with a better understanding of the results.
A major decision that has to be made about every study is the general orientation
of the research. There are two distinctive clusters of research strategies, quantitative
and qualitative, which have different foundations. A quantitative strategy emphasizes
quantification in the collection and analysis of data, whereas a qualitative strategy
focuses on gaining an understanding of underlying reasons, opinions and motivations.
The focus of the present paper is placed on the service users in their various roles
and their understanding of the problem, as well as their vision of how it can be
solved most efficiently. Therefore, the qualitative strategy is better suited for the
purposes of the present thesis. This strategy allows going in depth with the available
empirical data and gaining a better understanding of the studied matter.
• Scalability
• Competitors
• Pricing models
3 Analysis
The previous chapter, the literature review, covered the basics of software security,
how it impacts our daily lives, and the methods that will be utilized in this analysis.
This chapter starts with an introduction to vulnerability management and expands
on how it applies to the product life cycle. It then presents the ISO standards
which are commonly used in the industry to identify and handle vulnerabilities. The
chapter continues with a description of the VMS and the process of collecting
requirements for this new service. At the end of the chapter is the make-or-buy
analysis.
3.1.2 Definitions
To ensure that the vulnerability handling service is compatible with existing tools
and to enable automated processing of vulnerabilities, it is necessary to use standard
definitions, terminology and metrics. Using the ISO and NIST [43] standards as a
basis, this paper uses the following definitions:
• Vulnerable - the product is using the vulnerable component and there may be
potential attack vectors.
• The use of these standards and frameworks enables the Security Content
Automation Protocol (SCAP), which has a major role in implementing efficient,
automated vulnerability handling and compliance audit processes [42].
• The assessments of system security and use of SCAP are further improved with
the Open Vulnerability and Assessment Language (OVAL) [37].
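As an illustration of how these standard definitions enable automated processing, the sketch below splits a CPE 2.3 formatted string into its named attributes so that a product inventory can be matched against vulnerability data. The escaping rules of the full CPE specification are omitted for brevity, and the example component is illustrative.

```python
# Minimal parser for CPE 2.3 formatted strings (escaping rules ignored).
# Field names follow the CPE 2.3 attribute order.

CPE23_FIELDS = [
    "part", "vendor", "product", "version", "update", "edition",
    "language", "sw_edition", "target_sw", "target_hw", "other",
]

def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into a dict of its attributes."""
    parts = cpe.split(":")
    if parts[:2] != ["cpe", "2.3"] or len(parts) != 13:
        raise ValueError(f"not a CPE 2.3 formatted string: {cpe}")
    return dict(zip(CPE23_FIELDS, parts[2:]))

entry = parse_cpe23("cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*")
print(entry["vendor"], entry["product"], entry["version"])
```

With component identities in this normalized form, matching a product's 3PP list against published CVE data becomes a dictionary lookup rather than free-text comparison.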
3.1.3 Process
Once a common language has been established, it becomes easier to delve into the
vulnerability management process. As described by Gartner [17], a vulnerability
management process can improve security in IT environments. They suggest a
process which contains six steps.
As can be seen from Figure 7, during the introduction phase, product
development requires an investment that is larger than the revenue. The same applies
to the product testing step. After the product has been introduced to the market,
provided it is successful, the revenue becomes larger than the investment and the
product starts bringing profit. This characteristic is typical for the growth phase.
The product user base continues to grow until the product reaches the maturity phase.
At the same time, the company is investing mainly in product maintenance efforts.
At a certain point, the company can decide that the product has reached its end of
life and, as the cost of continuing to maintain it will grow, it is more efficient to
retire it. This happens during the product decline phase.
Later in this chapter, it will be shown that most COTS products for vulnerability
management are useful only after the product has been deployed. They mainly
cover the product release and maintenance steps. As most researchers and security
companies have little business incentive to detect and publish vulnerabilities for
products that are labeled as end of life, most COTS products will not detect
vulnerabilities in old or retired products. It is clear that if a solution should
cover the complete product life cycle, it would have a larger set of requirements
compared to others which handle only some of the phases. In addition, the
vulnerability handling process is more complex for companies which develop IT
products, compared to others which only operate such products.
• Software-as-a-Service products
Hardware products are those where all the functionality is implemented in hardware.
An example of such a product is a sensor. Changes to the hardware are often expensive
and in many cases not possible. If a hardware product has to be changed, most often
it is completely replaced by the next-generation product. Hardware products are a
major challenge for vulnerability management. For low-value products, replacement
is a viable option for vulnerability mitigation. However, for products with high value
and an expected life of tens of years, vulnerability management requires workarounds
and mitigation by other means.
Hardware-dependent products are products which run firmware or software but
are dependent on a specific hardware architecture or features. Network appliances
could be given as an example of such products. They depend on network accelerator
processors and run proprietary operating systems. Nonetheless, it is possible to
change their firmware or software to mitigate vulnerabilities. Such products often
have a long life expectancy, and in order to receive correction packages, users are
required to purchase support contracts. Due to the close integration with hardware,
any changes to the software or firmware have to be validated against the hardware as
well. That can lead to a longer time to fix. If support is not provided by the vendor,
or the product has been retired, it is not possible to receive updates and
vulnerability fixes. These products pose a medium challenge to vulnerability
management.
Platform-dependent products are purely software. However, they depend on
platforms such as a computer architecture or an OS. These products allow fixes due
to their software nature. If a product has open source code, the customer could
implement the vulnerability mitigation directly by changing the code themselves. If
a product has proprietary code, the software vendor is expected to provide the fix.
These products often have a short life, usually three to five years. If proprietary
software has reached its end of life, the vendor will not provide fixes. This is a
major advantage for open source products, as support and fixes can be provided even
if the product has been officially retired. Testing and implementing fixes for
software products is easier, as they have to be validated only once at the platform
abstraction level. Development and testing can be highly automated, and fixes can be
developed and delivered within hours. Platform-dependent products do not pose a
challenge for vulnerability management.
Virtualized products are also purely software products. They have been further
decoupled from hardware by a virtualization abstraction layer. They can still have a
dependency on an OS; however, no dependency on hardware is expected. These products
behave similarly to platform-dependent products, therefore they are not distinguished
as a separate category. Virtualized products do not pose a challenge for vulnerability
management.
Software-as-a-Service (SaaS) products are a special type of software product.
While platform-dependent products are delivered to customers, in the SaaS case a
service is provided to customers. In the SaaS scenario, if a vulnerability is found, it
is the responsibility of the service provider to mitigate it. Changes happen on layers
which are not visible to the service users, thus no actions are required from the users.
SaaS allows even more flexibility compared to delivered software products, and large
service providers such as Google, Amazon and Netflix are able to deploy new versions
of their services from multiple times a day to multiple times a minute [27], [21].
SaaS products do not pose a challenge for vulnerability management.
Having discussed how different types of products relate to vulnerability management,
this study continues with a description of vulnerability management in IT and
product development environments.
The first step in the process is the identification, analysis and communication of new
vulnerabilities. The second step describes how the vulnerability is fixed, how fixes in
upstream components are propagated to product development units, and how those
changes are tracked. The third step serves as a toll gate to confirm that the fix has
been successful and that the product is no longer affected by the vulnerability which
triggered the process. The last step is making the updated software available for
download by customers.
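The four steps above can be sketched as an ordered workflow, with the third step acting as a toll gate that loops back on failure. The step names and the retry behavior are illustrative assumptions, not the actual process definitions used in the commissioned work.

```python
# Sketch of the four-step vulnerability handling workflow. The toll gate
# (VERIFY) sends the flow back to fixing if the product is still affected.

from enum import Enum, auto

class Step(Enum):
    IDENTIFY_AND_COMMUNICATE = auto()  # step 1: detect, analyze, notify
    FIX_AND_TRACK = auto()             # step 2: fix, propagate upstream changes
    VERIFY = auto()                    # step 3: toll gate - still affected?
    PUBLISH_UPDATE = auto()            # step 4: make the update downloadable

def run_process(still_vulnerable_after_fix: bool) -> list:
    """Return the ordered trail of steps for one vulnerability."""
    trail = [Step.IDENTIFY_AND_COMMUNICATE, Step.FIX_AND_TRACK, Step.VERIFY]
    if still_vulnerable_after_fix:
        # Toll gate fails: loop back to fixing, then verify again.
        trail += [Step.FIX_AND_TRACK, Step.VERIFY]
    trail.append(Step.PUBLISH_UPDATE)
    return trail
```

The key design point is that publication is unreachable without passing verification, mirroring the toll-gate role of the third step.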
In an IT VMS, when a vulnerability is found, most of the communication is kept
internal. It usually spreads within the operations team, or between the monitoring
and deployment teams if the functions are segregated. On the contrary, in product
development, when a vulnerability is found, communication to external parties might
be triggered based on the severity and processes. Thus, it is important to define the
communication policy and the vulnerability severity which would trigger it.
When a VMS has to cover the whole life cycle of a product, it must track each
product from its inception until its retirement. To be efficient, the VMS must be
integrated with the rest of the product life cycle systems. While an IT VMS can
provide a valuable service for operators in their deployments, it is a bad fit for the
product development process. Its tooling is focused on detecting vulnerabilities in
systems that are operational. IT VMSs are made to detect vulnerabilities in services
running on an interface. Those solutions rarely have concepts of a product, system
or solution.
An IT VMS's priority is to execute a series of network tests, based on predefined
rules, against the system under investigation. Depending on the service response, the
VMS will suggest whether the system is vulnerable or not. This method can lead to
false positive results in cases where the detection rule processes the service version
and does not actually try to validate the vulnerability: a vendor can use a version of
the software which is considered vulnerable, but without the part of the code which
contains the vulnerability. This will lead to a false positive detection result.
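The false positive scenario described above can be illustrated with a minimal sketch: a rule that compares only version numbers flags a system whose vendor has backported the fix without changing the version string. All version numbers and the vulnerable-code flag are hypothetical.

```python
# Why version-based detection rules produce false positives: the rule flags
# any service reporting a version below the fixed one, but a vendor may ship
# that same version string with the vulnerable code removed (a backport).

def rule_says_vulnerable(reported_version: str, fixed_in: str) -> bool:
    """Naive IT-VMS style rule: compare dotted version tuples only."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(reported_version) < to_tuple(fixed_in)

# The vendor backported the fix into 1.0.1 without bumping the version string.
actually_contains_vulnerable_code = False

flagged = rule_says_vulnerable("1.0.1", fixed_in="1.0.2")
false_positive = flagged and not actually_contains_vulnerable_code
print(false_positive)  # the rule flags the system even though it is patched
```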
In contrast, the product life cycle VMS approach uses, instead of rules, a database
listing all components, hardware and software, commercial (3PP) or FOSS, and
internal common components (2PP). It also allows the definition of abstract concepts
such as a system, solution or service by combining components in a hierarchical
structure with recursion. This information can be further enriched with customer
install base data. Such a deployment allows the supplier company to inform its
customers when a vulnerability is detected in the specific version of the product used
in customer deployments, without running scanning tools.
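A minimal sketch of this component-database approach, assuming hypothetical component names: products are modeled as hierarchies of components (3PP, FOSS, 2PP), and a vulnerable component is found by walking the structure rather than by scanning a live interface.

```python
# Products as hierarchical component structures; a vulnerability match is a
# recursive walk over the tree instead of a network scan.

from dataclasses import dataclass, field

@dataclass
class Component:
    cpe: str                                        # component identity in CPE format
    children: list = field(default_factory=list)    # sub-components (3PP, FOSS, 2PP)

def is_affected(root: Component, vulnerable_cpe: str) -> bool:
    """True if the product structure contains the vulnerable component."""
    if root.cpe == vulnerable_cpe:
        return True
    return any(is_affected(c, vulnerable_cpe) for c in root.children)

product = Component("cpe:/a:example:product:2.0", [
    Component("cpe:/a:openssl:openssl:1.0.1f"),     # 3PP
    Component("cpe:/a:example:common-lib:3.1", [    # 2PP with its own 3PPs
        Component("cpe:/a:zlib:zlib:1.2.8"),
    ]),
])
print(is_affected(product, "cpe:/a:zlib:zlib:1.2.8"))  # True
```

Enriched with install base data, the same walk tells the supplier which customers run an affected product version, with no scanning involved.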
In addition, an IT VMS often provides a severity rating based on the assumption
that the scanned interface is Internet-connected. This often leads to overestimation
of the vulnerability impact, as customers can run the service in restricted networks
with additional layers of protection. Furthermore, an IT VMS would rarely provide
information on such an assumption, which can also lead to underestimation of the
impact. Simple IT VMSs depend on their detection rules; if those rules are not fit
for a specific customer deployment scenario, the results will be misleading.
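The effect of deployment context on the rating can be illustrated with a deliberately simplified sketch. The downgrade rule below is an illustrative assumption, not the CVSS environmental-metric formula; it only shows why knowing the real exposure changes the outcome.

```python
# Illustrative only: adjust a severity label by deployment exposure. An IT VMS
# assumes Internet exposure; a supplier who knows the deployment can lower the
# rating for restricted networks. This is NOT the CVSS environmental formula.

SEVERITY_ORDER = ["Low", "Medium", "High", "Critical"]

def contextual_severity(base_severity: str, internet_exposed: bool) -> str:
    """Downgrade one level when the interface is not Internet-exposed."""
    i = SEVERITY_ORDER.index(base_severity)
    if not internet_exposed and i > 0:
        i -= 1  # restricted network with extra protection layers lowers impact
    return SEVERITY_ORDER[i]

print(contextual_severity("High", internet_exposed=True))   # High
print(contextual_severity("High", internet_exposed=False))  # Medium
```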
In a product life cycle VMS, the company which supplies the products has the
benefit of knowing how those products are designed and how they are deployed on its
customers' premises. This allows a product life cycle VMS to provide a more accurate
rating for the impact of detected vulnerabilities. The amount of information that is
based on facts is higher when compared to an IT VMS, which has to make assumptions
instead.
In IT operations, when a vulnerability is detected by the VMS, an analysis is
triggered to understand the impact of the vulnerability on the system. The analysis
should also lead to a proposal for a corrective action: software update, configuration
change, other, or none. In a product life cycle VMS, the analysis phase is longer and
should produce corrective actions for the products which are already deployed by
customers as well as the products that are still under development. Based on the
corrective action proposals, there is a wider variety of solutions available when
compared to an IT VMS.
Having discussed the major differences between an IT VMS and a product life cycle
VMS, this paper continues with the implementation of a product life cycle VMS.
Based on ISO standards [26] and industry best practices, the establishment of a
product life cycle VMS requires the following steps:
The next step is the actual vulnerability management process. The purpose
of this process is to minimize and control the impact of vulnerabilities on the
products throughout the life cycle. The process is cyclic, and it applies to a wide
range of software products and services, or ones that use software. The process
applies to different product development methods (waterfall, agile) and different
software architectures and platforms.
• S/MIME - IETF - RFC 5652, 5750, 5751, 5754 [20], [55], [56], [63]
Each of the methods listed above has its own merits and drawbacks. In this thesis,
it is recommended to implement the VMS with all of them. This recommendation
does not impact the overall cost of the service, while it provides wide compatibility
with both human and machine client interfaces.
• Product management
• Customer support
After a series of interviews, the service requirements were aggregated and formalized
into a requirement specification document. As most of the users in the focus groups
were interested in the service functionality, and not in its implementation, there
were a number of requirement areas that were not covered in the interviews. More
specifically, the areas identified for additional research were:
• Service availability
• Data confidentiality
• Backup process
The additional requirements listed above are crucial for the implementation of
the VMS. However, those requirements and their specification are not in the scope
of this paper. Regardless, those requirements were specified as part of the
commissioned work. Once all requirements had been documented and approved by
the VMS owners, the work proceeded with an analysis of commercial off-the-shelf
products which could be utilized as a VMS.
• RSA Archer
• Tenable SC (SecurityCenter)
• Rapid 7 InsightVM
• Rapid 7 Nexpose
• NCSC-NL Taranis
It was also valuable to understand whether a product is available as open source
code, as that could allow modification and tailoring of the system in-house. Another
feature, based on privacy and confidentiality concerns, is whether the system is
available as an on-premises solution or only cloud-based. Flexibility reflects the
configuration options and extensibility of the system. As vulnerabilities are identified
in components (software or hardware), it is critical for the VMS to be able to
represent the complete product structure, down to the smallest component that could
be vulnerable. Lastly, the cost factor was summarized as the price for a solution
which will accommodate the estimated workload. A quick summary of these features
is provided in Table 2 below.
3.7.1 Registration
The service shall allow any company product, software, solution or a system to be
registered. It shall be possible to specify all components of the registered entity in
standard format. As an industry best practice, the service shall support CPE format.
All components, which are not found in public CPE dictionaries, shall be stored in
custom CPE dictionary. Users shall be allowed to register only components in CPE
format. This is done to ensure consistency of the input information and minimize
possible duplicate entries due to input errors. The service shall accept standard
Microsoft Excel spreadsheets as input, as long as the entries inside are in CPE format
and are stored according to a predefined template.
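The CPE-only registration rule described above can be enforced with a simple input validation step. The sketch below is illustrative: the regular expression is a simplification of the full CPE 2.3 formatted-string grammar (NIST IR 7695), and the function names are invented for the example.

```python
import re

# Simplified check for CPE 2.3 formatted strings, e.g.
# "cpe:2.3:a:openssl:openssl:1.0.2k:*:*:*:*:*:*:*".
# The real grammar allows quoting and special characters;
# this sketch only enforces the 13-field shape.
CPE23_PATTERN = re.compile(
    r"^cpe:2\.3:[aho\*\-]"   # part: application, hardware, OS, or wildcard
    r"(:[^:]+){10}$"         # vendor, product, version, and 7 more fields
)

def is_valid_cpe(entry: str) -> bool:
    """Return True if the entry looks like a CPE 2.3 formatted string."""
    return bool(CPE23_PATTERN.match(entry))

def validate_component_list(entries):
    """Split a registration upload into accepted and rejected entries."""
    accepted = [e for e in entries if is_valid_cpe(e)]
    rejected = [e for e in entries if not is_valid_cpe(e)]
    return accepted, rejected
```

Rejected entries would be returned to the user for correction, which is how the consistency and duplicate-avoidance goals above could be met in practice.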
In addition, the service shall allow security contacts with different privileges and
roles to be assigned per product version. It shall be also possible to assign contacts on
product level or higher, for monitoring and observation purposes. All registration in
the service shall follow role-based access control. The initial users in the service will be
configured manually by the administration team after those users have been vetted.
Once a critical mass of users is registered in the system, future user management
tasks will be handled by the integrated role-based access, permissions and request
handling processes.
When a new vulnerability is recorded, it shall have a unique identifier (CVE or
other). In addition, all recorded vulnerabilities shall have a CVSS version 3 value
(with backward compatibility for version 2) calculated to analyze the severity of the issue.
available on their support pages. Internal security tests shall be utilized as an input
of vulnerability information to complement the external sources.
When a potentially vulnerable component is found, CVSS "base" and "temporal"
ratings shall be calculated. Based on the rating, the process will determine if an alert
needs to be generated and within what time frame it should be handled. The service
shall automatically match potentially affected products based on the supplied 3PP list
with CPE formatted entries. It shall be possible to differentiate between vulnerabilities
found in components and ones found in own developed products. It should be possible
to assign different impact values based on the source of the vulnerability (component
or own software). Also, the recipients of the information are determined based on
the CVSS rating. CVSS "environmental" rating shall be calculated when necessary.
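The rating-based alerting described above can be sketched as follows. The severity bands are the standard CVSS v3 qualitative ratings; the handling deadlines are invented placeholders for whatever the company's vulnerability management policy would specify.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score to its standard qualitative severity."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Illustrative handling deadlines per severity (in hours); the actual
# time frames would come from the company's vulnerability policy.
ALERT_DEADLINES_H = {"Critical": 24, "High": 72, "Medium": 168, "Low": 720}

def alert_for(score: float):
    """Decide whether an alert is generated and its handling deadline."""
    severity = cvss_severity(score)
    if severity == "None":
        return None  # no alert needed
    return {"severity": severity, "deadline_h": ALERT_DEADLINES_H[severity]}
```

The same lookup could carry different deadline tables for component versus own-software vulnerabilities, matching the differentiated impact values mentioned above.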
Figure 10 presents a summary of the proposed vulnerability analysis process in
product development. The VMS shall implement the proposed process, to facilitate
structured analysis and aid the software development activities.
mitigated in specific software version shall be listed and described in the release
notes. This will increase the transparency and build trust with the customers, as
they will be able to see and verify that fixes were implemented as agreed.
• RSA Archer
• NCSC-NL Taranis
The final three COTS products were evaluated based on the identified strategic
and operational factors. Table 4 provides an overview of the benefits and drawbacks of
buying a VMS.
Benefits:
• Industry standard tools that have been tested and used by many; a reliable source of information
• Lower CAPEX

Drawbacks:
• COTS solutions are able to cover only parts of the product life cycle
• Additional tools need to be developed in-house to fulfill all requirements not covered by the COTS product
As the company had strategic interest in security services, more factors were
identified in favor of developing own VMS. Table 5 depicts the analysis of benefits
and drawbacks for implementing own VMS.
Even though the factors listed above were important for the decision making
process, only by implementing its own VMS would the company be able to fulfill all
requirements that were collected. Having considered the potential cost of developing
and operating its own VMS, the author recommends developing an own VMS as the more
cost-effective solution. The company management weighed all the factors and accepted
the recommendation to implement an own solution. Developing an own VMS was also
considered necessary as part of the internal vulnerability management improvement program.
The following chapter contains the design and implementation steps taken to deploy
own VMS.
4 Solution
This chapter presents a summary of the actions taken to develop and deploy own
VMS.
To be able to utilize such a service efficiently, there is a need for organization-wide
process and culture changes. This starts by devising a vulnerability management
strategy on company level. The strategy should embody the vision of the company
for vulnerability-free products and services. Once the strategy is in place, the next
step is to define policies, roles and responsibilities. These steps are performed by high
and middle level management. The next steps require more field-specific knowledge
and expertise, thus it is recommended that they be performed by vulnerability management
and security experts.
The experts should define the actual processes that will be used in the company
and the interfaces of those processes. They should also specify how the new
service will interact with the existing systems and define interfaces and APIs for
future extensibility. When the details have been documented and approved by the
management, the project can proceed with even more technical requirements. At
this stage, it is advisable to involve service design, user experience (UX), network
and security experts. At this point of the project, it is recommended to prepare a
resource plan. An example of such a plan can be seen in Figure 11 below.
Based on the plan and the identified resource needs, company management should
commit and secure funding for the VMS development. When developing a service of
such importance, it is recommended to have full-time dedicated personnel working on
the project. Temporary assignments and personnel changes add unnecessary strain
on the development process. In addition, it is very valuable to assign a full-time project
manager or lead person, who will make sure that the service is being developed as
required and any impediments are promptly addressed. The project lead should also
communicate progress reports to the stakeholders, as well as any additional needs
identified.
and business interfaces, which are not a focus of this study, are omitted. Figure 12
depicts the simplified configuration of VMS.
The core of the service is the vulnerability database, which stores vulnerabilities and
their relation to products and components. There are a number of technical interfaces
utilizing service APIs. They allow communication and collaboration within the
company, as well as with external partners. For example, 3PP vendors can provide
vulnerability feeds to VMS with RSS, or a direct subscription. Once a component
is selected for use in the company, it will be stored in the central repository. The
repository can communicate with VMS database to register new components for
vulnerability monitoring. VMS can, in return, update the repository with information
regarding vulnerable components, which should not be allowed for use in products.
PLCM systems can communicate product life cycle state information to the VMS
database. When a product is retired, there is no need for a vulnerability monitoring
process. In addition, when a new product is released, PLCM can communicate and
register the product and its components in VMS.
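As an illustration of the vendor feed interface mentioned above, the sketch below extracts CVE identifiers from an RSS feed. The feed shape and helper name are assumptions; a production implementation would follow the vendor's actual schema.

```python
import re
import xml.etree.ElementTree as ET

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def cve_ids_from_rss(rss_xml: str):
    """Extract CVE identifiers from the items of an RSS vulnerability feed."""
    root = ET.fromstring(rss_xml)
    ids = []
    for item in root.iter("item"):
        for field in ("title", "description"):
            node = item.find(field)
            if node is not None and node.text:
                ids.extend(CVE_RE.findall(node.text))
    return sorted(set(ids))  # deduplicated, stable order
```

Extracted identifiers would then be recorded in the VMS database and matched against registered components, as described above.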
When a new vulnerability is published, the information is recorded in VMS. Then,
alert notifications are sent to product development units and customers. Once product
units have evaluated the impact, the analysis is recorded in VMS and forwarded to
customers. After mitigation measures are in place and new software is released, the
product delivery organization will update VMS accordingly. At this point, product
release, directly or via VMS, can inform customers that new software is available for
download, which mitigates the vulnerability in question.
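The record–alert–analyze–mitigate flow described above can be modeled as a small state machine. The state names below are hypothetical and mirror the narrative, not the actual VMS implementation.

```python
# Hypothetical workflow states: record -> alert product units ->
# record analysis -> register mitigation -> close after customers
# are informed of the fixed release.
TRANSITIONS = {
    "recorded":  {"alerted"},
    "alerted":   {"analyzed"},
    "analyzed":  {"mitigated"},
    "mitigated": {"closed"},
    "closed":    set(),
}

class VulnerabilityRecord:
    def __init__(self, cve_id: str):
        self.cve_id = cve_id
        self.state = "recorded"

    def advance(self, new_state: str) -> None:
        """Move the record along the workflow, rejecting invalid jumps."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
```

Encoding the workflow this way makes it impossible to, for example, close a vulnerability before an analysis has been recorded.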
There are two special cases in VMS: communication with security researchers and
with regulators. While communication with regulators is based on legal requirements
and partnership agreements, there is no need for a specific technical interface. It is
expected to be a rare case, and could be considered in future studies. On the contrary,
communication with security researchers will benefit from a structured technical interface.
The interface in question should be specified as a part of the vulnerability disclosure
policy of the company. It could be implemented as a simple e-mail box or it can
even have a dedicated web interface, if rich functionality is expected.
This concludes the service overview. The next step in this study provides technical
details on the service design.
Since VMS is a software service, industry best practices suggest the need for a
development environment, a test environment, a verification environment and, when the
service is ready, a production environment. The service will have to be developed over
a long period of time and further improved after being deployed. It is advised to
have well established development and change management processes in place. All
applicable company IT and security policies and requirements shall be implemented
as well.
It is also beneficial to again utilize the focus groups which were formed to collect
user requirements. Another option is to look for pilot users and develop a prototype in
close collaboration with such test users. A short feedback loop from the users and a rapid
software release cycle are also beneficial for service development. After reviewing
the collected and approved requirements, the development team should make service
architecture and design proposals for review. Having a Proof-of-Concept (POC) demo
is another valuable tool. Mock-up designs can be utilized to gather input from the
selected users, thus having better specification of the UI, before time is spent on
actually developing it.
In parallel, the vulnerability management experts should work together with the
developers to specify the data models and concepts that will be used within the
VMS. As soon as the data model and structure have been approved, privacy impact
analysis should be performed. Based on the results, refinements to the data models
and structure can be made to decrease the impact if necessary. If the refinement steps
are insufficient, additional controls should be specified and implemented. Privacy
violations have serious financial consequences for the company that commits them.
Thus, there is a strong incentive to mitigate as many as possible and minimize the
risk of such violations.
At this point, the VM processes should be translated to service work flows. With
the parallel tasks for data modeling, POC and UI mock-ups also in progress, the
project lead should specify the duration of development and review cycles. As already
mentioned, having a short feedback loop with users is very valuable. However, having
too short development sprints is not always efficient. A balance should be reached
between a short feedback loop and optimal development cycles.
Following the latest software design principles, the service implementation should
be broken into small independent components, which can be designed and tested
separately. Then continuous integration and continuous deployment machinery will
be responsible for testing and deploying the newly built components. Regardless
of the actual methods used for service development, the project should have a
clear acceptance testing procedure and definition of done. Industry best practice
recommends using a checklist based on the service requirement specification for
such purposes, which is signed off by the stakeholders.
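The checklist-based definition of done could be captured in a simple sign-off gate. The checklist items and stakeholder roles below are invented for illustration; the real checklist would come from the requirement specification.

```python
# Assumed stakeholder roles; the actual sign-off list would be agreed
# per project.
REQUIRED_SIGNOFFS = {"product_owner", "security_lead"}

def release_accepted(checklist: dict, signoffs: set) -> bool:
    """A release is accepted when every checklist item passed and
    every required stakeholder has signed off."""
    return all(checklist.values()) and REQUIRED_SIGNOFFS <= signoffs
```

Automating this gate in the deployment pipeline would prevent a release from reaching production with open checklist items or missing approvals.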
The project lead should have a release plan and a road map, showing when specific
requirements or service features will be implemented. The development team should
agree with the stakeholders on a deadline for the minimal viable delivery, when the
service is deployed in a production setup and pilot users are engaged in testing the
implementation. When the deadlines are set, activities, which accompany the release,
should be specified as well. These include risk assessments, vulnerability assessments,
penetration testing and re-run of the privacy impact assessment. Additional personnel
may be involved as experts to execute the assessments. While the development
activities are ongoing, the same team, or a separate one, should also consider a
number of service implementation requirements.
• Role based access control (RBAC) and host based access control (HBAC) shall
be implemented
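A minimal sketch of the RBAC part of this requirement is shown below; the roles and permission names are illustrative, not the actual VMS configuration.

```python
# Illustrative role-to-permission mapping for a VMS-like service.
ROLE_PERMISSIONS = {
    "security_contact": {"read_vulns", "submit_analysis"},
    "observer":         {"read_vulns"},
    "admin":            {"read_vulns", "submit_analysis", "manage_users"},
}

def has_permission(user_roles, permission: str) -> bool:
    """Grant a permission if any of the user's roles includes it."""
    return any(permission in ROLE_PERMISSIONS.get(r, set())
               for r in user_roles)
```

A hierarchical variant, where roles inherit from groups, matches the permission-inheritance design described in the Conclusion chapter.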
In addition, when building VMS, which will be used by the whole company,
Business Continuity Management (BCM) must be implemented. Service Level
Agreements (SLA) must be defined and approved by the stakeholders. Consequences
of violated SLA shall be specified as well. BCM is often supplemented by Disaster
Recovery Planning (DRP) activities. Depending on the SLA and requirements, the
service design should consider a geo-redundant solution. An off-site stand-by system is
an alternative way to fulfill high-availability requirements.
As discussed earlier, ISMS is an important part of secure system design. In
addition, if the company has decided that security certification is needed, ISMS must
be implemented to fulfill ISO/IEC 27000 requirements. Risk assessment, vulnerability
assessment and penetration testing are a part of industry best practices too. VMS is
expected to have a long life, thus it is necessary to define its own life cycle management
processes, including operations, maintenance and further development.
If it is deemed necessary by the stakeholders, security benchmarks can be executed
as well. This will provide additional assurance for the posture and implementation
of the services and can be reused later, when offering the service to third parties.
Examples of such benchmarks are the Center for Internet Security (CIS) security
benchmarks [15]. Another industry accepted reference is Open Web Application
Security Project (OWASP) [48]. While CIS benchmarks are focused on measuring
compliance, OWASP projects include tools, recommendations, or just lists of risks
[49].
Having discussed some of the technical details of service design and implementation,
it is important to describe the processes that will be implemented by VMS. The
design and implementation of the VMS processes are described in the following subsection.
As discussed earlier, VMS can be used as an input for the sourcing process, as
seen in Figure 16 above. Vulnerability alerts are fed to the software update process
in sourcing and product development units. This is another activity, which must
be automated in order to maximize the value delivered by the service. When new,
fixed software is available, product development and maintenance units shall run
vulnerability assessments to verify if the vulnerability is mitigated. The same process
shall be followed regardless of the source of software updates. External and internal
software fixes shall be tested to provide evidence of successful vulnerability mitigation.
Due to governmental and contractual requirements, many software developing
companies run standard vulnerability scanners against the ready-for-release
product. This is done to provide assurance that all known vulnerabilities have been
identified and addressed. The scans also assist in the detection of possible false
positive results and possible misinterpretations of such scanner results, when run by
customers.
as CVE. Product release can also provide input for VMS in the form of security
assurance. The VMS database shall be updated to reflect that a new product version has
been released and shall be monitored for vulnerabilities. In addition, the VMS database
shall be updated to record the version and release date of the software, which has
mitigated vulnerabilities. The data should also reflect the availability of the new
software to customers.
Figure 18 above shows that the first level in the model is the baseline. There are
some vulnerability management activities, but they are not structured and depend on
the people who execute the actions. The first level is often referred to as "firefighting
mode" of operations. At this stage the company should consider formalizing processes
and training employees.
The second level is associated with devising strategy, writing policies and defining
processes. Planning for new actions is based on previous experience. Discipline is
required for successful execution of the task at hand. After these activities are
completed, the company has the baseline needed to proceed with VMS deployment.
The third level is characterized by the integration of existing processes into VMS.
The processes are being implemented in a system to be able to deliver VMS. This
could be the MVP version of VMS. In addition, this is the level which could be
reached with the majority of COTS products. While COTS often implement more than
MVP, they rarely provide quantitative metrics for process performance.
The fourth level is defined by quantitative measurable goals for process and
service performance. VMS should be deployed for the entire company portfolio and
management should define performance goals. To reach this level of maturity, it is
likely that VMS should be integrated with other existing PLCM machinery. The
integration with other PLCM systems is one of the key aspects of this thesis. Having
a stand-alone VMS, which is able to handle vulnerabilities throughout product life
cycle, is an expensive and inefficient deployment scenario. It will require a much
larger amount of manual work to synchronize data between all systems and it is prone
to errors. To be able to meet customer requirements and performance expectations,
VMS must integrate with systems from sourcing, development, delivery and support
organizations.
The fifth level focuses on optimization and continuous improvements. The
company should identify flaws and weaknesses as well as proactively improve VMS.
New technologies and practices can be implemented to deliver more cost effective VMS.
Some of these improvements could require major service redesign and additional
requirements specification. An example of such improvement is the addition of
real-time threat intelligence feeds to VMS. It can be implemented as an additional
vulnerability input module to increase the value and coverage of the service. Other
modules can be developed to interface with customer Security Information and
Event Management (SIEM) systems. In this case VMS could be used as a threat
intelligence input for customer SIEM, or SIEM could feedback real-time data from
live networks with information regarding vulnerability exploit attempts. These and
further improvements of VMS should be considered in further studies.
Referring to the original CMM for software development, this study recommends
implementing VMS to be able to reach levels 4 and 5 of the software quality process.
With the help of VMS, a company should be able to have predictable quality of their
software and ensure that it has a minimal amount of flaws on release. While VMS
deployment is not mandatory in the classic CMM, it could be a valuable addition.
Following the requirements and process specifications developed as part of this
thesis, an enterprise scale VMS was developed and deployed. The deployed service is
described in the next section.
and VMS stakeholders were present to confirm that the service in fact implements
the requirements as specified. As the service was deployed in multiple releases,
for practical reasons, the checklists were split to cover the expected features and
functionality in each release. When the acceptance testing was completed, the VMS
stakeholders signed the acceptance test report and approved the deployment of
each release to production. Samples of the acceptance testing checklists are provided
in Appendix B.
The requirements for VMS can be divided into two main categories: functional
and non-functional. While the functional requirements focus on system features,
which will be observed by the users, the non-functional requirements emphasize
aspects related to operations and maintenance of the service. The requirements
identified in this stage were used in the Statement of Compliance (SOC) checklist as
seen in Appendix B. Table 6 provides a summary of the high level functional and
non-functional requirements.
Non-functional requirements include:
• Quality requirements
• Security requirements
• Integrity requirements
• Confidentiality requirements
• Availability requirements
• Backup requirements
It should be clear that, while non-functional requirements are not visible directly
to the users, they are a high priority for service stakeholders and the operations team.
Thus, the service shall not be deployed unless both functional and non-functional
requirements are met. The progress of service development leads to a larger number
of requirements being fulfilled. When all requirements pass the acceptance tests,
VMS will be considered complete and to have reached the fourth level of the maturity
model.
• Web client
• Business logic
• Database
The web client is browser based and is built on top of a REST API. If users need
to improve it or automate it, they can implement their own client, based on the API
documentation and libraries supplied. The business logic is protected by firewalls
* There is an actual limit, but it can be overcome by simply scaling the storage
subsystem.
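As an illustration of building such a client on top of the REST API, the sketch below constructs (without sending) an authenticated query against a hypothetical endpoint. The base URL, path and token handling are assumptions; the real ones would come from the supplied API documentation and libraries.

```python
import urllib.parse
import urllib.request

# Hypothetical service location; the real base URL comes from the
# API documentation supplied with the service.
BASE_URL = "https://vms.example.internal/api/v1"

def build_vulnerability_query(cpe: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a request listing vulnerabilities for a CPE."""
    url = BASE_URL + "/vulnerabilities?cpe=" + urllib.parse.quote(cpe)
    return urllib.request.Request(
        url,
        headers={
            "Authorization": "Bearer " + token,  # token scheme is illustrative
            "Accept": "application/json",
        },
    )
```

Passing the resulting request to `urllib.request.urlopen` (inside the firewall perimeter described above) would return the JSON vulnerability list for the given component.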
It is important to note that, as the service has been developed and is being operated
in-house, there should be no license fees that would grow as the service scales up.
As most of the COTS alternatives had billing based on usage or users,
the custom VMS provides better cost efficiency when scaled to such a size.
In addition to a number of service performance metrics, the users observed a
number of quality improvements as well. VMS is able to increase visibility and
enforce access restrictions to information better than older systems. This has led to
better oversight of product security and improved vulnerability alert responses from
product units.
Having standard identifiers for components (CPE), vulnerabilities (CVE) and
scoring (CVSS), has also provided improved visibility and traceability of vulnerabilities.
Users are able to perform searches with higher accuracy and VMS is able to match
impacted products with improved certainty. VMS has also enforced specific fields
required for vulnerability analysis. This structure has made the analysis information
searchable and indexable, which increases its value.
Integration of the alerting and workflow systems has also improved response rates
with the help of reminders. The internal notification system has allowed for improved
information flow between dependent products. Whenever a platform type of product
provides its analysis of a vulnerability, the information is instantly available in
read-only mode to all products which have registered as users of that platform.
As VMS has been deployed following standard IT processes, it has shown
significant improvements by fulfilling a number of non-functional requirements.
Examples of such improvements are: better user support; improved backup and
availability processes; as well as high availability deployment. Accounting has been
improved as well by implementing RBAC and HBAC with remote secure audit logging
servers. This also allows for improved traceability of user actions and faster detection
of malicious activities.
To ensure that the VMS is free of known vulnerabilities, it has itself been registered
for vulnerability monitoring. This allows the DevOps team to receive
vulnerability alerts and act in a timely manner to protect the service. At the time of
writing, VMS has not fulfilled all requirements, as it is still being developed. The
following subsection describes the maturity level reached at the time of writing.
The addition of a real-time status view and integration with customers and other
partners will further increase the value of this VMS. One can say that the basic principles
of value network design apply in this case as well. Having more parties connected
and communicating over VMS would increase the overall value of the services and
provide better value for all participants.
This concludes the Solution chapter of this thesis. The following Conclusion
chapter will provide a summary of the results and suggest topics for improvements
and future research.
5 Conclusion
The work on this thesis was done over the duration of more than three years. The
design, development and deployment of the service took two years. The goals set by
the customer were successfully achieved. As the company that commissioned the
service has over 400 products, ranging from hardware to SaaS offerings, VMS was
built to be generic by design. The modular architecture and APIs will help to integrate
and expand it with minimal effort in the future.
The concepts used within the system are defined as abstract entities and can be
used recursively. User management is delegated to a standard external system, which
also reduces privacy impact on the system. VMS does not store users’ personal data.
In addition, the user permissions are defined in an expandable, hierarchical manner,
which makes it easy to add new roles and map users to groups, which in turn will
automatically grant them the permissions inherited from their group membership.
5.1 Results
The first phase of this thesis starts with research on vulnerability management systems
and services. Then, it proceeds with the collection of information and requirements with
the help of semi-structured focus group interviews. Once all requirements have
been collected, formulated and approved by stakeholders, the study continues with
evaluation of available COTS products and make-or-buy analysis.
The second phase of this thesis examines the design, development and deployment
of the in-house developed VMS. As the VMS stakeholders decided to implement their
own design instead of buying a COTS product, the author facilitated the design and development
of the enterprise scale VMS. While the service was being developed and deployed,
the author had a role to observe the progress and provide feedback with acceptance
testing and future improvements. Since the official launch of VMS, the project has been
handled by the PSIRT and the VMS DevOps team, and the author's involvement has been
reduced to a minimum.
To answer the research question, "Which is the solution able to deliver enterprise
scale, cost effective and fast vulnerability management service for product life cycle?",
this study concludes that, for this particular customer, developing its own VMS is the
right approach. The interviews and benchmarks have confirmed that VMS is able to
deliver more than COTS products within budget. In addition, VMS can be scaled at
minimal cost of hardware and additional personnel needed to develop, operate and
maintain it. The cost of VMS is not tied to usage and, for an enterprise customer, this
has proved to be a major financial benefit.
analysis may differ, reproducing the rest of the research can still prove beneficial.
The differentiating factor for make-or-buy will be the input from customer needs.
In addition, this study can be applied to any company which develops products
that are software or run software. To some extent it can be utilized by other
industries; however, standards and requirements for other industries might differ. A
good example of an industry where this study will be very important is the Internet of
Things (IoT). Based on the scale and speed of development in IoT, the author is convinced
that VMS will be crucial for device manufacturers, system integrators and service
providers.
It is also possible to utilize the same methods in a more generic way to develop
a completely different service. Nonetheless, the value of this study is higher in the
software and IT industries. Time and cost to adapt it to other applications might
outweigh the benefit of reuse.
The role of the author was to collect user requirements, formalize them and then
analyze the available COTS products. Then the author was responsible for providing
input and recommendations for the make-or-buy decision. The company decided to
implement its own VMS. At that stage, the author was responsible for the design of the
service, its components, work-flows and logic. The author was also responsible for
technology choices and data formatting. As VMS development progressed, the author
was responsible for acceptance testing and implementation guidance. In addition,
the author was responsible for VMS’s non-functional requirements and proposals for
their implementation.
As described earlier, the company regards the VMS implemented based on this
study as a success. That said, the collection of requirements, design and development
could have been completed in a shorter time with the help of stronger commitment from
high-level management. As vulnerability management strategy, policy and process
development are prerequisites for VMS, it is beneficial if those are in place before
starting analysis of VMS needs. It is also possible to speed up the development and
deployment of VMS with the help of larger funding.
Having designed and deployed VMS, it is valuable to provide insights on how the
service could be exploited further.
References
[1] J. M. Anderson. Why we need a new definition of information security.
22(4):308–313, 2003. ISSN 0167-4048. doi: 10.1016/S0167-4048(03)
00407-3. URL http://www.sciencedirect.com/science/article/pii/
S0167404803004073.
[5] A. Bryman. Social Research Methods. Oxford University Press, 2015. ISBN
978-0-19-968945-3. Google-Books-ID: N2zQCgAAQBAJ.
[10] T. Dierks, Independent, E. Rescorla, and R. Inc. The transport layer security (tls)
protocol version 1.2, 2008. URL https://tools.ietf.org/html/rfc5246.
[11] J. Dittmer. Applying lessons learned for the next generation vulnerability
management system. SANS Institute InfoSec Reading Room, 2015.
URL https://www.sans.org/reading-room/whitepapers/threats/
applying-lessons-learned-generation-vulnerability-management-system-35997.
[20] R. Housley and V. Security. Cryptographic message syntax (cms), 2009. URL
https://tools.ietf.org/html/rfc5652.
[21] J. Humble. The case for continuous delivery, 2014. URL https://www.
thoughtworks.com/insights/blog/case-continuous-delivery.
[22] M. Humphries. D’oh! 2016’s biggest tech fails, 2016. URL http://www.pcmag.
com/feature/350413/d-oh-2016-s-biggest-tech-fails.
[23] ICASI. The common vulnerability reporting framework, 2012. URL http:
//www.icasi.org/cvrf/.
[28] S. Keach. 12 of the biggest tech fails from 2016: Can you guess the worst?, 2016.
URL http://www.trustedreviews.com/news/biggest-tech-fails-2016.
[32] M. Michael. The state of cyber security 2017, 2017. URL https://business.
f-secure.com/the-state-of-cyber-security-2017.
[36] Mitre. Common vulnerabilities and exposures: the standard for information
security vulnerability names, 2014. URL https://cve.mitre.org/.
[43] NIST. Guide for conducting risk assessments, 2012. URL http://nvlpubs.
nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-33.pdf.
[44] NIST. Official common platform enumeration (cpe) dictionary statistics, 2017.
URL https://nvd.nist.gov/products/cpe/statistics.
[47] OASIS. OASIS common security advisory framework (CSAF) standard work, 2016.
URL https://cvrf.github.io/index.html.
[48] OWASP. Open web application security project, 2017. URL https://www.
owasp.org/index.php/Main_Page.
[50] I. Paul. The year in tech: 2016’s biggest flops, fails, and
faux pas, 2016. URL www.pcworld.com/article/3152054/internet/
the-year-in-tech-2016s-biggest-flops-fails-and-faux-pas.html.
[58] Rapid7. Live vulnerability management and endpoint analytics, 2017. URL
https://www.rapid7.com/products/insightvm/.
[62] RSA. Rsa archer it & security risk management, 2016. URL https://www.rsa.
com/content/dam/rsa/PDF/2016/05/h15021-rsa-archer-itsrm-sb.pdf.
[63] I. Sean Turner. Using sha2 algorithms with cryptographic message syntax, 2010.
URL https://tools.ietf.org/html/rfc5754.
[65] T. Spangler. The biggest tech fails of 2016, 2016. URL http://variety.com/
2016/digital/news/2016-biggest-tech-fails-1201945122/.
[69] Symantec. Mirai: what you need to know about the botnet behind recent
major ddos attacks, 2016. URL https://www.symantec.com/connect/blogs/
mirai-what-you-need-know-about-botnet-behind-recent-major-ddos-attacks.
[71] U.S. Department of Health and Human Services. Summary of the HIPAA security rule, 2017. URL https:
//www.hhs.gov/hipaa/for-professionals/security/laws-regulations/.
[74] VulnDB. Vulnerability quickview 2016 year end. Technical report, Risk Based
Security Inc., Jan. 2017. URL https://pages.riskbasedsecurity.com/
hubfs/Reports/2016VulnDBYearEndReport1.27.17.pdf.
[77] D. Yadron and D. Tynan. Tesla driver dies in first fatal crash while using
autopilot mode, 2016. URL https://www.theguardian.com/technology/
2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.
A Appendix
B Appendix
Sample Statement of Compliance checklists for EVMS
B.1.2 Automation
B.1.3 Reporting
B.1.4 Alerting
B.1.5 Auditing
B.1.6 Database
B.2.2 Serviceability
B.2.3 Latency
B.2.4 Monitoring
B.2.5 Logging
B.2.6 Operations
B.2.7 Quality
B.2.8 Security
B.2.9 Integrity
B.2.10 Confidentiality
B.2.11 Availability
B.2.12 Backup