
The IoTAC Software Security-by-Design Platform:

Concept, Challenges, and Preliminary Overview


Miltiadis Siavvas∗, Erol Gelenbe†, Dimitrios Tsoukalas∗, Ilias Kalouptsoglou∗, Maria Mathioudaki∗, Mert Nakip†,
Dionysios Kehagias∗, Dimitrios Tzovaras∗

∗ Centre for Research and Technology Hellas, Thessaloniki, Greece
† Institute of Theoretical & Applied Informatics, Polish Academy of Sciences, Gliwice, Poland

siavvasm@iti.gr, seg@iitis.pl, tsoukj@iti.gr, iliaskaloup@iti.gr, mariamathi@iti.gr, mnakip@iitis.pl,
diok@iti.gr, dimitrios.tzovaras@iti.gr

Abstract—Critical everyday activities handled by modern IoT Systems imply that security is of major concern both for the end-users and the industry. Securing the IoT System Architecture is commonly used to strengthen its resilience to malicious attacks. However, the security of the software running on the IoT must be considered as well, since the exploitation of its vulnerabilities can infringe the security of the overall system, regardless of how secure its architecture may be. Thus, we present an IoT Software Security-by-Design (SSD) Platform, which provides mechanisms for monitoring and optimizing the security of IoT software applications throughout their development lifecycle, in order to validate the broader security of the IoT software. This paper describes the proposed SSD Platform, which leverages security information from all phases of development using some novel mechanisms that have been implemented, and which can lead to a holistic security evaluation and future security certification.

Keywords—Internet of Things, Software Security, Requirements Engineering, Static Analysis, Vulnerability Prediction

I. INTRODUCTION

Modern Internet of Things (IoT) Systems consist of a large number of interconnected and highly diverse devices, such as sensors, actuators, gateways, etc., often accessible and controllable through the Internet. The high interconnectivity and accessibility of modern IoT Systems, along with the criticality of the daily activities that they monitor and control (e.g., smart living, autonomous driving, industrial control, etc.), render their security an aspect of utmost concern, both for the users and the owning enterprises [1].

An effective way of securing an IoT System is by securing its architecture. This can be achieved through conformance to International IoT Security Standards and the deployment of various security countermeasures, such as intelligent attack detection, prevention, and mitigation mechanisms, security gateways, honeypots, etc. Several initiatives have recently focused on extending well-established IoT Architectures (e.g., the ISO/IEC 30141 Reference Architecture [2]) towards strengthening their security (e.g., SerIoT [1]). Apart from a secure IoT Architecture, however, the software that is running on the different nodes of an IoT System should also be considered. As per the "security of the weakest link" principle, if the software contains vulnerabilities, the security of the overall system could be compromised regardless of how secure the overall architecture may be. Hence, to ensure a secure IoT System, the security level of the software running on its nodes should be assessed and optimized throughout its development.

To this end, we develop the Software Security-by-Design (SSD) Platform, i.e., a novel software security monitoring and optimization platform that provides mechanisms for assessing and improving the security of IoT software applications throughout their overall Software Development Lifecycle (SDLC). In particular, the SSD Platform allows the developers of an IoT software application to (i) ensure the correct definition of the security requirements, (ii) ensure the adherence of the produced IoT software application to the originally defined requirements, (iii) evaluate the security level of the source code of the IoT software application, and (iv) provide recommendations for security improvements. In that way, the SSD Platform provides a more holistic software security assessment, as it covers all the phases of the SDLC horizontally.

The purpose of the present paper is to provide an overview of the envisaged SSD Platform and describe its main functionalities, i.e., the main novel mechanisms that have been proposed and developed so far. The SSD Platform is one of the main outcomes of the IoTAC Project, an EU project funded through the Horizon 2020 Programme.

In the sequel, Section II discusses the related work, focusing on the main challenges that we try to address. Section III provides an overview of the broader SSD Platform, whereas Section IV describes the main novel mechanisms that have been developed so far. Finally, Section V concludes the paper and discusses directions for future work.

II. RELATED WORK

According to the Security-by-Design paradigm, security should be monitored and optimized at all phases of the SDLC, and particularly during the Requirements, Design, Coding, and Testing phases.

During the Design and Requirements phases, security can be added by ensuring that the security requirements are correctly defined, since a large portion of software vulnerabilities stem from missing, incorrect, or vague security requirements [3]. Although several approaches for specifying, verifying, and validating functional requirements exist [4]–[6], highly limited
978-1-5386-7097-2/18/$31.00 ©2018 European Union


contributions can be found with respect to non-functional requirements, including security requirements. In particular, several templates and dedicated specification languages have been proposed [3], but their adoption in practice is limited, as they are highly complex and tedious to apply. Hence, there is a need for a mechanism able to facilitate the specification of security requirements and enable their verification and validation, preferably in a highly automated way.

During the Coding and Testing phases of the SDLC, several mechanisms have been proposed for detecting security issues, with static analysis being the most promising solution [7]. However, the fact that their results (i.e., static analysis alerts) are in a very raw and low-level form hinders their practicality, since the encapsulated security information is not easily conveyed to the end-users. Hence, dedicated tools are needed, able to post-process the results of these analyzers in order to provide more intuitive security information to developers and project managers, assisting them in making strategic decisions.

Two common post-processing mechanisms are: (i) security assessment models, which aggregate the static analysis results to provide high-level quantified security measures that reflect important security aspects, and (ii) vulnerability prediction models, which highlight software components (e.g., classes, functions) that are likely to contain vulnerabilities. However, despite some notable attempts [8], [9], no well-accepted security models exist in the literature (especially for the case of IoT software). In addition, existing vulnerability prediction models [10] have not demonstrated sufficient results, and do not consider system-level information in order to assess the vulnerability status of software components. Finally, no assessment methodology exists that takes into account information from all the phases of the SDLC and provides a more holistic security evaluation.

Within the context of the IoTAC project, we attempt to address the aforementioned challenges by proposing novel security monitoring and optimization mechanisms for the various phases of the SDLC. We also develop the SSD Platform, which integrates these individual security monitoring and optimization solutions and aggregates their results in order to provide a more holistic security evaluation. The broader evaluation results will be issued in the form of a pseudo-certificate, in order to showcase how the SSD Platform can be leveraged for future security certification activities.

III. OVERVIEW OF THE IOTAC SECURITY-BY-DESIGN PLATFORM

The SSD Platform is an independent platform that aims to offer solutions for monitoring and optimizing the security level of software applications running on IoT devices, throughout their overall SDLC. In particular, it enables the user to (i) correctly specify the security requirements of a given IoT software application in a uniform and concrete manner, (ii) assess the adherence of the application to the specified security requirements, (iii) detect potential vulnerabilities that reside in the source code of the software application, as well as potentially vulnerable components (i.e., security hotspots), and (iv) quantify the security level of IoT software through the provision of high-level security measures. Solutions will also be provided for validating (in fact, certifying) the overall security of an IoT software application by considering (i) security evaluation results from the aforementioned phase-level security monitoring mechanisms, and (ii) its compliance to selected International Security Standards (e.g., ISO/IEC 27001, IEC 62443-4-2, etc.).

The high-level overview of the SSD Platform is illustrated in Figure 1. As can be seen, the proposed platform consists of three main modules, i.e., the Design and Requirements Module, the Security Assurance Module, and the Security Certification Module. The first two modules provide mechanisms for monitoring and optimizing the security of a given application during the Requirements, Design, Coding, and Testing phases of the SDLC. The third module is responsible for validating the overall security of the analyzed software based on the results of the other two modules, and for issuing a certificate (in fact, a pseudo-certificate) that reflects its overall security level. To better understand the overall goal of the SSD Platform, its core modules are briefly described in the rest of this section, while a more detailed description of the main already-developed mechanisms is provided in Section IV.

Fig. 1: The high-level overview of the IoTAC Software Security-by-Design (SSD) Platform

The Design and Requirements Module is responsible for monitoring and improving the security level of a software application during the Design and Requirements phases of the SDLC. As shown in Figure 1, this module consists of three main mechanisms (i.e., components):
• Software Security Requirements Specification: This mechanism is meant to facilitate the specification of the security requirements of a given software product. It allows the user to define the desired security requirements in natural language (avoiding tedious templates),
processes them to automatically determine their main concepts and underlying semantics, and turns them into a well-structured and unified form.
• Software Security Requirements Verification and Validation: The purpose of this mechanism is to evaluate the correctness and completeness of the software security requirements defined by the user, as well as to provide recommendations regarding their improvement.
• Software Security Requirements Adherence Check: This mechanism is responsible for evaluating whether the final IoT software application adheres to the originally imposed security requirements. More specifically, this mechanism is expected to pinpoint security requirements that either have not been addressed at all, or have been partially implemented and require more technical work.

The Software Security Assurance Module is responsible for monitoring and improving the security level of a software application during the Coding and Testing phases of the SDLC. As shown in Figure 1, this module provides the following core mechanisms (i.e., components):
• Quantitative Software Security Assessment: The purpose of this component is to provide quantitative expressions of internal security aspects of an IoT software application, based on the existence of potential security issues that may reside in its code. This component is based on state-of-the-art concepts from the fields of software quality and software security evaluation (e.g., [8], [9]).
• Vulnerability Prediction: This component is responsible for identifying security hotspots, i.e., software components that are likely to contain vulnerabilities. It is based on vulnerability prediction models that are built upon (i) advanced machine learning techniques (mainly deep learning) and (ii) popular vulnerability datasets.

Finally, the purpose of the Software Security Certification Module is to provide solutions for validating the overall security of a given IoT software application, focusing on individual security assessments from the various phases of its SDLC that could be used to potentially support its future certification. To do so, this module takes into account (i) project-specific security evaluations (retrieved from the other two modules), and (ii) conformance/compliance to international security standards (e.g., ISO/IEC 27001, IEC 62443-4-2, etc.), and produces a pseudo-certificate reflecting the security level of the analysed IoT software, along with a report on the various aspects that require fixing.

IV. CORE ELEMENTS

This section describes the novel mechanisms that have been developed as part of the SSD Platform, along with details regarding their main elements. To date, novel mechanisms have been developed as part of the Design and Requirements Module and the Software Security Assurance Module.

A. Design and Requirements Module

1) Software Security Requirements Specification: The purpose of this component is to aid software engineers to formally specify security requirements in a well-defined and structured way, without having to utilize tedious formal templates or specification languages. To this end, as part of the SSD Platform, a novel security requirements specification mechanism has been implemented, able to automatically identify the main concepts of a security requirement defined by the user in natural language (i.e., text), and express them in a well-structured and common form [11]. More specifically, the proposed mechanism applies Natural Language Processing (NLP) techniques (i.e., syntactic and semantic analysis) to identify the main syntactic and semantic concepts of the submitted requirements, and maps these concepts into dedicated Ontology objects (i.e., Actor, Action, etc.), which are stored into a dedicated Ontology. In that way, the submitted raw textual requirements are automatically turned into a formal ontology-based representation. The high-level overview of the proposed mechanism is illustrated in Figure 2.

Fig. 2: The high-level overview of the Software Security Requirements Specification mechanism

As can be seen in Figure 2, the proposed mechanism consists of two main steps, namely the Syntactic Analysis, which identifies the main grammatical terms (e.g., noun, verb, etc.) of the requirement along with their grammatical relationships (e.g., subject-verb-object, etc.), and the Semantic Analysis, which identifies the main semantic concepts (e.g., Action, Actor, Object, etc.) of the requirement along with their semantic relations.

The Syntactic Analysis step receives as input the requirement expressed in natural language and applies tokenization in order to split the sentence into word tokens, as well as lemmatization in order to derive the uninflected form of each word token. Subsequently, it applies part-of-speech tagging to identify the grammatical category of each word, and dependency parsing to determine the grammatical relations between them. For the above procedure, the Mate Tools1 were employed, since they are widely used for such tasks.

In the next step, the results of the Syntactic Analysis, and particularly the identified grammatical terms and relations, are provided as input to the Semantic Analysis mechanism, in order to be mapped to semantic terms and relationships. Initially, a semantic role labeling process is performed to assign labels to words or phrases that indicate their semantic concept in the sentence, using the semantic role labeller provided by the Mate Tools.

1 https://code.google.com/archive/p/mate-tools/
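To make the syntactic-analysis pipeline described above concrete, the sketch below walks a single requirement through tokenization, lemmatization, part-of-speech tagging, and a minimal subject-verb-object extraction. The dictionaries and the extraction rule are toy stand-ins for illustration only; the platform itself relies on the Mate Tools for these steps.

```python
import re

# Toy lookup tables standing in for a real lemmatizer and POS tagger.
LEMMAS = {"encrypts": "encrypt", "stored": "store", "systems": "system"}
POS_TAGS = {"the": "DET", "all": "DET", "shall": "AUX",
            "system": "NOUN", "data": "NOUN",
            "encrypt": "VERB", "store": "VERB"}

def tokenize(requirement):
    """Split the sentence into lower-cased word tokens."""
    return re.findall(r"[a-z]+", requirement.lower())

def lemmatize(tokens):
    """Reduce each token to its uninflected (dictionary) form."""
    return [LEMMAS.get(tok, tok) for tok in tokens]

def pos_tag(lemmas):
    """Attach a part-of-speech tag to every lemma ('X' = unknown)."""
    return [(lem, POS_TAGS.get(lem, "X")) for lem in lemmas]

def extract_svo(tagged):
    """Minimal dependency-style extraction: take the first verb, the
    nearest preceding noun as subject, and the next noun as object."""
    verb_idx = next(i for i, (_, t) in enumerate(tagged) if t == "VERB")
    subject = next(w for w, t in reversed(tagged[:verb_idx]) if t == "NOUN")
    obj = next(w for w, t in tagged[verb_idx + 1:] if t == "NOUN")
    return subject, tagged[verb_idx][0], obj

requirement = "The system shall encrypt all stored data."
tagged = pos_tag(lemmatize(tokenize(requirement)))
print(extract_svo(tagged))  # ('system', 'encrypt', 'data')
```

The extracted triple is exactly the kind of subject-verb-object relation that the Semantic Analysis step then maps onto Actor, Action, and Object ontology concepts.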
However, since this semantic role labeller is able to detect only generic thematic concepts and relations (e.g., acceptor, property, etc.), it has been extended in order to also detect the main requirement-specific concepts (i.e., Actor, Action, etc.), based on a set of custom rules [11]. In brief, as can be seen in Figure 2, initially the Priority and the Action of the requirement are identified. Subsequently, for each Action, the associated Actor and Object are identified. Finally, for each identified Object, several Properties (e.g., requirement prerequisites, etc.) are detected and reported. All the identified requirement concepts are then stored into a dedicated Ontology, called the Security Requirements Knowledge Base.

2) Software Security Requirements Verification and Validation: As already mentioned, a large portion of software vulnerabilities stem from incorrect, missing, or vague security requirements. The purpose of this component is to verify and validate the correctness and completeness of the user-defined security requirements and to provide recommendations for their improvement. To this end, a novel mechanism has been developed, aiming to compare a given security requirement to a curated list of well-defined security requirements (normally retrieved from international standards and other projects), identify inconsistencies, and finally propose refinements [11]. The high-level overview of the aforementioned mechanism is depicted in Figure 3.

Fig. 3: The high-level overview of the Software Security Requirements Verification and Validation mechanism

As can be seen in Figure 3, the mechanism receives as input the Ontology instances of a security requirement, i.e., its main semantic concepts (e.g., Action, Actor, Object, etc.), as derived by the specification mechanism presented in Section IV-A1. Subsequently, these instances are compared to the ontology instances of the carefully curated list of security requirements that are stored in the Software Security Requirements Knowledge Base. In particular, similarity checks are performed between the user-defined requirement and those of the curated list, utilizing popular NLP toolkits (e.g., WordNet with NLTK2). Based on the values of the calculated similarity scores, several recommendations for improvement are provided, such as: (i) rephrasing the analyzed security requirement based on a highly similar requirement found in the curated list, (ii) inclusion of additional security requirements that are observed to be closely related to the analyzed requirement, and (iii) changing the priority of the analyzed security requirement based on the priority of similar requirements in the curated list.

2 https://www.nltk.org/

B. Security Assurance Module

1) Quantitative Security Assessment: The purpose of this component is to provide quantitative security indicators (i.e., security measures) that reflect important security attributes of an IoT application's source code, and help developers identify code-level issues that may correspond to critical vulnerabilities. Those indicators are computed by aggregating security information retrieved from static code-level analysis, which is acknowledged to be one of the most effective mechanisms for detecting vulnerabilities that reside in source code [7], [12].

To this end, as part of the SSD Platform, we developed the Security Evaluation Framework (SEF), which is a mechanism that evaluates the security level of the source code of a given IoT software application in a quantitative manner, based on the results of security-specific static analysis. More specifically, SEF (i) integrates popular static code analyzers known to detect security issues, and (ii) enables the calculation of high-level security measures by aggregating the low-level results of the security-specific static analysis. High-level security measures are more intuitive and easily understandable even by stakeholders with little or no technical knowledge, such as project managers, facilitating in that way decision making during the overall development of the software application. The high-level overview of SEF is illustrated in Figure 4.

Fig. 4: Overview of the Security Evaluation Framework

As can be seen in Figure 4, SEF receives as input a software application and employs static analysis in order to detect potential security issues (i.e., security-related static analysis alerts) that may reside in the software. This is achieved mainly via a popular static analysis platform, namely SonarQube3, which is configured in order to detect important security vulnerabilities (e.g., SQL Injection, Cross-site Scripting, Memory Leaks, Weak Cryptography, etc.). Additional open-source static code analyzers are also utilized (e.g., CppCheck and FindBugs) through dedicated SonarQube plugins.

Subsequently, the low-level static analysis alerts are fed to the Security Measures Computation mechanism, which aggregates them to compute the high-level security measures. IoT-specific security models are utilized (leveraging concepts from state-of-the-art security and quality models [8], [9]) to determine (i) the high-level security measures that should be computed, and (ii) which low-level static analysis results should be aggregated (and in what way) to quantify those measures.

3 https://www.sonarqube.org/
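To illustrate the aggregation idea, the sketch below turns a list of security-related static analysis alerts into a single normalized measure via a weighted alert density. The rule names, weights, and the worst-case density threshold are hypothetical assumptions for illustration, not the actual SEF security model.

```python
# Hypothetical per-rule weights: more critical alert types count more.
ALERT_WEIGHTS = {"sql-injection": 1.0, "xss": 0.9,
                 "weak-crypto": 0.6, "memory-leak": 0.5}

def alert_density(alerts, kloc):
    """Weighted number of alerts per 1000 lines of code.
    Unknown rules get a small default weight of 0.3."""
    return sum(ALERT_WEIGHTS.get(rule, 0.3) for rule in alerts) / kloc

def security_measure(alerts, kloc, worst_density=5.0):
    """Map the weighted density onto a [0, 1] score: 1.0 means no
    weighted alerts, 0.0 means at or beyond the assumed worst case."""
    score = 1.0 - min(alert_density(alerts, kloc) / worst_density, 1.0)
    return round(score, 3)

# Alerts for a hypothetical 2000-line component, e.g. exported
# from a static analyzer's report.
alerts = ["sql-injection", "xss", "weak-crypto", "unused-variable"]
print(security_measure(alerts, kloc=2.0))  # 0.72
```

Real security models would compute several such measures (one per security attribute) and decide per measure which alert categories contribute and with what weights.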
The output of SEF is a report containing the detailed results of the analysis, and particularly: (i) the calculated high-level security measures, and (ii) the low-level static analysis alerts that were utilized for computing those measures.

Security Alerts Criticality Assessor: Although effective in detecting security issues, static analysis is known to produce long lists of alerts, most of which are not critical from a security viewpoint. This hinders its practicality, since developers often have to go through the tedious process of triaging the alerts in order to detect those that correspond to critical security issues.

In an attempt to address the aforementioned problem, we propose a novel mechanism for assessing the criticality of security-related static analysis alerts [13]. In particular, we developed a self-adaptive technique, the Security Alerts Criticality Assessor (SACA), for classifying and prioritizing security-related static analysis alerts based on their criticality, by considering information retrieved from (i) the alerts themselves, (ii) vulnerability prediction (see Section IV-B2), and (iii) user feedback. The proposed technique is based on machine learning models, particularly on neural networks, which were built using data retrieved from static analysis reports of real-world software applications. The high-level overview of the tool is presented in Figure 5.

Fig. 5: Overview of the Security Alerts Criticality Assessor

As can be seen in Figure 5, a software project is provided as input to the system and, subsequently, security-specific static analysis is employed to retrieve the alerts that are relevant to this project. In addition, vulnerability prediction models are employed to compute the vulnerability scores (see Section IV-B2) of its software components, which reflect the likelihood of each component containing vulnerabilities. Then, the alerts along with the vulnerability scores are provided as input to the Security Alerts Assessor (SAA), i.e., a neural network that assesses how critical each one of the alerts is from a security viewpoint. More specifically, for each one of the received alerts, it reports (i) a criticality flag (i.e., a binary value between 0 and 1, with 1 denoting that the corresponding alert is considered critical), and (ii) a criticality score (i.e., a continuous value in the [0,1] interval denoting how likely it is for the alert to be security-critical). Hence, developers can start their refactoring activities by fixing alerts that have higher criticality, increasing in that way the probability of detecting and mitigating actual vulnerabilities.

As illustrated in Figure 5, the user can also correct the outputs of the model, and the SAA can be retrained on demand, based on user feedback. In that way, the SAA adapts to the specific characteristics of the software product to which it is applied, in order to provide more accurate assessments.

2) Vulnerability Prediction: Vulnerability Prediction is responsible for the identification of security hotspots, i.e., software components (e.g., classes) that are likely to contain critical vulnerabilities. For the identification of potentially vulnerable software components, vulnerability prediction models (VPMs) are constructed, which are mainly machine learning models built based on software attributes retrieved primarily from the source code of the analyzed software (e.g., software metrics, text features, etc.). The results of the vulnerability prediction models are highly useful for developers and project managers, as they allow them to better prioritize their testing and fortification efforts by allocating limited test resources to high-risk (i.e., potentially vulnerable) areas.

a) Component-level Vulnerability Prediction: Existing vulnerability prediction models focus on the intrinsic characteristics of the analyzed software component, and particularly on attributes of its source code, in order to judge whether it is vulnerable or not. Among the different attributes that have been examined in the literature, those that are derived through text mining have demonstrated the most promising results [10], [14]. To this end, as part of the Software Security Assurance Module of the SSD Platform, we developed vulnerability prediction models based on deep learning, utilizing as input the sequences of word tokens that reside in the source code of the component, and word embedding vectors for their effective representation [15]. A high-level overview of the proposed models is illustrated in Figure 6.

Fig. 6: Overview of Component-level Vulnerability Prediction

As can be seen in Figure 6, the source code of a software component is provided as input to the model and, subsequently, text tokenization is applied in order to extract the word tokens and construct their corresponding sequence. Then, since Deep Neural Networks (DNNs) operate on numerical values, the token sequences need to be properly transformed into numerical vectors. For this purpose, word embedding vectors are utilized. Word embedding refers to the representation of a word in the form of a real-valued vector that encodes the meaning of the word in such a way that words that are close in the vector space are expected either to have similar meanings or to be in close proximity in the source code. Our mechanism utilizes two popular word embedding algorithms to generate the word embedding vectors, namely word2vec4 and fastText5.

4 https://radimrehurek.com/gensim/models/word2vec.html
5 https://radimrehurek.com/gensim/models/fasttext.html
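The token-to-vector step described above can be sketched as follows. The token sequences are hypothetical examples, and the small random matrix merely stands in for trained word2vec/fastText vectors; only the shape of the resulting tensor matters for the illustration.

```python
import numpy as np

def build_vocab(token_sequences):
    """Assign an integer index to every distinct token (0 = padding)."""
    vocab = {"<pad>": 0}
    for seq in token_sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(seq, vocab, max_len):
    """Map tokens to indices and pad/truncate to a fixed length."""
    idx = [vocab[tok] for tok in seq][:max_len]
    return idx + [0] * (max_len - len(idx))

# Token sequences extracted from two hypothetical components.
sequences = [["strcpy", "(", "buf", ",", "input", ")"],
             ["memcpy", "(", "buf", ",", "input", ",", "len", ")"]]

vocab = build_vocab(sequences)
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 8))  # one 8-dim vector per token
embedding_matrix[0] = 0.0                            # padding embeds to zeros

batch = np.array([encode(s, vocab, max_len=8) for s in sequences])
embedded = embedding_matrix[batch]  # shape: (2 components, 8 tokens, 8 dims)
print(embedded.shape)  # (2, 8, 8)
```

The resulting tensor is exactly what a DNN's embedding layer consumes: one fixed-length sequence of dense vectors per software component.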


The derived word embedding vectors constitute the word embedding layer, i.e., the input layer of the DNN, which is the core element of the model. The output of the model is a vulnerability flag (i.e., a binary value between 0 and 1) indicating whether the given component is potentially vulnerable (i.e., 1) or not (i.e., 0), and a vulnerability score (i.e., a continuous value within the [0,1] interval) denoting how likely it is for the component to be vulnerable. The developers can use this information in order to decide where to focus their testing and refactoring activities, starting, for instance, from those components that have a higher vulnerability score.

b) System-wide Vulnerability Prediction: Section IV-B2a summarized existing models that predict software vulnerability based solely on analyzing the software component's source code to indicate whether it may contain a critical vulnerability. Such models do not account for the overall architecture of the application, which is an important issue when software consists of multiple interconnected components; i.e., the vulnerability of a component can affect other components, regardless of their individual vulnerability. To the best of our knowledge, vulnerability prediction models that include such interdependencies have not been published.

Thus, we present a novel mechanism for system-wide vulnerability prediction based on the Adversarial Random Neural Network (ARNN) [16] with learning [17], previously used for network cyber-attack detection [18]. The ARNN takes into account (i) the vulnerability score of each software component derived from component-level VPMs, and (ii) the interconnections among the components of the software under analysis, to compute the resulting system-level vulnerability likelihood of each component, as summarized in Figure 7.

Fig. 7: Overview of the System-wide Vulnerability Prediction

Figure 7 shows how the source code of the application under analysis is input to the proposed mechanism. First, the text mining-based vulnerability prediction models of Section IV-B2a are used to derive the vulnerability scores denoting how likely it is for each software component to contain vulnerabilities in its source code. Then, a Component Interconnection Analyzer is used to detect interconnections between components based on function calls, providing an Adjacency Matrix. The Adjacency Matrix and the individual vulnerability scores of the components are input to the ARNN [16], which outputs the Likelihood Ratio indicating how likely it is for each interconnected software component to be vulnerable, based both on its own vulnerability and on the effect of the other components in the application. Thus, the proposed mechanism combines the vulnerability scores of individual components with system-level data regarding the interconnected components.

V. CONCLUSION

The present paper provides an overview of the Software Security-by-Design (SSD) Platform, developed within the context of the IoTAC project. It describes the main novel mechanisms proposed and developed in IoTAC, including novel security monitoring and optimization mechanisms for several phases of the overall SDLC, namely the Design, Requirements, Coding, and Testing phases, which are at the core of the SSD Platform. They constitute the basis on which the Software Security Certification module operates to provide a broad evaluation of the security level of an IoT software application, which may be issued as a "pseudo-certificate".

In the rest of the IoTAC project, emphasis will be placed on building the Software Security Certification module and on further improving the mechanisms described in this paper.

REFERENCES

[1] J. Soldatos, Security Risk Management for the Internet of Things: Technologies and Techniques for IoT Security, Privacy and Data Protection. Now Publishers, 2020.
[2] ISO/IEC, ISO/IEC 30141:2018 - Internet of Things (IoT) — Reference Architecture. ISO/IEC, 2018.
[3] M. Ramachandran, "Software security requirements engineering: State of the art," in Int. Conf. on Global Security, Safety, and Sustainability.
[4] T. Diamantopoulos and A. L. Symeonidis, Mining Software Engineering Data for Software Reuse. Springer Nature, 2020.
[5] T. Diamantopoulos and A. Symeonidis, "Enhancing requirements reusability through semantic modeling and data mining techniques," Enterprise Information Systems, 2018, publisher: Taylor & Francis.
[6] E. Yu, "Modeling strategic relationships for process reengineering," Social Modeling for Requirements Engineering, 2011.
[7] G. McGraw, "Software security," IEEE Security & Privacy, 2004.
[8] U. Dayanandan and V. Kalimuthu, "Software architectural quality assessment model for security analysis using fuzzy analytical hierarchy process (FAHP) method," 3D Research, 2018.
[9] I. Heitlager, T. Kuipers, and J. Visser, "A practical model for measuring maintainability," in 6th International Conference on the Quality of Information and Communications Technology. IEEE, 2007.
[10] R. Scandariato, J. Walden, A. Hovsepyan, and W. Joosen, "Predicting vulnerable software components via text mining," IEEE Transactions on Software Engineering, vol. 40, no. 10, pp. 993–1006, 2014.
[11] D. Tsoukalas, M. Siavvas, M. Mathioudaki, and D. Kehagias, "An ontology-based approach for automatic specification, verification, and validation of software security requirements: Preliminary results," in 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS), 2021.
[12] M. Howard, Writing Secure Code. Microsoft Press, 2003.
[13] M. Siavvas, I. Kalouptsoglou, D. Tsoukalas, and D. Kehagias, "A self-adaptive approach for assessing the criticality of security-related static analysis alerts," in International Conference on Computational Science and Its Applications. Springer, 2021, pp. 289–305.
[14] S. Chakraborty, R. Krishna, Y. Ding, and B. Ray, "Deep learning based vulnerability detection: Are we there yet," IEEE Transactions on Software Engineering, 2021.
[15] I. Kalouptsoglou, M. Siavvas, D. Kehagias, A. Chatzigeorgiou, and A. Ampatzoglou, "An empirical evaluation of the usefulness of word embedding techniques in deep learning-based vulnerability prediction," in EuroCybersec2021, 2021.
[16] E. Gelenbe and M. Nakip, "The Adversarial Random Neural Network and Botnet attack detection," Submitted for publication, 2021.
[17] E. Gelenbe, "Learning in the recurrent random neural network," Neural Computation, no. 1, pp. 154–164, 1993.
[18] M. Nakip and E. Gelenbe, "Mirai botnet attack detection with auto-associative dense random neural networks," in 2021 IEEE Global Communications Conference. IEEE Communications Society, 2021.
