The cloud model benefits providers and consumers alike. While it presents opportunities for cost savings and increased revenues, it also faces several challenges that hinder its widespread adoption, with security chief among these issues. The cloud model introduces new security problems such as loss of control, lack of trust, data isolation, and integration of tenants' security controls. Current cloud platforms have yet to fully address these security challenges. Moreover, current commercial clouds have been built to support web and small database workloads, which are very different from typical scientific computing workloads. Our platform captures system, service, and security specifications using novel system and security mega-models. Cloud stakeholders can then specify their assets' security requirements, and our platform enforces and monitors them in real time.
Penetration testing is used to test systems for vulnerabilities and to plug any loopholes found. All major enterprises have web applications or provide web-based services. Determining the security requirements that need to be met requires extensive knowledge of systems and security, and this is even more critical in the cloud model, where services are publicly accessible. Our platform provides automated security assessments for services; the results of this analysis are then utilized by our security-patching component. Using online (cloud) testing tools, any ordinary person can test a system for vulnerabilities. To validate the effectiveness of our platform, we conducted three case studies to address key cloud security problems. The results of these case studies demonstrate the feasibility of our approach.
CHAPTER 1
INTRODUCTION
1.1 Introduction
The cloud computing model promises increased business benefits and reduced IT infrastructure costs [2]. This model is built on virtualization, outsourcing, and pay-per-use payment models [3]. It offers a win-win solution for providers and consumers, with providers achieving economies of scale via multi-tenancy, where different tenants share the same service instance. Probing systems for vulnerabilities and resolving them before external hackers can exploit them is called penetration testing. The main objective of penetration testing is to correctly assess the real security posture of a system. Securing cloud services poses significant challenges [4]. Our analysis identifies key factors behind these challenges, including tenants' loss of control over their security requirements and the outsourcing of assets to the cloud without tenants' oversight. The multi-tenant architecture of the cloud model, with its diverse security issues, further complicates security management.
Cloud security management involves defining assets' security specifications, enforcing security, and monitoring and improving it; the diverse service delivery models and security controls of the cloud complicate each of these management tasks. Existing security management standards, such as ISO27000 and NIST-FISMA, do not align well with the cloud model. Existing vulnerability analysis tools rarely indicate which reported vulnerabilities are critical, which are insignificant, and which are false positives. Security engineering efforts focus on design-time security engineering, but support for the cloud model's dynamic nature remains largely unexplored. In the security monitoring phase, existing efforts mainly provide lagging security indicators, which makes timely reaction a challenge.
Our security management framework is designed to mitigate these challenges and fill the gaps identified above. It addresses both the security problem and the cloud model, operating on an abstract level to work across platforms and target services. Our approach extends security management standards to fit the cloud model. We conducted three case studies to assess different parts of the platform in addressing the security of cloud services.
We analyzed the dependencies between identified research gaps in cloud computing security and surveyed existing work in cloud computing security to identify research problems and gaps and to define the research scope. An initial version of the platform was created and validated using a motivating scenario. The rest of our platform captures tenants' security requirements at runtime and reflects the resultant system-security model onto the running system.
We evaluated MDSE@R on open source applications from our benchmark set [34]. Its interception mechanism is realized using an extended aspect concept, which extends AOP with more flexible signature specification constructs [38]. Our vulnerability analysis service achieves high precision and recall rates, making it a reliable tool for analyzing SaaS applications. We also leverage architecture and design artifacts for threat and risk analysis [42].
Sixth, given the set of security objectives, identified vulnerabilities, and enforced security controls, how can we confirm that the system is now secure enough, meets the specified security objectives, and blocks possible security attacks? Security monitoring is a very immature field in which only a limited amount of research has been done so far. Existing efforts do not help in automated security monitoring. In our approach, user-defined security metric specifications are used to generate the required security probes. These probes are integrated within the target services using AOP; they intercept service execution and extract values of different service attributes to be evaluated. The collected measures are sent back to an analysis service that analyzes them and generates the corresponding metric values. The evaluation results show that our approach produces accurate measurements. This approach, along with the evaluation results, has been submitted to ASE 2014.
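The probe mechanism described above can be sketched as follows. This is an illustrative Python analogue of the idea, not the thesis's actual AOP implementation; the names `security_probe`, `collected_measures`, and `metric_value` are ours.

```python
import functools
import time

collected_measures = []  # stands in for the analysis service's inbox


def security_probe(attribute):
    """Illustrative interception point: wraps a service method, mimicking
    an AOP advice, and extracts one attribute value per invocation."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            # measure sent "back to the analysis service"
            collected_measures.append({
                "service": func.__name__,
                "attribute": attribute,
                "value": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator


@security_probe("response_time")
def login(user, password):
    # a toy service operation under monitoring
    return user == "alice" and password == "secret"


def metric_value(measures, attribute):
    """Toy 'analysis service': aggregates raw measures into a metric value."""
    vals = [m["value"] for m in measures if m["attribute"] == attribute]
    return sum(vals) / len(vals) if vals else None
```

Each call to `login` transparently deposits a measure; the analysis side then turns the raw measures into a metric value, mirroring the probe-to-analysis flow described in the text.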
Our contributions address key security problems arising from the adoption of the cloud computing model. Below, we summarize our key contributions to cloud computing security management: a novel alignment of the NIST-FISMA standard to fit the cloud computing model and its multi-tenancy hosting. Existing security management standards were not suitable for cloud consumers who lack control over their outsourced cloud assets, especially in SaaS applications. Our proposed alignment improves collaboration among all cloud stakeholders and mitigates the loss-of-control and lack-of-trust issues. Our vulnerability analysis service analyzes service architecture, design, source code, and binaries to identify security flaws and bugs. The service is integrated throughout the different service lifecycle stages. Our security monitoring service does not rely on predefined security metrics. Service users can define their own security metrics for diagnosing the security status of their IT assets. The service uses these metric signatures to deploy security probes and collect measurement details. This approach helps in managing service security using MDSE@R; by combining MDSE@R and the vulnerability analysis service, our approach injects security controls that block reported vulnerabilities. Together, these contributions adapt security management to the cloud model, enhancing its security and providing a more robust security management platform.
The cloud computing model offers three distinct service delivery models and three deployment models. 1. Private Cloud: A private cloud is operated solely for one organization, whose IT department has full control over the cloud platform. The primary objective of this model is to retain control and security in-house. 2. Public Cloud: Public clouds are available to the general public, allowing users to register and utilize the available infrastructure and resources over the internet. 3. Hybrid Cloud: A hybrid cloud combines private and public clouds, keeping sensitive assets in-house while using public resources for other workloads. Out of these three models, public clouds are the most vulnerable deployment option because they are accessible to the general public, including potential attackers, who may expose advanced vulnerabilities, leading to security breaches not only for one tenant but also for other cloud tenants. Therefore, it becomes crucial to have an online security analysis tool that can automatically detect and identify such vulnerabilities. Enabling tenants to secure their assets according to their specific needs is essential to ensure overall cloud security.
The three service delivery models are Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service.
The cloud model introduces three fundamental service delivery models. In the IaaS model, providers offer computational resources, storage, and network services over the internet, typically as virtual machines that share the same physical server. Amazon EC2 is a well-known example. In the PaaS model, providers offer platforms, tools, and other business services that empower customers to develop, deploy, and manage their own applications without installing any of these platforms or support tools on their local machines. PaaS may be hosted on top of IaaS or directly on the cloud infrastructure. In the SaaS model, providers deliver applications to end users. Customers do not need to install the applications on their local networks. SaaS may be hosted on top of PaaS, IaaS, or directly on the cloud infrastructure.
The differences among these delivery models increase the complexity of developing standard security models for each one. Additionally, the coexistence of these models on one cloud platform further complicates the security management process. These service delivery models build upon existing technologies such as virtualization, which already face security challenges. Among these models, the SaaS model is considered the most vulnerable, attracting many attacks aimed at breaching its security. In this research project, our primary focus lies on the SaaS model, but the concepts and work can be extended to the PaaS model (particularly web services) with ease. Future work may involve integrating the findings from this research with ongoing work in our research group concerning the IaaS model.
Chapter 02
Review of Literature
The main responsibility of a security management system is to help security administrators and
engineers in capturing and defining IT assets’ security, enforcing specified security details,
monitoring the security status of these assets, and improving assets’ security to meet target security
objectives. These security objectives may change overtime according to business needs. This thus
implies changing related security requirements, policies, controls and security control configurations.
Thus, any cloud computing security management approach must focus on automating these security
management tasks including defining security, enforcing security, and monitoring and improving
security. A cloud computing security management approach must address the loss-of-control and
lack-of-trust problems because it is the best place to incorporate cloud consumers’ security. It also
should take into consideration multi-tenancy, as a key factor contributing to the cloud computing
security problem. This is because multi-tenancy has a big impact on how the capture, enforcement, and monitoring tasks are performed. For example, instead of working only with one set of security requirements, with
multi-tenancy we have different sets of security requirements for different tenants. These need to be
captured, enforced, and monitored on a shared service instance.
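The multi-tenancy point above can be made concrete with a small sketch: one shared service instance consults a different security requirement set per tenant. The policy fields (`min_password_len`, `require_mfa`) and tenant names are invented purely for illustration.

```python
# Toy multi-tenant policy registry: one shared service instance,
# different security requirement sets per tenant (illustrative only).
tenant_policies = {
    "tenant_a": {"min_password_len": 12, "require_mfa": True},
    "tenant_b": {"min_password_len": 8, "require_mfa": False},
}


def check_login_requirements(tenant, password, mfa_token=None):
    """Shared login path: the enforced requirements depend on
    which tenant the request belongs to."""
    policy = tenant_policies[tenant]
    if len(password) < policy["min_password_len"]:
        return False
    if policy["require_mfa"] and mfa_token is None:
        return False
    return True
```

The same call with the same password can pass for one tenant and fail for another, which is exactly what makes capturing, enforcing, and monitoring per-tenant requirements on a shared instance harder than the single-requirement-set case.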
We have identified four main research areas relevant to our research problem (cloud computing security management): security management, security analysis, security engineering, and security monitoring. Two more areas are also relevant: cloud computing security, as the underlying problem domain, and security reengineering/retrofitting. The latter is required when addressing cloud services whose built-in security capabilities conflict with the adoption of our security management platform in automating service security management. Below we discuss the key points and questions that we need answered in each research area in order to determine which existing efforts could help in building our cloud computing security management approach.
Security Analysis
The security analysis task is one of the most complicated tasks in both the security engineering and security management domains. It includes threat analysis, security vulnerability analysis, and security attack analysis; these tasks are integrated with one another while conducting security analysis. The output of the security analysis tasks represents the main source of the security requirements to be realized by any ISMS. A key limitation of the existing security risk analysis efforts is that they mainly focus on introducing a risk analysis methodology to be followed by security administrators, or at most assist in the documentation and assessment process. They do not help in automating the security risk analysis. Thus, we also investigated other areas that contribute to security analysis, including architecture risk analysis, which helps in identifying security threats in a given software system, and security vulnerability analysis, which helps in identifying system security vulnerabilities. Both constitute the main sources of information required to build a complete security risk model. Below we discuss efforts in these three main areas.
Security risk analysis is one of the key steps in ISMSs. The main target of the security risk analysis task is to identify possible threats and attacks against a target IT system that needs to be secured, together with the vulnerabilities that attack agents could exploit to breach the system's security. Security experts then use these vulnerabilities to develop the possible threats and attack graphs that attackers could launch against systems. The outcome of the security risk analysis task is used to develop a risk treatment plan, in which security experts specify how to mitigate or prevent the identified risks. These treatment plans contain a set of security requirements that need to be enforced in order to mitigate reported risks and meet customers' security objectives.
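As a minimal illustration of how a treatment plan follows from risk analysis, the sketch below uses the common quantitative form risk = likelihood × impact. The 1–5 scales, the tolerance threshold, and the example threats are illustrative assumptions, not part of any cited methodology.

```python
def risk_score(likelihood, impact):
    """Common quantitative form: risk = likelihood x impact.
    The 1-5 scales used here are an illustrative assumption."""
    return likelihood * impact


def treatment_plan(risks, threshold=10):
    """Rank identified risks and flag those above a tolerance
    threshold for mitigation (a toy stand-in for a treatment plan)."""
    ranked = sorted(
        risks,
        key=lambda r: risk_score(r["likelihood"], r["impact"]),
        reverse=True,
    )
    return [r["threat"] for r in ranked
            if risk_score(r["likelihood"], r["impact"]) >= threshold]


# Example output of a (hypothetical) risk identification step.
risks = [
    {"threat": "SQL injection", "likelihood": 4, "impact": 5},
    {"threat": "physical theft", "likelihood": 1, "impact": 4},
    {"threat": "session hijacking", "likelihood": 3, "impact": 4},
]
```

With these inputs the plan prioritizes SQL injection (score 20) and session hijacking (12), while physical theft (4) falls below the tolerance threshold and is accepted.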
The existing security risk analysis and assessment efforts fall into two main categories: tool-based analysis and workshop-based analysis [9]. The tool-based
analysis approaches introduce toolsets that help in capturing enterprise assets and
security details and automatically conduct the security risk analysis task. In the
workshop-based analysis, the stakeholders conduct brainstorming and interviewing
sessions. Another possible categorization of the security risk analysis methods is either
qualitative or quantitative. The latter category uses mathematical formulas to assess identified risks, while the former depends on subjective assessment bands (high, medium, low). Syalim et al. [86] compare four key security analysis methodologies: Mehari, Magerit, NIST 800-30, and Microsoft's security management approach. The key limitation of these security risk analysis efforts is their focus on introducing a risk analysis methodology to be followed by security administrators, or at most assisting in the documentation and assessment process. Thus, they do not help in automating the security risk analysis. Below we discuss key security risk analysis efforts:
CORAS (Construct a platform for Risk Analysis of Security Critical Systems) [10] is a model-based method for security risk analysis. The security analysis team conducts an initial risk analysis to identify an initial list of threats and vulnerabilities, followed by a deeper analysis by the security team. A brainstorming workshop is then conducted to identify any unwanted security incidents, followed by a risk estimation workshop to estimate the likelihood of the identified risks. Finally, a workshop is held to develop the risk treatment plan, including a cost-benefit analysis.
Saripalli [11, 87] introduces security risk analysis methods for the cloud computing model that adopt existing security management standards such as ISO27000. However, these efforts apply the standards from the service (cloud platform) provider's perspective, not from the cloud consumers' perspective. Moreover, they do not consider the hosting of external services developed by other service providers, about whose security issues the cloud platform provider has no information.
In summary, the existing security risk analysis efforts focus mainly on how to conduct and document the outcomes of the security risk analysis process through a set of well-defined steps, some with tool support. However, they do not help in automating the analysis process itself.
Existing efforts in architecture security risk analysis can be categorized into two main groups: scenario-based approaches and metrics-based approaches. Both have limitations related to the formality of the approach in describing metrics or scenarios, its extensibility to capture new metrics or scenarios to be assessed, and its ability to automate the architecture risk analysis process. A key observation is that existing efforts focus mostly on scenario-based analysis. A possible justification for this tendency is that developing security metrics is a hard problem. However, this limits the capabilities of these approaches compared to supporting user-defined or tool-supported scenarios.
Scenario-based Analysis
Kazman et al. [88], Dobrica et al. [89], and Babar et al. [90] introduce comprehensive
software architecture analysis methods for different milestones of the software
architecture development. They introduce a set of criteria that can be used in developing
or evaluating an architecture analysis method including identification of the goals,
properties under examination, analysis support, and analysis outcomes. Babar et al.
compare and contrast eight different existing architecture analysis approaches. A
common weakness of all these approaches is the lack of extensible tool support.
Faniyi et al. [92] extend the ATAM approach to support architecture analysis in
unpredictable environments such as cloud computing platforms. They improve the
scenario elicitation process using security testing with implied scenarios (unanticipated
scenarios of components’ interactions). This generates potential scenarios that may lead
to security attacks. Although this improved the scenario elicitation process, it still
requires manual analysis. A further extension to our signature-based architecture security analysis approach could be to integrate this approach as a source of attack and metric signature specifications.
Other efforts analyze system architectures using security patterns. Under this view, the absence of specific security patterns in a given system architecture indicates a possible violation of certain security objectives in the underlying system architecture. However, these approaches do not support developing custom security scenarios to be analyzed in the target system.
A further line of work proposes architecture conformance checking. However, the proposed framework is high-level and lacks details of its realization.
Metrics-based Analysis
Alshammari et al. [100, 101] introduce a hierarchical security assessment model for object-oriented programs. They define a set of dependent metrics that capture security attributes of a given system. The proposed metrics are well organized; however, they are not extensible (i.e., the metrics are predefined), and they do not consider architecture-level security details.
Using a similar security model, other work defines a set of built-in security metrics to assess the security architecture of a given system.
Other efforts catalogue common vulnerability classes such as SQL Injection, Cross-Site Scripting, Command Injection, etc. They have also developed a set of test cases that help in assessing the capabilities of a security analysis tool in discovering such vulnerabilities.
Lei et al. [108] trace the memory size of buffer-related variables and instrument the code with corresponding constraint assertions before the potentially vulnerable points, using constraint-based analysis. They then use model checking to test the reachability of the injected constraints.
Martin et al. [110, 111] introduce a program query language, PQL, that can be used to capture program queries capable of identifying security errors or vulnerabilities. A PQL query is a pattern to be matched against execution traces. They focus on Java-based applications and define signatures in terms of code snippets. This limits their ability to locate vulnerability instances that match semantically but not syntactically.
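A toy analogue of such a trace query can clarify the idea: flag any execution trace in which a source call reaches a sink call with no intervening sanitizer. Real PQL queries match object flows and support subqueries; this simplified call-ordering version, including all the method names, is our own illustration.

```python
def matches_taint_pattern(trace, source, sink, sanitizer):
    """Toy analogue of a PQL-style query: flag a trace in which
    `source` flows to `sink` with no intervening `sanitizer` call.
    Real PQL matches object flows, not just call orderings."""
    seen_source = False
    for call in trace:
        if call == source:
            seen_source = True
        elif call == sanitizer:
            seen_source = False  # the flow was sanitized
        elif call == sink and seen_source:
            return True  # unsanitized source reached the sink
    return False
```

Here the same query matches or rejects traces based on ordering alone, which also hints at the limitation noted above: a semantically equivalent flow expressed through different call names would escape the pattern.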
Ganesh et al. [114, 115] introduce a string constraint solver to check whether a given string can have a substring satisfying a given set of constraints. They use this technique to conduct white-box and dynamic testing to verify whether a given system is vulnerable to SQLI attacks, using strings generated by the string constraint solver.
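A drastically simplified sketch of the idea: treat the application's input filter as a conjunction of constraints over strings and search for an input that both satisfies the constraints and carries an SQLI payload. A real solver reasons symbolically over string operations rather than enumerating candidates as this toy version does; all constraints and candidates below are invented for illustration.

```python
def satisfies(s, constraints):
    """Check a string against a conjunction of simple constraints;
    a real string solver is far more general (regular languages,
    length arithmetic, symbolic search)."""
    return all(c(s) for c in constraints)


def find_attack_string(candidates, constraints):
    """Toy 'solver': search candidate inputs for one that both passes
    the application's filter and contains an SQLI payload."""
    for s in candidates:
        if satisfies(s, constraints):
            return s
    return None


constraints = [
    lambda s: "'" in s,              # must break out of the SQL string literal
    lambda s: " OR " in s.upper(),   # classic tautology payload
    lambda s: len(s) <= 32,          # must fit the input field
]
candidates = ["alice", "' or 1=1 --", "x" * 40 + "' OR 1=1"]
```

If such a string exists, the system is demonstrably vulnerable: the generated input passes the filter yet alters the query's logic.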
Bau et al. [116] analyze black-box web vulnerability scanners. They evaluated eight leading commercial tools to assess the supported classes of vulnerabilities and their effectiveness against these target vulnerabilities. A key conclusion of their analysis is that all these tools have low detection rates for advanced (second-order) XSS and SQLI. The average percentage of discovered vulnerabilities equals 53%; the tools achieve 87% detection for session management vulnerabilities but only 45% for cross-site scripting vulnerabilities.
Kals et al. [117] introduce a black-box vulnerability scanner that scans websites for the presence of exploitable SQLI and XSS vulnerabilities. They do not depend on a vulnerability signature database; instead, attacks must be implemented as classes that satisfy certain software interfaces, so that such attacks can be invoked from their vulnerability scanner.
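The attacks-as-classes design can be sketched as follows. The `Attack` interface, the reflected-XSS check, and the page stubs are illustrative inventions, not Kals et al.'s actual API.

```python
from abc import ABC, abstractmethod


class Attack(ABC):
    """Attacks implement a common interface so the scanner can invoke
    them without a vulnerability-signature database (illustrative)."""
    @abstractmethod
    def payload(self): ...

    @abstractmethod
    def is_vulnerable(self, response): ...


class XssAttack(Attack):
    def payload(self):
        return "<script>alert(1)</script>"

    def is_vulnerable(self, response):
        # reflected payload => the page echoed our script unescaped
        return self.payload() in response


def scan(page, attacks):
    """Run every registered attack class against a page-under-test;
    `page` is a callable standing in for an HTTP request."""
    return [type(a).__name__ for a in attacks
            if a.is_vulnerable(page(a.payload()))]


# Two toy pages: one echoes input verbatim, one escapes it.
vulnerable_page = lambda q: "<html>You searched for: " + q + "</html>"
safe_page = lambda q: "<html>You searched for: " + q.replace("<", "&lt;") + "</html>"
```

New attack types are added by writing another subclass and registering it, which is the extensibility benefit of the interface-based design over a fixed signature database.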
Monga et al. [120] introduce a hybrid analysis framework that blends static and dynamic approaches to detect vulnerabilities in web applications. The application code is translated into an intermediate model, and this static model is filtered to focus only on dangerous statements. This reduces the size of the model on which dynamic analysis is conducted, mitigating the performance overhead of the dynamic taint analysis approach. Like most taint analysis approaches (whether static or dynamic), it targets only injection-related vulnerabilities. Balzarotti et al. [121] introduce "Saner", a composition of static and dynamic analysis approaches that helps validate sanitization functions (addressing input-validation-related attacks) in web applications. The static analysis identifies sensitive source and sink methods, and the dynamic analysis component analyses only the suspected paths.
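A minimal dynamic taint-tracking sketch illustrates what such hybrid tools check at runtime: tainted strings propagate through concatenation and are rejected at a SQL sink unless sanitized first. This toy model (a `str` subclass with invented `sanitize` and `execute_query` helpers) is far weaker than real taint engines, which track flows through all string operations.

```python
class Tainted(str):
    """Minimal dynamic taint propagation: concatenating a tainted
    string yields a tainted string (toy model, not a real engine)."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))

    def __radd__(self, other):
        return Tainted(other + str(self))


def sanitize(value):
    # escaping returns a plain str, i.e. drops the taint marker
    return str(value.replace("'", "''"))


def execute_query(sql):
    """Sink: only statements reaching here untainted are allowed."""
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached SQL sink")
    return "executed"
```

Building a query directly from user input keeps the taint and is blocked at the sink, while routing the input through `sanitize` clears it, mirroring the source/sanitizer/sink roles the static phase of these tools identifies.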
In summary, vulnerability definitions, such as those in the Common Weakness Enumeration (CWE) database, are informal, which hampers their adoption in developing automated vulnerability analysis tools.
Security Engineering
Security engineering is concerned with realizing security requirements using security algorithms and mechanisms. Some key limitations of the existing security engineering efforts include: (i) they focus mainly on design-time security engineering, i.e., how to capture and enforce security requirements during the software development phase; (ii) they provide limited support for dynamic and adaptive security and require design-time preparation; and (iii) they do not support multi-tenancy; most existing efforts focus on getting services (including cloud services) to reflect one set of security requirements, without considering different tenants' security requirements. Benjamin et al. [122] present a detailed survey of the existing security engineering efforts; however, they do not highlight the limitations of these approaches.
Secure i* [126, 127] introduces a methodology based on the i* (agent-oriented requirements modelling) framework to address security and privacy requirements. Secure i* focuses on identifying security requirements by analysing the relationships between users, attackers, and the agents of both parties. This analysis process has seven steps organized in three phases: (i) attacker analysis focuses on identifying potential system abusers and malicious intents; (ii) dependency vulnerability analysis helps in detecting vulnerabilities arising from the organizational relationships among stakeholders; and (iii) countermeasure analysis focuses on addressing and mitigating the vulnerabilities and threats identified in the previous steps.
Misuse cases [131, 132] capture, side by side with the use cases that the system should allow, the cases it should not allow because they may harm the system or the stakeholders' operations or security. Misuse cases focus on the interactions between the system and malicious users. This helps developers build the system in anticipation of security threats and drives the development of security use cases.
Efforts in this area focus on how to map security requirements (identified in the previous stage) onto system design entities at design time, and how to help generate code that is secure and realizes the specified security. Below we summarize the key efforts in this area, organized according to the approach used or the underlying software architecture and technology.
UMLsec [14, 133, 134] is one of the first model-driven security engineering efforts. UMLsec extends the UML specification with a UML profile that provides stereotypes for annotating system design elements with security intentions and requirements. UMLsec provides a comprehensive UML profile, but it was developed mainly for use during the design phase. Moreover, UMLsec contains stereotypes for predefined security requirements (such as secrecy, secure dependency, critical, fair-exchange, no up-flow, no down-flow, and guarded entity) to help in security analysis and security generation. UMLsec is supported by a formalized security analysis mechanism that takes the system models with the specified security annotations and performs model checking. UMLsec [135] has recently gained an extension that simplifies secure code generation.
Satoh et al. [136] provide end-to-end security through the adoption of model-driven security using the UML 2.0 service profile. Security analysts add security intents (representing security patterns) as stereotypes in the UML service model, which is then used to guide the generation of the security policies. The approach also secures service composition by introducing pattern-based rules that define the relationships among services. Shiroma et al. [137] introduce a security engineering approach merging model-driven security engineering with pattern-based security. The proposed approach takes system class diagrams as input along with the required security patterns. It uses model transformation techniques (mainly ATL, the Atlas Transformation Language) to update the system class diagrams with the suitable security patterns applied. This process can be repeated many times during the modeling phase. One point to note is that developers need to be aware of the order in which security patterns are applied (i.e., authentication, then authorization, and so on).
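The ordering observation can be illustrated with stacked decorators, where authentication must wrap (and therefore run before) authorization. This Python sketch is our own analogue of the idea, not Shiroma et al.'s ATL transformations; all names are invented.

```python
import functools

applied_order = []  # records which pattern ran when


def authenticate(func):
    """Authentication pattern: establish who the caller is."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        applied_order.append("authenticate")
        if user not in {"alice", "bob"}:
            raise PermissionError("unknown user")
        return func(user, *args, **kwargs)
    return wrapper


def authorize(role):
    """Authorization pattern: check the authenticated caller's role."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            applied_order.append("authorize")
            if roles.get(user) != role:
                raise PermissionError("forbidden")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator


roles = {"alice": "admin", "bob": "viewer"}


@authenticate          # outermost: runs first, as the ordering demands
@authorize("admin")    # runs second, only for authenticated callers
def delete_tenant(user, tenant_id):
    return f"deleted {tenant_id}"
```

Swapping the two decorators would consult the role table before the caller's identity is established, which is exactly the kind of ordering mistake the text warns about.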
Delessy et al. [138] introduce a theoretical framework to align security patterns with the modeling of SOA systems. The approach is based on a map of security patterns divided into two groups: (i) abstraction patterns that deliver security for SOA without any implementation dependencies; and (ii) realization patterns that deliver security solutions for web services' implementations. It appends meta-models for the security patterns at the abstract and concrete modeling levels. Thus, architects can develop their platform-independent SOA models, including security pattern attributes, and then generate the concrete, platform-dependent web service models, including the realization security patterns. Similar work in [139] uses security patterns to capture security requirements and to enforce them.
The Security-as-a-Service (SeAAS) approach outsources security tasks from applications and services to a dedicated SeAAS component. Security services are registered with SeAAS and then become available for consumers and customers to access whenever needed. A key problem with SeAAS is that it introduces a single point of failure and a bottleneck in the network. Moreover, it does not provide an interface that third-party security controls can implement to support integration with the SeAAS component. The SECTET project [141] focuses on business-to-business collaborations (such as workflows) where security needs to be incorporated between both parties. The solution is to model security requirements (mainly RBAC policies) at a high level and merge them with the business requirements using SECTET-PL [142]. These modeled security requirements are then used to automate the generation of the implementation and configuration of the realization security services using WS-Security, as the target applications are assumed to be SOA-oriented.
Different proposals have been developed to align and incorporate security engineering activities with the software development lifecycle. These processes, such as Security Quality Requirements Engineering (SQUARE) [65], SREP [143], and Microsoft SDL, specify the steps to follow during the software engineering process to capture, model, and implement system security requirements. Such processes are aligned with system development processes. They focus on engineering security at design time, making assumptions about the expected operational environment of the application under development. This leads to many difficulties when integrating such systems, and the security implemented in them, with the security of the operational environment, because the software systems depend on their built-in security controls.
We have identified several industrial security platforms developed to help software engineers realize security requirements through a set of provided security functions and mechanisms that software engineers can select from. Microsoft has introduced a more advanced extensible security model, Windows Identity Foundation (WIF) [144], to enable service providers to deliver applications with extensible security. It requires service providers to use and implement certain interfaces in the system implementation. The Java Spring framework has a security framework, Acegi [145], which implements a set of security controls for identity management, authentication, and authorization. However, these platforms require developers' involvement in writing integration code between their applications and the security platform. The resultant software systems are tightly coupled with these platforms' capabilities and mechanisms. Moreover, using different third-party security controls requires updating the system source code to add the necessary integration code.
Adaptive Application Security
Several research efforts try to enable systems to adapt their security capabilities at runtime; Elkhodary et al. [146] survey such adaptive security systems. Extensible Security Infrastructure [17] is a framework that enables systems to support adaptive authorization enforcement by updating in-memory authorization policy objects with new low-level policies written in C. It requires developing wrappers for every system resource that catch calls to that resource and check the authorization policies. In the Strata Security API [18], systems are hosted on a strata virtual machine that enables interception of system execution at the instruction level based on user security policies. The framework does not support securing distributed systems, and it focuses on low-level policies specified in C code.
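The idea of swapping in-memory authorization policy objects at runtime, with per-resource wrappers intercepting access, can be sketched as below. Python stands in for the low-level C policies used by those frameworks, and the class and method names are illustrative.

```python
class PolicyStore:
    """In-memory authorization policy object that can be replaced at
    runtime, in the spirit of the extensible-security approach above."""
    def __init__(self, rules):
        self._rules = rules  # set of (subject, action, resource)

    def allows(self, subject, action, resource):
        return (subject, action, resource) in self._rules

    def update(self, rules):
        self._rules = rules  # hot-swap without restarting the system


class GuardedResource:
    """Wrapper that intercepts every access and consults the policy,
    like the per-resource wrappers the framework requires."""
    def __init__(self, name, policy):
        self.name, self.policy = name, policy

    def read(self, subject):
        if not self.policy.allows(subject, "read", self.name):
            raise PermissionError(subject)
        return f"{self.name} contents"
```

Because the wrapper holds a reference to the policy object rather than a copy of its rules, a single `update` call changes the authorization behavior of every guarded resource immediately, which is the essence of the adaptive enforcement described above.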
The SERENITY project [19, 147, 148] enables the provisioning of appropriate security and dependability mechanisms for Ambient Intelligence (AmI) systems at runtime. The SERENITY framework supports the definition of security requirements to enable a requirements-driven selection of appropriate security mechanisms within integration schemes at runtime; provides mechanisms for monitoring security at runtime and dynamically reacting to threats, breaches of security, or context changes; and integrates security solutions, monitoring, and reaction mechanisms in a common framework. SERENITY attributes are specified on system components at design time. At runtime, the framework links SERENITY-aware systems to the appropriate security and dependability patterns. SERENITY supports neither dynamic or runtime adaptation to new, unanticipated security requirements, nor adding security to system entities that were not secured before but have become critical points.
In summary, existing adaptive security efforts require design-time preparation (e.g., writing integration code or using a specific platform or architecture). They also support only limited security objectives, such as access control. Unanticipated security requirements are not supported, and there is no validation that the target system, after adaptation, correctly enforces security as specified.
Multi-tenancy Security Engineering
Xu et al. [26] propose a new hierarchical access control model for the SaaS model.
Their model adds higher levels to the access control policy hierarchy to be able to
capture new roles such as service providers’ administrators (super and regional) and
tenants’ administrators. Service provider administrators delegate the authorization to the
tenants’ administrators to grant access rights to their corresponding resources.
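Xu et al.'s extra hierarchy levels can be sketched as a delegation chain: the provider admin appoints tenant admins, who alone may grant their tenants' users access. This toy model, including all names and the flat grant set, is our own simplification of the idea.

```python
class AccessControl:
    """Toy hierarchical model: the provider admin delegates authority
    over a tenant's resources to that tenant's admin, who then grants
    end-user rights (illustrative of the extra hierarchy levels)."""
    def __init__(self, provider_admin):
        self.provider_admin = provider_admin
        self.tenant_admins = {}   # tenant -> delegated administrator
        self.grants = set()       # (user, tenant, resource)

    def delegate(self, actor, tenant, admin):
        if actor != self.provider_admin:
            raise PermissionError("only the provider admin can delegate")
        self.tenant_admins[tenant] = admin

    def grant(self, actor, tenant, user, resource):
        if self.tenant_admins.get(tenant) != actor:
            raise PermissionError("not this tenant's administrator")
        self.grants.add((user, tenant, resource))

    def can_access(self, user, tenant, resource):
        return (user, tenant, resource) in self.grants
```

Each level can only exercise the authority delegated to it: the provider admin never grants end-user rights directly, and a tenant admin cannot touch another tenant's resources.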
Zhong et al. [155] propose a framework that tackles the trust problem among service consumers, service providers, and cloud providers concerning the ability to inspect or modify data under processing in memory. Their framework delivers a trusted execution environment based on encrypting and decrypting data before and after processing inside the execution environment, while protecting the computing module from being accessed from outside the execution environment.
In summary, the existing efforts in this area focus mainly on design time security
engineering which is not feasible in multi-tenant cloud-based applications where
tenants’ security requirements are not known at design time. The existing adaptive
security engineering efforts require deep design-time preparation, which is mostly not feasible for legacy applications. Moreover, the existing multi-tenant security engineering efforts do not consider the possibility of integrating tenants' third-party security controls at runtime.
Security Re-engineering
conduct the reengineering process. The maintenance or reengineering of such systems
is hardly supported by existing security (re)engineering approaches. Below we
summarize the key efforts we found in relevant research areas, including security
retrofitting, software maintenance and change impact analysis, dynamic software
updating, and concept location.
Security Retrofitting Approaches
Research efforts in the security retrofitting area focus on how to update software
systems in order to extend their security capabilities or mitigate security issues. Al
Abdulkarim et al. [158] discuss the limitations and drawbacks of applying security
retrofitting techniques, including cost and time problems, technical problems, and
issues related to software architecture and design security flaws.
Hafiz et al. [159, 160] propose a security-on-demand approach based on a catalog of
security-oriented program transformations that extend or retrofit system security with
security patterns proven effective and efficient in mitigating specific system security
vulnerabilities. These program transformations include adding a policy enforcement
point, single access point, authentication enforcer, perimeter filter, decorated filter, and
more. A key problem with this approach is that it depends on predefined
transformations that are hard for software engineers to extend.
Ganapathy et al. [161, 162] propose an approach to retrofit legacy systems with
authorization security policies. They use concept analysis techniques (locating system
entities using certain signatures) to find fingerprints of security-sensitive operations
performed by the system under analysis. Fingerprints are defined in terms of the data
structures (such as window, client, input, Event, Font) whose access we would like to
secure, and the set of APIs that represent the security-sensitive operations. The results
represent a set of candidate joinpoints where the well-known “reference monitor”
authorization mechanism can be applied.
Padraig et al. [163] present a practical tool that injects security features defending
against low-level software attacks into system binaries. The authors focus on cases
where the system source code is not available to system customers. The proposed
approach focuses on handling buffer overflow attacks on both the memory heap and
the stack.
Software Maintenance
Another key area that could be used in addressing the security reengineering problem is
software maintenance and reengineering. System reengineering, or “preventive
maintenance” [165], targets improving the system structure to make it easier to
understand and to reduce the cost of future system maintenance. The re-engineering
process includes activities such as source code translation, reverse engineering,
program structure improvement, and program modularization. System maintenance
[166] includes any post-delivery modification to existing software. Runtime system
adaptation is similar to system maintenance in that it handles post-delivery
“unanticipated” requirements, but this must happen while the system is running. These
concepts target adding, removing, replacing, or modifying a system feature or structure
either at design time or at runtime. Engineering approaches that depend on
aspect-oriented software development enable systems to extend, and possibly replace,
system functionality even at runtime, but they do not support leaving out certain
patterns that may be buggy, unsafe, or insecure.
Xiaobing et al. [168] introduce a static analysis approach to identify the impact set of
a given change request based on the change type (modify, add, delete) and the entity to
be modified (class, method, attribute). They construct a dependency graph of system
classes, methods, and members (OOCMDG). Given an entity with change type CT,
they use the OOCMDG and a set of impact rules to determine the other impacted
entities in a given application. Types of changes are limited to classes and methods;
statement-level modifications are not considered, and the types of modifications
required on the identified impact set are not known. Petrenko et al. [169] introduce an
interactive process to improve the precision of the identified impact set using
variable-granularity analysis guided by developers. The proposed approach depends on
developers’ deep involvement during the impact analysis process to control the
precision of the change set.
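The graph-based impact analysis described above can be sketched in a few lines. The entity names and edges are hypothetical, and the traversal is a generic reverse-dependency walk rather than Xiaobing et al.'s exact rule set:

```python
# Sketch of static change-impact analysis over a class/method dependency
# graph (in the spirit of OOCMDG): given a changed entity, walk reverse
# dependency edges to collect the candidate impact set.

from collections import deque

# edges: entity -> entities it depends on (hypothetical example system)
depends_on = {
    "Billing.total":  ["Order.items", "Tax.rate"],
    "Report.render":  ["Billing.total"],
    "Order.items":    [],
    "Tax.rate":       [],
}

# invert to: entity -> entities that depend on it
dependents = {}
for src, targets in depends_on.items():
    for t in targets:
        dependents.setdefault(t, []).append(src)

def impact_set(changed_entity):
    """Transitively collect everything that may be impacted by the change."""
    seen, queue = set(), deque([changed_entity])
    while queue:
        e = queue.popleft()
        for d in dependents.get(e, []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

print(sorted(impact_set("Tax.rate")))  # ['Billing.total', 'Report.render']
```

Real impact rules would additionally filter this set by change type; the sketch shows only the propagation step.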
Hassan et al. [170, 171] introduce an adaptive change impact analysis approach
based on adaptive change propagation heuristics. The approach combines different
heuristics-based techniques, including history-based impact analysis (given a change
request to modify entity A, what are the entities that are often modified with it);
containment-based impact analysis (modifying entity A means other entities in the
same container, such as a component or a source file, may change as well);
call-use-depends impact analysis (using the dependency graph to identify entities that
refer to the modified entity); and code-ownership impact analysis (modifying entity A
returns other entities owned by the same developer). A best-heuristics table maintains,
for each entity in the system, the heuristic that yields the most accurate impact
analysis. However, the approach does not explain how these best heuristics are
automatically selected and updated at runtime.
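The history-based heuristic mentioned above can be sketched as a co-change counter over commit history. The entities and commits below are invented for illustration, not Hassan et al.'s data:

```python
# History-based impact heuristic: entities frequently committed together
# with the changed entity are ranked as likely impacts.

from collections import Counter

# hypothetical commit history: each commit lists the entities it touched
history = [
    {"Auth.login", "Session.create"},
    {"Auth.login", "Session.create", "Audit.log"},
    {"Auth.login", "Audit.log"},
    {"Report.render"},
]

def co_change_ranking(entity):
    """Rank entities by how often they were committed together with `entity`."""
    counts = Counter()
    for commit in history:
        if entity in commit:
            counts.update(commit - {entity})
    return counts.most_common()

print(sorted(co_change_ranking("Auth.login")))
# [('Audit.log', 2), ('Session.create', 2)]
```

The other heuristics (containment, call-use-depends, ownership) would produce candidate sets in the same shape, which is what makes combining them straightforward.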
It is worth mentioning here that we could not find relevant work addressing change
propagation as the next step of the software maintenance problem. The system
refactoring area has a similar scope to maintenance but with limited system
modification, focusing on identifying code regions to be refactored, i.e. “bad smells”.
Some of the refactoring problems include how to specify and locate code snippets to be
refactored, and how to update system models to maintain consistency between the
updated (refactored) source code and the system models [24]. Wloka et al. [172]
introduce an aspect-aware refactoring approach where refactoring takes into account
updating the defined aspects and pointcuts model. Most refactoring tools identify
known refactoring patterns [173]; they depend on user involvement to define syntactic
bad smells, or use aspect mining tools to propose candidates to be refactored.
Dynamic system updating efforts aim to facilitate system updates at runtime. Such
efforts have also been used to add new features and update systems at runtime. Most of
these efforts are based on the aspect-oriented programming (AOP) concept. Existing
AOP languages, e.g. AspectJ, support two types of crosscutting concerns: dynamic
crosscutting concerns that impact system behaviour by injecting code (“advice”) to run
at well-defined points (normally limited to updating methods: removing, modifying,
and replacing); and static crosscutting concerns that impact the static structure and
signature of program entities (normally limited to adding new declarations and
methods, rather than modifying existing system entities like classes, methods, and
fields). These are key limitations in adopting AOP for software reengineering and
maintenance.
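Dynamic crosscutting as described above can be approximated in plain Python with a decorator acting as around advice. This is an illustration of the advice/join-point idea, not AspectJ itself, and the service class is a hypothetical example:

```python
# "Advice" injected around a join point (a method call) without touching
# the method body -- the mechanism AspectJ generalizes with pointcuts.

import functools

def around_advice(before, after):
    """Wrap a method with before/after behaviour, like an @Around advice."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            before(fn.__name__, args)            # advice before the join point
            result = fn(*args, **kwargs)         # proceed with original method
            after(fn.__name__, result)           # advice after the join point
            return result
        return wrapper
    return decorator

log = []

class AccountService:
    @around_advice(lambda name, args: log.append(f"enter {name}"),
                   lambda name, res: log.append(f"exit {name}"))
    def withdraw(self, amount):
        return f"withdrew {amount}"

svc = AccountService()
print(svc.withdraw(50))  # withdrew 50
print(log)               # ['enter withdraw', 'exit withdraw']
```

Note that, exactly as the text observes, this mechanism can only wrap behaviour around existing methods; it cannot restructure classes or remove existing members.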
Pukall et al. [174] introduce an approach using AOP hot-swapping and object
wrapping, based on role-based adaptation. Given a system change (adaptation), the
involved entities are categorized into caller and callee. The callee is extended by a
wrapper class, and the caller method is replaced with a new method that uses the new
wrapper class. The approach suffers from memory and performance impacts on the
updated system; falling out of synchronization with the original source code; the need
to modify the callers in all classes (at runtime); and limited support for class hierarchy
changes.
AOP and static crosscuts are not supported. Nicorra et al. [176] introduce PROSE, an
AOP-based code replacement approach. PROSE does not support schema changes or
“inter-type declarations” such as the replacement of a method, and does not allow the
addition of new class members (i.e., methods, fields) in the original code.
Concept location techniques help in identifying and locating source code blocks that
realize a given system feature or concept. This area is also relevant to our security
reengineering problem, where we need to locate code blocks with certain signatures, as
we will explain in the next chapters. Efforts in capturing code signatures include
pointcut designators, feature location, and aspect mining. Feature location is a key step
in system reengineering and maintenance, helping to understand the target system and
identify its implemented features. Feature location approaches can be categorized into
static-based, dynamic-based, and runtime-based approaches. Below we summarize the
key efforts in the area of concept location:
Reiss [177], Shepherd et al. [178], and Marcus et al. [179] use natural language and
ontology-based queries and information retrieval approaches to search source code for
certain concepts. The adoption of natural language impacts the approaches’ accuracy
and soundness. Poshyvanyk et al. [180] use AI techniques, e.g. decision making and
uncertainty reasoning, to locate system features. These help in understanding target
systems, but they do not assure high soundness, a key requirement in system
maintenance [181]. Zhang et al. [182] introduce PRISM to help extract aspects. It is
similar to our signature approach but has limited signature specification capabilities.
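As a rough sketch of signature-based concept location, the following scans source text for regex fingerprints of security-sensitive operations. The signatures and sample source are illustrative assumptions, not the signature scheme developed in later chapters:

```python
# Signature-based concept location: scan source lines for fingerprints of
# security-sensitive operations (here, raw SQL execution and file access)
# and report candidate join points with their line numbers.

import re

signatures = {
    "sql-execution": re.compile(r"\bcursor\.execute\s*\("),
    "file-access":   re.compile(r"\bopen\s*\("),
}

source = '''\
def load(path):
    with open(path) as f:
        return f.read()

def find_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
'''

def locate(source_text):
    """Return (line number, concept) pairs for every signature match."""
    hits = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        for concept, pattern in signatures.items():
            if pattern.search(line):
                hits.append((lineno, concept))
    return hits

print(locate(source))  # [(2, 'file-access'), (6, 'sql-execution')]
```

Lexical matching like this is fast but shallow; the formal signature matching discussed in later chapters aims at the soundness such regex scans lack.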
In summary, the existing software and security re-engineering efforts require deep
understanding and involvement by software and security engineers to effect a given
system modification. Many focus on retrofitting an application with specific security
patterns using tools with a set of predefined patterns and the modification steps
required to realize those patterns. Furthermore, the existing signature matching efforts
are not formal enough to help automate the software analysis process.
There is no means of proving that a given system is completely (100%) secure [183].
Security is like a game between security officers and malicious users: mind against
mind. This means there is no limit to attackers’ malicious actions to breach assets’
security. Thus, it is hard to show that a system is secure against existing as well as new
security vulnerabilities. Security metrics represent a good solution to the security
assessment problem. Different types of security metrics exist: offline security metrics,
such as comparing system security with other systems, the number of vulnerabilities
found, the attack surface, planned attacks reported, and the strength of the applied
security controls; and online security metrics, which assess the current security status
and how well the operated security controls can defend against different attacks. NIST
[184] characterizes security metrics into three types.
Security Monitoring
Savola et al. [28-31, 186] introduce an iterative process for security metrics
development based on a set of refinements of system security requirements down to the
set of on-line and off-line security metrics to be applied. These security metrics are
categorized by their related security objective (authentication, authorization,
confidentiality, integrity, and availability).
Muñoz et al. [187] introduce a dynamic monitoring architecture for security
attributes in cloud computing, along with a language to capture monitoring rules. The
proposed architecture is made up of three main layers: local application surveillance
(LAS), which collects measures from each application instance in the virtual execution
environment; intra-platform surveillance (IPS), which collects measurements from the
different LAS elements and analyses them to detect violations arising from interactions
with other systems on the same or different virtual machines; and global application
surveillance (GAS), which analyses the results of the different IPS layers for every
specific application (taking its different instances into consideration). The key problem
with this approach is that it focuses on helping the service provider’s administrator but
does not consider involving the service tenants in developing and enforcing their own
security metrics. Moreover, the proposed language is hard to learn and use in
developing complex metrics.
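The three-layer LAS/IPS/GAS flow can be sketched as a simple aggregation pipeline. The metric names, threshold, and data shapes are assumptions for illustration, not Muñoz et al.'s implementation:

```python
# Sketch of layered monitoring: per-instance collectors (LAS) feed a
# per-platform analyser (IPS), whose results are merged per application
# across platforms (GAS).

def las_collect(instance):
    """Local application surveillance: raw measures from one instance."""
    return {"instance": instance["id"], "failed_logins": instance["failed_logins"]}

def ips_analyse(las_reports, threshold=3):
    """Intra-platform surveillance: flag violations on one virtual machine."""
    return [r["instance"] for r in las_reports if r["failed_logins"] > threshold]

def gas_aggregate(ips_results):
    """Global application surveillance: merge per-platform violation lists."""
    return sorted(set(v for platform in ips_results for v in platform))

platform1 = [las_collect(i) for i in
             [{"id": "app-1a", "failed_logins": 5},
              {"id": "app-1b", "failed_logins": 1}]]
platform2 = [las_collect(i) for i in
             [{"id": "app-2a", "failed_logins": 7}]]

violations = gas_aggregate([ips_analyse(platform1), ips_analyse(platform2)])
print(violations)  # ['app-1a', 'app-2a']
```

The separation matters because each layer sees a different scope: LAS one instance, IPS one machine, GAS the whole application across machines.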
The SERENITY Project [188, 189] introduced the EVEREST security monitoring
platform. The SERENITY framework helps add security patterns to systems at runtime
(given that these systems have already been prepared at design time to integrate with
SERENITY). The main objective of the EVEREST security monitoring platform is to
assess the operating conditions of the security pattern realization components when
integrated with the target system at runtime. These conditions are specified as a set of
Event-Calculus [190] (a first-order temporal formal language) assertion rules within
the security and dependability patterns. When a security pattern is selected, the
specified rules are fed into EVEREST to make sure they are satisfied by the events
collected from the system at runtime. A key problem with this approach is that event
calculus is hard for service tenants to develop. An extension of this approach was
introduced in the area of SLA management, as we will discuss in the next subsection.
Lorenzoli et al. [191] introduce an extension of this framework (EVEREST+) that
delivers SLA violation prediction capabilities based on the measurements and results
obtained from the EVEREST framework. Similar work was introduced by Spanoudakis
et al. [192], who present a security monitoring approach based on event calculus
captured as parameterized monitoring patterns, classified by the related security
objectives: confidentiality, integrity, and availability.
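The assertion-rule monitoring idea can be sketched without event calculus, as plain predicates checked over an event history. The rule and events below are hypothetical stand-ins for EVEREST's event-calculus assertions:

```python
# Rule-based security monitoring over a runtime event stream: each rule is
# a predicate over (history, event); violations are reported as events arrive.

def no_access_before_login(history, event):
    """Assert: an 'access' event for a user must follow a 'login' event."""
    if event["type"] != "access":
        return True
    return any(e["type"] == "login" and e["user"] == event["user"]
               for e in history)

def monitor(events, rules):
    """Replay the event stream, checking every rule against the history."""
    history, violations = [], []
    for event in events:
        for rule in rules:
            if not rule(history, event):
                violations.append((rule.__name__, event))
        history.append(event)
    return violations

events = [
    {"type": "login",  "user": "alice"},
    {"type": "access", "user": "alice"},
    {"type": "access", "user": "bob"},     # bob never logged in
]

print(monitor(events, [no_access_before_login]))
```

Writing such rules as ordinary predicates is far easier for tenants than authoring first-order temporal formulas, which is precisely the usability gap noted above.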
and overhead to the process. Using LSCs directly could help to automatically generate
security monitors.
SLA Monitoring
Service level agreement (SLA) management has become a very important research
area with the wide adoption of outsourcing services for hosting on external third-party
platforms (such as the cloud computing model), where customers do not have control
over such services. This increases the need to measure the quality of the delivered
services (QoS) in terms of performance, availability, reliability, security, and so forth.
Service providers and consumers conclude SLAs that define the QoS attributes to be
guaranteed by the service provider and the penalties to be applied to the provider in
case of any violation of these QoS terms. Hence, it is very important to monitor the
specified QoS terms in order to take proactive actions before violations occur, or
corrective actions when they happen. However, we could not find efforts in the SLA
area that focus on how to specify, monitor, and enforce security SLA terms. Below, we
summarize some of the key relevant SLA efforts we identified in this area.
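The core of SLA term monitoring, checking measured QoS values against agreed thresholds, can be sketched as follows. The terms, operators, and values are illustrative only:

```python
# Sketch of SLA term checking: QoS measurements are compared against the
# agreed terms, and violations are flagged so corrective (or, with
# prediction, proactive) actions can be triggered.

sla_terms = {
    "availability_pct": {"op": ">=", "value": 99.9},
    "response_time_ms": {"op": "<=", "value": 200},
}

OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def check_sla(measurements):
    """Return (term, measured, agreed) for every violated SLA term."""
    violations = []
    for term, spec in sla_terms.items():
        measured = measurements.get(term)
        if measured is not None and not OPS[spec["op"]](measured, spec["value"]):
            violations.append((term, measured, spec["value"]))
    return violations

print(check_sla({"availability_pct": 99.95, "response_time_ms": 250}))
# [('response_time_ms', 250, 200)]
```

Security SLA terms are harder precisely because, unlike response time, most security properties have no single agreed measurement to plug into such a check.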
Comuzzi et al. [200] introduce an approach for monitoring SLA terms as part of the
SLA@SOI project, which targets developing an SLA management framework for the
cloud computing model. The proposed approach is based on the EVEREST monitoring
framework, which uses event calculus to express the rules and patterns of interest that
should be monitored. Thus, part of the proposed solution is to extract, from the SLA
terms, monitoring patterns expressed in event calculus. The proposed approach is
event-based: measurements are sent to a reasoning component to assess possible
violations of the specified SLA terms.
[186] were introduced in the area of service selection, taking different QoS attributes
into consideration when selecting between different services.