A Framework For Risk Assessment in Access Control Systems
Hemanth Khambhammettu (a), Sofiène Boulares (b), Kamel Adi (b), Luigi Logrippo (b)
(a) PricewaterhouseCoopers LLP, New York, NY, USA
(b) Université du Québec en Outaouais, Gatineau, Québec, Canada
Abstract
We describe a framework for risk assessment specifically within the context of risk-based access control systems, which make authorization decisions by determining the security risk associated with access requests and weighing such security risk against operational needs together with situational conditions. Our framework estimates risk as a product of threat and impact scores. The framework that we describe includes four different approaches for conducting threat assessment: an object sensitivity-based approach, a subject trustworthiness-based approach and two additional approaches which are based on the difference between object sensitivity and subject trustworthiness. We motivate each of the four approaches with a series of examples. We also identify and formally describe the properties that are to be satisfied within each approach. Each of these approaches results in different threat orderings, and can be chosen based on the context of applications or the preferences of organizations. We also propose formulae to estimate the threat of subject-object accesses within each of the four approaches of our framework.
We then demonstrate the application of our threat assessment framework for estimating the risk of access requests, which are initiated by subjects to perform certain actions on data objects, by using the methodology of NIST Special Publication 800-30. We show that risk estimates for access requests differ based on the threat assessment approach that has been chosen. Therefore, organizations must exercise prudent judgement when selecting a threat assessment function for risk-based access control systems.
Keywords: Security, Access control, Risk, Threat, Impact
1. Introduction
The need to share information in dynamic environments has prompted the devel-
opment of risk-based access control systems [1, 2, 3, 4, 5, 6]. Essentially, in order to
facilitate information sharing, risk-based access controls extend traditional access control
paradigms to provide support for flexible decision-making by specifying acceptable security
$
This paper is an extended version of our paper entitled "A framework for threat assessment in access control systems" that appeared in Proceedings of the 27th IFIP TC 11 Information Security and Privacy Conference (SEC 2012), 2012.
(s, o) ⪯_T (s′, o′) iff Threat(s, o) ≤ Threat(s′, o′). The relation ⪯_T allows threats to be compared, and greater and lesser threats assessed. We define (s, o) ≈_T (s′, o′) iff (s, o) ⪯_T (s′, o′) and (s′, o′) ⪯_T (s, o), and say (s, o) ≺_T (s′, o′) if (s, o) ⪯_T (s′, o′) and (s′, o′) ⋠_T (s, o). We may write (s′, o′) ⪰_T (s, o) whenever (s, o) ⪯_T (s′, o′).
3.3. Running example
In this section, we describe the setting of a scenario that is used in the rest of the paper to motivate our threat assessment approaches.
We assume the existence of the following four subjects: Alice, Bob, Carol and Dave. Table 1(a) shows the trustworthiness scores of these four subjects.
Let us consider two objects o and o′, where object o is assigned a sensitivity score of 90 and object o′ is assigned a sensitivity score of 100. Table 1(b) shows the sensitivity scores of objects o and o′.
Recall that the objective of our work is to assess the threat of access requests, where a subject s ∈ S who initiated the request may not be pre-authorized for the requested data object o ∈ O. Hence, throughout this paper, we only cite examples where sl(s) < ol(o).
3.4. Object-based threat assessment
It is possible that certain applications which maintain highly sensitive data, such as
government or military systems, may understand or interpret threat in terms of access to
such data. We now give examples that motivate our technique for threat assessment, which is based primarily on the sensitivity scores of objects.
Example 1. Suppose that Alice requests access to object o′, where ol(o′) = 100, and that Bob requests access to object o, where ol(o) = 90. If we were to consider the object sensitivity score to be the basic criterion for determining threat measures, then according to Principle 1 stated in Section 3.2, allowing Alice to access object o′ poses a greater threat than allowing Bob to access object o; that is, (Bob, o) ≺_T (Alice, o′).
Example 2. Let us extend Example 1 by considering user Carol (see Table 1(a)), who is less trusted than Alice and who also requests access to object o′.
Now, if we were to determine which of these two accesses is a greater threat, then according to Principle 2 (see Section 3.2) one is likely to conclude that allowing Carol to access object o′ poses a greater threat than allowing Alice to access object o′; that is, (Alice, o′) ≺_T (Carol, o′).
It can easily be seen that we can construct a priority order of excessive accesses by subjects to objects, when sensitivity scores are higher than trustworthiness scores, in terms of their threat by adhering to the properties of Remark 1. Essentially, the properties of Remark 1 can be generalized as follows: (s, o) ≺_T (s′, o′) if either
1. ol(o) < ol(o′), or
2. ol(o) = ol(o′) and sl(s′) < sl(s).
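The generalized ordering above can be sketched as a sort key, assuming numeric scores as in the running example (the encoding and score values are illustrative assumptions, not part of the paper's formalism): higher object sensitivity ranks first and, on ties, lower subject trustworthiness ranks first.

```python
# Object-based threat ordering (generalization of Remark 1):
# (s, o) precedes (s', o') iff ol(o) < ol(o'), or
# ol(o) = ol(o') and sl(s') < sl(s).

def object_based_key(access):
    """Sort key: greater key values mean greater threat."""
    sl, ol = access  # (subject trustworthiness, object sensitivity)
    return (ol, -sl)  # primary: sensitivity; tie-break: lower trust first

# (sl(s), ol(o)) pairs from the running example (assumed from Table 1)
accesses = {"(Bob, o)": (80, 90), "(Alice, o')": (90, 100)}
ranked = sorted(accesses, key=lambda name: object_based_key(accesses[name]))
# ranked lists accesses from least to greatest threat
```

Sorting by this key reproduces the ordering (Bob, o) ≺_T (Alice, o′) from Example 1.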
Within an object-based threat assessment approach, whenever object sensitivity scores are the same, unlike Remark 1 we may wish to apply Principle 3 as a secondary criterion. In other words, we may use the difference between object sensitivity and subject trustworthiness scores as a secondary parameter, rather than subject trustworthiness scores. Note however that whenever the object sensitivity score is fixed, the difference between object sensitivity and subject trustworthiness scores increases only if subject trustworthiness scores decrease. This means that the threat priority order remains the same irrespective of whether we apply Principle 2 or Principle 3 as a secondary criterion. Hence, we do not describe the subcase that applies Principle 3 as a secondary criterion.
3.5. Subject-based threat assessment
As discussed earlier in Section 1, in certain scenarios (such as business-to-business
environments), access requests could be initiated by subjects who may not be (directly)
known to data owners. In such situations, trustworthiness of subjects may take higher
preference than sensitivity of data objects while estimating access threats.
We now give examples that motivate our technique for threat assessment, which is based primarily on the trustworthiness scores of subjects.
Example 3. Let us reuse the setting of Example 1 here. That is, we consider subjects Alice and Bob, and suppose that Alice requests access to object o′ and Bob requests access to object o. Should the subject trustworthiness score be the basic criterion for determining threat measures, then allowing Bob to access object o poses a greater threat than allowing Alice to access object o′. This is because Bob, who has a subject trustworthiness score of 80, is less trusted than Alice, who has a subject trustworthiness score of 90.
Example 4. Let us extend Example 3 by considering an additional user Dave where sl(Dave) = 80 (see Table 1(a)). Now both Bob and Dave have the same trustworthiness score. Suppose that Dave is requesting access to object o′.
Now, if we were to determine which one of the above two accesses of Bob and Dave poses a greater threat, then according to Principle 1 (see Section 3.2) we may reasonably say that granting access to Dave for o′ poses a greater threat than granting access to Bob for o, since ol(o′) > ol(o). Hence, we have (Alice, o′) ≺_T (Bob, o) ≺_T (Dave, o′).
Essentially, the properties of Remark 2 can be generalized as follows: (s, o) ≺_T (s′, o′) if either
1. sl(s′) < sl(s), or
2. sl(s′) = sl(s) and ol(o) < ol(o′).
It is important to note the effect of the basic criterion on threat orderings or metrics when subjects request access to objects whose sensitivity scores are higher than the subjects' trustworthiness scores. In particular, should the sensitivity score of objects be the basic criterion for assessing threat, then (Bob, o) ≺_T (Alice, o′); whereas, should the trustworthiness score of subjects be the basic criterion, then (Alice, o′) ≺_T (Bob, o) (see Example 3).
Note that, whenever subject trustworthiness scores are the same, unlike Remark 2 we may wish to apply Principle 3 as a secondary criterion. That is, we may wish to use the difference between object sensitivity and subject trustworthiness scores as a secondary parameter, rather than object sensitivity scores. However, note that whenever the subject trustworthiness score is fixed, the difference between object sensitivity and subject trustworthiness scores increases only if object sensitivity scores increase. This means that, within a subject-based threat assessment approach, the threat priority order remains the same irrespective of whether we apply Principle 1 or Principle 3 as a secondary criterion. Hence, we do not describe the subcase that applies Principle 3 as a secondary criterion.
3.6. Difference of scores-based threat assessment
In certain scenarios, we may not be directly concerned with either the object sensitivity scores or the subject trustworthiness scores; instead, our objective could be to understand threat simply in terms of the difference between object sensitivity and subject trustworthiness scores. Essentially, in such an approach, the degree of threat increases proportionally with the difference between object sensitivity and subject trustworthiness scores.
In this section, we adopt such a notion of threat and develop two different techniques for threat assessment which are based primarily on the difference between object sensitivity and subject trustworthiness scores. We first give examples below to motivate our threat assessment techniques and then formalize their properties.
Example 5. Let us reuse the setting from Examples 1 and 2 here. In particular, we consider user Bob from Example 1 and user Carol from Example 2. As before, we suppose that Bob requests access to object o and Carol requests access to object o′.
Now, should the basic criterion for determining threat measures be the difference between object sensitivity and subject trustworthiness scores, then according to Principle 3 (see Section 3.2) granting access to Carol for object o′ poses a greater threat than granting access to Bob for object o, since the difference between the sensitivity score of object o′ and the trustworthiness score of Carol is the greater of the two (see Table 1).
Example 6. Let us now compare Alice's request for object o′ with Bob's request for object o. Note that the difference between the sensitivity score of object o′ and the trustworthiness score of Alice (which is 100 − 90 = 10) is the same as the difference between the sensitivity score of object o and the trustworthiness score of Bob (which is 90 − 80 = 10). Hence, it is not immediately obvious which of the above two subject-object accesses poses a greater threat.
If we were to determine which of the two subject-object accesses considered in Example 6 is a greater threat, then we may choose between applying either Principle 1 or Principle 2 (see Section 3.2), yielding two different approaches, which consider different secondary parameters, for resolving the parity observed in Example 6. These two approaches are described below.
3.6.1. Difference weighted by object sensitivity score
In this approach, we consider object sensitivity scores as a secondary criterion and apply Principle 1, which says that threat increases with an increase in object sensitivity scores (see Section 3.2), for resolving the parity observed in Example 6. This means that, in Example 6, granting Alice access to object o′ poses a greater threat than granting Bob access to object o, since ol(o′) > ol(o).
Remark 3. A threat assessment technique that is based primarily on the difference between object sensitivity and subject trustworthiness scores, and that uses object sensitivity scores as a secondary criterion, should support the following properties:
1. always apply Principle 3 (that is, threat increases as the difference between object sensitivity and subject trustworthiness scores increases);
2. whenever parity is observed on the difference between object sensitivity and subject trustworthiness scores, apply Principle 1 (that is, threat increases as the object sensitivity score increases).
Based on Remark 3, we obtain the following ordering of threat for the subject-object accesses which were considered in Examples 5 and 6: (Bob, o) ≺_T (Alice, o′) ≺_T (Carol, o′).
The properties of Remark 3 can be generalized as follows: (s, o) ≺_T (s′, o′) if either
1. (ol(o) − sl(s)) < (ol(o′) − sl(s′)), or
2. (ol(o) − sl(s)) = (ol(o′) − sl(s′)) and ol(o) < ol(o′).
3.6.2. Difference weighted by subject trustworthiness score
In this approach, we consider subject trustworthiness scores as a secondary criterion and apply Principle 2, which says that threat increases with a decrease in subject trustworthiness scores (see Section 3.2), for resolving the parity observed in Example 6.
Recall from Example 6 that the difference between the sensitivity score of object o′ and the trustworthiness score of Alice (which is 100 − 90 = 10) is the same as the difference between the sensitivity score of object o and the trustworthiness score of Bob (which is 90 − 80 = 10). In this approach, granting Bob access to object o poses a greater threat than granting Alice access to object o′, since sl(Bob) < sl(Alice).
Remark 4. A threat assessment technique that is based primarily on the difference between object sensitivity and subject trustworthiness scores, and that uses subject trustworthiness scores as a secondary criterion, should support the following properties:
1. always apply Principle 3 (that is, threat increases as the difference between object sensitivity and subject trustworthiness scores increases);
2. whenever parity is observed on the difference between object sensitivity and subject trustworthiness scores, apply Principle 2 (that is, threat increases as the subject trustworthiness score decreases).
Based on Remark 4, we obtain the following ordering of threat for the subject-object accesses which were considered in Examples 5 and 6: (Alice, o′) ≺_T (Bob, o) ≺_T (Carol, o′).
The properties of Remark 4 can be generalized as follows: (s, o) ≺_T (s′, o′) if either
1. (ol(o) − sl(s)) < (ol(o′) − sl(s′)), or
2. (ol(o) − sl(s)) = (ol(o′) − sl(s′)) and sl(s′) < sl(s).
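To see that the four approaches really do induce different threat orderings, all four generalizations can be expressed as sort keys over (sl(s), ol(o)) pairs. This is an illustrative sketch; the trustworthiness score assumed for Carol (70) is a placeholder, since Table 1 is not reproduced here.

```python
# The four threat orderings as sort keys (least to greatest threat).
# Running-example scores: sl(Alice)=90, sl(Bob)=80, ol(o)=90, ol(o')=100;
# sl(Carol)=70 is an assumption for illustration.

def object_based(sl, ol):    return (ol, -sl)       # Remark 1
def subject_based(sl, ol):   return (-sl, ol)       # Remark 2
def diff_by_object(sl, ol):  return (ol - sl, ol)   # Remark 3
def diff_by_subject(sl, ol): return (ol - sl, -sl)  # Remark 4

accesses = {"(Bob, o)": (80, 90), "(Alice, o')": (90, 100),
            "(Carol, o')": (70, 100)}

def rank(key):
    """Rank the sample accesses from least to greatest threat."""
    return sorted(accesses, key=lambda n: key(*accesses[n]))
```

For instance, `rank(diff_by_object)` yields (Bob, o) ≺ (Alice, o′) ≺ (Carol, o′), while `rank(diff_by_subject)` yields (Alice, o′) ≺ (Bob, o) ≺ (Carol, o′), matching the orderings derived from Remarks 3 and 4.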
4. Formulae for Quantifying Threat
In the previous section, we have described four different approaches for threat assessment and defined the properties that are to be satisfied within each approach. We have also discussed the construction of threat priority orders for a given set of subject-object accesses in all four approaches that we developed.
Such priority orders only offer a qualitative threat comparison between two or more subject-object accesses. For example, given two subject-object accesses (s, o) and (s′, o′), a threat priority order is useful to determine which one of the given subject-object accesses poses a greater threat than the other.
For computational purposes, quantitative measures which correspond to this threat ordering may be useful. However, there can be many different formulae which respect the properties of the four approaches and can quantitatively measure the threat of granting access to a subject s for an object o within each approach, where sl(s) < ol(o). In this section, we propose one such formula for each approach and describe their construction.
In the rest of the paper, for the purposes of generality, we use the terminology of subject "clearance levels", which represent trustworthiness scores of subjects, and object "classification levels", which represent sensitivity scores of objects.
Threat Index. We now introduce the concept of threat indexing of subject clearance levels and object classification levels. Essentially, we assign a unique numerical value from the set {0, . . . , |L| − 1} that represents the threat index of a security level l ∈ L = {Unclassified, Restricted, Classified, Secret, TopSecret}.
Note that, from the point of view of subjects, we expect the threat to increase as subject clearance levels decrease. For example, a subject who holds a Classified clearance level poses a greater threat than a subject who holds a Secret clearance level. Hence, subject threat index values decrease as subject clearance levels increase; for example, the Secret clearance level has subject threat index 1.
However, from the point of view of objects, we expect the threat to increase as object classification levels increase. For example, the compromise of an object that has a Secret level poses a greater threat than the compromise of an object that has a Classified level. Hence, object threat indexes increase with object classification levels.
4.1. Object-based threat
We need a formula that respects the properties of Remark 1 in order to quantitatively measure the threat of granting access to a subject s for an object o, where sl(s) < ol(o). We propose one such formula below, in which sl(s) and ol(o) denote the subject and object threat indexes:
Threat(s, o) = ((w × ol(o)) + sl(s)) / ((|L_S| × |L_O|) − 1) if sl(s) < ol(o), where w = |L|, and
Threat(s, o) = 0 otherwise. (3)
The numerator part of the above formula is intuitive. Since we require that more importance be given to the threat index of objects, we multiply the object threat index by a weight w that equals the cardinality of the set of security levels L used in the system. Thus, we first obtain a weighted object threat index. Then, we add the threat index of the subject to the weighted object threat index.
Essentially, the numerator part of the formula maps all possible accesses by subjects to objects into an interval [0..(|L_S| × |L_O|) − 1], where a higher value represents a greater threat. In order to normalize the threat values into the interval [0..1], we divide the value obtained from the numerator by (|L_S| × |L_O|) − 1. The resultant value represents the object-based threat likelihood value that respects the properties of Remark 1.
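Formula 3 can be sketched as follows, assuming the five-level set L and the threat indexing described above (subject indices run from 4 for an Unclassified clearance down to 0 for TopSecret; object indices run from 0 for Unclassified up to 4 for TopSecret):

```python
# Threat indexing over L, following the scheme described in the text.
LEVELS = ["Unclassified", "Restricted", "Classified", "Secret", "TopSecret"]

def threat_object_based(subject_level, object_level):
    """Formula 3: object-based threat likelihood, normalized into [0..1]."""
    if LEVELS.index(subject_level) >= LEVELS.index(object_level):
        return 0.0  # clearance dominates classification: no threat
    w = len(LEVELS)                           # w = |L| = 5
    sl = w - 1 - LEVELS.index(subject_level)  # subject threat index (4..0)
    ol = LEVELS.index(object_level)           # object threat index (0..4)
    return (w * ol + sl) / (w * w - 1)        # divide by (|L_S| * |L_O|) - 1 = 24
```

For instance, an Unclassified subject requesting a TopSecret object yields (5 × 4 + 4) / 24 = 1, the highest threat in Table 3.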
Table 3 shows a two-dimensional array representation of all possible accesses by subjects to objects. Note that an attempt by a subject s to access an object o such that sl(s) ≥ ol(o) does not pose a threat. Hence, in Table 3, we assign a threat value of zero to all accesses which are either along or below the diagonal of the array.
Each array entry [i, j] includes a value that represents the threat likelihood of a subject s accessing an object o, where sl(s) = i and ol(o) = j, calculated by using Formula 3 with weight w = 5. Each array entry also includes its threat rank (shown next to the threat likelihood value in sans serif font within parentheses) relative to other accesses, where a higher rank means higher threat.
It can be seen from the threat values of any row in Table 3, which were calculated by using Formula 3, that threat measures increase as object classification levels increase. Note also that, for any particular object classification level (within each column), threat values increase as subject clearance levels decrease. Specifically, lower threat values are observed for objects with the Restricted classification level, whereas higher threats are observed for objects with the Top Secret classification level. In particular, the highest threat is observed for subjects with an Unclassified clearance attempting to access objects with the Top Secret classification level.
4.2. Subject-based threat
As before, we need to devise a formula that respects the properties of Remark 2 to be able to quantitatively measure the threat of granting access to a subject s for an object o, where sl(s) < ol(o). We propose one such formula below and describe its construction.
Threat(s, o) = ((w × sl(s)) + ol(o)) / ((|L_S| × |L_O|) − 1) if sl(s) < ol(o), where w = |L|, and
Threat(s, o) = 0 otherwise. (4)
In this approach, since we require that more importance be given to the threat index of subjects, we multiply the subject threat index by a weight w that equals the cardinality of the set of security levels L. Thus, we first obtain a weighted subject threat index. Then, we add the threat index of the object to the weighted subject threat index.
Similar to Formula 3, the numerator part of Formula 4 also maps all possible accesses by subjects to objects into an interval [0..(|L_S| × |L_O|) − 1], where a higher value represents a greater threat. In order to normalize the threat values into the interval [0..1], we divide the value obtained from the numerator by (|L_S| × |L_O|) − 1. The resultant value represents the subject-based threat likelihood value that is consistent with Remark 2.
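Under the same indexing assumptions as for Formula 3, Formula 4 can be sketched as:

```python
LEVELS = ["Unclassified", "Restricted", "Classified", "Secret", "TopSecret"]

def threat_subject_based(subject_level, object_level):
    """Formula 4: subject-based threat likelihood, normalized into [0..1]."""
    if LEVELS.index(subject_level) >= LEVELS.index(object_level):
        return 0.0  # clearance dominates classification: no threat
    w = len(LEVELS)                           # w = |L| = 5
    sl = w - 1 - LEVELS.index(subject_level)  # subject threat index (4..0)
    ol = LEVELS.index(object_level)           # object threat index (0..4)
    return (w * sl + ol) / (w * w - 1)        # the subject index is weighted
```

Here the least nonzero threat is a Secret subject accessing a TopSecret object, (5 × 1 + 4) / 24 = 0.375, consistent with the observation about Table 4 below.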
Table 4 shows a two-dimensional array representation of all possible accesses by subjects to objects. As before, we assign a threat value of zero to all accesses where sl(s) ≥ ol(o) in Table 4, since such accesses do not pose a threat.
Each array entry [i, j] includes a value that represents the threat likelihood of a subject s accessing an object o, where sl(s) = i and ol(o) = j. This value has been calculated by using Formula 4 with weight w = 5. Each array entry also includes its threat rank (shown next to the threat likelihood value in sans serif font within parentheses) relative to other accesses, where a higher rank means higher threat.
We can see from the threat values of the columns in Table 4, which were calculated by using Formula 4, that threat values increase as subject clearance levels decrease. Note also that, for any particular subject clearance level (within each row), threat values increase as object classification levels increase. Specifically, the least threat is observed when subjects who hold a Secret clearance attempt to access objects with a Top Secret classification, whereas higher threats are observed for subjects who hold an Unclassified clearance while attempting to access objects with higher classification levels. In particular, the highest threat is observed for subjects who hold an Unclassified clearance attempting to access objects with a Top Secret classification.
4.3. Difference of scores-based threat
4.3.1. Difference weighted by object classification level
As before, we need to devise a formula to be able to quantitatively measure the threat of subject-object accesses that is based on Remark 3. We note that there can be many formulae which satisfy the properties of Remark 3. We give one such formula below and describe its construction.
Threat(s, o) = ((w × [ol(o) − sl(s)]) + ol(o)) / ((|L_S| × |L_O|) − 1) if sl(s) < ol(o), where w = |L|, and
Threat(s, o) = 0 otherwise. (5)
Here we need to compute the difference between the security levels of objects and subjects. Hence, we first compute ol(o) − sl(s).
Since we require that more importance be given to the difference between object classification level and subject clearance level, in Formula 5 we multiply ol(o) − sl(s) by a weight w that equals the cardinality of the set of security levels L. Thus, we initially obtain a threat index for the difference between the security indexes of objects and subjects. Then, we add the threat index of the object to this initial threat index.
Similar to Formulae 3 and 4, in order to normalize the threat values into the interval [0..1], we divide the value obtained from the numerator by (|L_S| × |L_O|) − 1. The resultant value represents the threat likelihood value that gives more importance to the difference between object classification level and subject clearance level, and that also considers object classification levels. Hence, Formula 5 is consistent with the properties of Remark 3.
Table 5 shows a two-dimensional array representation of all possible accesses by subjects to objects. As before, we assign a threat value of zero to all accesses where sl(s) ≥ ol(o) in Table 5, since such accesses do not pose a threat. As before, a threat rank is shown next to the threat likelihood value in sans serif font in parentheses within each array entry, where a higher rank means higher threat.
We can see in Table 5 that threat values always increase as the difference between object classification level and subject clearance level increases. Furthermore, whenever such a difference is the same, threat values increase as the object classification level increases.
4.3.2. Difference weighted by subject clearance level
As before, we devise a formula to be able to quantitatively measure the threat of subject-object accesses that is based on Remark 4. We note that there can be many formulae which satisfy the properties of Remark 4. We give one such formula below and describe its construction.
Threat(s, o) = ((w × [ol(o) − sl(s)]) + sl(s)) / ((|L_S| × |L_O|) − 1) if sl(s) < ol(o), where w = |L|, and
Threat(s, o) = 0 otherwise. (6)
Similar to Formula 5, we first need to compute the difference between object classification level and subject clearance level. Hence, we first compute ol(o) − sl(s).
Similar to Formula 5, since we require that more importance be given to the difference between object classification levels and subject clearance levels, we multiply ol(o) − sl(s) by a weight w that equals the cardinality of the set of security levels L used in the system. Then, we add the threat index of the subject to this initial threat index.
Similar to Formulae 3, 4 and 5, in order to normalize the threat values into the interval [0..1], we divide the value obtained from the numerator part of Formula 6 by (|L_S| × |L_O|) − 1. The resultant value represents the threat likelihood value that gives more importance to the difference between object classification level and subject clearance level, and that also considers subject clearance levels. Hence, Formula 6 is consistent with the properties of Remark 4.
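Formulae 5 and 6 differ only in the tie-breaking term. Assuming the difference ol(o) − sl(s) is taken between the ordinal positions of the two security levels (our interpretation of the text above), both can be sketched together:

```python
LEVELS = ["Unclassified", "Restricted", "Classified", "Secret", "TopSecret"]

def threat_diff_based(subject_level, object_level, tie="object"):
    """Formulae 5 (tie='object') and 6 (tie='subject'), normalized into [0..1]."""
    s, o = LEVELS.index(subject_level), LEVELS.index(object_level)
    if s >= o:
        return 0.0  # clearance dominates classification: no threat
    w = len(LEVELS)                        # w = |L| = 5
    diff = o - s                           # difference between the two levels
    # tie-breaker: object threat index (Formula 5) or subject threat index (6)
    tie_break = o if tie == "object" else (w - 1 - s)
    return (w * diff + tie_break) / (w * w - 1)
```

For a fixed difference of one level, Formula 5 ranks a Secret→TopSecret access (tie-break 4) above an Unclassified→Restricted access (tie-break 1), while Formula 6 ranks them the other way round.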
Table 6 shows a two-dimensional array representation of all possible accesses by subjects to objects. As before, we assign a threat value of zero to all accesses where sl(s) ≥ ol(o) in Table 6, since such accesses do not pose a threat. As before, a threat rank is shown next to the threat likelihood value in sans serif font in parentheses within each array entry, where a higher rank means higher threat.
Similar to Table 5, we can see in Table 6 that threat values always increase as the difference between object classification level and subject clearance level increases. However, unlike Table 5, whenever such a difference is the same, threat values in Table 6 increase as subject clearance levels decrease.
5. Application to the NIST SP 800-30 Risk Assessment Methodology
In the previous section, we have proposed a method for computing the threat of a subject being able to access a given object, denoted by Threat(s, o), within each of the four approaches of our framework. The NIST SP 800-30 standard suggests that impact values of any compromise of the security objectives of objects can be obtained from data classification profiles.
In the remainder of this section, we first describe data classification policies and the computation of Impact(o, a) by using data classification policies in Section 5.1. Essentially, Impact(o, a) gives the impact value of executing an action a ∈ A on an object o ∈ O. Recall from Formula 2 that such impact values will be used, together with Threat(s, o), to compute risk scores. Subsequently, in Section 5.2, we demonstrate the application of our threat assessment framework to compute risk scores by using Formula 2. Section 5.3 describes the application of the other three threat assessment approaches for computing risk scores.
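Since the framework estimates risk as the product of threat and impact scores, the role of the impact value in the risk computation can be illustrated by a minimal sketch (the numeric weights assigned to the impact levels here are an assumption for illustration, not values from the paper):

```python
# Hypothetical numeric weights for the NIST impact levels.
IMPACT_WEIGHT = {"Low": 0.1, "Moderate": 0.5, "High": 1.0}

def risk(threat_score, impact_level):
    """Risk of an access request as threat likelihood times impact weight."""
    return threat_score * IMPACT_WEIGHT[impact_level]
```

With this sketch, the same access request receives different risk scores under different threat assessment approaches, since each approach supplies a different threat likelihood.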
5.1. Preliminaries
5.1.1. A model for data classification
All information has value. Data classification is the task of evaluating the importance of information to ensure that the information receives an appropriate level of protection. The principal objective of the data classification activity is to group an organization's data by varying levels of sensitivity with respect to the security objectives of confidentiality, integrity and availability.
The US Federal Information Processing Standard (FIPS) 199 [10] and NIST Special Publication (SP) 800-60 [11] publications state that the classification of data with respect to the security objectives is the first step in developing a risk management framework. Table 7 shows an example of security classification of data based on the US FIPS 199 [10] and NIST SP 800-60 [11] publications, which suggest that information be classified:
- with respect to the objectives of confidentiality, integrity and availability, and
- by using impact values (or sensitivity levels), such as Low, Moderate and High.
Note that no impact values have been defined for the security objectives of object o_1 in Table 7. Hence, we may understand o_1 in Table 7 as an unprotected object and envisage that o_1 has an Unclassified classification level.
Typically, any assignment of impact values to security objectives with respect to a given object will be specified by the policy makers, business owners or data owners within an organization. Furthermore, any such assignment of impact values to the security objectives of objects should sufficiently represent the damage or loss caused to the organization (or its business processes) should the security objectives be compromised.
The definitions of the impact values {Low, Moderate, High} are quoted below as stated in the US FIPS 199 [10] and NIST SP 800-60 [11] publications.
- The potential impact is Low if the loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.
- The potential impact is Moderate if the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
- The potential impact is High if the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.
In practice, any such security classification of data will typically be made within the context of the organization that owns the data, and may be based on inputs provided by the domain or security experts within that organization.
Essentially, a security classification profile for data stresses the importance of the required protection levels for the data. For example:
- In a financial services scenario, such as electronic payment systems, it is often the case that the confidentiality of credit card numbers has more significance than the availability of such data for processing payments. Hence, the protection levels for the confidentiality objective of data representing credit card information could be high, whereas the protection levels for the availability objective of such information may comparatively be less important.
- However, in a medical services scenario, such as health insurance companies, it is often required that data representing the health information of individuals must be highly confidential, must be protected from inappropriate or unauthorized modifications and must be available when required. Hence, the protection levels for all three security objectives (confidentiality, integrity and availability) of such data may have equal importance.
- We may envisage a third scenario where the data may be publicly known information, but is used for mission-critical and high-availability systems. In such scenarios, the protection for the confidentiality objective of such data need not be addressed at all, whereas the protection levels for the integrity and availability objectives of such data could be significant.
Security classification of data is widely practised in the real world, and is not merely a theoretical concept. For example:
- The Information Security Office, Stanford University, USA has authored Data Classification Guidelines for securing the data that is stored within its information systems [12].
- The Government of Alberta, Canada has developed an Information Security Classification Guideline to assist its ministries in establishing effective security classification practices for the information that is stored or maintained by them [13].
We now formalize the assignment of impact values to the security objectives of objects. Let SO = {confidentiality, integrity, availability} denote the set of security objectives. We admit the existence of a set of impact values IV = {Low, Moderate, High} that can be assigned to security objectives for objects. Specifically, every object o ∈ O is associated with the set of security objectives SO. We then assign an impact value iv ∈ IV to every object o ∈ O with respect to each security objective so ∈ SO. A function Impact : O × SO → IV represents the assignment of impact levels to objects for each security objective so ∈ SO.
5.1.2. Observations
We now make a few simple observations about the effect of actions on security objectives. These observations have long been known and are intuitive. In order to facilitate our understanding, it is necessary to precisely define the set of actions that we consider. Specifically, we consider the standard actions A = {read, write, delete}.
We assume that every action a ∈ A specifies a single operation and is atomic. In particular, action read allows only the viewing of existing data, action write allows only the creation of new data, and action delete allows only the destruction of existing data. It is possible to combine two or more atomic actions to create a composite action. For example, we can create a composite action modify that allows both the creation of new data and the destruction of existing data. However, in this paper, we only consider atomic actions. We then note the following:
- action read has an effect on the confidentiality security objective,
- action write has an effect on the integrity security objective,
- action delete has an effect on the availability security objective.
We define a function sec_obj : A → SO that specifies a relationship between each action a ∈ A and a security objective so ∈ SO.
The above assumptions can easily be extended to consider composite access rights that include two or more standard access rights. For example, we may regard update/modify as a composite access right that includes the two standard access rights {write, delete}. Such a composite access right may have an effect on two or more security objectives; in this case, we need to define the function sec_obj as A → 2^SO.
Note that read and write operations may also affect the availability criterion; this can happen as a consequence of a slow read attack, for example. However, such attacks, as well as denial of service (DoS) attacks, can only be analyzed by taking into consideration the fact that operations take time, leading to server overloads. Timing considerations open a completely new dimension that we could not include in this paper. Furthermore, the goal of DoS attacks is not to gain unauthorized access to systems or their data, but to restrict legitimate subjects from gaining access to system resources. The scope of this paper is to assess the risk of requests which are initiated by legitimate subjects within the organization to access resources for which they may not be pre-authorized. Hence, consideration of DoS attack parameters is outside the scope of this paper.
5.1.3. Determining impact level of permissions
We model permissions as a set P ⊆ O × A. A permission (o, a) ∈ P specifies that action
a ∈ A can be performed on an object o ∈ O. As noted in Section 5.1.2, there exists a one-
to-one mapping between the set of standard access rights {read, write, delete} and the
set of security objectives {confidentiality, integrity, availability}. Hence, we can
derive the impact level of a given permission p = (o, a) by using the functions sec obj : A → SO
and Impact : O × SO → IV.
Specifically, we first derive the security objective so ∈ SO that corresponds to access
right a by using function sec obj. Then, we use so = sec obj(a) as one of the input
parameters of function Impact for determining the level that object o has with respect to
security objective so.
In summary, the impact level of a permission p = (o, a) is given by Impact(o, sec obj(a)).
In the rest of the paper, we write pl(o, a) as a shorthand for Impact(o, sec obj(a)).
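This two-step lookup can be sketched as follows; the impact assignments for o2 mirror the running example (confidentiality = Low, integrity = Moderate, availability = High, consistent with the risk scores of Table 8), and all names are illustrative:

```python
SEC_OBJ = {"read": "confidentiality", "write": "integrity", "delete": "availability"}

# Impact : O x SO -> IV, shown here for a single object o2
# (levels follow the Table 7 classification used in the examples).
IMPACT = {
    ("o2", "confidentiality"): "Low",
    ("o2", "integrity"): "Moderate",
    ("o2", "availability"): "High",
}

def pl(obj, action):
    """Impact level of permission (obj, action): Impact(o, sec_obj(a))."""
    return IMPACT[(obj, SEC_OBJ[action])]

print(pl("o2", "read"))   # Low
print(pl("o2", "write"))  # Moderate
```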
Consider, for example, object o2, whose data classification is shown in Table 7. Sup-
pose that there exist two permissions p = (o2, read) and p′ = (o2, write). If we were to
determine the impact level of these two permissions, then, according to Table 7, the levels
of permissions p and p′ are pl(o2, read) = Impact(o2, sec obj(read)) = Impact(o2,
confidentiality) = Low and pl(o2, write) = Impact(o2, sec obj(write)) = Impact(o2,
integrity) = Moderate, respectively.
5.2. Object-based risk assessment
Table 8 shows the risk scores obtained by using object-based threat likelihood values and
impact values. Such threat values are obtained by using Formula 3. In this table, rows are
indexed with subject clearance levels and columns are indexed with objects and their
classification levels. Furthermore, for each of the objects {o2, o3, o4, o5} shown in Table 8,
the subjective impact levels of the confidentiality, integrity and availability
objectives are borrowed from the data classification profile shown in Table 7.
In order to be able to quantify risk scores, it is necessary to assign a value to each
subjective impact level in {Low, Moderate, High}. Generally, such an assignment of values
to subjective impact levels is at the discretion of organizations, and may vary from one
organization to another. However, for the purposes of demonstration (similar to the NIST
SP 800-30 standard), we chose to assign values to subjective impact levels in Table 8 as
follows: Low = 10, Moderate = 50, High = 100.
Recall that, in Table 7, no impact values have been defined for the security objectives
of object o1, since o1 is an unprotected object that has the security level Unclassified. Since
the computation of risk scores for unprotected objects is not necessary, we do not include
object o1 of Table 7 in Table 8.
Each entry of the table shows the risk score obtained by using Formula 2. We now
describe the computation of risk scores shown in Table 8. Consider, for example, a subject
s1 ∈ S where sl(s1) = Unclassified and object o2 in Table 8, where ol(o2) = Restricted.
Assume that subject s1 initiates a request to perform action read on object o2. Then,
the risk score of (s1, o2, read) is computed as follows:
Threat(s1, o2) = 0.38 (obtained from Table 3, which shows object-based threat measures),
Impact(o2, read) = Impact(o2, sec obj(read)) = Impact(o2, confidentiality) = 10,
Risk(s1, o2, read) = Threat(s1, o2) × Impact(o2, read) = 0.38 × 10 = 3.8.
Assume now that subject s1 initiates a request to perform action write on object o2.
Recall that action write has an effect on the integrity objective. Then, it can be seen from
Table 8 that Risk(s1, o2, write) is 19.
It can be seen from the above two scenarios that, since the same subject is accessing
the same object in both scenarios, the threat likelihood values remain the same. However,
executing action write on o2 (which has an effect on the integrity objective) has a higher
impact value than executing action read on o2 (which has an effect on the confidentiality
objective). Hence, Scenario 2 has a higher risk score than Scenario 1. We therefore say
that our risk scores capture both threat likelihood values and impact values.
Now let us consider, for example, subject s1 from the above two scenarios and object o3
in Table 8, where ol(o3) = Classified. Assume that subject s1 initiates a request to perform
action write on object o3. Then, we can see in Table 8 that Risk(s1, o3, write) is 29.
We can observe from Scenarios 2 and 3 that, although the impact values remain the
same in both scenarios, Scenario 3 has a higher risk score than Scenario 2. This is because
the threat likelihood value of subject s1 accessing object o3 is greater than that of s1
accessing o2. Thus, we can see that risk scores increase as object levels increase.
Now let us consider, for example, another subject s2, where sl(s2) = Restricted, and ob-
ject o3 in Table 8, where ol(o3) = Classified. Assume that subject s2 initiates a request to per-
form action write on object o3. Then, it can be seen from Table 8 that Risk(s2, o3, write)
is 27.
We can observe from the above two scenarios that, although the impact values remain
the same in both scenarios, (s1, o3, write) has a higher risk score than (s2, o3, write). This
is because the threat likelihood value of subject s1 accessing object o3 is greater than that of s2
accessing o3. Thus, we can see that, whenever object levels and impact values remain the
same, risk scores increase as subject levels decrease.
5.3. Applying other threat assessment approaches for risk assessment
In this section, we describe the computation of risk scores by using subject-based threat
assessment (see Table 9) and threat assessment based on the difference of security levels (see
Tables 10 and 11).
For the purposes of consistency with Table 8, the rows and columns of Tables 9, 10
and 11 are indexed with subject clearance levels and object classification levels, respec-
tively. Furthermore, similar to Table 8, for each of the objects {o2, o3, o4, o5} shown in
Tables 9, 10 and 11, the subjective impact levels of the confidentiality, integrity and
availability objectives are borrowed from the data classification profile shown in Table 7.
As in Table 8, we assigned values to subjective impact levels in Tables 9, 10 and 11 as
follows: Low = 10, Moderate = 50, High = 100. As before, we chose not to include object
o1 of Table 7 in Tables 9, 10 and 11, because no impact levels have been specified for the
security objectives of o1.
Table 9 shows risk scores where the threat likelihood values are subject-based; such threat
values are obtained by using Formula 4. Each entry of Table 9 shows the risk score obtained
by using Formula 2.
Table 10 shows risk scores where the threat likelihood values are based on the difference
between the security levels of object and subject, weighted by object level. Such threat
values are obtained by using Formula 5. Each entry of Table 10 shows the risk score obtained
by using Formula 2.
Table 11 shows risk scores where the threat likelihood values are based on the differ-
ence between the security levels of object and subject, weighted by subject level. Such
threat values are obtained by using Formula 6. Each entry of Table 11 shows the risk score
obtained by using Formula 2.
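To see how the choice of threat assessment approach changes the outcome, one can recompute a single cell under each table's threat likelihood. The sketch below uses the tabulated values for a Restricted subject and object o3 (Classified), action write (integrity, Moderate = 50); the dictionary layout is only illustrative:

```python
IMPACT_VALUE = {"Low": 10, "Moderate": 50, "High": 100}

# Threat(s, o) for sl(s) = Restricted and ol(o3) = Classified,
# under each assessment approach (values as tabulated).
THREATS = {
    "object-based": 0.54,                   # Table 8
    "subject-based": 0.71,                  # Table 9
    "difference, object-weighted": 0.29,    # Table 10
    "difference, subject-weighted": 0.33,   # Table 11
}

# write -> integrity; int(o3) = Moderate
risks = {name: t * IMPACT_VALUE["Moderate"] for name, t in THREATS.items()}
for name, r in risks.items():
    print(name, r)
```

The same access request thus scores 27, 35.5, 14.5 or 16.5 depending on the approach chosen, which is why the selection of a threat assessment function is an organizational decision.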
6. Proof of Correctness
This section shows that the formulae for threat assessment proposed in Section 4 satisfy
properties A and B described in Section 2.
Lemma 1. Formulae 3, 4, 5 and 6 satisfy properties A and B.
Proof. We have the following from Formula 3.

Threat(s, o) = [w · (ol(o) − 1) + (w − sl(s))] / (|L_S| · |L_O| − 1)  if sl(s) < ol(o), where w = |L|, and
Threat(s, o) = 0  otherwise.

Suppose that |L_S| and |L_O| are fixed and that w = |L|. Then, we have the following.
When ol(o) increases (or decreases), ol(o) − 1 also increases (or decreases
respectively). Consequently, for any given subject, Threat(s, o) increases (or de-
creases) as ol(o) increases (or decreases respectively). We deduce that Formula 3
satisfies property A.
When sl(s) increases (or decreases), w − sl(s) decreases (or increases respectively).
Consequently, for any given object, Threat(s, o) decreases (or increases) as sl(s)
increases (or decreases respectively). We deduce that Formula 3 satisfies property B.
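A threat computation consistent with the tabulated object-based values (e.g. 0.38 for an Unclassified subject and a Restricted object) can be sketched as follows; the numeric encoding of levels as 1 through 5 and the exact algebraic form are assumptions inferred from those values:

```python
LEVELS = {"Unclassified": 1, "Restricted": 2, "Classified": 3,
          "Secret": 4, "Top Secret": 5}
W = len(LEVELS)  # w = |L| = 5, with |L_S| = |L_O| = 5

def threat(sl, ol):
    """Object-based threat likelihood: 0 unless sl(s) < ol(o)."""
    if sl >= ol:
        return 0.0
    return (W * (ol - 1) + (W - sl)) / (W * W - 1)

# An Unclassified subject against a Restricted object:
print(round(threat(LEVELS["Unclassified"], LEVELS["Restricted"]), 2))  # 0.38
# A Restricted subject against a Classified object:
print(round(threat(LEVELS["Restricted"], LEVELS["Classified"]), 2))    # 0.54
```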
Table 8: Object-based risk scores. Rows: subject clearance levels; columns: objects o2–o5 with their classification levels and, for each object, the con/int/ava impact levels of Table 7 (o2: Low/Moderate/High = 10/50/100; o3: Moderate/Moderate/Moderate = 50/50/50; o4: High/Low/Low = 100/10/10; o5: High/Moderate/High = 100/50/100). Each cell shows threat × impact = risk for the con/int/ava objectives; cells with threat likelihood 0 have risk 0.

Unclassified: o2: 0.38×10=3.8, 0.38×50=19, 0.38×100=38; o3: 0.58×50=29, 29, 29; o4: 0.79×100=79, 0.79×10=7.9, 7.9; o5: 1.0×100=100, 1.0×50=50, 1.0×100=100
Restricted: o2: 0, 0, 0; o3: 0.54×50=27, 27, 27; o4: 0.75×100=75, 0.75×10=7.5, 7.5; o5: 0.96×100=96, 0.96×50=48, 0.96×100=96
Classified: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0.71×100=71, 0.71×10=7.1, 7.1; o5: 0.92×100=92, 0.92×50=46, 0.92×100=92
Secret: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0, 0, 0; o5: 0.88×100=88, 0.88×50=44, 0.88×100=88
Top Secret: all entries 0
Table 9: Subject-based risk scores. Rows: subject clearance levels; columns: objects o2–o5 with the con/int/ava impact levels of Table 7 (o2: 10/50/100; o3: 50/50/50; o4: 100/10/10; o5: 100/50/100). Each cell shows threat × impact = risk for the con/int/ava objectives; cells with threat likelihood 0 have risk 0.

Unclassified: o2: 0.88×10=8.8, 0.88×50=44, 0.88×100=88; o3: 0.92×50=46, 46, 46; o4: 0.96×100=96, 0.96×10=9.6, 9.6; o5: 1.0×100=100, 1.0×50=50, 1.0×100=100
Restricted: o2: 0, 0, 0; o3: 0.71×50=35.5, 35.5, 35.5; o4: 0.75×100=75, 0.75×10=7.5, 7.5; o5: 0.79×100=79, 0.79×50=39.5, 0.79×100=79
Classified: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0.54×100=54, 0.54×10=5.4, 5.4; o5: 0.58×100=58, 0.58×50=29, 0.58×100=58
Secret: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0, 0, 0; o5: 0.38×100=38, 0.38×50=19, 0.38×100=38
Top Secret: all entries 0
Table 10: Difference of object and subject security levels based risk scores, weighted by object levels. Rows: subject clearance levels; columns: objects o2–o5 with the con/int/ava impact levels of Table 7 (o2: 10/50/100; o3: 50/50/50; o4: 100/10/10; o5: 100/50/100). Each cell shows threat × impact = risk for the con/int/ava objectives; cells with threat likelihood 0 have risk 0.

Unclassified: o2: 0.25×10=2.5, 0.25×50=12.5, 0.25×100=25; o3: 0.50×50=25, 25, 25; o4: 0.75×100=75, 0.75×10=7.5, 7.5; o5: 1.0×100=100, 1.0×50=50, 1.0×100=100
Restricted: o2: 0, 0, 0; o3: 0.29×50=14.5, 14.5, 14.5; o4: 0.54×100=54, 0.54×10=5.4, 5.4; o5: 0.79×100=79, 0.79×50=39.5, 0.79×100=79
Classified: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0.33×100=33, 0.33×10=3.3, 3.3; o5: 0.58×100=58, 0.58×50=29, 0.58×100=58
Secret: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0, 0, 0; o5: 0.37×100=37, 0.37×50=18.5, 0.37×100=37
Top Secret: all entries 0
Table 11: Difference of object and subject security levels based risk scores, weighted by subject levels. Rows: subject clearance levels; columns: objects o2–o5 with the con/int/ava impact levels of Table 7 (o2: 10/50/100; o3: 50/50/50; o4: 100/10/10; o5: 100/50/100). Each cell shows threat × impact = risk for the con/int/ava objectives; cells with threat likelihood 0 have risk 0.

Unclassified: o2: 0.37×10=3.7, 0.37×50=18.5, 0.37×100=37; o3: 0.58×50=29, 29, 29; o4: 0.79×100=79, 0.79×10=7.9, 7.9; o5: 1.0×100=100, 1.0×50=50, 1.0×100=100
Restricted: o2: 0, 0, 0; o3: 0.33×50=16.5, 16.5, 16.5; o4: 0.54×100=54, 0.54×10=5.4, 5.4; o5: 0.75×100=75, 0.75×50=37.5, 0.75×100=75
Classified: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0.29×100=29, 0.29×10=2.9, 2.9; o5: 0.50×100=50, 0.50×50=25, 0.50×100=50
Secret: o2: 0, 0, 0; o3: 0, 0, 0; o4: 0, 0, 0; o5: 0.25×100=25, 0.25×50=12.5, 0.25×100=25
Top Secret: all entries 0