Untrustworthiness: A Trust-Based Security Metric
978-1-4244-4497-7/09/$25.00 ©2009 IEEE
at once. This kind of metric is what we call a trust-based metric, in the sense that it exposes and quantifies the trustworthiness relationship between an administrator and the system he manages.

In this work we argue that a highly useful trust-based metric can be based on the evaluation of how much active effort the administrator puts into his system to make it more secure. Note that, in the context of this paper, effort is used broadly, including not only actual effort (e.g. testing an application) but also effort put into becoming aware of the state of the system (e.g. becoming aware that a server loads insecure processes). This effort can be summarized as the level of trust (or rather distrust) that can be justifiably put in a given system as not being susceptible to attacks. As an instantiation of these ideas, we propose a trust-based metric called minimum untrustworthiness, which expresses the minimum level of distrust one should put in a given system or component to act according to its specification.

The outline of this paper is as follows. Section 2 further explores the idea of trust-based metrics and their applicability. Section 3 explains how this type of metric can be computed in practice, and Section 4 presents an initial attempt to apply it to benchmarking the untrustworthiness of database configurations. Finally, Section 5 concludes the paper.

2. Trust-based metrics

Usually, a system administrator has a certain level of confidence that his system will resist attacks. This confidence, or trust, is intuitively based on three main aspects: 1) how much effort he actively put into the system to prevent attacks (e.g. effort in terms of system configuration, training of users, testing of code, etc.); 2) how much certainty he has that he is not neglecting important aspects; and 3) how much certainty he has that complex interactions between distinct parts of the system will not give rise to vulnerabilities (even when those parts are considered secure independently).

Intuition, however, can be misleading. No matter how confident he is, a wise administrator should always ask himself what aspects he is not aware of or is forgetting. Such worries are related to two key aspects: the system's complexity and the administrator's security knowledge. The more complex the system, the higher the number of different elements that need security attention and the higher the number of ways these elements may interact and give rise to vulnerabilities. The security knowledge of the administrator affects the evaluation in a more obvious way, as he might not even know where to look or how to evaluate and mitigate security threats.

In this work we propose minimum untrustworthiness (MU) as a metric that systematizes the notion of how untrustworthy (at least) the administrator must consider the system regarding its ability to prevent the manifestation of relevant threats in the form of security attacks. This systematization provides the administrator with evidence of which aspects of the system might be more prone to the existence of security vulnerabilities and, therefore, require closer attention on his part.

To better understand the idea of untrustworthiness and how it maps to the security of a real system, let us consider two distinct applications designed to fulfill a particular task: one developed in an ad hoc manner, not following any kind of development methodology, and another developed using a carefully studied development process and extensively tested. A traditional security metric should allow comparing both applications and give solid evidence of which application is more secure, regardless of the method used to develop each one. However, in the absence of such a security metric (which is very difficult, or even impossible, to compute), we argue that a user should justifiably distrust the first one much more than the second one. This untrustworthiness is based on the idea that, although we cannot know how insecure each application really is, we do know that the first one could have, at least, followed a better development process, which is known to help avoid vulnerabilities. Thus, we can say that the untrustworthiness of the first application is much higher than that of the second one.

In summary, we define minimum untrustworthiness (MU) as a metric that expresses the minimum level of distrust one should put in a given system or component to act according to its specification. Notice that this metric represents a minimum threshold, not an absolute value. This happens because, for a given system, the real untrustworthiness can be higher than what is reported (e.g., if we consider unknown vulnerabilities and system interactions). What MU expresses is how much the system is evidently and justifiably untrustworthy, at least in the eyes of current security knowledge and the effort applied to it.

3. Computing minimum untrustworthiness

At a high level, minimum untrustworthiness can be computed as a direct comparison of the level of effort actively applied to the security of a system against a consensual maximum possible level of effort that could have been applied. Less evident effort indicates a more untrustworthy system. Again, neither the maximum nor the minimum level of effort directly implies security or insecurity, but both can be related to a level of justifiable trust associated with it.
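As a first approximation, this comparison can be sketched in a few lines of code. The checklist and assessments below are hypothetical stand-ins for a real, consensual list of best practices, not part of the proposal itself:

```python
# Minimal sketch of the high-level idea: MU as a comparison between the
# effort evidently applied and the consensual maximum effort (here, a
# checklist of best practices). All names below are illustrative.

consensual_best_practices = [
    "validate type and length of all input parameters",
    "run services under least-privilege accounts",
    "enable and protect audit logging",
    "encrypt client-server communication",
]

# Evidence gathered by the administrator: True only when the practice
# is clearly, actively applied.
evidently_applied = {
    "validate type and length of all input parameters": True,
    "run services under least-privilege accounts": False,
    "enable and protect audit logging": True,
    "encrypt client-server communication": False,
}

def minimum_untrustworthiness(practices, applied):
    """Fraction of the consensual effort that is not evidently applied."""
    not_applied = sum(1 for p in practices if not applied.get(p, False))
    return not_applied / len(practices)

print(minimum_untrustworthiness(consensual_best_practices, evidently_applied))
# 2 of the 4 practices lack evidence of being applied -> MU = 0.5
```

Note that a higher value only signals less justifiable trust, not that an attack is possible; the remainder of this section refines the computation.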
Computing the level of effort that has been applied to securing a system requires a description of the consensual maximum level of effort possible for the considered domain. We believe that this consensual maximum level of effort, in practice, may be defined as a comprehensive list of security best practices associated with the type of system under evaluation. A security best practice is defined as a precaution, which can be a policy, a procedure or merely a choice of a configuration option, that is consensually accepted as having the property of improving the security of a particular system or scenario. For example, "always check the type and length of an input parameter" is a valid best practice for the development of any application, which can prevent, depending on the circumstances, SQL injection and buffer overflow attacks.

Measuring the MU of a computer system implies building a list of best practices for the corresponding application domain. Nowadays, it is possible to find an enormous quantity of security recommendations for several domains in the form of books, web pages, white papers, etc. Clearly, this knowledge needs to be evaluated, summarized and systematized in order to be usable in an assessment. This process was accomplished in the areas of database configurations [1] and web server configurations [9], which proves its feasibility.

With the list of best practices defined, it is necessary to map each of them to a list of associated threats (i.e., threats that are mitigated by the practice), called threat vectors. For instance, two common threats associated with web-based applications are SQL injection vulnerabilities and cross-site scripting vulnerabilities [10]. Several programming best practices exist to help developers avoid each one of these (and several others). Each best practice would then be associated with one threat (or both), allowing the metric to express more specifically which threats are more likely to manifest (depending on the applied practices).

Mapping best practices to threats is what allows summarizing and systematizing the effort put into the system. Thus, in practice, minimum untrustworthiness translates into "how untrustworthy is a system when it comes to preventing the manifestation of each threat". The separation into threat vectors is needed for the administrator to focus on the threats that make sense in his environment, and a useful set of threat vectors must include a set of threats as orthogonal as possible that reflects the most important security concerns in the domain under evaluation. This idea will be further explained in Section 4, in which we present threat vectors for database configurations.

The list of best practices, the threat vectors and the mapping between both define the tool that supports the computation of the MU. The actual computation of the metric is a simple procedure: for each security best practice, the evaluator assesses whether it is being actively applied in the target system or not. This yields three distinct possibilities:
1) The best practice is, clearly, being applied;
2) The best practice is, clearly, not being applied;
3) The evaluator does not know or is unsure whether it is being applied. In this case, he should assume that it is not being applied.
The procedure results in a yes/no answer for each best practice in the list. For each threat vector, the percentage of best practices that map to it and that are not applied provides the actual MU value. As the best practices may be mapped onto one or more threats, the contribution of each practice to each vector may differ, providing a different MU value for each threat. The evaluator (e.g., the administrator) should use his perception of each threat and the different levels of untrustworthiness for each vector to decide which (not implemented) practices are more important to focus on. For instance, a high MU on an important threat would be a warning sign. Depending on the case, though, he might use his own judgment to choose between an important threat with a low MU value and a less important threat with a high MU.

In summary, the process of computing the minimum untrustworthiness provides the evaluator with a fair amount of structured information. First, he becomes aware of several security aspects related to his system. More specifically, he becomes conscious of all the best practices that he does not follow and, more importantly, of the recommendations related to aspects that he was not even aware of or had neglected to inspect until that moment. Second, he becomes aware of the most important threats for the type of system he is managing, and of how each recommendation affects each threat. Finally, the administrator obtains an index that translates the state of his system into the proneness of each threat becoming a real problem, providing a ranking that not only helps to back up decisions but is also suitable to incorporate the specific needs of the target environment.

4. Benchmarking untrustworthiness in DBMS configurations

In order to evaluate the usefulness of the proposal, we are developing a benchmarking procedure for the evaluation of the MU of DBMS configurations (the complete proposal can be found in [2]). We define a DBMS configuration as the set of all the elements and features that are influenced by the DBA's decisions. In this kind of environment, the DBA is the administrator and is the decision maker.
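In such an environment, the per-threat procedure described in Section 3 can be sketched as follows. The practices, threat-vector names, mappings and assessments are illustrative examples, not the actual benchmark checklist:

```python
# Illustrative sketch of the MU procedure from Section 3: each best
# practice maps to one or more threat vectors, and an "unknown"
# assessment is conservatively treated as "not applied". All practice
# names and mappings below are hypothetical.

practice_to_threats = {
    "restrict privileges granted to PUBLIC": ["excessive privileges"],
    "encrypt client-server communication": ["communication weakness"],
    "enforce a strong password policy": ["authentication weakness"],
    "disable remote OS authentication": ["authentication weakness",
                                         "excessive privileges"],
}

# Assessment per practice: "yes", "no", or "unknown".
assessment = {
    "restrict privileges granted to PUBLIC": "yes",
    "encrypt client-server communication": "no",
    "enforce a strong password policy": "unknown",  # counts as "no"
    "disable remote OS authentication": "yes",
}

def mu_per_threat(mapping, answers):
    """MU per threat vector: % of mapped practices not evidently applied."""
    totals, missing = {}, {}
    for practice, threats in mapping.items():
        applied = answers.get(practice) == "yes"  # "unknown" -> not applied
        for t in threats:
            totals[t] = totals.get(t, 0) + 1
            missing[t] = missing.get(t, 0) + (0 if applied else 1)
    return {t: 100.0 * missing[t] / totals[t] for t in totals}

# Rank threat vectors from most to least untrustworthy.
for threat, mu in sorted(mu_per_threat(practice_to_threats, assessment).items(),
                         key=lambda kv: -kv[1]):
    print(f"{threat}: MU = {mu:.0f}%")
```

The resulting ranking is exactly the kind of structured output the evaluator would weigh against the perceived importance of each threat.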
Based on multiple sources of information (e.g. [3]), we have collected a comprehensive list of 64 database configuration security best practices. Interested readers can find the complete list in [1].

We are currently working on the mapping of this list to relevant threat vectors. The following eight threats are being considered for this domain (based on an analysis of several sources [4][5][12] and discussion with industry practitioners): legitimate excessive privilege achievement (increases the probability of a user obtaining more privileges than he should in a legal manner); illegitimate privilege elevation (increases the probability of a user obtaining an arbitrary privilege which he should not have under any circumstances); denial of service (increases the probability of a user being denied timely access to some functionality or resource); communication weakness (increases the probability that a communication channel between a user and the server behaves in an unexpected way); authentication weakness (increases the probability that an individual becomes authenticated to the system as another individual); side-channel data exposure (a characteristic that increases the probability that data is accessed through an alternative, illegitimate access channel); audit trail weakness (a characteristic which may decrease the ability to identify unexpected behavior, its causes and possible suspects); and SQL injection enhancement (increases the probability that an SQL injection vulnerability becomes exposed or enhanced).

The benchmark also presumes that attackers are individuals that either use a DBMS userid, an operating system userid, or an application userid, or have no relation with the system. These represent four different environment containers with diverse sets of predefined privileges. Our expectation is that such an untrustworthiness benchmark will help DBAs understand the security characteristics of their environments and focus on the security issues that are more important for their needs.

5. Conclusion & future work

This paper presented the idea of trust-based security metrics, and how they may be as useful as traditionally defined security metrics for security evaluation. Based on this idea, we also presented a specific metric: minimum untrustworthiness, which expresses the minimum level of distrust an administrator should put in a given system as being able to prevent the manifestation of relevant threats. We explained the basic idea behind the metric and why it helps an administrator in securing a system.

In the course of building an untrustworthiness benchmark for DBMS configurations, we presented a selection of the eight most relevant DBMS configuration threats. The next step of the research is to create the mapping between the threats and a list of database security recommendations, in order to propose a full benchmarking process. Preliminary evaluations point to the fact that this kind of approach is particularly useful for helping DBAs with limited security knowledge who have to manage complex environments.

References

[1] Araújo Neto, A. and Vieira, M., "Towards Assessing the Security of DBMS Configurations", Intl. Conf. on Dependable Systems and Networks (DSN 2008), USA, 2008.
[2] Araújo Neto, A. and Vieira, M., "A Trust-Based Benchmark for DBMS Configurations", 15th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC'09), Shanghai, China, 2009.
[3] Center for Internet Security, CIS Benchmarks/Scoring Tools, 2008; http://www.cisecurity.org
[4] Common Criteria, "Commercial Database Management System Protection Profile (C.DBMS PP)", Issue 1, 1998.
[5] Common Criteria, "Database Management System Protection Profile (DBMS PP)", Issue 2.1, 2000.
[6] INFOSEC Research Council, Hard Problem List, November 2005; http://www.cyber.st.dhs.gov/docs/IRC_Hard_Problem_List.pdf (accessed March 2009).
[7] Jansen, W., "Directions in Security Metrics Research", NISTIR 7564, March 2009; http://csrc.nist.gov/publications/drafts/nistir-7564/Draft-NISTIR-7564.pdf (accessed March 2009).
[8] Jelen, G. and Williams, J., "A Practical Approach to Measuring Assurance", 14th Annual Computer Security Applications Conference, Phoenix, Dec. 1998.
[9] Mendes, N., Araújo Neto, A., Durães, J., Vieira, M. and Madeira, H., "Assessing and Comparing Security of Web Servers", 14th Pacific Rim International Symposium on Dependable Computing (PRDC'08), Taiwan, December 2008.
[10] Open Web Application Security Project (OWASP), OWASP Top 10, 2007.
[11] Payne, S. C., "A Guide to Security Metrics", SANS Institute Information Security Reading Room, June 2006.
[12] Shulman, A., "Top Ten Database Security Threats", white paper, 2009.
[13] Torgerson, M., "Security Metrics for Communication Systems", 12th ICCRTS, Newport, Rhode Island, 2007.