Unit 1-Introduction
Computer Security:
The protection afforded to an automated information system in order to attain the applicable
objectives of preserving the integrity, availability, and confidentiality of information system
resources (includes hardware, software, firmware, information/data, and
telecommunications).
This definition introduces three key objectives that are at the heart of computer security:
i. Confidentiality
Data confidentiality: Assures that private or confidential information is not made
available or disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to
them may be collected and stored and by whom and to whom that information may
be disclosed.
ii. Integrity
Data integrity: Assures that information and programs are changed only in a
specified and authorized manner.
System integrity: Assures that a system performs its intended function in an
unimpaired manner, free from deliberate or inadvertent unauthorized manipulation
of the system.
iii. Availability
Assures that systems work promptly and service is not denied to authorized users.
These three concepts form what is often referred to as the CIA triad.
CIA Triad
The three concepts embody the fundamental security objectives for both data and for
information and computing services.
• Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.
• Integrity: Guarding against improper information modification or destruction, including ensuring information nonrepudiation and authenticity. A loss of integrity is the unauthorized modification or destruction of information.
• Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to or use of information or an information system.
Although the use of the CIA triad to define security objectives is well established, some in the
security field feel that additional concepts are needed to present a complete picture. Two of
the most commonly mentioned are as follows:
• Authenticity: The property of being genuine and being able to be verified and trusted;
confidence in the validity of a transmission, a message, or message originator. This means
verifying that users are who they say they are and that each input arriving at the system
came from a trusted source.
• Accountability: The security goal that generates the requirement for actions of an entity
to be traced uniquely to that entity. This supports nonrepudiation, deterrence, fault isolation,
intrusion detection and prevention, and after-action recovery and legal action. Because truly
secure systems are not yet an achievable goal, we must be able to trace a security breach to
a responsible party. Systems must keep records of their activities to permit later forensic
analysis to trace security breaches or to aid in transaction disputes.
Attack: An assault on system security that derives from an intelligent threat; that is, an
intelligent act that is a deliberate attempt (especially in the sense of a method or technique)
to evade security services and violate the security policy of a system.
Risk: An expectation of loss expressed as the probability that a particular threat will exploit
a particular vulnerability with a particular harmful result.
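Because risk is defined as an expectation of loss, it can be illustrated with a simple calculation. The following is a minimal sketch assuming a probability-times-harm model; the function name and the figures are hypothetical, not part of any standard:

```python
# Hypothetical illustration: risk as expected loss, i.e., the probability
# that a threat exploits a vulnerability multiplied by the resulting harm.

def expected_loss(p_exploit: float, harm: float) -> float:
    """Expected loss = probability of a successful exploit x harm if it occurs."""
    if not 0.0 <= p_exploit <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return p_exploit * harm

# e.g., a 5% chance per year of a breach costing $200,000
print(expected_loss(0.05, 200_000))  # 10000.0
```

This kind of estimate lets an owner compare the residual risk of an asset against the cost of additional countermeasures.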
Security Policy: A set of rules and practices that specify or regulate how a system or
organization provides security services to protect sensitive and critical system resources.
System Resource/Asset
The assets of a computer system can be categorized as follows:
• Hardware: Including computer systems and other data processing, data storage, and data communications devices.
• Software: Including the operating system, system utilities, and applications.
• Data: Including files and databases, as well as security-related data, such as password files.
• Communication facilities and networks: Local and wide area network communication links, bridges, routers, and so on.
In the context of security, the main concern is with the vulnerabilities of system resources.
Following are the general categories of vulnerabilities of a computer system or network
asset:
• It can be corrupted, so that it does the wrong thing or gives wrong answers. For example,
stored data values may differ from what they should be because they have been improperly
modified.
• It can become leaky. For example, someone who should not have access to some or all of
the information available through the network obtains such access.
• It can become unavailable or very slow. That is, using the system or network becomes
impossible or impractical.
Corresponding to the various types of vulnerabilities to a system resource are threats that
are capable of exploiting those vulnerabilities. A threat represents a potential security harm
to an asset.
Attack
An attack is a threat that is carried out (threat action) and, if successful, leads to an
undesirable violation of security, or threat consequence.
The agent carrying out the attack is referred to as an attacker, or threat agent. Based on the nature of the attack, we can distinguish two types of attacks:
• Active attack: An attempt to alter system resources or affect their operation.
• Passive attack: An attempt to learn or make use of information from the system that does not affect system resources.
We can also classify attacks based on their origin:
• Inside attack: Initiated by an entity inside the security perimeter (an “insider”). The insider is authorized to access system resources but uses them in a way not approved by those who granted the authorization.
• Outside attack: Initiated from outside the security perimeter, by an unauthorized or illegitimate user of the system (an “outsider”).
Countermeasures
A countermeasure is any means taken to deal with a security attack. Ideally, a
countermeasure can be devised to prevent a particular type of attack from succeeding. When
prevention is not possible, or fails in some instance, the goal is to detect the attack and then
recover from the effects of the attack. A countermeasure may itself introduce new
vulnerabilities. In any case, residual vulnerabilities may remain after the imposition of
countermeasures. Such vulnerabilities may be exploited by threat agents representing a
residual level of risk to the assets. Owners will seek to minimize that risk given other
constraints.
i. Unauthorized disclosure
A circumstance or event whereby an entity gains access to data for which the entity is not
authorized. Unauthorized disclosure is a threat to confidentiality. The following types of
attacks can result in this threat consequence:
Exposure: Sensitive data are directly released to an unauthorized entity. It can be deliberate,
as when an insider intentionally releases sensitive information, such as credit card numbers,
to an outsider.
Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data
(but not necessarily the data contained in the communication) by reasoning from
characteristics or by-products of communications. An example of inference is known as
traffic analysis, in which an adversary is able to gain information from observing the pattern
of traffic on a network, such as the amount of traffic between particular pairs of hosts on the
network
ii. Deception
A circumstance or event that may result in an authorized entity receiving false data and
believing it to be true. Deception is a threat to either system integrity or data integrity. The
following types of attacks can result in this threat consequence:
Falsification: False data deceive an authorized entity. For example, a student may alter his
or her grades on a school database.
iii. Disruption
A circumstance or event that interrupts or prevents the correct operation of system services
and functions. Disruption is a threat to availability or system integrity. The following types
of attacks can result in this threat consequence:
Obstruction: A threat action that interrupts delivery of system services by hindering system
operation.
iv. Usurpation
A circumstance or event that results in control of system services or functions by an unauthorized entity. Usurpation is a threat to system integrity.
Threats and Assets
Hardware
A major threat to computer system hardware is the threat to availability. Hardware is the
most vulnerable to attack and the least susceptible to automated controls. Threats include
accidental and deliberate damage to equipment as well as theft. The proliferation of personal
computers and workstations and the widespread use of LANs increase the potential for
losses in this area. Theft of CD-ROMs and DVDs can lead to loss of confidentiality. Physical
and administrative security measures are needed to deal with these threats.
Software
Software includes the operating system, utilities, and application programs. A key threat to
software is an attack on availability. Software, especially application software, is often easy
to delete. Software can also be altered or damaged to render it useless. Careful software
configuration management, which includes making backups of the most recent version of
software, can maintain high availability. A more difficult problem to deal with is software
modification that results in a program that still functions but that behaves differently than
before, which is a threat to integrity/authenticity. Computer viruses and related attacks fall
into this category.
Data
Hardware and software security are typically concerns of computing center professionals or
individual concerns of personal computer users. A much more widespread problem is data
security, which involves files and other forms of data controlled by individuals, groups, and
business organizations.
Security concerns with respect to data are broad, encompassing availability, secrecy, and
integrity. In the case of availability, the concern is with the destruction of data files, which
can occur either accidentally or maliciously.
The obvious concern with secrecy is the unauthorized reading of data files or databases, and
this area has been the subject of perhaps more research and effort than any other area of
computer security. A less obvious threat to secrecy involves the analysis of data and
manifests itself in the use of so-called statistical databases, which provide summary or
aggregate information.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the attacker is to obtain information that is being transmitted. Two types of passive attacks are the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic mail message, or a transferred file may contain sensitive or confidential information, and we would like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler. Suppose that we had a way of
masking the contents of messages or other information traffic so that opponents, even if they
captured the message, could not extract the information from the message. The common
technique for masking contents is encryption. If we had encryption protection in place, an
opponent might still be able to observe the pattern of these messages. The opponent could
determine the location and identity of communicating hosts and could observe the frequency
and length of messages being exchanged. This information might be useful in guessing the
nature of the communication that was taking place.
Passive attacks are very difficult to detect because they do not involve any alteration of the
data. Typically, the message traffic is sent and received in an apparently normal fashion and
neither the sender nor receiver is aware that a third party has read the messages or observed
the traffic pattern. However, it is feasible to prevent the success of these attacks, usually by
means of encryption. Thus, the emphasis in dealing with passive attacks is on prevention
rather than detection.
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and can be subdivided into four categories: replay, masquerade, modification of messages, and denial of service.
Replay involves the passive capture of a data unit and its subsequent retransmission to
produce an unauthorized effect.
A masquerade takes place when one entity pretends to be a different entity. A masquerade
attack usually includes one of the other forms of active attack. For example, authentication
sequences can be captured and replayed after a valid authentication sequence has taken
place, thus enabling an authorized entity with few privileges to obtain extra privileges by
impersonating an entity that has those privileges.
Modification of messages simply means that some portion of a legitimate message is altered, or that messages are delayed or reordered, to produce an unauthorized effect.
The denial of service prevents or inhibits the normal use or management of communication facilities. This attack may have a specific target; for example, an entity may suppress all messages directed to a particular destination (e.g., the security audit service). Another form of service denial is the disruption of an entire network, either by disabling the network or by overloading it with messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive
attacks are difficult to detect, measures are available to prevent their success. On the other
hand, it is quite difficult to prevent active attacks absolutely, because to do so would require
physical protection of all communication facilities and paths at all times. Instead, the goal is
to detect them and to recover from any disruption or delays caused by them. Because the
detection has a deterrent effect, it may also contribute to prevention.
Access Control: Limit information system access to authorized users, processes acting on
behalf of authorized users, or devices (including other information systems) and to the types
of transactions and functions that authorized users are permitted to exercise.
Awareness and Training: (i) Ensure that managers and users of organizational information
systems are made aware of the security risks associated with their activities and of the
applicable laws, regulation, and policies related to the security of organizational information
systems; and (ii) ensure that personnel are adequately trained to carry out their assigned
information security-related duties and responsibilities.
Audit and Accountability: (i) Create, protect, and retain information system audit records to
the extent needed to enable the monitoring, analysis, investigation, and reporting of
unlawful, unauthorized, or inappropriate information system activity; and (ii) ensure that
the actions of individual information system users can be uniquely traced to those users so
they can be held accountable for their actions.
Certification, Accreditation, and Security Assessments: (i) Periodically assess the security
controls in organizational information systems to determine if the controls are effective in
their application; (ii) develop and implement plans of action designed to correct deficiencies
and reduce or eliminate vulnerabilities in organizational information systems; (iii) authorize
the operation of organizational information systems and any associated information system
connections; and (iv) monitor information system security controls on an ongoing basis to
ensure the continued effectiveness of the controls.
Contingency Planning: Establish, maintain, and implement plans for emergency response,
backup operations, and post disaster recovery for organizational information systems to
ensure the availability of critical information resources and continuity of operations in
emergency situations.
Media Protection: (i) Protect information system media, both paper and digital; (ii) limit
access to information on information system media to authorized users; and (iii) sanitize or
destroy information system media before disposal or release for reuse.
Physical and Environmental Protection: (i) Limit physical access to information systems,
equipment, and the respective operating environments to authorized individuals; (ii)
protect the physical plant and support infrastructure for information systems; (iii) provide
supporting utilities for information systems; (iv) protect information systems against
environmental hazards; and (v) provide appropriate environmental controls in facilities
containing information systems.
Planning: Develop, document, periodically update, and implement security plans for
organizational information systems that describe the security controls in place or planned
for the information systems and the rules of behavior for individuals accessing the
information systems.
Personnel Security: (i) Ensure that individuals occupying positions of responsibility within
organizations (including third-party service providers) are trustworthy and meet
established security criteria for those positions; (ii) ensure that organizational information
and information systems are protected during and after personnel actions such as
terminations and transfers; and (iii) employ formal sanctions for personnel failing to comply
with organizational security policies and procedures.
Systems and Services Acquisition: (i) Allocate sufficient resources to adequately protect
organizational information systems; (ii) employ system development life cycle processes
that incorporate information security considerations; (iii) employ software usage and
installation restrictions; and (iv) ensure that third party providers employ adequate security
measures to protect information, applications, and/or services outsourced from the
organization.
System and Communications Protection: (i) Monitor, control, and protect organizational
communications (i.e., information transmitted or received by organizational information
systems) at the external boundaries and key internal boundaries of the information systems;
and (ii) employ architectural designs, software development techniques, and systems
engineering principles that promote effective information security within organizational
information systems.
System and Information Integrity: (i) Identify, report, and correct information and information system flaws in a timely manner; (ii) provide protection from malicious code at appropriate locations within organizational information systems; and (iii) monitor information system security alerts and advisories and take appropriate actions in response.
Fundamental Security Design Principles
Economy of mechanism: the design of security measures embodied in both hardware and software should be as simple and small as possible.
Fail-safe defaults: access decisions should be based on permission rather than exclusion.
That is, the default situation is lack of access, and the protection scheme identifies conditions
under which access is permitted.
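The fail-safe default principle can be sketched in a few lines of code: anything not explicitly permitted is denied, including unknown subjects, objects, and modes. The permission table below is hypothetical:

```python
# Fail-safe defaults sketch: the default situation is lack of access;
# only conditions listed in the (hypothetical) permission table grant it.

PERMITTED = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(subject: str, obj: str, mode: str) -> bool:
    # Any lookup that misses the table falls back to the empty set,
    # so the answer for anything unlisted is denial.
    return mode in PERMITTED.get((subject, obj), set())

print(is_allowed("bob", "payroll.db", "write"))   # True: explicitly permitted
print(is_allowed("carol", "payroll.db", "read"))  # False: default deny
```

The design choice is that an omission in the table fails closed: forgetting to list a subject denies access rather than granting it.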
Complete mediation: every access must be checked against the access control mechanism.
Systems should not rely on access decisions retrieved from a cache.
Open design: the design of a security mechanism should be open rather than secret. For
example, although encryption keys must be secret, encryption algorithms should be open to
public scrutiny.
Least privilege: every process and every user of the system should operate using the least set
of privileges necessary to perform the task. For example, system programs or administrators
who have special privileges should have those privileges only when necessary; when they
are doing ordinary activities the privileges should be withdrawn.
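The least privilege principle can be sketched as holding a special privilege only for the duration of the task that needs it. This is a minimal illustration assuming a process-local privilege set; all names are hypothetical:

```python
# Least privilege sketch: a privilege is acquired just before the task
# that requires it and withdrawn as soon as the task completes.

from contextlib import contextmanager

privileges = set()   # privileges currently held by the (hypothetical) process

@contextmanager
def with_privilege(priv: str):
    privileges.add(priv)
    try:
        yield
    finally:
        privileges.discard(priv)   # withdrawn even if the task raises

with with_privilege("admin"):
    print("admin" in privileges)   # True: held only while needed
print("admin" in privileges)       # False: withdrawn afterwards
```

Real systems implement the same idea with mechanisms such as temporary role elevation, but the shape is the same: the privileged window is as short as possible.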
Least common mechanism: the design should minimize the functions shared by different
users, providing mutual security.
Psychological acceptability: the security mechanisms should not interfere unduly with the
work of users, while at the same time meeting the needs of those who authorize access. If
security mechanisms hinder the usability or accessibility of resources, users may opt to turn
off those mechanisms.
Isolation: It is a principle that applies in three contexts. First, public access systems should
be isolated from critical resources (data, processes, etc.) to prevent disclosure or tampering.
Second, the processes and files of individual users should be isolated from one another
except where it is explicitly desired. And finally, security mechanisms should be isolated in
the sense of preventing access to those mechanisms.
Layering: it is the use of multiple, overlapping protection approaches addressing the people,
technology, and operational aspects of information systems. By using multiple, overlapping
protection approaches, the failure or circumvention of any individual protection approach
will not leave the system unprotected.
Least astonishment: It means that a program or user interface should always respond in the
way that is least likely to astonish the user. For example, the mechanism for authorization
should be transparent enough to a user that the user has a good intuitive understanding of
how the security goals map to the provided security mechanism.
Attack Surfaces
An attack surface consists of the reachable and exploitable vulnerabilities in a system.
Examples of attack surfaces are the following:
• Open ports on outward facing Web and other servers, and code listening on those ports
• Code that processes incoming data, email, XML, office documents, and industry specific
custom data exchange formats
Attack surfaces can be grouped into three categories:
• Network attack surface: This category refers to vulnerabilities over an enterprise network, wide-area network, or the Internet. Included in this category are network protocol vulnerabilities, such as those used for a denial-of-service attack, disruption of communications links, and various forms of intruder attacks.
• Software attack surface: This refers to vulnerabilities in application, utility, or operating system code. A particular focus in this category is Web server software.
• Human attack surface: This category refers to vulnerabilities created by personnel or outsiders, such as social engineering, human error, and trusted insiders.
An attack surface analysis is a useful technique for assessing the scale and severity of threats
to a system. A systematic analysis of points of vulnerability makes developers and security
analysts aware of where security mechanisms are required. Once an attack surface is
defined, designers may be able to find ways to make the surface smaller, thus making the
task of the adversary more difficult. The attack surface also provides guidance on setting
priorities for testing, strengthening security measures, or modifying the service or
application.
Attack Trees
An attack tree is a branching, hierarchical data structure that represents a set of potential
techniques for exploiting security vulnerabilities. The security incident that is the goal of the
attack is represented as the root node of the tree, and the ways that an attacker could reach
that goal are iteratively and incrementally represented as branches and subnodes of the tree.
Each subnode defines a subgoal, and each subgoal may have its own set of further subgoals,
etc. The final nodes on the paths outward from the root, i.e., the leaf nodes, represent
different ways to initiate an attack. Each node other than a leaf is either an AND-node or an
OR-node. To achieve the goal represented by an AND-node, the subgoals represented by all
of that node’s subnodes must be achieved; and for an OR-node, at least one of the subgoals
must be achieved. Branches can be labeled with values representing difficulty, cost, or other
attack attributes, so that alternative attacks can be compared.
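The AND/OR evaluation rule described above can be sketched directly in code. The node structure and the example goals below are hypothetical:

```python
# Attack-tree evaluation sketch: an AND-node is achievable only if all of
# its subgoals are; an OR-node if at least one is. Leaves carry a
# (hypothetical) "feasible" flag indicating the attacker can perform them.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "LEAF"          # "AND", "OR", or "LEAF"
    feasible: bool = False      # only meaningful for leaves
    children: list = field(default_factory=list)

def achievable(node: Node) -> bool:
    if node.kind == "LEAF":
        return node.feasible
    results = [achievable(c) for c in node.children]
    return all(results) if node.kind == "AND" else any(results)

# Root goal: compromise an account (OR of two strategies); one strategy
# requires both stealing a token AND guessing its PIN.
root = Node("compromise account", "OR", children=[
    Node("token + PIN", "AND", children=[
        Node("steal token", feasible=True),
        Node("guess PIN", feasible=False),
    ]),
    Node("phish credentials", feasible=True),
])
print(achievable(root))  # True: the phishing leaf satisfies the OR root
```

Branch labels such as cost or difficulty could be propagated the same way (e.g., summing over AND-children and taking the minimum over OR-children) to compare alternative attacks.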
The motivation for the use of attack trees is to effectively exploit the information available
on attack patterns. Organizations such as CERT publish security advisories that have enabled
the development of a body of knowledge about both general attack strategies and specific
attack patterns. Security analysts can use the attack tree to document security attacks in a
structured form that reveals key vulnerabilities. The attack tree can guide both the design of
systems and applications, and the choice and strength of countermeasures.
As an example, consider an attack tree for attacks on an Internet banking authentication application. The analysis considers three components of the system:
• User terminal and user (UT/U): These attacks target the user equipment, including the tokens that may be involved, such as smartcards or other password generators, as well as the actions of the user.
• Communications channel (CC): This type of attack focuses on communication links.
• Internet banking server (IBS): These types of attacks are offline attacks against the servers that host the Internet banking application.
Five overall attack strategies can be identified, each of which exploits one or more of the
three components. The five strategies are as follows:
• User credential compromise: This strategy can be used against many elements of the
attack surface. There are procedural attacks, such as monitoring a user’s action to observe a
PIN or other credential, or theft of the user’s token or handwritten notes. An adversary may
also compromise token information using a variety of token attack tools, such as hacking the
smartcard or using a brute force approach to guess the PIN. Another possible strategy is to
embed malicious software to compromise the user’s login and password. An adversary may
also attempt to obtain credential information via the communication channel (sniffing).
• Injection of commands: In this type of attack, the attacker is able to intercept communication between the UT and the IBS, impersonate the valid user, and so gain access to the banking system.
• User credential guessing: It is reported in [HILT06] that brute-force attacks against some banking authentication schemes are feasible by sending random usernames and passwords. The attack mechanism is based on distributed zombie personal computers, hosting automated programs for username- or password-based calculation.
• Security policy violation: For example, violating the bank’s security policy in combination
with weak access control and logging mechanisms, an employee may cause an internal
security incident and expose a customer’s account.
• Use of known authenticated session: This type of attack persuades or forces the user to
connect to the IBS with a preset session ID. Once the user authenticates to the server, the
attacker may utilize the known session ID to send packets to the IBS, spoofing the user’s
identity.
Security Policy
The first step in devising security services and mechanisms is to develop a security policy. In developing a security policy, a security manager needs to consider the following factors: the value of the assets being protected, the vulnerabilities of the system, and potential threats and the likelihood of attacks.
Security Implementation
Security implementation involves four complementary courses of action:
• Prevention: An ideal security scheme is one in which no attack is successful. Although this
is not practical in all cases, there is a wide range of threats in which prevention is a
reasonable goal. For example, consider the transmission of encrypted data. If a secure
encryption algorithm is used, and if measures are in place to prevent unauthorized access to
encryption keys, then attacks on confidentiality of the transmitted data will be prevented.
• Detection: In a number of cases, absolute protection is not feasible, but it is practical to detect security attacks. For example, intrusion detection systems are designed to detect the presence of unauthorized individuals logged onto a system.
• Response: If security mechanisms detect an ongoing attack, such as a denial-of-service attack, the system may be able to respond in such a way as to halt the attack and prevent further damage.
• Recovery: An example of recovery is the use of backup systems, so that if data integrity is compromised, a prior, correct copy of the data can be reloaded.
Assurance is the degree of confidence one has that the security measures, both technical
and operational, work as intended to protect the system and the information it processes.
This encompasses both system design and system implementation. Thus, assurance deals
with the questions, “Does the security system design meet its requirements?” and “Does the
security system implementation meet its specifications?”
Evaluation is the process of examining a computer product or system with respect to certain
criteria. Evaluation involves testing and may also involve formal analytic or mathematical
techniques.
Access Control
Access control: limiting who can access what in what ways
Access control is a system to protect general objects, such as files, tables, access to hardware devices or network connections, and other resources. In general, it is a flexible structure, which describes how certain users can use a resource in one way (for example, read-only), others in a different way (for example, allowing modification), and still others not at all. These techniques should be robust, easy to use, and efficient.
• Subjects are human users, often represented by surrogate programs running on behalf of
the users.
• Objects are things on which an action can be performed: Files, tables, programs, memory
objects, hardware devices, strings, data fields, network connections, and processors are
examples of objects. So too are users, or rather programs or processes representing users,
because the operating system (a program representing the system administrator) can act on
a user, for example, allowing a user to execute a program, halting a user, or assigning
privileges to a user.
• Access modes are any controllable actions of subjects on objects, including, but not limited
to, read, write, modify, delete, execute, create, destroy, copy, export, import, and so forth.
Effective separation will keep unauthorized subjects from unauthorized access to objects,
but the separation gap must be crossed for authorized subjects and modes.
Reliable and effective access control systems should be deployed with adaptability in mind,
making use of intelligently designed methodologies intended to grant varying degrees of
access to relevant employees, residents and visitors based upon a predetermined and
informed ruleset.
This is the concept behind access control models: systems that allow administrators to better manage user permissions and grant property access based on measurable criteria such as time, company role, and security clearance.
Two fundamental constructs in the field of authorization are access control lists, or ACLs,
and capabilities, or C-lists. Both ACLs and C-lists are derived from Lampson's access control
matrix, which has a row for every subject and a column for every object. The access allowed
by subject S to object O is stored at the intersection of the row indexed by S and the column
indexed by O.
An example of an access control matrix is given below, where we use UNIX-style notation, that is, x, r, and w stand for execute, read, and write privileges, respectively. The objects are the operating system (OS), the accounting program, the accounting data, the insurance data, and the payroll data:

                     OS   Accounting  Accounting  Insurance  Payroll
                          program     data        data       data
ALICE                rx   rx          r           rw         rw
ACCOUNTING PROGRAM   rx   rx          rw          rw         r
Notice that the accounting program is treated as both an object and a subject. This is a useful
fiction, since we can enforce the restriction that the accounting data is only modified by the
accounting program. The intent here is to make corruption of the accounting data more
difficult, since any changes to the accounting data must be done by software that,
presumably, includes standard accounting checks and balances.
However, this does not prevent all possible attacks, since the system administrator, Sam,
could replace the accounting program with a faulty (or fraudulent) version and thereby
break the protection. But this trick does allow Alice and Bob to access the accounting data
without allowing them to corrupt it—either intentionally or unintentionally.
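The matrix can be sketched as a simple lookup table, which also shows how ACLs and C-lists fall out of it: an ACL is a column of the matrix and a C-list is a row. Only Alice's row and the accounting program's entry for the accounting data are taken from the example above; the lowercase names are illustrative:

```python
# Access control matrix sketch, keyed by (subject, object) with a string
# of UNIX-style rights (r, w, x) at each intersection.

MATRIX = {
    ("alice", "os"): "rx",
    ("alice", "accounting program"): "rx",
    ("alice", "accounting data"): "r",
    ("alice", "insurance data"): "rw",
    ("alice", "payroll data"): "rw",
    ("accounting program", "accounting data"): "rw",
}

def acl(obj: str) -> dict:
    """A column of the matrix: the per-object list of (subject, rights)."""
    return {s: r for (s, o), r in MATRIX.items() if o == obj}

def c_list(subject: str) -> dict:
    """A row of the matrix: the per-subject list of (object, rights)."""
    return {o: r for (s, o), r in MATRIX.items() if s == subject}

def allowed(subject: str, obj: str, mode: str) -> bool:
    # Missing entries default to no rights (fail-safe default).
    return mode in MATRIX.get((subject, obj), "")

print(allowed("alice", "accounting data", "w"))               # False: read only
print(allowed("accounting program", "accounting data", "w"))  # True
print(acl("accounting data"))  # {'alice': 'r', 'accounting program': 'rw'}
```

This makes the restriction from the text concrete: Alice can read the accounting data but cannot write it directly; only the accounting program holds write access.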
Alongside systems used to secure building entry points and physical spaces, access control
can also be deployed as a digital security measure. In these configurations, access control
methods can be used to:
• Grant access to digital files, hardware, and software required for specific organizational roles
• Grant and monitor user permissions to ensure that staff are able to work efficiently
Most access control methods can be categorized using one (or more) of five common access control models.
These access control models describe the way in which an installed security system is
instructed to operate, including the parameters that must be met to grant resource access,
the way that unique user permissions are understood and the ruleset used to inform wider
security policies.
If an individual fails to meet the predefined access criteria, they will be locked out of the access control network regardless of their level of security clearance.
This process makes the flow of data much easier and more user-friendly than in other systems. But, in exchange for that flexibility, the system is also less secure, because the flow of information cannot be administered as tightly.
This loosely controlled flow of information makes the system unfit for organizations that require high-level security for their data, such as those in medicine, finance, the military, and government.
As in other systems, the decision of whether to give an employee access to data rests with the administration. An important point worth repeating is that once access has been granted to a user, that user can in turn give access to someone else; however, they cannot grant access to any data to which they themselves do not have access.
Under DAC, the levels of access that can be granted to a subject include:
• Granting the same privileges that the subject has to other subjects or objects.
• Being able to alter security attributes on objects, subjects, system components, and information subjects.
• Having the ability to pass the information on to other subjects and objects.
• Being able to select the security attributes that are to be associated with new or revised objects.
• Having the power to change the rules that manage access control.
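The key discretionary rule, that a subject can pass on only rights it already holds, can be sketched as follows; all subjects, objects, and modes below are hypothetical:

```python
# DAC delegation sketch: a grant succeeds only if the granter already
# holds every right being passed on.

rights = {("manager", "report.doc"): {"read", "write", "share"}}

def grant(granter: str, grantee: str, obj: str, modes: set) -> None:
    held = rights.get((granter, obj), set())
    if not modes <= held:   # subset check: can only delegate held rights
        raise PermissionError(f"{granter} cannot grant rights it lacks")
    rights.setdefault((grantee, obj), set()).update(modes)

grant("manager", "junior", "report.doc", {"read"})  # OK: manager holds "read"
print(rights[("junior", "report.doc")])             # {'read'}
try:
    grant("junior", "intern", "report.doc", {"write"})  # junior lacks "write"
except PermissionError as e:
    print(e)
```

The subset check is what keeps delegation from amplifying privileges: rights can flow outward from an owner but never grow along the way.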
As a discretionary access control example, consider a company with different levels of employees: you may grant all the above levels of access to your highest-ranking manager. Conversely, for a more junior employee who works in a role such as communications, it may be important for them to have the third level of access (the ability to pass the information on to other subjects and objects) but not the others.
Thus, their authentication key would allow them to access the resource and share
information but not do other things like altering security attributes. The manager, however,
with a different authentication key, would be able to do that.
Therefore, with DAC, it is up to the resource owner to decide which access levels are required
by different users - they can alter this at their discretion, hence the term "discretionary
access control".
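The discretionary rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a real DAC implementation; the class, user names, and permission strings ("read", "write", "share", "alter_attributes") are all invented for the example.

```python
# Minimal DAC sketch: the resource owner holds all permissions, and any
# user may pass on only permissions they themselves hold.

class Resource:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        # Access control list: user -> set of granted permissions.
        self.acl = {owner: {"read", "write", "share", "alter_attributes"}}

    def grant(self, granting_user, target_user, permission):
        # Discretionary rule: you cannot grant what you do not have.
        if permission not in self.acl.get(granting_user, set()):
            raise PermissionError(f"{granting_user} cannot grant {permission}")
        self.acl.setdefault(target_user, set()).add(permission)

    def allowed(self, user, permission):
        return permission in self.acl.get(user, set())

doc = Resource("media_strategy.docx", owner="manager")
doc.grant("manager", "junior_comms", "share")   # owner delegates sharing
print(doc.allowed("junior_comms", "share"))     # True
print(doc.allowed("junior_comms", "write"))     # False
```

Here the junior employee receives only the "share" permission, mirroring the example above: they can pass information on but cannot alter security attributes.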
When using RBAC for Role Management, you analyze the needs of your users and group them
into roles based on common responsibilities. You then assign one or more roles to each user
and one or more permissions to each role. The user-role and role-permissions relationships
make it simple to perform user assignments since users no longer need to be managed
individually, but instead have privileges that conform to the permissions assigned to their
role(s).
For example, if you were using RBAC to control access for an HR application, you could give
HR managers a role that allows them to update employee details, while other employees
would be able to view only their own details.
When planning your access control strategy, it's best practice to assign users the fewest
number of permissions that allow them to get their work done.
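The user-role and role-permission relationships can be sketched as two small mappings. The role names and permission strings below are assumptions made up for the HR example, not part of any standard.

```python
# RBAC sketch: permissions attach to roles, and users acquire the union
# of the permissions of all roles assigned to them.

ROLE_PERMISSIONS = {
    "hr_manager": {"view_own_details", "view_all_details", "update_employee_details"},
    "employee": {"view_own_details"},
}

USER_ROLES = {
    "alice": {"hr_manager"},
    "bob": {"employee"},
}

def has_permission(user, permission):
    # Check every role the user holds; users are never granted
    # permissions directly.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "update_employee_details"))  # True
print(has_permission("bob", "update_employee_details"))    # False
```

Note that changing what a role may do requires editing only `ROLE_PERMISSIONS`, never the individual users, which is the administrative saving RBAC provides.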
Benefits of RBAC
Managing and auditing network access is essential to information security. Access can and
should be granted on a need-to-know basis. With hundreds or thousands of employees,
security is more easily maintained by limiting unnecessary access to sensitive information
based on each user’s established role within the organization. Other advantages include:
1. Reducing administrative work and IT support. With RBAC, you can reduce the need for
paperwork and password changes when an employee is hired or changes their role.
Instead, you can use RBAC to add and switch roles quickly and implement them globally
across operating systems, platforms and applications. It also reduces the potential for
error when assigning user permissions.
2. Improving compliance. All organizations are subject to federal, state and local
regulations. With an RBAC system in place, companies can more easily meet statutory
and regulatory requirements for privacy and confidentiality as IT departments and
executives have the ability to manage how data is being accessed and used. This is
especially significant for health care and financial institutions, which manage lots of
sensitive data such as PHI and PCI data.
Current Status: Create a list of every software, hardware and app that has some sort of
security. For most of these things, it will be a password. However, you may also want to
list server rooms that are under lock and key. Physical security can be a vital part of data
protection. Also, list the status of who has access to all of these programs and areas.
Current Roles: Even if you do not have a formal roster and list of roles, determining what
each individual team member does may only take a little discussion. Try to organize the
team in such a way that it doesn’t stifle creativity and the current culture (if enjoyed).
Write a Policy: Any changes made need to be written for all current and future employees
to see. Even with the use of a RBAC tool, a document clearly articulating your new system
will help avoid potential issues.
Make Changes: Once the current security status and roles are understood (not to mention
a policy is written), it’s time to make the changes.
Continually Adapt: It’s likely that the first iteration of RBAC will require some tweaking.
Early on, you should evaluate your roles and security status frequently. Assess, first,
how well the creative/production process is working and, second, how secure your process
actually is.
A core business function of any organization is protecting data. An RBAC system can ensure
the company's information meets privacy and confidentiality regulations.
Subject
The subject is the user requesting access to a resource to perform an action. Subject
attributes in a user profile include ID, job roles, group memberships, departmental
and organizational memberships, management level, security clearance, and other
identifying criteria. ABAC systems often obtain this data from an HR system or
directory, or otherwise collect this information from authentication tokens used
during login.
Resource
The resource is the asset or object (such as a file, application, server, or even API) that
the subject wants to access. Resource attributes are all identifying characteristics, like
a file’s creation date, its owner, file name and type, and data sensitivity. For example,
when trying to access your online bank account, the resource involved would be
“bank account = <correct account number>.”
Action
The action is what the user is trying to do with the resource. Common action
attributes include “read,” “write,” “edit,” “copy,” and “delete.” In some cases, multiple
attributes can describe an action. To continue with the online banking example,
requesting a transfer may have the characteristics “action type = transfer” and
“amount = $200.”
Environment
The environment is the broader context of each access request. All environmental
attributes speak to contextual factors like the time and location of an access attempt,
the subject’s device, communication protocol, and encryption strength. Contextual
information can also include risk signals that the organization has established, such
as authentication strength and the subject’s normal behavior patterns.
Attribute-based access control analyzes the attributes of these components against rules.
These rules define which attribute combinations are authorized in order for the subject to
successfully perform an action with the object.
“If the subject is in a communications job role, they should have read and edit access to media
strategies for the business units they represent.”
Whenever an access request happens, the ABAC system analyzes attribute values for
matches with established policies. As long as the above policy is in place, an access request
with the following attributes should grant access:
Subject job role = “communications”
Resource type = “media strategy”
Action = “edit”
In effect, ABAC allows admins to implement granular, policy-based access control, using
different combinations of attributes to create conditions of access that are as specific or
broad as the situation calls for.
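The evaluation of the media-strategy policy above can be sketched as a single attribute check. The attribute names (`job_role`, `business_units`, `type`, `business_unit`) are assumptions chosen for this illustration.

```python
# ABAC sketch: an access request is a set of subject, resource, and
# action attributes, evaluated against one policy rule.

def policy_allows(subject, resource, action):
    # "If the subject is in a communications job role, they should have
    # read and edit access to media strategies for the business units
    # they represent."
    return (subject["job_role"] == "communications"
            and resource["type"] == "media_strategy"
            and resource["business_unit"] in subject["business_units"]
            and action in {"read", "edit"})

subject = {"job_role": "communications", "business_units": {"emea"}}
resource = {"type": "media_strategy", "business_unit": "emea"}
print(policy_allows(subject, resource, "edit"))    # True
print(policy_allows(subject, resource, "delete"))  # False
```

A real ABAC engine would evaluate many such rules, and would also take environment attributes (time, device, location) as a fourth input.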
Multilevel security, or MLS, is familiar to all fans of spy novels, where classified information
often figures prominently. In MLS, the subjects are the users (generally, human) and the
objects are the data to be protected (for example, documents). Furthermore, classifications
apply to objects while clearances apply to subjects.
The U.S. Department of Defense, or DoD, employs four levels of classifications and clearances,
which can be ordered as
TOP SECRET > SECRET > CONFIDENTIAL > UNCLASSIFIED.
For example, a subject with a SECRET clearance is allowed access to objects classified
SECRET or lower but not to objects classified TOP SECRET. Apparently to make them more
visible, security levels are generally rendered in upper case.
Let O be an object and S a subject. Then O has a classification and S has a clearance. The
security level of O is denoted L(O), and the security level of S is similarly denoted L(S). In
the DoD system, the four levels given above are used for both clearances and classifications.
There are many practical problems related to the classification of information. For example,
the proper classification is not always clear, and two experienced users might have widely
differing views. Also, the level of granularity at which to apply classifications can be an issue.
It's entirely possible to construct a document where each paragraph, when taken
individually, is UNCLASSIFIED, yet the overall document is TOP SECRET. This problem is
even worse when source code must be classified, which is sometimes the case within the
DoD. The flip side of granularity is aggregation—an adversary might be able to glean TOP
SECRET information from a careful analysis of UNCLASSIFIED documents.
Multilevel security is needed when subjects and objects at different levels use the same
system resources. The purpose of an MLS system is to enforce a form of access control by
restricting subjects so that they only access objects for which they have the necessary
clearance.
Bell-LaPadula
The first security model that we'll consider is Bell-LaPadula, or BLP, which was named after
its inventors, Bell and LaPadula. The purpose of BLP is to capture the minimal requirements,
with respect to confidentiality, that any MLS system must satisfy. BLP consists of the
following two statements:
Simple Security Condition: Subject S can read object O if and only if L(O) ≤ L(S).
*-Property (Star Property): Subject S can write object O if and only if L(S) ≤ L(O).
The simple security condition merely states that Alice, for example, cannot read a document
for which she lacks the appropriate clearance. This condition is clearly required of any MLS
system.
The star property is somewhat less obvious. This property is designed to prevent, say, TOP
SECRET information from being written to, say, a SECRET document. This would break MLS
security since a user with a SECRET clearance could then read TOP SECRET information. The
writing could occur intentionally or, for example, as the result of a computer virus.
The simple security condition can be summarized as "no read up," while the star property
implies "no write down."
BLP has inspired many other security models, most of which strive to be more realistic. The
price that these systems pay for more reality is more complexity. This makes most other
models more difficult to analyze and more difficult to apply, that is, it's more difficult to show
that a real-world system satisfies the requirements of the model.
Biba's Model
Bell LaPadula model deals with confidentiality whereas Biba's model deals with integrity. In
fact, Biba's model is essentially an integrity version of BLP.
If we trust the integrity of object O1 but not that of object O2, then if object O is composed of
O1 and O2, we cannot trust the integrity of object O. In other words, the integrity level of O is
the minimum of the integrity of any object contained in O. Another way to say this is that for
integrity, a low water mark principle holds. In contrast, for confidentiality, a high water mark
principle applies.
To state Biba's model formally, let I(O) denote the integrity of object O and I(S) the integrity
of subject S. Biba's model is defined by the two statements:
Write Access Rule: Subject S can write object O if and only if I(O) ≤ I(S).
Biba's Model: Subject S can read object O if and only if I(S) ≤ I(O).
The write access rule states that we don't trust anything that S writes any more than we trust
S. Biba's model states that we can't trust S any more than the lowest integrity object that S
has read. In essence, we are concerned that S will be "contaminated" by lower integrity
objects, so S is forbidden from viewing such objects.
Biba's model is actually very restrictive, since it prevents S from ever viewing an object at a
lower integrity level. It's possible—and, in many cases, perhaps desirable—to replace Biba's
model with the following:
Low Water Mark Policy: If subject S reads object O, then I(S) = min(I(S), I(O)).
Under the low water mark principle, subject S can read anything, under the condition that the
integrity of subject S is downgraded after accessing an object at a lower level.
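The downgrade rule is a one-line computation; the sketch below uses illustrative integer integrity levels (higher number = more trusted).

```python
# Low water mark sketch: a read always succeeds, but the subject's
# integrity drops to the minimum of its own level and the object's.

def read_with_low_water_mark(subject_integrity, object_integrity):
    # I(S) := min(I(S), I(O)) after the read
    return min(subject_integrity, object_integrity)

i_s = 3                                  # subject starts at high integrity
i_s = read_with_low_water_mark(i_s, 1)   # reads a low-integrity object
print(i_s)                               # 1 -> subject has been downgraded
```

Note the contrast with strict Biba: the read is permitted, but anything the subject writes afterwards is trusted only at the downgraded level.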
Identity management: Identifies a subject and establishes that they are who they claim
to be when authorizing access to a system (i.e. a physical or digital entity).
Access management: Grants permissions for what users can do and see within a system
(e.g. using specific groups and roles for separation). Access management determines
which roles or users can access different information and processes at specific times and
levels of security (i.e. restrict access to a database unless the user is authenticated and is
in a role that has been granted access to that resource).
Threat actors seek to steal user identities and credentials in order to access networks,
systems, and data. These types of cyber-attacks pose significant risks to your organization,
including:
Spreading misinformation
These risks usually stem from threats associated with phishing attacks and malware that
have exposed credentials and access information to threat actors. Compromised access is a
high risk for organizations dealing with sensitive information, critical resources, and
emergency processes.
Biometrics
Biometrics are used as a convenient form of authentication. Using your unique body
characteristics as identification, biometrics can be used instead of or alongside a pin,
password, or passphrase.
Trust Frameworks
Trust Frameworks operate at the sector level to collaboratively establish and maintain
a light layer of identity management, governance, definitions, principles and Open Standards
for data sharing, which create the foundations of a trusted data-sharing ecosystem.
Scheme: rulebook for specific use cases (technical, legal, policy, operational)
Schemes are the rulebooks of how organizations can share data. They define what can be
shared, why, by whom, how, and what protections exist. Trust Frameworks help apply the
rules in a way that both humans and machines can understand. They enable monitoring,
reporting and verification of those using the rules. This helps make sure the rules are being
followed and offers ways to help enforce the rules when needed.
For example, if you wanted to share electricity smart meter data with an app that helped
reduce your energy bills, there are many different companies involved. As the one sharing
that data you want to know it’s being handled properly, and that it won’t be misused. The
benefit for the companies involved is that, by having common rules and ways of enforcing
them, their risks are reduced. It can also help government and regulators influence,
or regulate, rules that are needed to ensure markets are fair and open, unlock innovation and
enable protections.
Trust Frameworks address the challenges of establishing trust in digital ecosystems, where
traditional face-to-face verification methods and trust signals are not feasible. They are
particularly important in sectors where secure and reliable data exchange is critical.
Trust Frameworks lay foundational ways for creating a trusted ecosystem for data sharing
and identity management across a sector. Schemes build upon this foundation, tailoring the
rules and practices to specific market-wide applications, thereby facilitating targeted and
efficient data sharing and identity verification efforts.
Trust Frameworks are crucial for enabling secure, reliable, and efficient digital transactions
and interactions, ensuring that all parties can trust the integrity and confidentiality of the
data exchanged and the authenticity of the identities involved.
Trust Frameworks:
1. Serve as the underpinning structure offering a comprehensive set of rules and guidelines
for identity management, verification, and assurance within a sector.
2. Facilitate assured data publishing by establishing common rules and standards that
ensure data is accessible, interoperable, and usable across different entities and systems.
3. Have their own governance structures to oversee the implementation, adherence, and
evolution of the framework, ensuring it remains relevant and effective in addressing the
needs of the sector.
Increased Efficiency: Facilitating faster and more efficient digital transactions and
interactions by reducing the need for repeated identity verifications and data exchanges.
Key Components:
Identity Verification and Management: Procedures and standards for ensuring that the
identities of all parties involved in a transaction or interaction are accurately verified and
managed throughout their lifecycle.
Data Protection and Privacy Foundations: Foundational guidelines for Schemes that
ensure data is handled in compliance with laws and best practices, protecting
sensitive information from unauthorized access or disclosure. For personal data, they
also ensure that data subjects' rights are addressed while enabling acceptable uses.
Security Standards: Protocols, foundational guidelines and best practices for securing
data transmission that reduce the risk of data breaches and cyber-attacks.
Legal and Regulatory Compliance: Frameworks are designed to ensure that all activities
conducted under their guidance comply with relevant legal and regulatory requirements,
minimizing legal risks for participants and foundational guidelines for Schemes to ensure
legal interoperability is considered in their design and implementation.
References
Bishop, M. (2002). Computer Security: Art and Science. Addison Wesley.
Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing. Westford,
Massachusetts: Pearson Education.
Stallings, W., & Brown, L. (2015). Computer Security Principles and Practice. New Jersey:
Pearson Education.
Stamp, M. (2011). Information security: principles and practice. Hoboken, New Jersey: John
Wiley & Sons.