Organizations and people that use computers can describe their needs for
information security and trust in systems in terms of three major requirements:
Confidentiality: controlling who gets to read information;
Integrity: assuring that information and programs are changed only in a specified and authorized manner; and
Availability: assuring that authorized users have continued access to information and resources.
The framework within which an organization strives to meet its needs for
information security is codified as security policy. A security policy is a concise
statement, by those responsible for a system (e.g., senior management), of
information values, protection responsibilities, and organizational commitment.
One can implement that policy by taking specific actions guided by management
control principles and utilizing specific security standards, procedures, and
mechanisms. Conversely, the selection of standards, procedures, and
mechanisms should be guided by policy to be most effective.
To be useful, a security policy must not only state the security need (e.g., for
confidentiality—that data shall be disclosed only to authorized individuals), but
also address the range of circumstances under which that need must be met and
the associated operating standards. Without this second part, a security policy is
so general as to be useless (although the second part may be realized through
procedures and standards set to implement the policy). In any particular
circumstance, some threats are more probable than others, and a prudent policy
setter must assess the threats, assign a level of concern to each, and state a
policy in terms of which threats are to be resisted. For example, until recently
most policies for security did not require that security needs be met in the face of
a virus attack, because that form of attack was uncommon and not widely
understood. As viruses have escalated from a hypothetical to a commonplace
threat, it has become necessary to rethink such policies in regard to methods of
distribution and acquisition of software. Implicit in this process is management's
choice of a level of residual risk that it will live with, a level that varies among
organizations.
Technical measures alone cannot prevent violations of the trust people place in
individuals, violations that have been the source of
much of the computer security problem in industry to date (see Chapter 6).
Technical measures may prevent people from doing unauthorized things but
cannot prevent them from doing things that their job functions entitle them to do.
Thus, to prevent violations of trust rather than just repair the damage that results,
one must depend primarily on human awareness of what other human beings in
an organization are doing. But even a technically sound system with informed
and watchful management and users cannot be free of all possible
vulnerabilities. The residual risk must be managed by auditing, backup, and
recovery procedures supported by general alertness and creative responses.
Moreover, an organization must have administrative procedures in place to bring
peculiar actions to the attention of someone who can legitimately inquire into the
appropriateness of such actions, and that person must actually make the inquiry.
In many organizations, these administrative provisions are far less satisfactory
than are the technical provisions for security.
A major conclusion of this report is that the lack of a clear articulation of security
policy for general computing is a major impediment to improved security in
computer systems. Although the Department of Defense (DOD) has articulated
its requirements for controls to ensure confidentiality, there is no articulation for
systems based on other requirements and management controls (discussed
below)—individual accountability, separation of duty, auditability, and recovery.
This committee's goal of developing a set of Generally Accepted System Security
Principles, GSSP, is intended to address this deficiency and is a central
recommendation of this report.
Many organizations have been content to treat computer security as a low priority, on the grounds that little harm has come to them so far. On the basis of reported losses, such attitudes are not unjustified (Neumann, 1989). However, computers are active entities, and programs can be changed in a twinkling, so that past happiness is no predictor of future bliss. It takes only one Internet worm incident to signal a larger problem. Experience since the Internet worm, involving copy-cat and derivative attacks, shows how a possibility once demonstrated can become an actuality frequently used.1
Some consensus does exist on fundamental or minimum-required security
mechanisms. A recent informal survey conducted on behalf of the committee
shows a widespread desire among corporate system managers and security
officers for the ability to identify users and limit times and places of access,
particularly over networks, and to watch for intrusion by recording attempts at
invalid actions (see Chapter Appendix 2.2). Ad hoc virus checkers, well known in
the personal computer market, are also in demand. However, there is little
demand for system managers to be able to obtain positive confirmation that the
software running on their systems today is the same as what was running
yesterday. Such a simple analog of hardware diagnostics should be a
fundamental requirement; it may not be seen as such because vendors do not
offer it or because users have difficulty expressing their needs.
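The confirmation service described here can be approximated with cryptographic checksums. The following is a minimal sketch, not anything specified in this report: the choice of Python, SHA-256, and a JSON baseline file are all illustrative assumptions. It records a digest for every file under a directory and reports anything that has changed since the baseline was taken.

```python
# Sketch of a "same software as yesterday" check: snapshot file digests,
# then compare a later snapshot against the saved baseline.
import hashlib
import json
import os

def digest(path, chunk=65536):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def snapshot(root):
    """Map every file under `root` to its digest."""
    table = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            table[path] = digest(path)
    return table

def compare(baseline_file, root):
    """Report files that changed, appeared, or vanished since the baseline."""
    with open(baseline_file) as f:
        old = json.load(f)
    new = snapshot(root)
    changed = [p for p in old if p in new and old[p] != new[p]]
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    return changed, added, removed

# To create the baseline: json.dump(snapshot("/usr/local/bin"), open("baseline.json", "w"))
```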
Although threats and policies for addressing them are different for different
applications, they nevertheless have much in common, and the general systems
on which applications are built are often the same. Furthermore, basic security
services can work against many threats and support many policies. Thus there is
a large core of policies and services on which most of the users of computers
should be able to agree. On this basis the committee proposes the effort to
define and articulate GSSP.
The weight given to each of the three major requirements describing needs for
information security—confidentiality, integrity, and availability—depends strongly
on circumstances. For example, the adverse effects of a system not being
available must be related in part to requirements for recovery time. A system that
must be restored within an hour after disruption represents, and requires, a more
demanding set of policies and controls than does a similar system that need not
be restored for two to three days. Likewise, the risk of loss of confidentiality with
respect to a major product announcement will change with time. Early disclosure
may jeopardize competitive advantage, but disclosure just before the intended
announcement may be insignificant. In this case the information remains the
same, while the timing of its release significantly affects the risk of loss.
Confidentiality
Confidentiality is a requirement whose purpose is to keep sensitive information
from being disclosed to unauthorized recipients. The
secrets might be important for reasons of national security (nuclear weapons
data), law enforcement (the identities of undercover drug agents), competitive
advantage (manufacturing costs or bidding plans), or personal privacy (credit
histories) (see Chapter Appendix 2.1).
The most fully developed policies for confidentiality reflect the concerns of the
U.S. national security community, because this community has been willing to
pay to get policies defined and implemented (and because the value of the
information it seeks to protect is deemed very high). Since the scope of threat is
very broad in this context, the policy requires systems to be robust in the face of
a wide variety of attacks. The specific DOD policies for ensuring confidentiality do
not explicitly itemize the range of expected threats for which a policy must hold.
Instead, they reflect an operational approach, expressing the policy by stating the
particular management controls that must be used to achieve the requirement for
confidentiality. Thus they avoid listing threats, which would represent a severe
risk in itself, and avoid the risk of poor security design implicit in taking a fresh
approach to each new problem.
The operational controls that the military has developed in support of this
requirement involve automated mechanisms for handling information that is
critical to national security. Such mechanisms call for information to be classified
at different levels of sensitivity and in isolated compartments, to be labeled with
this classification, and to be handled by people cleared for access to particular
levels and/or compartments. Within each level and compartment, a person with
an appropriate clearance must also have a "need to know" in order to gain
access. These procedures are mandatory: elaborate procedures must also be
followed to declassify information.2
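The access rule just described is essentially a lattice comparison: a reader's clearance must be at or above the information's level, must include all of its compartments, and a need to know must still be shown. A minimal sketch follows; the level names are standard, but the function shape and the need-to-know test are illustrative assumptions, not the DOD's actual mechanism.

```python
# Sketch of the mandatory "dominance" check for classified information.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(clearance_level, clearance_compartments,
             data_level, data_compartments, need_to_know):
    """Allow reading only if clearance dominates the data's label
    and the reader has a demonstrated need to know."""
    dominates = (LEVELS[clearance_level] >= LEVELS[data_level]
                 and set(data_compartments) <= set(clearance_compartments))
    return dominates and need_to_know

# A SECRET-cleared analyst with the CRYPTO compartment and a need to know
# may read SECRET/CRYPTO material, but not TOP SECRET material.
print(may_read("SECRET", {"CRYPTO"}, "SECRET", {"CRYPTO"}, True))   # True
print(may_read("SECRET", {"CRYPTO"}, "TOP SECRET", set(), True))    # False
```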
Owner-based access controls common in the commercial world are less resistant to certain attacks than mechanisms based on mandatory labeling, in part because there is no way to tell where copies of information
may flow. With Trojan horse attacks, for example, even legitimate and honest
users of an owner mechanism can be tricked into disclosing secret data. The
commercial world has borne these vulnerabilities in exchange for the greater
operational flexibility and system performance currently associated with relatively
weak security.
Integrity
Integrity is a requirement meant to ensure that information and programs are
changed only in a specified and authorized manner. It may be important to keep
data consistent (as in double-entry bookkeeping) or to allow data to be changed
only in an approved manner (as in withdrawals from a bank account). It may also
be necessary to specify the degree of the accuracy of data.
Some policies for ensuring integrity reflect a concern for preventing fraud and are
stated in terms of management controls. For example, any task involving the
potential for fraud must be divided into parts that are performed by separate
people, an approach called separation of duty. A classic example is a purchasing
system, which has three parts: ordering, receiving, and payment. Someone must
sign off on each step, the same person cannot sign off on two steps, and the
records can be changed only by fixed procedures—for example, an account is
debited and a check written only for the amount of an approved and received
order. In this case, although the policy is stated operationally—that is, in terms of
specific management controls—the threat model is explicitly disclosed as well.
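As a sketch of how such a rule can be enforced mechanically, the following uses the three purchasing steps named above; the class structure and error handling are illustrative assumptions, not a prescribed design.

```python
# Sketch of separation of duty for the purchasing example: every step
# must be signed off, and no person may sign off on more than one step.
STEPS = ("ordering", "receiving", "payment")

class Purchase:
    def __init__(self):
        self.signoffs = {}          # step -> person who approved it

    def sign_off(self, step, person):
        if step not in STEPS:
            raise ValueError("unknown step: " + step)
        if person in self.signoffs.values():
            raise PermissionError(person + " has already signed another step")
        self.signoffs[step] = person

    def complete(self):
        """A check may be written only when all three steps are approved."""
        return all(step in self.signoffs for step in STEPS)

p = Purchase()
p.sign_off("ordering", "alice")
p.sign_off("receiving", "bob")
p.sign_off("payment", "carol")    # a repeat signer would raise PermissionError
assert p.complete()
```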
Other integrity policies reflect concerns for preventing errors and omissions, and
controlling the effects of program change. Integrity policies have not been studied
as carefully as confidentiality policies. Computer measures that have been
installed to guard integrity tend to be ad hoc and do not flow from the integrity
models that have been proposed (see Chapter 3).
Availability
Availability is a requirement intended to ensure that systems work promptly and
service is not denied to authorized users. From an operational standpoint, this
requirement refers to adequate response time and/or guaranteed bandwidth.
From a security standpoint, it represents the ability to protect against and recover
from a damaging event. The availability of properly functioning computer systems
(e.g., for routing long-distance calls or handling airline reservations) is essential
to the operation of many large enterprises and sometimes
for preserving lives (e.g., air traffic control or automated medical systems).
Contingency planning is concerned with assessing risks and developing plans for
averting or recovering from adverse events that might render a system
unavailable.
For example, a simple availability policy is usually stated like this: "On the
average, a terminal shall be down for less than 10 minutes per month." A
particular terminal (e.g., an automatic teller machine or a reservation agent's
keyboard and screen) is up if it responds correctly within one second to a
standard request for service; otherwise it is down. This policy means that the up
time at each terminal, averaged over all the terminals, must be at least 99.98
percent.
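The arithmetic behind that figure can be checked directly. The 30-day month below is an assumption used only for illustration; the report itself states only the resulting percentage.

```python
# Worked check of the availability policy: 10 minutes of down time
# per month, against a 30-day month.
minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a 30-day month
allowed_downtime = 10                     # minutes, per the stated policy
uptime = 1 - allowed_downtime / minutes_per_month
print(f"required up time: {uptime:.2%}")  # 99.98%
```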
In a banking network, for example, as long as transactions are recorded and reconciled correctly at the host system, the availability of individual teller machines is of less concern.
A telephone switching system, on the other hand, does not have high
requirements for integrity on individual transactions, as lasting damage will not be
incurred by occasionally losing a call or billing record. The integrity of control
programs and configuration records, however, is critical. Without these, the
switching function would be defeated and the most important attribute of all—
availability—would be compromised. A telephone switching system must also
preserve the confidentiality of individual calls, preventing one caller from
overhearing another.
Security needs are determined more by what a system is used for than by what it
is. A typesetting system, for example, will have to assure confidentiality if it is
being used to publish corporate proprietary material, integrity if it is being used to
publish laws, and availability if it is being used to publish a daily paper. A
general-purpose time-sharing system might be expected to provide confidentiality
if it serves diverse clientele, integrity if it is used as a development environment
for software or engineering designs, and availability to the extent that no one
user can monopolize the service and that lost files will be retrievable.
Beyond specific technical mechanisms, management controls provide structure, accountability, and early warning of vulnerabilities. Organizations in almost every line of endeavor have established controls based on the following key principles:
Individual accountability,
Auditing, and
Separation of duty.
These principles, recognized in some form for centuries, are the basis of
precomputer operating procedures that are very well understood.
Because insiders can misuse the access their jobs legitimately give them, and because many systems can also be compromised if surreptitious access can be gained, accountability is a vital last resort. Auditing services make and keep the
records necessary to support accountability. Usually they are closely tied to
authentication and authorization (a service for determining whether a user or
system is trusted for a given purpose—see discussion below), so that every
authentication is recorded, as is every attempted access, whether authorized or
not. Given the critical role of auditing, auditing devices are sometimes the first
target of an attacker and should be protected accordingly.
A system's audit records, often called an audit trail, have other potential uses
besides establishing accountability. It may be possible, for example, to analyze
an audit trail for suspicious patterns of access and so detect improper behavior
by both legitimate users and masqueraders. The main drawbacks are processing
and interpreting the audit data.
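As an illustration of such analysis, the sketch below flags users who accumulate several failed access attempts within a short window. The record format and the thresholds are illustrative assumptions, not anything prescribed in this report.

```python
# Sketch of scanning an audit trail for suspicious access patterns.
from collections import defaultdict

def suspicious_users(records, limit=5, window=300.0):
    """records: iterable of (user, timestamp_seconds, succeeded_bool).
    Flag any user with `limit` failures inside `window` seconds."""
    failures = defaultdict(list)
    flagged = set()
    for user, when, ok in records:
        if ok:
            continue
        times = failures[user]
        times.append(when)
        # Keep only failures inside the window ending at `when`.
        while times and when - times[0] > window:
            times.pop(0)
        if len(times) >= limit:
            flagged.add(user)
    return flagged

trail = [("mallory", t, False) for t in range(0, 50, 10)] + [("alice", 0, True)]
print(suspicious_users(trail))  # {'mallory'}
```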
Systems may change constantly as personnel and equipment come and go and
applications evolve. From a security standpoint, a changing system is not likely to
be an improving system. To take an active stand against gradual erosion of
security measures, one may supplement a dynamically collected audit trail
(which is useful in ferreting out what has happened) with static audits that check
the configuration to see that it is not open for attack. Static audit services may
check that software has not changed, that file access controls are properly set,
that obsolete user accounts have been turned off, that incoming and outgoing
communications lines are correctly enabled, that passwords are hard to guess,
and so on. Aside from virus checkers, few static audit tools exist in the market.
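Two of the static checks listed above can be sketched directly, as shown below. The account record format and the 90-day idleness threshold are illustrative assumptions; a real static audit tool would cover far more ground.

```python
# Sketch of two static audit checks: obsolete accounts and files whose
# access controls are set too loosely.
import os
import stat
import time

def stale_accounts(accounts, max_idle_days=90):
    """accounts: iterable of (username, last_login_epoch_seconds).
    Return accounts idle longer than the threshold."""
    cutoff = time.time() - max_idle_days * 86400
    return [name for name, last in accounts if last < cutoff]

def world_writable(root):
    """Return files under `root` that any user on the system may modify."""
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:      # world-write bit is set
                hits.append(path)
    return hits
```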
Authorization mechanisms ensure that users have access only to the correct objects. Inside the computer, these
enforcement mechanisms are usually called access control mechanisms.
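In the simplest model, an access control mechanism consults an access matrix that maps each (user, object) pair to a set of permitted rights and is checked before every operation. The users, objects, and rights below are illustrative assumptions.

```python
# Sketch of an access matrix consulted by an access control mechanism.
matrix = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def check_access(user, obj, right):
    """Allow the operation only if the matrix grants the right."""
    return right in matrix.get((user, obj), set())

assert check_access("bob", "payroll.db", "write")
assert not check_access("alice", "payroll.db", "write")
```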
Security breaches usually entail more recovery effort than do acts of God. Unlike
proverbial lightning, breaches of security can be counted on to strike twice unless
the route of compromise has been shut off. Causes must be located. Were
passwords compromised? Are backups clean? Did some user activity
compromise the system by mistake? And major extra work—changing all
passwords, rebuilding the system from original copies, shutting down certain
communication links or introducing authentication procedures on them, or
undertaking more user education—may have to be done to prevent a recurrence.
Security planning begins with an assessment of the possible avenues of compromise, of users as well as systems. What damage can the person in front of
the automated teller machine do? What about the person behind it?4
Thence follows a rough idea of expected losses. On the other side of the ledger are the costs of selecting, installing, and operating candidate security measures.
The security plans then become a business decision, possibly tempered by legal
requirements and consideration of externalities (see "Risks and Vulnerabilities,"
below).
Ideally, controls are chosen as the result of careful analysis.5 In practice, the
most important consideration is what controls are available. Most purchasers of
computer systems cannot afford to have a system designed from scratch to meet
their needs, a circumstance that seems particularly true in the case of security
needs. The customer is thus reduced to selecting from among the various
preexisting solutions, with the hope that one will match the identified needs.
Some organizations formalize the procedure for managing computer-associated
risk by using a control matrix that identifies appropriate control measures for
given vulnerabilities over a range of risks. Using such a matrix as a guide,
administrators may better select appropriate controls for various resources. A
rough cut at addressing the problem is often taken: How much business depends
on the system? What is the worst credible kind of failure, and how much would it
cost to recover? Do available mechanisms address possible causes? Are they
cost-effective?
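A control matrix of this kind can be represented quite literally: rows are vulnerabilities, columns are risk levels, and each cell lists candidate controls. The entries below are illustrative assumptions, not recommendations from this report.

```python
# Sketch of a control matrix mapping vulnerabilities and risk levels
# to candidate control measures.
CONTROL_MATRIX = {
    "password compromise": {
        "low":  ["password aging"],
        "high": ["one-time password tokens", "login time/place limits"],
    },
    "software tampering": {
        "low":  ["periodic checksum audit"],
        "high": ["secure production library", "change-control sign-off"],
    },
}

def controls_for(vulnerability, risk):
    """Look up candidate controls for a vulnerability at a given risk level."""
    return CONTROL_MATRIX.get(vulnerability, {}).get(risk, [])

print(controls_for("password compromise", "high"))
```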
Yet there is not a clear, widely accepted articulation of how computer systems should
be designed to support these controls, what sort of robustness is required in the
mechanisms, and so on. As a result, customers for computer security are faced
with a "take-it-or-leave-it" marketplace. For instance, customers appear to
demand password-based authentication because it is available, not because
analysis has shown that this relatively weak mechanism provides enough
protection. This effect works in both directions: a service is not demanded if it is
not available, but once it becomes available somewhere, it soon becomes
wanted everywhere. See Chapter 6 for a discussion of the marketplace.
The cases considered in the sampling cited above often involved multiple classes of abuse. In attacking the National Aeronautics and Space Administration systems, for example, the West German Chaos Computer Club combined several classes of abuse.
BOX 2.1 THE WILY HACKER
In August 1986, Clifford Stoll, an astronomer working at the Lawrence Berkeley Laboratory,
detected an intruder, nicknamed him the Wily Hacker, and began to monitor his intrusions.
Over a period of 10 months, the Wily Hacker attacked roughly 450 computers operated by the
U.S. military and its contractors, successfully gaining access to 30 of them. Prior to detection,
he is believed to have mounted attacks for as long as a year.
Although originally thought to be a local prankster, the Wily Hacker turned out to be a
competent and persistent computer professional in West Germany, with alleged ties to the
Soviet KGB, and possibly with confederates in Germany.* It is assumed that the Wily Hacker
was looking for classified or sensitive data on each of the systems he penetrated, although
regulations prohibit the storage of classified data on the systems in question.
Looking for technological keywords and for passwords to other systems, the Wily Hacker
exhaustively searched the electronic files and messages located on each system. He carefully
concealed his presence on the computer systems and networks that he penetrated, using
multiple entry points as necessary. He made long-term plans, in one instance establishing a
trapdoor that he used almost a year later.
The most significant aspect of the Wily Hacker incident is that the perpetrator was highly
skilled and highly motivated. Also notable is the involvement of a U.S. accomplice. Tracking
the Wily Hacker required the cooperation of more than 15 organizations, including U.S.
authorities, German authorities, and private corporations. The treatment of the Wily Hacker by
German authorities left some in the United States unsatisfied, because under German law the
absence of damage to German systems and the nature of the evidence available diminished
sentencing options.
* He has been identified variously as Mathias Speer or Marcus Hess, a computer science
student in Hanover.
Reported incidents and loss statistics describe only the present situation. However, it is unwise to extrapolate from the present to predict
the classes of vulnerability that will be significant in the future. As expertise and
interconnection increase and as control procedures improve, the risks and likely
threats will change.6 For example, given recent events, the frequency of Trojan
horse and virus attacks is expected to increase.
Suppose that installation A, to save money, skimps on security, and an intruder then uses A as a stepping-stone for an attack on installation B. Such scenarios have been played out many times in real life. In saving money for
itself, installation A has shifted costs to B, creating what economists call an
externality. At the very least, it seems, installation B should be aware of the
security state of A before agreeing to communicate.
BOX 2.2 THE INTERNET WORM
On November 2, 1988, the Internet was attacked by a self-replicating program called a worm
that spread within hours to somewhere between 2,000 and 6,000 computer systems—the
precise number remains uncertain. Only systems (VAX and Sun 3) running certain types of
Unix (variants of BSD 4) were affected.
The Internet worm was developed and launched by Robert T. Morris, Jr., who at the time was a
graduate student at Cornell University. Morris exploited security weaknesses (in the fingerd,
rhosts, and sendmail programs) in the affected versions of Unix. The worm program itself did
not cause any damage to the systems that it attacked in the sense that it did not steal, corrupt, or
destroy data and did not alter the systems themselves; however, its rapid proliferation and the
ensuing confusion caused severe degradation in service and shut down some systems and
network connections throughout the Internet for two or three days, affecting sites that were not
directly attacked. Ironically, electronic mail messages with guidance for containing the worm
were themselves delayed because of network congestion caused by the worm's rapid
replication.
Although Morris argued that the worm was an experiment unleashed without malice, he was
convicted of a felony (the conviction may be appealed) under the Computer Fraud and Abuse
Act (CFAA) of 1986, the first such conviction. Reflecting uncertainty about both the
applicability of the CFAA and the nature of the incident, federal prosecutors were slow to
investigate and bring charges in this case.
The Internet worm has received considerable attention by computing professionals, security
experts, and the general public, thanks to the abundant publicity about the incident, the divided
opinions within the computer community about the impact of the incident, and a general
recognition that the Internet worm incident has illuminated the potential for damage from more
dangerous attacks as society becomes more dependent on computer networks. The incident
triggered the establishment of numerous computer emergency response teams (CERTs),
starting with DARPA's CERT for the Internet; a reevaluation of ethics for computer
professionals and users; and, at least temporarily, a general tightening of security in corporate
and government networks.
SOURCES: Comer (1988); Spafford (1989a); Rochlis and Eichin (1989); and Neumann
(1990).
In other sectors, including the research community, the design and the
management of computer-mediated networks generate communication
vulnerabilities. In these systems (e.g., Bitnet) messages travel lengthy paths
through computers in the control of numerous organizations of which the
communicants are largely unaware, and for which message handling is not a
central business concern. Responsibility for the privacy and integrity of
communications in these networks is so diffuse as to be nonexistent. Unlike
common carriers, these networks warrant no degree of trust. This situation is
understood by only some of these networks' users, and even they may gamble
on the security of their transmissions in the interests of convenience and reduced
expenses.
One safeguard is to keep critical records in physically separate, more rigorously controlled hardware. Such
isolation of function is universal in serious cryptography.
APPENDIX 2.1—
PRIVACY
Concern for privacy arises in connection with the security of computer systems in
two disparate ways:
The need to keep personal information about individuals confidential, and
The need to monitor users' actions on computer systems.
The first need supports privacy; the institution of policies and mechanisms for
confidentiality should strengthen it. The second, however, is a case in which
need is not aligned with privacy; strong auditing or surveillance measures may
well infringe on the privacy of those whose actions are observed. It is important
to understand both aspects of privacy.
The Privacy Act is based on five major principles that have been generally
accepted as basic privacy criteria in the United States and Europe:
There must be no personal-data record-keeping systems whose very existence is secret.
There must be a way for individuals to find out what information about them is in a record and how it is used.
There must be a way for individuals to prevent information obtained about them for one purpose from being used or made available for other purposes without their consent.
There must be a way for individuals to correct or amend a record of identifiable information about them.
Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.
Most of us have no way of knowing all the databases that contain information
about us. In short, we are losing control over the information about ourselves.
Many people are not confident about existing safeguards, and few are convinced
that they should have to pay for the benefits of the computer age with their
personal freedoms. (Lewis, 1990)
Consider, for example, a policy stating that company computing resources will be used only for
proper business purposes. Users certify upon starting their jobs (or upon
introduction of the policy) that they understand and will comply with this policy
and others. Random spot checks of user files by information security analysts
may be conducted to ensure that personal business items, games, and so on,
are not put on company computing resources. Disciplinary action may result
when violations of policy are discovered.
The above situation does not, in itself, relate to security. However, one method
proposed to increase the level of system security involves monitoring workers'
actions to detect, for example, patterns of activity that suggest that a worker's
password has been stolen. This level of monitoring provides increased
opportunity to observe all aspects of worker activity, not just security-related
activity, and it can significantly reduce a worker's expectation of privacy at work.
There are complex trade-offs among privacy, management control, and more
general security controls. How, for example, can management ensure that its
computer facilities are being used only for legitimate business purposes if the
computer system contains security features that limit access to the files of
individuals? Typically, a system administrator has access to everything on a
system. To prevent abuse of this privilege, a secure audit trail may be used. The
goal is to prevent the interaction of the needs for control, security, and privacy
from inhibiting the adequate achievement of any of the three.
Note that by tracing or monitoring the computer actions of individuals, one can
violate the privacy of persons who are not in an employee relationship but are
more generally clients of an organization or citizens of a country. For example,
the Wall Street Journal reported recently that customer data entered by a travel
agency into a major airline reservation system was accessible to and used by
other travel service firms without the knowledge of the customer or
the travel agency (Winans, 1990). Computer systems as a mechanism provide
no protection for people in these situations; as was observed above, computers,
even very secure computers, are only a mechanism, not a policy. Indeed, very
secure systems may actually make the problem worse, if the presence of these
mechanisms falsely encourages people to entrust critical information to such
systems.
APPENDIX 2.2—
INFORMAL SURVEY RESULTS
Individuals were asked to consider 40 specific security measures. For each, they
were asked whether the measure should be built into vendor systems as a
mandatory (essential) item, be built in as an optional item, or not be built in.
User Identification
All of the interviewees believed that a unique identification (ID) for each user and
automatic suspension of an ID for a certain number
of unauthorized access attempts were essential. The capability to prevent the
simultaneous use of an ID was considered essential by 90 percent of the
individuals interviewed. A comment was that this capability should be controllable
based either on the ID or the source of the access.
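These two controls, suspension of an ID after repeated invalid attempts and refusal of simultaneous sessions under one ID, can be sketched as follows. The threshold of three attempts and the class shape are illustrative assumptions, not figures from the survey.

```python
# Sketch of ID suspension and single-session enforcement.
class Accounts:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}       # user ID -> consecutive invalid attempts
        self.active = set()      # user IDs currently signed on
        self.suspended = set()

    def attempt_login(self, user, password_ok):
        if user in self.suspended:
            return "suspended"
        if user in self.active:
            return "refused: ID already in use"   # no simultaneous sessions
        if not password_ok:
            self.failures[user] = self.failures.get(user, 0) + 1
            if self.failures[user] >= self.max_failures:
                self.suspended.add(user)          # automatic suspension
                return "suspended"
            return "invalid"
        self.failures[user] = 0                   # success resets the count
        self.active.add(user)
        return "signed on"

    def logout(self, user):
        self.active.discard(user)
```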
Eighty-three percent of the interviewees agreed it is essential that the date, time,
and place of last use be displayed to the user upon sign-on to the system. A
comment was that this feature should also be available at other times. The same
number required the capability to assign to the user an expiration date for
authorization to access a system. Comments on this item were that the ability to
specify a future active date for IDs was needed and that the capability to let the
system administrator know when an ID was about to expire was required.
Seventy-three percent thought that the capability to limit system access to certain
times, days, dates, and/or from certain places was essential.
Sixty percent saw the capability to interface with a dynamic password token as
an essential feature. One recommendation was to investigate the use of icons
that would be assigned to users as guides to selecting meaningful (easily
remembered) passwords. Thirty-three percent considered a random password
generator essential; 7 percent did not want one.
Although interviewees in some business categories thought such a capability should be essential, at least some representatives from
all other categories of businesses preferred that such a feature be optional.
Eighty-three percent agreed that a virus detection and protection capability and
the ability to purge a file during deletion were essential features. An added
comment was that vendors should be required to certify a product as being free
of viruses or trapdoors. Seventy-three percent considered the capability to
encrypt sensitive data to be mandatory, but one respondent was opposed to that
feature because it could complicate disaster recovery (i.e., one might not be able
to access such data in an emergency during processing at an alternate site).
Ninety-five percent thought it should be essential to require the execution of
production programs from a secure production library and also, if using
encryption, to destroy the plaintext during the encryption process.
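A rough sketch of purging a plaintext file, overwriting its bytes before unlinking it, appears below. This is only a gesture at the requirement: on journaling or flash storage an overwrite may never reach the original blocks, so genuine purging needs device-level support.

```python
# Sketch of destroying a plaintext file after its contents have been
# encrypted elsewhere: overwrite in place, force to disk, then delete.
import os

def purge(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # overwrite the file's contents
        f.flush()
        os.fsync(f.fileno())      # push the overwrite toward the device
    os.remove(path)
```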
Terminal Controls
All interviewees agreed that preventing the display of passwords on screens or
reports should be essential. Ninety-five percent favored having an automated
log-off/time-out capability as a mandatory feature. A comment was that it should
be possible to vary this feature by ID.
An additional comment was that a token port (for dynamic password interface)
should be a feature of terminals.
Additional comments in this area addressed the need for message authentication
and nonrepudiation as security features.
Detection Measures
All interviewees believed that audit trails identifying invalid access attempts and
reporting ID and terminal source identification related to invalid access attempts
were essential security measures. Likewise, all agreed that violation reports
(including date, time, service, violation type, ID, data sets, and so forth) and the
capability to query a system's log to retrieve selected data were essential
features.
Interviewees also offered several general recommendations:
Make requirements general rather than specific so that they can apply to
all kinds of systems.
Make security transparent to the user.
Make sure that "mandatory" really means mandatory.
Seek opinions from those who pay for the systems.
In summary, it was clearly the consensus that basic information security features
should be required components that vendors build into information systems.
Some control of the implementation of features should be available to
organizations so that flexibility to accommodate special circumstances is
available.
Interviewees indicated that listing essential (must-have and must-use) and
optional security features in an accredited standards document would be very
useful for vendors and procurement officers in the private sector. Vendors could
use the criteria as a measure of how well their products meet requirements for
information security and the needs of the users. Procurement officers could use
the criteria as benchmarks in evaluating different vendors' equipment during the
purchasing cycle. Vendors could also use the criteria as a marketing tool, as they
currently use the Orange Book criteria. These comments are supportive of the
GSSP concept developed by this committee.
NOTES
1. Some documentation can be found in the Defense Advanced Research Projects Agency's Computer Emergency Response Team advisories, which are distributed to system managers, and in a variety of electronic newsletters and bulletin boards.
2. The mechanisms for carrying out such procedures are called mandatory access controls by
the DOD.
3. Such mechanisms are called discretionary access controls by the DOD, and user-directed,
identity-based access controls by the International Organization for Standardization. Also, the
owner-based approach stands in contrast with the more formal, centrally administered
clearance or access-authorization process of the national security community.
4. There are many kinds of vulnerability. Authorized people can misuse their authority. One
user can impersonate another. One break-in can set up the conditions for others, for
example, by installing a virus. Physical attacks on equipment can compromise it. Discarded
media can be scavenged. An intruder can get access from a remote system that is not well
secured, as happened with the Internet worm.
5. Although it might be comforting to commend the use of, or research into, quantitative risk
assessment as a planning tool, in many cases little more than a semiquantitative or
checklist-type approach seems warranted. Risk assessment is the very basis of the insurance
industry, which, it can be noted, has been slow to offer computer security coverage to
businesses or individuals (see Chapter 6, Appendix 6.2, "Insurance"). In some cases (e.g.,
the risk of damage to the records of a single customer's accounts) quantitative assessment
makes sense. In general, however, risk assessment is a difficult and complex task, and
quantitative assessment of myriad qualitatively different, low-probability, high-impact risks
has not been notably successful. The nuclear industry is a case in point.
6. The extent of interconnection envisioned for the future underscores the importance of
planning for interdependencies. For example, William Mitchell has laid out a highly
interconnected vision:
Through open systems interconnection (OSI), businesses will rely on computer networks as
much as they depend on the global telecom network. Enterprise networks will meet an
emerging need: they will allow any single computer in any part of the world to be as
accessible to users as any telephone. OSI networking capabilities will give every networked
computer a unique and easily accessible address. Individual computer networks will join
into a single cohesive system in much the same way as independent telecom networks join
to form one global service. (Mitchell, 1990, pp. 69–72)
7. Other federal privacy laws include the Fair Credit Reporting Act of 1970 (P.L. 91–508), the Family Educational Rights and Privacy Act of 1974 (20 U.S.C. 1232g), the Right to Financial Privacy Act of 1978 (12 U.S.C. 3401 et seq.), the Electronic Fund Transfer Act of 1978 (15 U.S.C. 1693, P.L. 95–630), the Cable Communications Policy Act of 1984 (47 U.S.C. 551), the Electronic Communications Privacy Act of 1986 (18 U.S.C. 2511), and the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C. 552a Note) (Turn, 1990). States have also passed laws to protect privacy.
8. This point was made by the congressional Office of Technology Assessment in an analysis
of federal agency use of electronic record systems for computer matching, verification, and
profiling (OTA, 1986b).
9. Recent cases about management perusing electronic mail messages that senders and
receivers had believed were private amplify that debate (Communications Week, 1990a).