Information Security
Chapter 1
The 1960s
During the 1960s, the Department of Defense's Advanced Research Projects Agency
(ARPA) began examining the feasibility of a redundant, networked communications system
designed to support the military's need to exchange information.
1. During the next decade, the ARPANET grew in popularity and use, and so did its
potential for misuse.
2. In December of 1973, Robert M. Metcalfe indicated that there were fundamental
problems with ARPANET security.
3. Individual remote users’ sites did not have sufficient controls and safeguards to protect
data against unauthorized remote users.
4. There were no safety procedures for dial-up connections to the ARPANET.
5. User identification and authorization to the system were nonexistent.
6. The movement toward security went beyond protecting physical locations and began
with the Rand Report R-609, a study sponsored by the Department of Defense that
attempted to define the multiple controls and mechanisms necessary for the protection
of a multilevel computer system.
7. The scope of computer security grew from physical security to include:
Safety of the data itself
Limiting of random and unauthorized access to that data
Involvement of personnel from multiple levels of the organization
8. The concept of computer security evolved into the more sophisticated system we call
information security.
MULTICS
1. Much of the focus for research on computer security centered on a system called
MULTICS (Multiplexed Information and Computing Service).
2. In mid-1969, not long after the restructuring of the MULTICS project, several of the
key players created a new operating system called UNIX.
3. The MULTICS system had planned security, with multiple security levels and
passwords; the UNIX system did not.
4. In the late 1970s, the microprocessor ushered in a new age of computing capabilities
and, once these microprocessors were networked, new security threats.
The 1990s
1. At the close of the 20th century, as networks of computers became more common, so
too did the need to connect the networks to each other. This gave rise to the Internet, the
first manifestation of a global network of networks.
2. There has been a price for the phenomenal growth of the Internet, however. When
security was considered at all, early Internet deployment treated it as a low priority.
3. As networked computing became the dominant style of computing, the ability to
physically secure the computer was lost, and stored information became more exposed
to security threats.
2000 to Present
1. Today, the Internet has brought millions of unsecured computer networks into
communication with each other.
2. Our ability to secure each computer’s stored information is now influenced by the
security on each computer to which it is connected.
What is Security?
1. Security is “the quality or state of being secure—to be free from danger.” It means to be
protected from adversaries, from those who would do harm, intentionally or otherwise.
2. A successful organization should have the following multiple layers of security in place
for the protection of its operations:
Physical security to protect the physical items, objects, or areas of an organization
from unauthorized access and misuse
Personnel security to protect the individual or group of individuals who are
authorized to access the organization and its operations
Operations security to protect the details of a particular operation or series of
activities
Communications security to protect an organization’s communications media,
technology, and content
Network security to protect networking components, connections, and contents
Information security to protect information assets
3. Information security, therefore, is the protection of information and its critical elements,
including the systems and hardware that use, store, and transmit that information.
To protect information and its related systems from danger, tools such as
policy, awareness, training, education, and technology are necessary.
4. The C.I.A. triangle has been considered the industry standard for computer
security since the development of the mainframe. It is based on three
characteristics that describe the utility of information: confidentiality, integrity, and
availability. The C.I.A. triangle has since expanded into a longer list of critical
characteristics of information.
The McCumber Cube
1. The McCumber Cube security model has three dimensions: the goals of security
(confidentiality, integrity, availability), the states of information (storage, processing,
transmission), and the categories of safeguards (policy, education, technology). If you
extrapolate the three dimensions of each axis, you end up with a 3 × 3 × 3 cube with
27 cells representing areas that must be addressed to secure the information systems of
today. Your primary responsibility is to make sure that each of the 27 cells is properly
addressed during the security process.
Investigation
1. The first phase, investigation, is the most important. What is the problem the system is
being developed to solve? This phase begins with an examination of the event or plan
that initiates the process.
2. The objectives, constraints, and scope of the project are specified. A preliminary
cost/benefit analysis is developed to evaluate the perceived benefits and the appropriate
levels of cost an organization is willing to expend to obtain those benefits.
3. Note that a feasibility analysis is performed to assess the economic, technical, and
behavioral feasibilities of the process and to ensure that implementation is worth the
organization’s time and effort.
Analysis
1. The analysis phase begins with the information learned during the investigation phase.
This phase consists primarily of assessments of the organization, the status of current
systems, and the capability to support the proposed systems.
2. Analysts begin to determine what the new system is expected to do and how it will
interact with existing systems. The phase ends with the documentation of the findings
and a feasibility analysis update.
Logical Design
1. In the logical design phase, the information gained from the analysis phase is used to
begin creating a solution system for a business problem.
2. The next step is selecting applications capable of providing needed services based on
the business need. Based on the applications needed, data support and structures capable
of providing the needed inputs are selected.
3. Finally, based on all of the above, specific technologies are selected to implement the
physical solution. In the end, another feasibility analysis is performed.
Physical Design
1. During the physical design phase, specific technologies are selected to support the
alternatives identified and evaluated in the logical design.
2. The selected components are evaluated based on a make-or-buy decision (develop in-
house or purchase from a vendor).
3. Final designs integrate various components and technologies.
4. After yet another feasibility analysis, the entire solution is presented to the end-user
representatives for approval.
Implementation
1. In the implementation phase, any needed software is created. Components are ordered,
received, and tested.
2. Afterwards, users are trained and supporting documentation is created. Again, a
feasibility analysis is prepared, and the users are presented with the system for a
performance review and acceptance test.
Maintenance and Change
1. The maintenance and change phase is the longest and most expensive phase of the
process. It consists of the tasks necessary to support and modify the system for the
remainder of its useful life cycle.
2. Even though formal development may conclude during this phase, the life cycle of the
project continues until it is determined that the process should begin again from the
investigation phase. When the current system can no longer support the changed
mission of the organization, the project is terminated and a new project is implemented.
1. Each of the phases of the SDLC should include consideration of the security of the
system being assembled as well as the information it uses. Such consideration means
that each implementation of a system is secure and does not risk compromising the
confidentiality, integrity, and availability of the organization’s information assets.
2. NIST recommends that organizations incorporate the associated IT security steps of
this general SDLC into their own development processes (see textbook pages 23-25). It
is imperative that information security be designed into a system from its inception,
rather than added during or after the implementation phase.
3. Organizations are moving toward more security-focused development approaches,
seeking to improve not only the functionality of the systems they have in place, but the
confidence of the consumer in their product.
Investigation
1. The investigation phase of the SecSDLC begins with a directive from upper management,
dictating the process, outcomes, and goals of the project, as well as the constraints
placed on the activity. Frequently, this phase begins with an enterprise information
security policy (EISP) that outlines the implementation of security.
2. The teams of responsible managers, employees, and contractors are organized;
problems are analyzed; and the scope is defined, including goals, objectives, and
constraints not covered in the program policy.
3. An organizational feasibility analysis is performed to determine whether the
organization has the resources and commitment necessary to conduct a successful
security analysis and design.
Analysis
1. In the analysis phase, the documents from the investigation phase are studied. The
development team conducts a preliminary analysis of existing security policies or
programs, along with documented current threats and associated controls.
2. This phase also includes an analysis of relevant legal issues that could impact the design
of the security solution.
3. The risk management task—identifying, assessing, and evaluating the levels of risk
facing the organization—also begins in this stage.
Logical Design
1. The logical design phase creates and develops the blueprints for security and examines
and implements key policies that influence later decisions.
2. At this stage, critical planning is developed for incident response actions to be taken in
the event of partial or catastrophic loss.
3. A feasibility analysis determines whether or not the project should continue or be
outsourced.
Physical Design
1. In the physical design phase, the security technology needed to support the blueprint
outlined in the logical design is evaluated, alternative solutions are generated, and a
final design is agreed upon.
2. The security blueprint may be revisited to keep it synchronized with the changes needed
when the physical design is completed.
3. The criteria for determining successful solutions are also prepared during this phase.
4. Included at this time are the designs for physical security measures to support the
proposed technological solutions.
5. At the end of this phase, a feasibility study should determine the readiness of the
organization for the proposed project, and then the champion and users are presented
with the design. At this time, all parties involved have a chance to approve the project
before implementation begins.
Maintenance and Change
1. The maintenance and change phase, though last, is perhaps the most important, given
the high level of ingenuity in today’s threats.
2. The reparation and restoration of information is a constant duel with an often unseen
adversary.
3. As new threats emerge and old threats evolve, the information security profile of an
organization requires constant adaptation to prevent threats from successfully
penetrating sensitive data.
Senior Management
1. The Chief Information Officer is the senior technology officer, although other titles
such as vice president of information, VP of information technology, and VP of systems
may also be used. The CIO is primarily responsible for advising the chief executive
officer, president, or company owner on the strategic planning that affects the
management of information in the organization.
2. The Chief Information Security Officer is the individual primarily responsible for the
assessment, management, and implementation of securing the information in the
organization. The CISO may also be referred to as the manager for security, the security
administrator, or a similar title.
Data Responsibilities
Communities of Interest
1. Each organization develops and maintains its own unique culture and values. Within
each organizational culture, there are communities of interest. As defined here, a
community of interest is a group of individuals who are united by similar interests or
values within an organization and who share a common goal of helping the organization
to meet its objectives.
2. There can be many different communities of interest in an organization. The three that
are most often encountered, and which have roles and responsibilities in information
security, are listed here. In theory, each role should complement the others, but this is
often not the case.
1. Technology managers often focus on the cost of operating and maintaining technology,
and not necessarily on security.
2. The goals of the IT community and the information security community are not always
in complete alignment, and depending on the organizational structure, this may cause
conflict.
1. Both IT and information security professionals must remind themselves that they are
there to serve and support the broader business community.
2. A secure system is not useful to an organization if the business goals of the organization
cannot be furthered.
Security as Art
1. There are no hard and fast rules regulating the installation of various security
mechanisms, nor are there many universally accepted complete solutions.
2. While there are many manuals to support individual systems, once these systems are
interconnected, there is no magic user’s manual for the security of the entire system.
This is especially true with the complex levels of interaction between users, policy, and
technology controls.
Security as Science
Key Terms
Access: a subject or object’s ability to use, manipulate, modify, or affect another subject
or object.
Accuracy: when information is free from mistakes or errors and it has the value that the
end user expects.
Asset: the organizational resource that is being protected; can be logical, such as a Web
site, information, or data; or physical, such as a person, computer system, or other
tangible object.
Attack: an intentional or unintentional act that can cause damage to or otherwise
compromise information and/or the systems that support it; can be active or passive,
intentional or unintentional, and direct or indirect.
Authenticity of information: the quality or state of being genuine or original, rather
than a reproduction or fabrication.
Availability: enables authorized users—persons or computer systems—to access
information without interference or obstruction and to receive it in the required format.
Bottom-up approach: a grassroots effort in which systems administrators attempt to
improve the security of their systems.
C.I.A. triangle: the industry standard for computer security since the development of
the mainframe.
Champion: a senior executive who promotes the project and ensures its support, both
financially and administratively, at the highest levels of the organization.
Chief information officer (CIO): the senior technology officer.
Chief information security officer (CISO): has primary responsibility for the
assessment, management, and implementation of information security in the
organization.
Communications security: protects communications media, technology, and content.
Community of interest: a group of individuals who are united by similar interests or
values within an organization and who share a common goal of helping the organization
to meet its objectives.
Computer security: the need to secure physical locations, hardware, and software from
threats.
Confidentiality: when information is protected from disclosure or exposure to
unauthorized individuals or systems.
Control, safeguard, or countermeasure: security mechanisms, policies, or procedures
that can successfully counter attacks, reduce risk, resolve vulnerabilities, and otherwise
improve the security within an organization.
Data custodians: working directly with data owners, data custodians are responsible
for the storage, maintenance, and protection of the information.
Data owners: those responsible for the security and use of a particular set of
information.
Data users: end users who work with the information to perform their assigned roles
supporting the mission of the organization.
E-mail spoofing: the act of sending an e-mail message with a modified sender field,
making the message appear to originate from someone other than the actual sender.
End users: those whom the new system will most directly affect.
Enterprise information security policy (EISP): outlines the implementation of a
security program within the organization.
Exploit: a technique used to compromise a system; can be a verb or a noun; attackers
may make use of existing software tools or custom-made software components.
Exposure: a condition or state of being exposed; in information security, exposure
exists when a vulnerability known to an attacker is present.
File hashing: a method of assuring information integrity in which a file is read by a
special algorithm that uses the value of the bits in the file to compute a single large
number, called a hash value.
Hash value: a single large number used in file hashing.
Information security project team: should consist of a number of individuals who are
experienced in one or multiple facets of the required technical and nontechnical areas.
Information security: to protect the confidentiality, integrity and availability of
information assets, whether in storage, processing or transmission.
Integrity: when information is whole, complete, and uncorrupted.
Loss: a single instance of an information asset suffering damage or unintended or
unauthorized modification or disclosure. When an organization’s information is stolen,
it has suffered a loss.
McCumber Cube: provides a graphical representation of the architectural approach
widely used in computer and information security.
Methodology: a formal approach to solving a problem by means of a structured
sequence of procedures.
Network security: to protect networking components, connections, and contents.
Operations security: to protect the details of a particular operation or series of
activities.
Organizational culture: unique culture and values of an organization.
Personnel security: to protect the individual or group of individuals who are authorized
to access the organization and its operations.
Phishing: when an attacker attempts to obtain personal or financial information using
fraudulent means.
Physical security: to protect physical items, objects, or areas from unauthorized access
and misuse.
Possession of information: the quality or state of ownership or control.
Protection profile or security posture: the entire set of controls and safeguards,
including policy, education, training and awareness, and technology, that the
organization implements (or fails to implement) to protect the asset.
Risk assessment specialists: people who understand financial risk assessment
techniques, the value of organizational assets, and the security methods to be used.
Risk management: the process of identifying, assessing, and evaluating the levels of
risk facing the organization, specifically the threats to the organization’s security and to
the information stored and processed by the organization.
Risk: the probability that something unwanted will happen; organizations must
minimize risk to match their risk appetite—the quantity and nature of risk the
organization is willing to accept.
Salami theft: occurs when an employee steals a few pieces of information at a time,
knowing that taking more would be noticed—but eventually the employee gets
something complete or useable.
Security policy developers: people who understand the organizational culture, existing
policies, and requirements for developing and implementing successful policies.
Security professionals: dedicated, trained, and well-educated specialists in all aspects
of information security from both a technical and nontechnical standpoint.
Subjects and objects: a computer can be either the subject of an attack (an agent
entity used to conduct the attack), the object of an attack (the target entity), or both.
Systems administrators: people with the primary responsibility for administering the
systems that house the information used by the organization.
Systems development life cycle (SDLC): a methodology for the design and
implementation of an information system.
Team leader: a project manager, who may be a departmental line manager or staff unit
manager, who understands project management, personnel management, and
information security technical requirements.
Threat agent: the specific instance or a component of a threat. For example, all hackers
in the world present a collective threat, while Kevin Mitnick, who was convicted for
hacking into phone systems, is a specific threat agent. Likewise, a lightning strike,
hailstorm, or tornado is a threat agent that is part of the threat of severe storms.
Threat: a category of objects, persons, or other entities that presents a danger to an
asset; always present and can be purposeful or undirected.
Top-down approach: effort in which the security project is initiated by upper-level
managers who issue policy, procedures and processes, dictate the goals and expected
outcomes, and determine accountability for each required action—has a higher
probability of success.
Utility of information: the quality or state of having value for some purpose or end.
Information has value when it can serve a purpose.
Vulnerability: a weakness or fault in a system or protection mechanism that opens it to
attack or damage. Examples of vulnerabilities are a flaw in a software package, an
unprotected system port, and an unlocked door. Some well-known vulnerabilities have
been examined, documented, and published; others remain latent (or undiscovered).
Waterfall model: each phase begins with the results and information gained from the
previous phase.
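The file hashing and hash value entries above can be made concrete with a short sketch. The function name, chunk size, and choice of SHA-256 are illustrative assumptions, not details from the text:

```python
import hashlib

def file_hash(path: str) -> str:
    """Read a file and reduce the value of its bits to a single large
    number (the hash value); any one-bit change alters the result."""
    h = hashlib.sha256()  # SHA-256 chosen for illustration
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # read in 8 KB chunks
            h.update(chunk)
    return h.hexdigest()

# Integrity check: record the hash while the file is known-good, then
# recompute it later; a mismatch means the file was modified.
# baseline = file_hash("records.dat")   # hypothetical file name
# ...
# assert file_hash("records.dat") == baseline
```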
Chapter 2
1. Today’s organizations are under immense pressure to create and operate integrated,
efficient, and capable applications. The modern organization needs to create an
environment that safeguards applications using the organization’s IT systems,
particularly those applications that serve as important elements of the organization’s
infrastructure.
2. Once the infrastructure is in place, management must continue to oversee it and not
abdicate its responsibility to the IT department.
1. Many organizations realize that one of their most valuable assets is their data. Without
data, an organization loses its record of transactions and/or its ability to deliver value to
its customers.
2. Protecting data in motion and data at rest are both critical aspects of information
security. An effective information security program is essential to the protection of the
integrity and value of the organization’s data.
Threats
1. To make sound decisions about information security as well as to create and enforce
policies, management must be informed of the various kinds of threats facing the
organization and its applications, data, and information systems.
2. A threat is an object, person, or other entity that represents a constant danger to an asset.
To better understand the numerous threats facing the organization, a categorization
scheme has been developed, allowing us to group threats by their respective activities.
By examining each threat category in turn, management can most effectively protect its
information through policy, education and training, and technology controls.
3. The 2009 Computer Security Institute/Federal Bureau of Investigation (CSI/FBI)
Computer Crime and Security Survey found that:
64% of organizations responding to the survey suffered malware infections
14% of respondents indicated system penetration by an outsider
Deviations in Quality of Service
1. This category represents situations in which a product or service is not delivered to
the organization as expected.
2. The organization’s information system depends on the successful operation of many
interdependent support systems, including power grids, telecom networks, parts
suppliers, service vendors, and even the janitorial staff and garbage haulers.
3. Internet service, communications, and power irregularities are three sets of service
issues that dramatically affect the availability of information and systems.
4. Internet service issues: for organizations that rely heavily on the Internet and the Web to
support continued operations, Internet service provider failures can considerably
undermine the availability of information. Many organizations have sales staff and
telecommuters working at remote locations. When an organization places its Web
servers in the care of a Web hosting provider, that provider assumes responsibility for
all Internet services, as well as the hardware and operating system software used to
operate the Web site.
5. Communications and other service provider issues: other utility services can impact
organizations as well. Among these are telephone, water, wastewater, trash pickup,
cable television, natural or propane gas, and custodial services. The loss of these
services can impair the ability of an organization to function properly.
6. Power irregularities: irregularities from power utilities are common and can lead to
fluctuations, such as power excesses, power shortages, and power losses. In the U.S.,
we are “fed” 120-volt, 60-cycle power usually through 15 and 20 amp circuits.
7. Voltage levels can spike (momentary increase), surge (prolonged increase), sag
(momentary decrease), brownout (prolonged drop in voltage), fault (momentary
complete loss of power) or blackout (a more lengthy loss of power).
8. Because sensitive electronic equipment, especially networking equipment, computers,
and computer-based systems, is susceptible to these fluctuations, controls should be
applied to manage power quality.
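The six terms in item 7 differ along two axes: the direction of the voltage deviation and its duration. The toy classifier below illustrates the distinction; the 0.1-second boundary between "momentary" and "prolonged" is an assumption for illustration, not a figure from the text:

```python
def classify_power_event(voltage_ratio: float, duration_s: float) -> str:
    """Map a power irregularity to the terms defined above.

    voltage_ratio: measured voltage / nominal voltage (1.0 is normal);
    the 0.1 s momentary/prolonged cutoff is an illustrative assumption.
    """
    momentary = duration_s < 0.1
    if voltage_ratio == 0.0:                      # complete loss of power
        return "fault" if momentary else "blackout"
    if voltage_ratio > 1.0:                       # increase in voltage
        return "spike" if momentary else "surge"
    if voltage_ratio < 1.0:                       # drop in voltage
        return "sag" if momentary else "brownout"
    return "normal"

print(classify_power_event(1.4, 0.01))   # momentary increase: spike
print(classify_power_event(0.85, 60.0))  # prolonged drop: brownout
```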
Espionage or Trespass
1. This threat represents a well-known and broad category of electronic and human
activities that breach the confidentiality of information.
2. When an unauthorized individual gains access to the information an organization is
trying to protect, that act is categorized as a deliberate act of espionage or trespass.
3. When information gatherers employ techniques that cross the threshold of what is legal
and/or ethical, they enter the world of industrial espionage.
4. Instances of shoulder surfing occur at computer terminals, desks, ATMs, public
phones, or other places where a person is accessing confidential information.
5. The threat of trespass can lead to unauthorized real or virtual actions that enable
information gatherers to enter premises or systems they have not been authorized to
enter.
6. Controls are sometimes implemented to mark the boundaries of an organization’s
virtual territory. These boundaries give notice to trespassers that they are encroaching
on the organization’s cyberspace.
7. The classic perpetrator of deliberate acts of espionage or trespass is the hacker. In the
gritty world of reality, a hacker uses skill, guile, or fraud to attempt to bypass the
controls placed around information that is the property of someone else. The hacker
frequently spends long hours examining the types and structures of the targeted
systems.
8. There are generally two skill levels among hackers. The first is the expert hacker, who
develops software scripts and program exploits used by the second category, the novice,
or unskilled hacker.
9. The expert hacker is usually a master of several programming languages, networking
protocols, and operating systems and also exhibits a mastery of the technical
environment of the chosen targeted system.
10. Expert hackers often become bored with directly attacking systems and turn to
writing software. The software they write consists of automated exploits that allow
novice hackers to become script kiddies: hackers of limited skill who use expertly
written software to exploit a system but who do not fully understand or appreciate the
systems they hack.
11. As a result of preparation and continued vigilance, attacks conducted by scripts are
usually predictable and can be adequately defended against.
12. There are other terms for system rule breakers:
The term cracker is now commonly associated with an individual who “cracks”
or removes software protection that is designed to prevent unauthorized
duplication.
A phreaker hacks the public telephone network to make free calls, disrupt
services, and generally wreak havoc.
Forces of Nature
1. Forces of nature, force majeure, or acts of God pose some of the most dangerous
threats, because they are unexpected and can occur with very little warning.
2. These threats can disrupt not only the lives of individuals, but also the storage,
transmission, and use of information. They include fire, flood, earthquake, and
lightning, as well as volcanic eruption and insect infestation. Since it is not possible to
avoid many of these threats, management must implement controls to limit damage and
also prepare contingency plans for continued operations.
Human Error or Failure
1. This category includes the possibility of acts performed without intent or malicious
purpose by an individual who is an employee of an organization.
2. Inexperience, improper training, making incorrect assumptions, and other circumstances
can cause problems.
3. Employees constitute one of the greatest threats to information security, as they are the
individuals closest to the organizational data. Employee mistakes can easily lead to the
following: revelation of classified data, entry of erroneous data, accidental deletion or
modification of data, storage of data in unprotected areas, and failure to protect
information.
4. Many threats can be prevented with controls, ranging from simple procedures, such as
requiring the user to type a critical command twice, to more complex procedures, such
as the verification of commands by a second party.
Information Extortion
1. Information extortion occurs when an attacker or trusted insider steals information
from a computer system and demands compensation for its return or for an agreement
not to disclose it.
2. Security safeguards and information asset protection controls that are missing,
misconfigured, antiquated, or poorly designed or managed make an organization more
likely to suffer losses when other threats lead to attacks.
Sabotage or Vandalism
1. Equally popular today is the assault on the electronic face of an organization—its Web
site. This category of threat involves the deliberate sabotage of a computer system or
business, or acts of vandalism to either destroy an asset or damage the image of an
organization.
2. These threats can range from petty vandalism by employees to organized sabotage
against an organization.
3. Organizations frequently rely on image to support the generation of revenue, and
vandalism to a Web site can erode consumer confidence, thus reducing the
organization’s sales and net worth. Compared to Web site defacement, vandalism
within a network is more malicious in intent and less public.
4. Security experts are noticing a rise in another form of online vandalism, hacktivist or
cyberactivist operations. A more extreme version is referred to as cyberterrorism.
Theft
1. Theft is defined as the illegal taking of another’s property. Within an organization, that
property can be physical, electronic or intellectual.
2. Value of information suffers when it is copied and taken away without the owner’s
knowledge.
3. Physical theft can be controlled quite easily. Many measures can be taken, including
locking doors, training security personnel, and installing alarm systems.
4. Electronic theft, however, is a more complex problem to manage and control; the
organization may not even know that a theft has occurred.
Technical Software Failures or Errors
1. This category involves threats that come from purchasing software with unknown,
hidden faults. Large quantities of computer code are written, debugged, published, and
sold before all of their bugs are detected and resolved.
2. Combinations of certain software and hardware can reveal new bugs. Sometimes these
items aren’t errors, but rather are purposeful shortcuts left by programmers for benign
or malign reasons.
Technological Obsolescence
1. Antiquated or outdated infrastructure can lead to unreliable and untrustworthy systems;
when technology becomes obsolete, there is a risk of losing data integrity to attacks.
Attacks
Malicious Code
1. The malicious code attack includes the execution of viruses, worms, Trojan horses, and
active Web scripts with the intent to destroy or steal information.
2. The polymorphic, or multivector, worm is a state-of-the-art attack system.
3. These attack programs use up to six known attack vectors to exploit a variety of
vulnerabilities in commonly found information system devices.
4. Attack replication vectors:
IP scan and attack: The infected system scans a random or local range of IP
addresses and targets any of several vulnerabilities known to hackers or left over
from previous exploits.
Web browsing: If the infected system has write access to any Web page, it makes
all Web content files infectious so that users who browse to those pages become
infected.
Virus: Each infected machine infects certain common executable or script files on
all computers to which it can write with virus code that can cause infection.
Unprotected shares: Using vulnerabilities in file systems and the way many
organizations configure them, the infected machine copies the viral component to all
locations it can reach.
Mass mail: By sending e-mail infections to addresses found in the address book, the
infected machine infects many users whose mail-reading programs automatically
run the program and infect other systems.
Simple Network Management Protocol (SNMP): By using the widely known and
common passwords that were employed in early versions of this protocol (which is
used for remote management of network and computer devices), the attacking
program can gain control of the device.
5. Other forms of malware include covert software applications—bots, spyware, and
adware—that are designed to work out of sight of users, or via an apparently innocuous
user action. A bot is “an automated software program that executes certain commands
when it receives a specific input.” Spyware is “any technology that aids in gathering
information about a person or organization without their knowledge.” Adware is “any
software program intended for marketing purposes such as that used to deliver and
display advertising banners or popups to the user’s screen or tracking the user’s online
usage or purchasing activity.”
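The "IP scan and attack" vector above can be sketched as a simple TCP connect scan. This is a minimal illustration in Python, not the worm's actual code; for a self-contained demo it opens its own listener on a loopback port rather than probing real hosts.

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Attempt a TCP connection to each port; collect the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Demo: start a listener on an OS-assigned loopback port, then "scan" it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(scan_ports("127.0.0.1", [port]))  # the listener's port is reported open
listener.close()
```

A real attack program would iterate over a range of IP addresses the same way, then attempt known exploits against each responsive service.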
Hoaxes
1. A more devious attack on computer systems is the transmission of a virus hoax—an
e-mail warning of a supposedly dangerous virus that does not exist.
Back Doors
1. Using a known or previously unknown and newly discovered access mechanism, an
attacker can gain access to a system or network resource with special privileges.
Password Crack
1. Attempting to reverse-calculate a password is often called cracking.
Brute Force
1. Brute force is the application of computing and network resources to try every possible
password combination.
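A minimal sketch of brute forcing a stolen password hash. The SHA-256 hashing, the tiny alphabet, and the target password are illustrative assumptions chosen so the search finishes instantly; real attacks face far larger search spaces.

```python
import hashlib
from itertools import product

def brute_force(target_hash, alphabet="abc", max_len=4):
    """Try every combination of `alphabet` up to `max_len` characters."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None  # exhausted the search space without a match

# Hypothetical stolen hash of the password "cab".
stolen = hashlib.sha256(b"cab").hexdigest()
print(brute_force(stolen))  # → cab
```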
Dictionary
1. A dictionary attack narrows the field by selecting specific accounts to attack and uses a
list of commonly used passwords (the dictionary) instead of random combinations.
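The dictionary variant can be sketched the same way: instead of enumerating every combination, it hashes each entry from a wordlist. The wordlist and the SHA-256 hashing are illustrative assumptions.

```python
import hashlib

# A tiny illustrative wordlist; real attacks use lists with millions of entries.
COMMON_PASSWORDS = ["123456", "password", "qwerty", "letmein"]

def dictionary_attack(target_hash, wordlist=COMMON_PASSWORDS):
    """Hash each wordlist candidate and compare it against the stolen hash."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

stolen = hashlib.sha256(b"letmein").hexdigest()  # hypothetical stolen hash
print(dictionary_attack(stolen))  # → letmein
```

The trade-off is clear from the two sketches: a dictionary attack checks far fewer candidates than brute force, but only succeeds when the password is on the list.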
Spoofing
1. Spoofing is a technique used to gain unauthorized access to computers, wherein the
intruder sends messages with a source IP address that has been forged to indicate that
the messages are coming from a trusted host.
Man-in-the-Middle
1. An attacker sniffs packets from the network, modifies them, and inserts them back into
the network. This is also known as a TCP hijacking attack.
Spam
1. Spam is unsolicited commercial e-mail. While many consider spam a nuisance rather
than an attack, it is emerging as a vector for some attacks.
Mail Bombing
1. Mail bombing is another form of e-mail attack that is also a DoS, in which an attacker
routes large quantities of e-mail to the target.
Sniffers
1. Sniffers are programs or devices that can monitor data traveling over a network. They
can be used both for legitimate network management functions and for stealing
information from a network.
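The core of what a sniffer does is decode raw packet bytes. A minimal sketch of parsing the fixed 20-byte IPv4 header follows; it works on a hand-built sample packet (with hypothetical addresses) rather than a live capture, since capturing from a real interface requires raw sockets and administrator privileges.

```python
import socket
import struct

def parse_ipv4_header(packet: bytes):
    """Decode the fixed 20-byte IPv4 header a sniffer would capture."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,            # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-built sample header with hypothetical source/destination addresses.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.5"),
                     socket.inet_aton("192.168.1.9"))
print(parse_ipv4_header(sample))
```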
Social Engineering
1. Within the context of information security, social engineering is the process of using
social skills to convince people to reveal access credentials or other valuable
information to the attacker.
2. As security consultant Kevin Mitnick has observed, “People are the weakest link. You
can have the best technology; firewalls, intrusion-detection systems, biometric
devices...and somebody can call an unsuspecting employee.”
Phishing
1. Phishing is an attempt to gain personal or financial information from an individual,
usually by posing as a legitimate entity.
2. A variant is spear phishing, a label that applies to any highly targeted phishing attack.
While normal phishing attacks target as many recipients as possible, a spear phisher
sends a message that appears to be from an employer, a colleague, or other legitimate
correspondent, to a small group, or even one specific person.
3. Phishing attacks use three primary techniques, often used in combination with one
another: URL manipulation, Web site forgery, and phone phishing.
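One classic URL-manipulation trick can be demonstrated with the standard library: everything before an `@` in a URL is treated as userinfo, not the host, so a trusted-looking name can be planted there. The domain names below are hypothetical.

```python
from urllib.parse import urlparse

def actual_host(url: str) -> str:
    """Return the host a browser would really connect to for this URL."""
    return urlparse(url).hostname

# The "www.mybank.example" prefix is userinfo; the real host follows the "@".
deceptive = "http://www.mybank.example@attacker.example/login"
print(actual_host(deceptive))  # → attacker.example

honest = "http://www.mybank.example/login"
print(actual_host(honest))     # → www.mybank.example
```

Checking the parsed hostname rather than eyeballing the raw string is one simple defensive habit against this class of manipulation.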
Pharming
1. Pharming is “the redirection of legitimate Web traffic to an illegitimate site for the
purpose of obtaining private information.”
2. Pharming may also exploit the Domain Name System (DNS) by causing it to resolve a
legitimate host name to the attacker’s IP address. This form of pharming is also
known as “DNS cache poisoning.”
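A toy model of DNS cache poisoning: the resolver trusts whatever sits in its cache, so a single forged entry silently rediverts every subsequent lookup. All names and addresses are hypothetical, and this dict-based resolver stands in for a real DNS server.

```python
# Stand-in for authoritative DNS records (hypothetical name and address).
LEGITIMATE_DNS = {"www.mybank.example": "203.0.113.10"}

class CachingResolver:
    def __init__(self):
        self.cache = {}

    def resolve(self, hostname):
        if hostname not in self.cache:          # cache miss: consult the "real" DNS
            self.cache[hostname] = LEGITIMATE_DNS[hostname]
        return self.cache[hostname]             # cache hit: trusted blindly

resolver = CachingResolver()
print(resolver.resolve("www.mybank.example"))   # → 203.0.113.10 (correct)

# The attacker plants a forged record in the cache...
resolver.cache["www.mybank.example"] = "198.51.100.66"
print(resolver.resolve("www.mybank.example"))   # → 198.51.100.66 (victim redirected)
```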
Timing Attack
1. A timing attack explores the contents of a Web browser’s cache and stores a
malicious form of cookie on the client’s system. The cookie can allow the designer to
collect information on how to access password-protected sites.
2. Another attack by the same name involves the interception of cryptographic elements to
determine keys and encryption algorithms.
Secure Software Development
1. Many of the information security issues discussed in this chapter have their root cause
in the software elements of the system.
2. Many organizations recognize the need to include planning for security objectives in the
SDLC they use to create systems, and they have put procedures in place to create
software that can be deployed in a more secure fashion. This approach to software
development is known as software assurance, or SA.
1. Good software development should result in a finished product that meets all of its
design specifications. The information security considerations in those specifications are
now considered a critical factor, though this has not always been the case.
2. The following are commonplace security principles:
Economy of mechanism: Keep the design as simple and small as possible.
Fail-safe defaults: Base access decisions on permission rather than exclusion.
Complete mediation: Every access to every object must be checked for authority.
Open design: The design should not be secret, but rather depend on the possession
of keys or passwords.
Separation of privilege: Where feasible, a protection mechanism should require two
keys to unlock, rather than one.
Least privilege: Every program and every user of the system should operate using
the least set of privileges necessary to complete the job.
Least common mechanism: Minimize mechanisms (or shared variables) common to
more than one user and depended on by all users.
Psychological acceptability: It is essential that the human interface be designed for
ease of use, so that users routinely and automatically apply the protection
mechanisms correctly.
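The "fail-safe defaults" and "complete mediation" principles above can be sketched together: every access is checked, and anything not explicitly permitted is denied. The users, resources, and permission table below are hypothetical.

```python
# Hypothetical permission table: (user, resource) -> allowed actions.
PERMISSIONS = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def is_allowed(user, resource, action):
    """Deny unless an explicit grant covers the action (fail-safe default).

    Calling this check on EVERY access, with no bypass path, is what
    "complete mediation" requires.
    """
    return action in PERMISSIONS.get((user, resource), set())

print(is_allowed("bob", "payroll.db", "write"))     # → True: explicit grant
print(is_allowed("alice", "payroll.db", "write"))   # → False: no grant for write
print(is_allowed("mallory", "payroll.db", "read"))  # → False: unknown user, default deny
```

Basing the decision on the presence of a permission, rather than the absence of a prohibition, is exactly what distinguishes fail-safe defaults from exclusion-based designs.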
Key Terms
Adware: “any software program intended for marketing purposes such as that used to
deliver and display advertising banners or popups to the user’s screen or tracking the
user’s online usage or purchasing activity.”
Attack: an act that takes advantage of a vulnerability to compromise a controlled
system.
Availability disruption: degradation of service caused by incidents such as a backhoe
taking out a fiber-optic link for an ISP.
Back door: a component in a system that allows the attacker to access the system at
will with special privileges.
Blackout: a complete loss of power for a lengthy period of time.
Boot virus: infects the key operating system files located in a computer’s boot sector.
Bot: an abbreviation of robot; “an automated software program that executes certain
commands when it receives a specific input.”
Brownout: a more prolonged drop in voltage.
Brute force attack: the application of computing and network resources to try every
possible password combination.
Buffer overflow: see buffer overrun.
Buffer overrun: an application error that occurs when more data is sent to a program
buffer than it is designed to handle.
Change control: a process developers use to ensure that the working system delivered
to users represents the intent of the developers.
Competitive intelligence: the gathering of information about competitors using legal
and ethical techniques.
Cracking: attempting to reverse-calculate a password.
Cross site scripting (XSS): occurs when an application running on a Web server
gathers data from a user in order to steal it.
Cyberactivist: see hacktivist.
Cyberterrorism: hacks of systems to conduct terrorist activities via network or Internet
pathways.
Dictionary attack: a variation of the brute force attack that narrows the field by
selecting specific target accounts and using a list of commonly used passwords (the
dictionary) instead of random combinations.
Distributed denial-of-service (DDoS): an attack in which a coordinated stream of
requests is launched against a target from many locations at the same time.
Expert hacker (elite hacker): develops software scripts and program exploits used by
those in the second category; usually a master of several programming languages,
networking protocols, and operating systems and also exhibits a mastery of the
technical environment of the chosen targeted system.
Fault: complete loss of power for a moment.
Hackers: “people who use and create computer software [to] gain access to information
illegally.”
Hacktivist: interfere with or disrupt systems to protest the operations, policies, or
actions of an organization or government agency.
Industrial espionage: when information gatherers employ techniques that cross the
threshold of what is legal or ethical.
Integer bugs: fall into four broad classes: overflows, underflows, truncations, and
signedness errors; are usually exploited indirectly—that is, triggering an integer bug
enables an attacker to corrupt other areas of memory, gaining control of an application.
Macro virus: embedded in automatically executing macro code used by word
processors, spreadsheets, and database applications.
Mail bomb: an attacker routes large quantities of e-mail to the target.
Malicious code: software designed and deployed to attack a system.
Malicious software: see malicious code.
Malware: see malicious code.
Man-in-the-middle: an attacker monitors (or sniffs) packets from the network,
modifies them, and inserts them back into the network.
Packet monkeys: script kiddies who use automated exploits to engage in distributed
denial-of-service attacks.
Packet sniffers: a sniffer on a TCP/IP network.
Password attack: see brute force attack.
Pharming: the redirection of legitimate Web traffic (e.g., browser requests) to an
illegitimate site for the purpose of obtaining private information.
Phishing: an attempt to gain personal or financial information from an individual,
usually by posing as a legitimate entity.
Phreaker: hacks the public telephone network to make free calls or disrupt services.
Polymorphic threat: one that over time changes the way it appears to antivirus
software programs, making it undetectable by techniques that look for preconfigured
signatures.
Sag: a momentary low voltage.
Script kiddies: hackers of limited skill who use expertly written software to attack a
system.
Service Level Agreement (SLA): an agreement providing minimum service levels.
Shoulder surfing: used in public or semipublic settings when individuals gather
information they are not authorized to have by looking over another individual’s
shoulder or viewing the information from a distance.
Sniffer: a program or device that can monitor data traveling over a network.
Social engineering: the process of using social skills to convince people to reveal
access credentials or other valuable information to the attacker.
Software piracy: the unlawful use or duplication of software-based intellectual
property.
Spam: unsolicited commercial e-mail.
Spear phishing: a label that applies to any highly targeted phishing attack.
Spike: a momentary increase in voltage.
Spoofing: a technique used to gain unauthorized access to computers, wherein the
intruder sends messages with a source IP address that has been forged to indicate that
the messages are coming from a trusted host.
Spyware: any technology that aids in gathering information about a person or
organization without their knowledge.
Surge: a prolonged increase in voltage.
TCP hijacking attack: see man-in-the-middle.
Theft: the illegal taking of another’s property, which can be physical, electronic, or
intellectual.
Threat agent: damages or steals an organization’s information or physical asset.
Threat: an object, person, or other entity that presents an ongoing danger to an asset.
Timing attack: explores the contents of a Web browser’s cache and stores a malicious
cookie on the client’s system.
Trap door: see back door.
Trespass: unauthorized real or virtual actions that enable information gatherers to enter
premises or systems they have not been authorized to enter.
Trojan horses: software programs that hide their true nature and reveal their designed
behavior only when activated.
Unskilled hacker: see script kiddies.
Virus hoaxes: e-mails warning of supposedly dangerous viruses that don’t exist.
Virus: consists of segments of code that perform malicious actions.
Vulnerability: an identified weakness in a controlled system, where controls are not
present or are no longer effective.
Worm: a malicious program that replicates itself constantly, without requiring another
program environment.
Zombies: machines that are directed remotely (usually by a transmitted command) by
the attacker to participate in the attack.