CT 61
SECTION 6
CERTIFIED
INFORMATION COMMUNICATION
TECHNOLOGISTS
(CICT)
SYSTEM SECURITY
STUDY TEXT
GENERAL OBJECTIVE
This paper is intended to equip the candidate with the knowledge, skills and attitude that will enable
him/her to secure ICT systems in an organization.
LEARNING OUTCOMES
CONTENT
3. Systems security
Classification
People errors
Procedural errors
Software errors
Electromechanical problems
Dirty data
5. Data/software security
Use of the normal security systems
Vulnerability assessment
6. Transmission security
Symmetric encryption
Asymmetric encryption
Duplicate and alternate routing
Firewall types and configuration
Secure socket layer and transport layer security
IPv4 and IPv6 security
Wireless network security
Mobile device security
Wireless protected access
Overview
IT security
Sometimes referred to as computer security, Information Technology security is information
security applied to technology (most often some form of computer system). It is worthwhile to
note that a computer does not necessarily mean a home desktop. A computer is any device with a
processor and some memory. Such devices can range from non-networked standalone devices as
simple as calculators, to networked mobile computing devices such as smartphones and tablet
computers. IT security specialists are almost always found in any major enterprise/establishment
due to the nature and value of the data within larger businesses. They are responsible for keeping
all of the technology within the company secure from malicious cyber-attacks that often attempt
to breach critical private information or gain control of the internal systems.
Information assurance
Information assurance is the act of ensuring that data is not lost when critical issues arise. These
issues include, but are not limited to: natural disasters, computer/server malfunction, physical
theft, or any other instance where data has the potential of being lost. Since most information is
stored on computers in our modern era, information assurance is typically dealt with by IT
security specialists. One of the most common methods of providing information assurance is to
have an off-site backup of the data in case one of the mentioned issues arises.
Threats
Computer system threats come in many different forms. Some of the most common threats today
are software attacks, theft of intellectual property, identity theft, theft of equipment or
information, sabotage, and information extortion. Most people have experienced software attacks
of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples
of software attacks. The theft of intellectual property has also been an extensive issue for many
businesses in the IT field. Intellectual property is intangible property, such as software or
designs, whose ownership usually enjoys some form of legal protection. Theft of software is
probably the most common in IT businesses today.
Identity theft is the attempt to act as someone else usually to obtain that person's personal
information or to take advantage of their access to vital information. Theft of equipment or
information is becoming more prevalent today due to the fact that most devices today are mobile.
Cell phones are prone to theft, and they have become far more desirable targets as the amount of
data they hold has grown.
For the individual, information security has a significant effect on privacy, which is viewed very
differently in different cultures.
The field of information security has grown and evolved significantly in recent years. There are
many ways of gaining entry into the field as a career. It offers many areas for specialization
including securing network(s) and allied infrastructure, securing applications and databases,
security testing, information systems auditing, business continuity planning and digital forensics.
Definitions
The definitions of InfoSec suggested in different sources are summarized below.
2. "The protection of information and information systems from unauthorized access, use,
disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity,
and availability." (CNSS, 2010)
3. "Ensures that only authorized users (confidentiality) have access to accurate and complete
information (integrity) when required (availability)." (ISACA, 2008)
5. "...information security is a risk management discipline, whose job is to manage the cost of
information risk to the business." (McDermott and Geer, 2001)
6. "A well-informed sense of assurance that information risks and controls are in balance."
(Anderson, J., 2003)
7. "Information security is the protection of information and minimizes the risk of exposing
information to unauthorized parties." (Venter and Eloff, 2003)
6. "Threats to information and information systems may be categorized and a corresponding security
goal may be defined for each category of threats. A set of security goals, identified as a result of
a threat analysis, should be revised periodically to ensure its adequacy and conformance with the
evolving environment. The currently relevant set of security goals may include: confidentiality,
integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability
and auditability." (Cherdantseva and Hilton, 2013)
Information security is a stable and growing profession. Information security professionals are
very stable in their employment; more than 80 percent had no change in employer or
employment in the past year, and the number of professionals is projected to grow continuously
by more than 11 percent annually from 2014 to 2019.
Information security has four core objectives:
1. Confidentiality
2. Integrity
3. Availability
4. Non-repudiation. Accomplishing these is a management issue before it's a technical one,
as they are essentially business objectives.
Confidentiality is about controlling access to files either in storage or in transit. This requires
systems configuration or products (a technical job). But the critical definition of the parameters
(who should be able to access what) is a business-related process.
Ensuring integrity is a matter of version control - making sure only the right people can change
documents. It also requires an audit trail of the changes, and a fallback position in case changes
prove detrimental. This meshes with non-repudiation (the change record must include who as
well as what and when).
Availability is the Cinderella of information security as it is rarely discussed. But however safe
from hackers your information is, it is no use if you can't get at it when you need to. So you need
to think about data back-ups, bandwidth and standby facilities, which many people still leave out
of their security planning.
Key concepts
The CIA triad of confidentiality, integrity, and availability is at the heart of information
security. (The members of the classic InfoSec triad — confidentiality, integrity and availability
are interchangeably referred to in the literature as security attributes, properties, security
goals, fundamental aspects, information criteria, critical information characteristics and
basic building blocks.) There is continuous debate about extending this classic trio. Other
principles such as Accountability have sometimes been proposed for addition. It has been
pointed out that issues such as Non-Repudiation do not fit well within the three core concepts.
In 1992, and revised in 2002, the OECD's Guidelines for the
Security of Information Systems and Networks proposed the nine generally accepted principles:
Awareness, Responsibility, Response, Ethics, Democracy, Risk Assessment, Security Design
and Implementation, Security Management, and Reassessment. Building upon those, in 2004
the NIST's Engineering Principles for Information Technology Security proposed 33 principles,
from each of which guidelines and practices are derived.
In 2013, based on a thorough analysis of Information Assurance and Security (IAS) literature,
the IAS-octave was proposed as an extension of the CIA-triad. The IAS-octave includes
Confidentiality, Integrity, Availability, Accountability, Auditability,
Authenticity/Trustworthiness, Non-repudiation and Privacy. The completeness and accuracy of
the IAS-octave was evaluated via a series of interviews with IAS academics and experts. The
IAS-octave is one of the dimensions of a Reference Model of Information Assurance and
Security (RMIAS), which summarizes the IAS knowledge in one all-encompassing model.
Confidentiality
In information security, confidentiality "is the property that information is not made available or
disclosed to unauthorized individuals, entities, or processes".
Integrity
In information security, data integrity means maintaining and assuring the accuracy and
completeness of data over its entire life-cycle. This means that data cannot be modified in an
unauthorized or undetected manner. This is not the same thing as referential integrity in
databases, although it can be viewed as a special case of consistency as understood in the classic
ACID model of transaction processing. Information security systems typically provide message
integrity in addition to data confidentiality.
Availability
For any information system to serve its purpose, the information must be available when it is
needed. This means that the computing systems used to store and process the information, the
security controls used to protect it, and the communication channels used to access it must be
functioning correctly. High availability systems aim to remain available at all times, preventing
service disruptions due to power outages, hardware failures, and system upgrades. Ensuring
availability also involves preventing denial-of-service attacks, such as a flood of incoming
messages to the target system essentially forcing it to shut down.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also
implies that one party of a transaction cannot deny having received a transaction nor can the
other party deny having sent a transaction. Note: This is also regarded as part of Integrity.
It is important to note that while technology such as cryptographic systems can assist in non-
repudiation efforts, the concept is at its core a legal concept transcending the realm of
technology. It is not, for instance, sufficient to show that the message matches a digital signature
signed with the sender's private key, and thus only the sender could have sent the message and
nobody else could have altered it in transit. The alleged sender could in return demonstrate that
the digital signature algorithm is flawed, or that the signing key has been compromised, which
would call the signature's proof of authenticity into question.
Yes, this is a lot of work, and the up-front costs in both capital and resource terms can be
significant, but it’s a damned sight cheaper than the cost of non-compliance, fines, and
particularly, being breached. In the extreme, what if it's the difference between you being in
business or not?
Security mechanisms
We use several layers of proven security technologies and processes to provide you with secure
online access to your accounts and information. These are continuously evaluated and updated
by experts to ensure that we protect you and your information. These include:
Authentication
To protect our users, we provide secure private websites for any business that users conduct with
us. Users log in to these sites using a valid client number or username and a password. Users are
required to create their own passwords, which should be kept strictly confidential so that no one
else can log in to their accounts.
Firewalls
We use a multi-layered infrastructure of firewalls to block unauthorized access by individuals or
networks to our information servers.
Anti-virus Protection
We are continuously updating our anti-virus protection. This ensures we maintain the latest in
anti-virus software to detect and prevent viruses from entering our computer network systems.
Data Integrity
The information you send to one of our secure private websites is automatically verified to
ensure it is not altered during information transfers. Our systems detect if data was added or
deleted after you send information. If any tampering has occurred, the connection is dropped and
the invalid information transfer is not processed.
Threats classification
Threats can be classified according to their type and origin:
Type of threat
o Physical damage
fire
water
pollution
o natural events
climatic
seismic
volcanic
o loss of essential services
electrical power
air conditioning
telecommunication
o compromise of information
eavesdropping,
theft of media
retrieval of discarded materials
o technical failures
equipment
software
capacity saturation
o compromise of functions
error in use
abuse of rights
denial of actions
Origin of threats
o Deliberate: aiming at information asset
spying
illegal processing of data
o accidental
equipment failure
software failure
o environmental
natural event
loss of power supply
o Negligence: Known but neglected factors, compromising the network safety and
sustainability.
In general, threats:
affect an asset,
affect a software system,
are brought by a threat agent
Threat classification
Microsoft has proposed a threat classification called STRIDE, from the initials of the threat
categories: Spoofing of identity, Tampering with data, Repudiation, Information disclosure,
Denial of service, and Elevation of privilege.
Microsoft formerly rated the risk of security threats using five categories in a classification
called DREAD, a risk assessment model that Microsoft now considers obsolete. The categories
were: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.
The DREAD name comes from the initials of the five categories listed.
Associated terms
Threat agents
Individuals within a threat population; practically anyone and anything can, under the right
circumstances, be a threat agent – the well-intentioned, but inept, computer operator who
trashes a daily batch job by typing the wrong command, the regulator performing an audit, or
the squirrel that chews through a data cable.
It’s important to recognize that each of these actions affects different assets differently, which
drives the degree and nature of loss. For example, the potential for productivity loss resulting
from a destroyed or stolen asset depends upon how critical that asset is to the organization’s
productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss.
Similarly, the destruction of a highly sensitive asset that doesn’t play a critical role in
productivity won’t directly result in a significant productivity loss. Yet that same asset, if
disclosed, can result in significant loss of competitive advantage or reputation, and generate legal
costs. The point is that it’s the combination of the asset and type of action against the asset that
determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will
be driven primarily by that agent’s motive (e.g., financial gain, revenge, recreation, etc.) and the
nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a
critical server than they are to steal an easily pawned asset like a laptop.
It is important to separate the concept of the event in which a threat agent gets in contact with the
asset (even virtually, i.e. through the network) from the event in which a threat agent acts against
the asset.
The term Threat Agent is used to indicate an individual or group that can manifest a threat. It is
fundamental to identify who would want to exploit the assets of a company, and how they might
use them against the company.
Non-Target Specific: Non-Target Specific Threat Agents are computer viruses, worms,
Trojans and logic bombs.
Employees: Staff, contractors, operational/maintenance personnel, or security guards who
are annoyed with the company.
Organized Crime and Criminals: Criminals target information that is of value to them,
such as bank accounts, credit cards or intellectual property that can be converted into
money. Criminals will often make use of insiders to help them.
Corporations: Corporations are engaged in offensive information warfare or competitive
intelligence. Partners and competitors come under this category.
Human, Unintentional: Accidents, carelessness.
Human, Intentional: Insider, outsider.
Natural: Flood, fire, lightning, meteor, earthquakes.
Threat communities
The following threat communities are examples of the human malicious threat landscape many
organizations face:
Internal
o Employees
o Contractors (and vendors)
o Partners
External
o Cyber-criminals (professional hackers)
o Spies
o Non-professional hackers
o Activists
o Nation-state intelligence services (e.g., counterparts to the CIA, etc.)
o Malware (virus/worm/etc.) authors
Threat action
Threat analysis
Threat analysis is the analysis of the probability of occurrences and consequences of damaging
actions to a system. It is the basis of risk analysis.
Threat consequence is a security violation that results from a threat action. It includes
disclosure, deception, disruption, and usurpation. The following subentries describe four kinds of
threat consequences, and also list and describe the kinds of threat actions that cause each
consequence. Threat actions that are accidental events are marked by "*".
Types of threats
External
Strategic: R&D
Operational: Systems and processes (HR, payroll)
Financial: Liquidity, cash flow
Hazard: Safety and security; employees and equipment
Compliance: Actual or potential changes in the organization's systems, processes,
suppliers, etc. may create exposure to legal or regulatory non-compliance.
Criminal communities share strategies and tools and can combine forces to launch coordinated
attacks. They even have an underground marketplace where cyber criminals can buy and sell
stolen information and identities.
It's very difficult to crack down on cyber criminals because the Internet makes it easier for
people to do things anonymously and from any location on the globe. Many computers used in
cyber-attacks have actually been hacked and are being controlled by someone far away. Crime
laws are different in every country too, which can make things really complicated when a
criminal launches an attack in another country.
Attack Techniques
Here are a few types of attacks cyber criminals use to commit crimes. You may recognize a few
of them:
By taking measures to secure your own computer and protect your personal information, you are
not only preventing cyber criminals from stealing your identity, but also protecting others by
preventing your computer from becoming part of a botnet.
Social Engineering
Social engineering is a tactic used by cyber criminals that uses lies and manipulation to trick
people into revealing their personal information. Social engineering attacks frequently involve
very convincing fake stories to lure victims into their trap. Common social engineering attacks
include:
Sending victims an email that claims there's a problem with their account and has a link
to a fake website. Entering their account information into the site sends it straight to the
cyber-criminal (phishing)
Trying to convince victims to open email attachments that contain malware by claiming it
is something they might enjoy (like a game) or need (like anti-malware software)
Pretending to be a network or account administrator and asking for the victim's password
to perform maintenance
Claiming that the victim has won a prize but must give their credit card information in
order to receive it
Asking for a victim's password for an Internet service and then using the same password
to access other accounts and services since many people re-use the same password
Promising the victim they will receive millions of dollars, if they will help out the sender
by giving them money or their bank account information
Like other hacking techniques, social engineering is illegal in the United States and other
countries. To protect yourself from social engineering, don't trust any emails or messages you
receive that request any sort of personal information. Most companies will never ask you for
personal information through email. Let a trusted adult know when you receive an email or
message that might be a social engineering attack, and don't believe everything you read.
For a hacker who wants to come clean and turn away from crime, one option is to work for the
people they used to torment, by becoming a security consultant. These hackers-turned-good-guys
are called Grey Hat Hackers.
In the past, they were Black Hat Hackers, who used their computer expertise to break into
systems and steal information illegally, but now they are acting as White Hat Hackers, who
specialize in testing the security of their clients' information systems. For a fee, they will attempt
to hack into a company's network and then present the company with a report detailing the
existing security holes and how those holes can be fixed.
Controls
Selecting and implementing proper controls will initially help an organization bring risk down to
acceptable levels. Control selection should follow, and be based on, the risk assessment. Controls
can vary in nature but fundamentally they are ways of protecting the
confidentiality, integrity or availability of information.
1. Administrative
Administrative controls (also called procedural controls) consist of approved written policies,
procedures, standards and guidelines. Administrative controls form the framework for running
the business and managing people. They inform people on how the business is to be run and how
day-to-day operations are to be conducted. Laws and regulations created by government bodies
are also a type of administrative control because they inform the business. Some industry sectors
have policies, procedures, standards and guidelines that must be followed – the Payment Card
Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is such an
example. Other examples of administrative controls include the corporate security policy,
password policy, hiring policies, and disciplinary policies.
Administrative controls form the basis for the selection and implementation of logical and
physical controls. Logical and physical controls are manifestations of administrative controls.
Administrative controls are of paramount importance.
2. Logical
Logical controls (also called technical controls) use software and data to monitor and control
access to information and computing systems. For example: passwords, network and host-based
firewalls, network intrusion detection systems, access control lists, and data encryption are
logical controls.
An important logical control that is frequently overlooked is the principle of least privilege. The
principle of least privilege requires that an individual, program or system process is not granted
any more access privileges than are necessary to perform the task. A blatant example of the
failure to adhere to the principle of least privilege is logging into Windows as user Administrator
to read email and surf the web. Violations of this principle can also occur when an individual
collects additional access privileges over time. This happens when employees' job duties change,
when they are promoted to a new position, or when they transfer to another department.
3. Physical
Physical controls monitor and control the environment of the work place and computing
facilities. They also monitor and control access to and from such facilities. For example: doors,
locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras,
barricades, fencing, security guards, cable locks, etc. Separating the network and workplace into
functional areas are also physical controls.
Defense in depth
Information security must protect information throughout the life span of the information, from
the initial creation of the information on through to the final disposal of the information. The
information must be protected while in motion and while at rest. During its lifetime, information
may pass through many different information processing systems and through many different
parts of information processing systems. There are many different ways the information and
information systems can be threatened. To fully protect the information during its lifetime, each
component of the information processing system must have its own protection mechanisms. The
building up, layering on and overlapping of security measures is called defense in depth. The
strength of any system is no greater than its weakest link. Using a defense in depth strategy,
should one defensive measure fail there are other defensive measures in place that continue to
provide protection.
An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.
Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.
The Business Model for Information Security enables security professionals to examine security
from a systems perspective, creating an environment where security can be managed holistically,
allowing actual risks to be addressed.
The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, and Confidential.
In the government sector, labels such as: Unclassified, Unofficial, Protected,
Confidential, Secret, Top Secret and their non-English equivalents.
In cross-sectorial formations, the Traffic Light Protocol, which consists of: White, Green,
Amber, and Red.
All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification assigned to a particular information asset should be
reviewed periodically to ensure the classification is still appropriate for the information and to
ensure the security controls required by the classification are in place and are followed correctly.
Access to protected information must be restricted to people who are authorized to access the
information. The computer programs, and in many cases the computers that process the
information, must also be authorized. This requires that mechanisms be in place to control the
access to protected information. The sophistication of the access control mechanisms should be
in parity with the value of the information being protected – the more sensitive or valuable the
information the stronger the control mechanisms need to be. The foundation on which access
control mechanisms are built starts with identification and authentication.
Identification
Identification is an assertion of who someone is or what something is. If a person makes the
statement "Hello, my name is John Doe" they are making a claim of who they are. However,
their claim may or may not be true. Before John Doe can be granted access to protected
information it will be necessary to verify that the person claiming to be John Doe really is John
Doe. Typically the claim is in the form of a username. By entering that username you are
claiming "I am the person the username belongs to".
Authentication
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to
make a withdrawal, he tells the bank teller he is John Doe—a claim of identity. The bank teller
asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the
license to make sure it has John Doe printed on it and compares the photograph on the license
against the person claiming to be John Doe. If the photo and name match the person, then the
teller has authenticated that John Doe is who he claimed to be. Similarly by entering the correct
password, the user is providing evidence that they are the person the username belongs to.
There are three different types of information that can be used for authentication:
Something you know: things such as a PIN, a password, or your mother's maiden name.
Something you have: a driver's license or a magnetic swipe card.
Something you are: biometrics, including palm prints, fingerprints, voice prints and retina
(eye) scans.
Strong authentication requires providing more than one type of authentication information (two-
factor authentication). The username is the most common form of identification on computer
systems today and the password is the most common form of authentication. Usernames and
passwords have served their purpose but in our modern world they are no longer adequate.
Usernames and passwords are slowly being replaced with more sophisticated authentication
mechanisms.
After a person, program or computer has successfully been identified and authenticated then it
must be determined what informational resources they are permitted to access and what actions
they will be allowed to perform (run, view, create, delete, or change). This is called
authorization. Authorization to access information and other computing services begins with
administrative policies and procedures. The policies prescribe what information and computing
services can be accessed, by whom, and under what conditions. The access control mechanisms
are then configured to enforce these policies. Different computing systems are equipped with
different kinds of access control mechanisms—some may even offer a choice of different access
control mechanisms. The access control mechanism a system offers will be based upon one of
three approaches to access control or it may be derived from a combination of the three
approaches.
Cryptography
Information security uses cryptography to transform usable information into a form that renders
it unusable by anyone other than an authorized user; this process is called encryption.
Information that has been encrypted (rendered unusable) can be transformed back into its
original usable form by an authorized user, who possesses the cryptographic key, through the
process of decryption. Cryptography is used in information security to protect information from
unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.
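To make the encrypt/decrypt cycle concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package purely for illustration (an assumption, not something this text prescribes); any authenticated symmetric cipher would serve the same role.

    # Minimal illustration of encryption and decryption with a shared key.
    # Assumes: pip install cryptography (a third-party library).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the cryptographic key an authorized user holds
    cipher = Fernet(key)

    token = cipher.encrypt(b"account no: 12-3456")   # unusable without the key
    print(token)
    print(cipher.decrypt(token))       # decryption restores the original bytes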
Cryptography provides information security with other useful applications as well including
improved authentication methods, message digests, digital signatures, non-repudiation, and
encrypted network communications. Older, less secure applications such as telnet and ftp are
slowly being replaced with more secure applications such as ssh that use encrypted network
communications. Wireless communications can be encrypted using protocols such as
WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn)
are secured using AES for encryption and X.1035 for authentication and key exchange. Software
applications such as GnuPG or PGP can be used to encrypt data files and Email.
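As a small illustration of the message digests mentioned above, the following sketch uses Python's standard hashlib and hmac modules; the message and key are made up for the example.

    # A message digest detects modification; a keyed digest (HMAC) also
    # authenticates the sender. Message and key here are hypothetical.
    import hashlib
    import hmac

    message = b"transfer 100 to account 42"
    print(hashlib.sha256(message).hexdigest())    # changes if even one bit changes

    key = b"shared-secret"
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    # The receiver recomputes the tag and compares in constant time:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))     # True unless message or key differ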
Access controls are security features that control how users and systems communicate and
interact with other systems and resources.
Access is the flow of information between a subject and an object.
A subject is an active entity that requests access to an object or the data within an object. E.g.:
user, program, process etc.
An object is a passive entity that contains the information. E.g.: Computer, Database, File,
Program etc.
Access controls give organizations the ability to control, restrict, monitor, and protect resource
availability, integrity and confidentiality.
Various types of users need different levels of access - Internal users, contractors,
outsiders, partners, etc.
Resources have different classification levels- Confidential, internal use only, private,
public, etc.
Diverse identity data must be kept on different types of users - Credentials, personal data,
contact information, work-related data, digital certificates, cognitive passwords, etc.
The corporate environment is continually changing- Business environment needs,
resource access needs, employee roles, actual employees, etc.
Principle of Least Privilege: States that if nothing has been specifically configured for an
individual or the groups he/she belongs to, the user should not be able to access that
resource, i.e. default no access
Separation of Duties: Separating any conflicting areas of responsibility so as to reduce
opportunities for unauthorized or unintentional modification or misuse of organizational
assets and/or information.
Need to know: It is based on the concept that individuals should be given access only to
the information that they absolutely require in order to perform their job duties
Access decisions can be based on criteria such as the following (a minimal authorization
sketch follows this list):
Roles
Groups
Location
Time
Transaction Type
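The sketch below ties these ideas together: a default-deny policy table keyed on role and resource, with a time-of-day criterion. It is a minimal illustration only; the policy entries, roles and resource names are hypothetical.

    # Default-deny authorization check combining role and time-of-day criteria.
    from datetime import datetime

    POLICY = {
        # (role, resource): permitted hours, 24h clock [start, end)
        ("teller", "payments-db"): (8, 18),
        ("auditor", "audit-logs"): (0, 24),
    }

    def is_allowed(role: str, resource: str, now: datetime) -> bool:
        window = POLICY.get((role, resource))
        if window is None:
            return False            # nothing configured -> default no access
        start, end = window
        return start <= now.hour < end

    print(is_allowed("teller", "payments-db", datetime(2024, 1, 8, 9)))  # True
    print(is_allowed("teller", "audit-logs", datetime(2024, 1, 8, 9)))   # False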
Security Principles
Fundamental Principles (CIA)
Identification
Authentication
Authorization
Non Repudiation
Identification describes a method of ensuring that a subject is the entity it claims to be. E.g.: a
user name or an account number.
Authentication is the method of proving the subject's identity. E.g.: password, passphrase, and
PIN.
Authorization is the method of controlling the access of objects by the subject. E.g.: a user
cannot delete a particular file after logging into the system.
Note: There must be a three-step process of Identification, Authentication and Authorization in
order for a subject to access an object (a minimal sketch follows).
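A minimal sketch of that three-step process in Python follows. The user store, permissions table and password are hypothetical; password hashing uses PBKDF2 from the standard library.

    # Identification -> Authentication -> Authorization, in that order.
    import hashlib
    import hmac
    import os

    _salt = os.urandom(16)
    USERS = {"jdoe": (_salt, hashlib.pbkdf2_hmac("sha256", b"s3cret", _salt, 100_000))}
    PERMS = {"jdoe": {"report.txt": {"view"}}}

    def access(username: str, password: bytes, obj: str, action: str) -> bool:
        record = USERS.get(username)                     # 1. identification
        if record is None:
            return False
        salt, stored = record
        supplied = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
        if not hmac.compare_digest(supplied, stored):    # 2. authentication
            return False
        return action in PERMS.get(username, {}).get(obj, set())  # 3. authorization

    print(access("jdoe", b"s3cret", "report.txt", "view"))    # True
    print(access("jdoe", b"s3cret", "report.txt", "delete"))  # False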
Authentication Factors
Something a person knows- E.g.: passwords, PIN- least expensive, least secure
Something a person has – E.g.: Access Card, key- expensive, secure
Something a person is- E.g.: Biometrics- most expensive, most secure
Note: For strong authentication to be in place, it must include two out of the three
authentication factors - also referred to as two-factor authentication.
Authentication Methods
Biometrics
Passwords
One-Time Passwords
It is a token-based system used for authentication purposes where the password is used
only once
It is used in environments that require a higher level of security than static passwords
provide
Types of token generators
o Synchronous (e.g.: SecurID) - A synchronous token device/generator
synchronizes with the authentication service by one of two means.
Time Based: In this method the token device and the authentication
service must hold the same time within their internal clocks. The time
value on the token device and a secret key are used to create a one-time
password. The server decrypts this password and compares it to the
expected value (a simplified sketch follows this list).
Counter Based: In this method the user will need to initiate the logon
sequence on the computer and push a button on the token device. This
causes the token device and the authentication service to advance to the
next authentication value. This value and a base secret are hashed and
displayed to the user. The user enters this resulting value along with a user
ID to be authenticated.
o Asynchronous: A token device that is using an asynchronous token-generating
method uses a challenge/response scheme to authenticate the user. In this
situation, the authentication server sends the user a challenge, a random value also
called a nonce. The user enters this random value into the token device, which
encrypts it and returns a value that the user uses as a one-time password. The user
sends this value, along with a username, to the authentication server. If the
authentication server can decrypt the value and it is the same challenge value that
was sent earlier, the user is authenticated
Example: SecurID
o It is one of the most widely used time-based tokens, from RSA Security
o It uses time-based synchronous two-factor authentication
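The sketch below shows, under stated assumptions, how a time-based synchronous token and its server can arrive at the same one-time value from a shared secret and a synchronized clock. It follows the general shape of the HOTP/TOTP algorithms (RFC 4226 and RFC 6238); real products such as SecurID use their own proprietary schemes, and the secret here is hypothetical.

    # Simplified one-time password generation in the style of HOTP/TOTP.
    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # Counter-based value: HMAC the counter, then dynamically truncate.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret: bytes, step: int = 30) -> str:
        # Time-based value: the counter is derived from the synchronized clock.
        return hotp(secret, int(time.time()) // step)

    secret = b"secret-provisioned-on-the-token"
    # Token and server compute the same value independently and compare:
    print(totp(secret) == totp(secret))   # True within the same 30-second step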
Cryptographic Keys
Memory Cards
Smart Cards
A smart card holds information, has the capability to process information, and can
provide two-factor authentication (something the user knows and has)
Categories of Smart Cards
o Contact
o Contactless
Hybrid: has two chips and supports both contact and contactless interfaces
Combi: has a microprocessor that can communicate with both a contact as
well as a contactless reader.
More expensive and tamperproof than memory cards
Types of smartcard attacks
o Fault generation: Introducing computational errors into the smart card with the
goal of uncovering the encryption keys that are being used and stored on the card
o Side Channel Attacks: These are non-intrusive attacks and are used to uncover
sensitive information about how a component works without trying to
compromise any type of flaw or weakness. The following are some of the
examples
Differential Power Analysis: Examining the power emissions that are
released during processing
Electromagnetic Analysis: Examining the frequencies that are emitted
o Timing: How long a specific process takes to complete
o Software Attacks: Inputting instructions into the card that will allow the
attacker to extract account information
o Microprobing: Uses needles to remove the outer protective material on the
card's circuits by using ultrasonic vibration, thus making it easy to tap the
card's ROM chip
Access controls can be implemented at various layers of a network and on individual systems.
The access controls can be classified into three layers or categories, each category having
different access control mechanisms that can be carried out manually or automatically.
Administrative Controls
Physical Controls
Technical or Logical Controls
Each category of access control has several components that fall within it, as described below.
1. Administrative
A security policy is a high-level plan that states management’s intent pertaining to how
security should be practiced within an organization, what actions are acceptable, and
what level of risk the company is willing to accept. This policy is derived from the laws,
regulations, and business objectives that shape and restrict the company.
The security policy provides direction for each employee and department regarding how
security should be implemented and followed, and the repercussions for noncompliance.
Procedures, guidelines, and standards provide the details that support and enforce the
company’s security policy.
Personnel Controls
Personnel controls indicate how employees are expected to interact with security
mechanisms, and address noncompliance issues pertaining to these expectations.
Change of Status: These controls indicate what security actions should be taken when an
employee is hired, terminated, suspended, moved into another department, or promoted.
Separation of duties: The separation of duties should be enforced so that no one
individual can carry out a critical task alone that could prove to be detrimental to the
company.
Example: A bank teller who has to get supervisory approval to cash checks over $2000 is an
example of separation of duties. For a security breach to occur, it would require collusion, which
means that more than one person would need to commit fraud, and their efforts would need to be
concerted. The use of separation of duties drastically reduces the probability of security breaches
and fraud.
Supervisory Structure
Security-Awareness Training
This control helps users/employees understand how to properly access resources, why
access controls are in place and the ramifications for not using the access controls
properly.
Testing
This control states that all security controls, mechanisms, and procedures are tested on a
periodic basis to ensure that they properly support the security policy, goals, and
objectives set for them.
The testing can be a drill to test reactions to a physical attack or disruption of the
network, a penetration test of the firewalls and perimeter network to uncover
vulnerabilities, a query to employees to gauge their knowledge, or a review of the
procedures and standards to make sure they still align with business or technology
changes that have been implemented.
Security policy
Monitoring and supervising
Separation of duties
Job rotation
Information classification
Personnel procedures
Investigations
Testing
Security-awareness and training
2. Physical
Physical controls support and work with administrative and technical (logical) controls to supply
the right degree of access control.
Network Segregation
Network segregation can be carried out through physical and logical means. A section of
the network may contain web servers, routers, and switches, and yet another network
portion may have employee workstations.
Each area would have the necessary physical controls to ensure that only the permitted
individuals have access into and out of those sections.
Perimeter Security
The implementation of perimeter security depends upon the company and the security
requirements of that environment.
One environment may require employees to be authorized by a security guard by
showing a security badge that contains picture identification before being allowed to
enter a section. Another environment may require no authentication process and let
anyone and everyone into different sections.
Perimeter security can also encompass closed-circuit TVs that scan the parking lots and
waiting areas, fences surrounding a building, lighting of walkways and parking areas,
motion detectors, sensors, alarms, and the location and visual appearance of a building.
These are examples of perimeter security mechanisms that provide physical access
control by providing protection for individuals, facilities, and the components within
facilities.
Computer Controls
Each computer can have physical controls installed and configured, such as locks on the
cover so that the internal parts cannot be stolen, the removal of the floppy and CD-ROM
drives to prevent copying of confidential information, or implementation of a protection
device that reduces the electrical emissions to thwart attempts to gather information
through airwaves.
Some environments might dictate that only particular individuals can access certain areas
of the facility.
Data Backups
Backing up data is a physical control to ensure that information can still be accessed after
an emergency or a disruption of the network or a system.
Cabling
There are different types of cabling that can be used to carry information throughout a
network.
Some cable types have sheaths that protect the data from being affected by the electrical
interference of other devices that emit electrical signals.
Some types of cable have protection material around each individual wire to ensure that
there is no crosstalk between the different wires.
All cables need to be routed throughout the facility in a manner that keeps them out of
people's way and away from any danger of being cut, burnt, crimped, or eavesdropped
upon.
Control Zone
It is a specific area that surrounds and protects network devices that emit electrical
signals. These electrical signals can travel a certain distance and can be contained by a
specially made material, which is used to construct the control zone.
The control zone is used to resist penetration attempts and disallow sensitive information
to “escape” through the airwaves.
A control zone is used to ensure that confidential information is contained and to hinder
intruders from accessing information through the airwaves.
Companies that have very sensitive information would likely protect that information by
creating control zones around the systems that are processing that information
Fences
Locks
Badge system
Security guard
Biometric system
Mantrap doors
Lighting
Motion detectors
Closed-circuit TVs
Alarms
Backups
Safe storage area for backups
3. Technical
Technical controls (also called logical controls) are the software tools used to restrict subjects'
access to objects. They can be core OS components, add-on security packages, applications,
network hardware devices, protocols, encryption mechanisms, and access control matrices.
System Access
In this type, control of access to resources is based on the sensitivity of data, the clearance
level of users, and users' rights and permissions. Technical controls for system access
include username/password pairs, a Kerberos implementation, biometrics, PKI, RADIUS,
TACACS, or authentication using smart cards.
Network Access
This control defines the access control mechanism to access the different network
resources like the routers, switches, firewalls, bridges etc.
These controls are used to protect information as it passes throughout a network and resides
on computers. They preserve the confidentiality and integrity of data and enforce specific
paths for communication to take place.
Auditing
These controls track activity within a network, on a network device, or on a specific
computer. They help to point out weaknesses of other technical controls and make the
necessary changes.
Network Architecture
This control defines the logical and physical layout of the network, and also the access
control mechanisms between different network segments.
ACLs
Routers
Encryption
Audit logs
IDS
Antivirus software
Firewalls
Smart cards
Dial-up call-back systems
Denial of Service (DoS/DDoS)
Overview
The purpose of DoS attacks is to force the targeted computer(s) to reset, or to consume its
resources, so that it can no longer provide its intended service.
A DoS attack can be perpetrated in a number of ways. There are five basic types of attack:
consumption of computational resources (such as bandwidth, disk space, or processor time);
disruption of configuration information (such as routing information); disruption of state
information (such as unsolicited resetting of TCP sessions); disruption of physical network
components; and obstructing the communication media between the intended users and the
victim.
Unfortunately, there are no effective ways to prevent being the victim of a DoS or DDoS attack,
but there are steps you can take to reduce the likelihood that an attacker will use your computer
to attack other computers:
Buffer Overflows
Overview
A buffer overflow is an anomalous condition where a process attempts to store data beyond the
boundaries of a fixed-length buffer. The result is that the extra data overwrites adjacent memory
locations. The overwritten data may include other buffers, variables and program flow data, and
the overflow may cause a process to crash or produce incorrect results. Overflows can be
triggered by inputs specifically designed to execute malicious code or to make the program
operate in an unintended way. As such, buffer overflows cause many software vulnerabilities
and form the basis of many exploits.
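Python itself is memory-safe, so the sketch below only simulates the mechanic: a fixed-length buffer and an adjacent flag are modeled as one shared bytearray, and an unchecked copy writes past the buffer's end. The memory layout and names are hypothetical.

    # 8-byte name buffer (indexes 0..7) followed by a 1-byte is_admin flag (index 8).
    memory = bytearray(9)

    def unsafe_copy(data: bytes) -> None:
        # No bounds check, like strcpy() into a fixed-length buffer.
        for i, b in enumerate(data):
            memory[i] = b      # writes past index 7 clobber adjacent data

    unsafe_copy(b"AAAAAAAA\x01")          # 9 bytes into an 8-byte buffer
    print("is_admin flag:", memory[8])    # 1 -- the extra byte overwrote the flag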
Phishing
Countermeasures
Be skeptical of e-mails indicating that you need to make changes to your accounts or
warnings indicating that accounts will be terminated without you doing some type of
activity online.
Call the legitimate company to find out if this is a fraudulent message.
Review the address bar to see if the domain name is correct.
When submitting any type of financial information or credential data, an SSL connection
should be set up, which is indicated in the address bar (https://) and a closed-padlock icon
in the browser at the bottom-right corner.
Do not click on an HTML link within an e-mail. Type the URL out manually instead.
Do not accept e-mail in HTML format.
Emanations
Overview
All electronic devices emit electrical signals. These signals can hold important information, and
if an attacker buys the right equipment and positions himself in the right place, he could capture
this information from the airwaves and access data transmissions as if he had a tap directly on
the network wire.
Tempest: Tempest is the name of a program, and now a standardized technology, that
suppresses signal emanations with shielding material. Vendors who manufacture this type
of equipment must be certified to this standard. Devices that are Tempest rated have an
outer metal coating, referred to as a Faraday cage, and other components are also
modified, especially the power supply, to help reduce the amount of electricity used.
This type of protection is usually needed only in military institutions, although other
highly secured environments do utilize this type of safeguard.
o Tempest Technologies: Tempest technology is complex, cumbersome, and
expensive, and therefore only used in highly sensitive areas that really need this
high level of protection. Two alternatives to Tempest exist
White Noise: White noise is a uniform spectrum of random electrical
signals. It is distributed over the full spectrum so that the bandwidth is
constant and an intruder is not able to decipher real information from
random noise or random information.
Control Zone: Some facilities use material in their walls to contain
electrical signals. This prevents intruders from being able to access
information that is emitted via electrical signals from network devices.
This control zone creates a type of security perimeter and is constructed to
protect against unauthorized access to data or compromise of sensitive
information.
Shoulder Surfing
Overview
Shoulder surfing refers to using direct observation techniques, such as looking over someone's
shoulder, to get information. Shoulder surfing is particularly effective in crowded places because
it's relatively easy to observe someone as they:
Enter a PIN at an ATM or point-of-sale terminal
Type a password or unlock code
Fill in personal details on a form
Object Reuse
Overview
Object reuse issues pertain to reassigning to a subject media that previously contained one or
more objects.
The sensitive information that may be left by a process should be securely cleared before
allowing another process the opportunity to access the object. This ensures that information not
intended for this individual or any other subject is not disclosed.
For media that holds confidential information, more extreme methods should be taken to ensure
that the files are actually gone, not just their pointers.
Countermeasures
Data Remanence
Overview
Data remanence is the residual representation of data that has in some way been nominally
erased or removed. This residue may be due to data being left intact by a nominal delete
operation, or through physical properties of the storage medium.
Data remanence may make inadvertent disclosure of sensitive information possible, should the
storage media be released into an uncontrolled environment.
Classes of Countermeasures
o Clearing
Clearing is the removal of sensitive data from storage devices in such a
way that there is assurance, proportional to the sensitivity of the data, that
the data may not be reconstructed using normal system functions. The data
may still be recoverable, but not without unusual effort.
Clearing is typically considered an administrative protection against
accidental disclosure within an organization. For example, before a floppy
disk is re-used within an organization, its contents may be cleared to
prevent their accidental disclosure to the next user.
o Purging
Purging or sanitizing is the removal of sensitive data from a system or
storage device with the intent that the data cannot be reconstructed by any
known technique.
Purging is generally done before releasing media outside of control, such
as before discarding old media, or moving media to a computer with
different security requirements.
Methods to Countermeasure
o Overwriting
A common method used to counter data remanence is to overwrite the
storage medium with new data. This is often called wiping or shredding
a file or disk. Because such methods can often be implemented in software
alone, and may be able to selectively target only part of a medium, it is a
popular, low-cost option for some applications.
The simplest overwrite technique writes the same data everywhere, often
just a pattern of all zeroes. At a minimum, this will prevent the data from
being retrieved simply by reading from the medium again, and thus is
often used for clearing (a minimal sketch follows this list).
o Degaussing
Degaussing is the removal or reduction of a magnetic field. Applied to
magnetic media, degaussing may purge an entire media element quickly
and effectively. A device, called a degausser, designed for the media being
erased, is used.
Degaussing often renders hard disks inoperable, as it erases low-level
formatting which is only done at the factory, during manufacture.
Degaussed floppy disks can generally be reformatted and reused.
o Encryption
Encrypting data before it is stored on the medium may mitigate concerns
about data remanence. If the decryption key is strong and carefully
controlled (i.e., not itself subject to data remanence), it may effectively
make any data on the medium unrecoverable. Even if the key is stored on
the medium, it may prove easier or quicker to overwrite just the key, vs
the entire disk.
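As a minimal sketch of the overwriting method described above, the following Python routine zero-fills a file in place before deleting it. It is illustrative only: a single software pass does not address filesystem journaling, SSD wear-leveling, or backup copies.

    # Single-pass zero overwrite of a file before deletion.
    import os
    import tempfile

    def wipe(path: str) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)       # overwrite the contents in place
            f.flush()
            os.fsync(f.fileno())          # force the zeroes to the device
        os.remove(path)

    # Demonstration on a throwaway file:
    fd, path = tempfile.mkstemp()
    os.write(fd, b"sensitive data")
    os.close(fd)
    wipe(path)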
Backdoor/Trapdoor
Overview
A backdoor is a malicious computer program or particular means that provides the attacker with
unauthorized remote access to a compromised system, exploiting vulnerabilities of installed
software and bypassing normal authentication.
A backdoor works in the background and hides from the user. It is very similar to a virus and
therefore is quite difficult to detect and completely disable.
A backdoor is one of the most dangerous parasite types, as it allows a malicious person to
perform any possible actions on a compromised computer. The attacker can use a backdoor to:
o Spy on a user,
o Manage files,
o Install additional software or dangerous threats,
o Control the entire system including any present applications or hardware devices,
o Shutdown or reboot a computer or
o Attack other hosts.
Often a backdoor has additional harmful capabilities such as keystroke logging, screenshot
capture, file infection, even total system destruction or other payloads. Such a parasite is a
combination of different privacy and security threats, which works on its own and does not need
to be controlled at all.
Most backdoors are autonomous malicious programs that must somehow be installed on a
computer. Some parasites do not require installation, as their parts are already integrated into
particular software running on a remote host. Programmers sometimes leave such backdoors in
their software for diagnostics and troubleshooting purposes. Hackers often discover these
undocumented features and use them to break into the system.
Dictionary Attacks
Overview
Dictionary attacks are launched by programs which are fed a list (a dictionary) of commonly
used words or combinations of characters, and which then compare these values to captured
passwords. Once the right combination of characters is identified, the attacker can use this
password to authenticate herself as a legitimate user.
Sometimes the attacker can even capture the password file using this kind of activity.
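A minimal sketch of an offline dictionary attack against a captured, unsalted password hash follows; the wordlist and the "stolen" hash are fabricated for the example.

    # Offline dictionary attack against a captured, unsalted password hash.
    import hashlib

    captured = hashlib.sha256(b"sunshine").hexdigest()   # stand-in for a stolen hash
    wordlist = ["password", "letmein", "sunshine", "qwerty"]

    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == captured:
            print("password recovered:", word)
            break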
Countermeasures
To properly protect an environment against dictionary and other password attacks, the following
practices should be followed:
Brute Force Attacks
Overview
Brute force is defined as “trying every possible combination until the correct one is identified.”
The most effective way to uncover passwords is through a hybrid attack, which combines a
dictionary attack and a brute force attack
A brute force attack is also known as an exhaustive attack.
Brute force attacks are also used in war dialing, in hopes of finding a modem that can be
exploited to gain unauthorized access.
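Below is a hedged sketch of the hybrid approach described above: each dictionary word is extended with brute-forced digit suffixes. The wordlist and target hash are made up for the example.

    # Hybrid attack: dictionary words extended with brute-forced digit suffixes.
    import hashlib
    import itertools
    import string

    captured = hashlib.sha256(b"dragon42").hexdigest()

    def hybrid(wordlist):
        for word in wordlist:
            for n in range(3):   # suffixes of 0, 1 and 2 digits
                for tail in itertools.product(string.digits, repeat=n):
                    guess = word + "".join(tail)
                    if hashlib.sha256(guess.encode()).hexdigest() == captured:
                        return guess
        return None

    print(hybrid(["password", "dragon", "monkey"]))   # -> dragon42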
Countermeasures
For phone brute force attacks, auditing and monitoring of this type of activity should be in place
to uncover patterns that could indicate a war dialing attack:
Social Engineering
Overview
Social engineering is a collection of techniques used for manipulation of the natural human
tendency to trust in order to obtain information that will allow a hacker to gain unauthorized
access to a valued system and the information that resides on that system.
At work Place
o In the workplace, the hacker can simply walk in the door, like in the movies, and
pretend to be a maintenance worker or consultant who has access to the
organization. Then the intruder struts through the office until he or she finds a few
passwords lying around and emerges from the building with ample information to
exploit the network from home later that night
o Another technique to gain authentication information is to just stand there and
watch an oblivious employee type in his password.
On Phone/Help Desk
o It is the most prevalent type of social engineering attack.
o A hacker will call up and imitate someone in a position of authority or relevance
and gradually pull information out of the user.
o Help desks are particularly prone to this type of attack. Hackers are able to
pretend they are calling from inside the corporation by playing tricks on the PBX
or the company operator, so caller-ID is not always the best defense.
o Help desks are particularly vulnerable because they are in place specifically to
help, a fact that may be exploited by people who are trying to gain illicit
information
Dumpster Diving
o Dumpster diving, also known as trashing, is another popular method of social
engineering. A huge amount of information can be collected from company
dumpsters (trash cans).
o The following items can turn out to be potential security leaks in the trash:
Countermeasures
Having proper security policies in place that address both the physical and psychological
aspects of the attack
Providing proper training to employees and helpdesk personnel
Single Sign-On
Introduction
SSO is a technology that allows a user to enter credentials one time and be able to access all
resources in primary and secondary network domains.
Advantages
Limitations
Every platform, application, and resource needs to accept the same type of credentials, in
the same format, and interpret their meaning in the same way.
Disadvantages
Kerberos
Introduction
Kerberos is an authentication protocol that was designed in the mid-1980s as part of MIT's
Project Athena.
Weaknesses
The KDC can be a single point of failure. If the KDC goes down, no one can access
needed resources. Redundancy is necessary for the KDC.
The KDC must be able to handle the number of requests it receives in a timely manner. It
must be scalable.
Secret keys are temporarily stored on users' workstations, which means it is possible
for an intruder to obtain these cryptographic keys.
Session keys are decrypted and reside on the users’ workstations, either in a cache or in a
key table. Again, an intruder can capture these keys.
Kerberos is vulnerable to password guessing. The KDC does not know if a dictionary
attack is taking place.
Network traffic is not protected by Kerberos if encryption is not enabled.
SESAME
Introduction
SESAME uses digitally signed Privileged Attribute Certificates (PACs) to authenticate subjects to
objects. A PAC contains the subject's identity, access capabilities for the object, access time
period, and lifetime of the PAC.
Security Domain
Introduction
Thin Clients
Introduction
Thin clients are diskless computers that are sometimes called dumb terminals.
The approach is based on client/server (C/S) technology, where a user logs on to a remote
server to use its computing and network resources.
When the user starts the client, it runs a short list of instructions and then points itself to a
server that will actually download the operating system, or interactive operating software,
to the terminal. This enforces a strict type of access control, because the computer cannot
do anything on its own until it authenticates to a centralized server, and then the server
gives the computer its operating system, profile, and functionality.
Thin-client technology provides another type of SSO access for users, because users
authenticate only to the central server or mainframe, which then provides them access to
all authorized and necessary resources.
Access Control Models
Introduction
An access control model is a framework that dictates how subjects access objects.
It uses access control technologies and security mechanisms to enforce the rules and
objectives of the model.
There are three main types of access control models:
o Discretionary,
o Mandatory, and
o Nondiscretionary (also called role-based).
Mandatory Access Control (MAC)
This model is very structured and strict and is based on a security label (also known as a
sensitivity label) attached to all objects.
Subjects are given a security clearance by classifying them (as secret, top secret,
confidential, etc.), and objects are classified similarly.
Role-Based Access Control (RBAC)
RBAC is based on user roles and uses a centrally administered set of controls to
determine how subjects and objects interact.
The RBAC approach simplifies access control administration.
It is the best system for a company that has high employee turnover.
Note: RBAC can generally be used in combination with MAC and DAC systems.
Rule-Based Access Control
Rule-based access control uses specific rules that indicate what can and cannot happen
between a subject and an object.
A subject must meet a set of predefined rules before it can access an object.
It is not necessarily identity-based; it can apply to all users or subjects
irrespective of their identities.
E.g.: routers and firewalls use rules to filter incoming and outgoing packets, as in the
sketch below.
Constrained user interfaces restrict users' access abilities by not allowing them to request
certain functions or information, or to have access to specific system resources.
There are three major types of restricted interfaces:
o Menus and Shells:
An access control matrix is a table of subjects and objects indicating what actions
individual subjects can take upon individual objects.
The access rights that are assigned to individual subjects are called capabilities, and those
assigned to objects are called Access Control Lists (ACLs).
This technique uses a capability table to specify the capabilities of a subject pertaining to
specific objects. A capability can be in the form of a token, ticket, or key.
o Each row of the matrix is a capability list for a given subject, and each column is an ACL for a given object.
o Kerberos uses a capability based system where every user is given a ticket, which
is his capability table.
ACLs are lists of subjects that are authorized to access a specific object, and they define
the level of authorization granted (both at the individual and the group level).
ACLs map values from the access control matrix to the object.
Note: A capability table is bound to a subject, whereas an ACL is bound to an object.
Context-Dependent Access Control
Access decisions are based on the context of a collection of information rather than
on the sensitivity of the data.
Example: A firewall makes context-based access decisions when it collects state
information on a packet before allowing it into the network.
Basic Concepts
Intrusion detection is the process of detecting an unauthorized use of, or attack upon, a computer,
network, or a telecommunication infrastructure.
IDSs are designed to help mitigate the damage that can be caused by hacking into, or breaking
into, sensitive computer and network systems.
Common Components of an IDS
Sensors: collect traffic and user activity data and send it to an analyzer.
Analyzer: detects activity that it is programmed to deem suspicious and sends an alert to
the administrative interface.
Administrative Interface: reports the alert details.
IDS Types
Network-Based IDS: A network-based IDS (NIDS) uses sensors, which are either host
computers with the necessary software installed or dedicated appliances—each with its
network interface card (NIC) in promiscuous mode. The NIC driver captures all traffic
and passes it to an analyzer to look for specific types of patterns.
Host-Based IDS: A host-based IDS (HIDS) can be installed on individual workstations
and/or servers and watch for inappropriate or anomalous activity. HIDSs are usually used
to make sure users do not delete system files, reconfigure important settings, or put the
system at risk in any other way.
IDS Technologies
Signature-Based IDS
These are knowledge-based systems in which knowledge about specific attacks is
accumulated and a model, called a signature, is developed.
The main disadvantage of these systems is that they cannot detect new attacks; new
signatures must continually be written and updated.
Also known as misuse-detection systems.
Attacks
o Land Attacks (packets modified to have the same source and destination IP address)
Anomaly-Based IDS
These are behavioral-based systems, which do not use any predefined signatures but
rather are put in a learning mode to build a profile by continually sampling the
environment's normal activities.
The longer the IDS is kept in learning mode, in most instances, the more accurate a
profile it will build and the better protection it will provide.
Once a profile is built, a second profile is built by the same kind of sampling of all
future traffic, and the two are compared to identify abnormalities.
Also known as profile-based systems.
Advantages
o Can detect new attacks, including zero-day attacks
o Can also detect low-and-slow attacks, in which an attacker tries to stay beneath the
radar by sending a few packets at a time over a long period
Disadvantages
o Developing a correct profile to reduce false positives can be difficult.
o There is a possibility for an attacker to blend his or her activities into the
normal behavior pattern of the network traffic. This can be controlled by ensuring that
no attack activity is underway while the IDS is in learning mode.
o The success of these systems depends on setting a proper threshold in order
to avoid false positives (threshold set too low) or false negatives (threshold set
too high), as illustrated in the sketch below.
Attacks
o Take the IDS offline with a DoS attack, or send the IDS incorrect data in order to
distract network and security staff and keep them busy chasing the wrong packets while
the real attack takes place.
Techniques
o Protocol Anomaly based:
These types of IDS have specific knowledge of each protocol that they
will be monitoring.
The IDS builds a profile (model) of each protocol's normal usage and matches it
against the profile built during actual operation.
Common protocol vulnerabilities
At the data link layer, ARP has no protection against attacks in which
bogus data is inserted into its table.
At the network layer, ICMP can be used in a LOKI attack to move
data from one place to another, even though the protocol was designed
only to send status information. This data can be code that a
backdoor on a compromised system is made to execute.
IP headers can easily be modified for spoofing attacks (one host acting
as another).
Rule Based
Rule-based intrusion detection is commonly associated with the use of an expert system.
An expert system is made up of a knowledge base, inference engine, and rule-based
programming.
o Knowledge is represented as rules, and the data that is to be analyzed is referred
to as facts.
o The knowledge of the system is written in rule-based programming (IF situation
THEN action). These rules are applied to the facts, the data that comes in from a
sensor, or a system that is being monitored.
Example: Consider the Rule-IF a root user creates File1 AND creates File2 SUCH
THAT they are in the same directory THEN there is a call to AdministrativeTool1
TRIGGER send alert. This rule has been defined such that if a root user creates two files
in the same directory and then makes a call to a specific administrative tool, an alert
should be sent.
The more complex the rules, the greater the demands on software and hardware
processing.
Rule-based systems cannot detect new attacks.
Techniques
o State Based IDS
A state transition takes place when a variable’s value changes, which
usually happens continuously within every system.
In a state-based IDS, the initial state is the state prior to the execution of
an attack, and the compromised state is the state after successful
penetration.
The IDS has rules that outline what state transition sequences should
sound an alarm. The activity that takes place between the initial and
compromised state is what the state-based IDS looks for, and it sends an
alert if any of the state-transition sequences match its preconfigured rules.
This type of IDS scans for attack signatures in the context of a stream of
activity instead of just looking at individual packets. It can only identify
known attacks and requires frequent updates of its signatures.
o Model Based IDS
IDS Sensors
Network-based IDSs use sensors for monitoring purposes. A sensor, which works as an
analysis engine, is placed on the network segment the IDS is responsible for monitoring.
The sensor receives raw data from an event generator and compares it to a signature
database, profile, or model, depending upon the type of IDS.
If there is some type of a match, which indicates suspicious activity, the sensor works
with the response module to determine what type of activity needs to take place (alerting
through instant messaging, page, e-mail, or carry out firewall reconfiguration, and so on).
The sensor’s role is to filter received data, discard irrelevant information, and detect
suspicious activity.
A monitoring console can be used to oversee all sensors and supplies the network staff
with an overview of the activities of all the sensors in the network. Difficulty
arises in a switched environment, where traffic is forwarded directly between ports rather
than rebroadcast to all ports. This can be overcome by using spanning ports, which mirror
the traffic from all ports to one monitored port.
Sensor Placement
o Sensors can be placed outside of the firewall to detect attacks
o Inside the firewall (in the perimeter network) to detect actual intrusions.
o At highly sensitive areas, DMZs, and on extranets
Multiple Sensors can be used in high traffic environments to ensure all packets are
investigated. Also if necessary to optimize network bandwidth and speed, different
sensors can be set up to analyze each packet for different signatures. That way, the
analysis load can be broken up over different points.
Intrusion Prevention Systems
A traditional IDS only detects that something bad may be taking place and sends an alert. The
goal of an IPS is to detect this activity and not allow the traffic to reach the target in the
first place.
An IPS is a preventative and proactive technology, whereas an IDS is a detective and after-the-
fact technology.
There has been a long debate over IPSs; the IPS turned out to be an extension of the IDS, and
everything that holds for an IDS also holds for an IPS, apart from the IPS being preventive and
the IDS detective.
Basic Concepts
Accountability is the method of tracking and logging the subject’s actions on the objects.
Auditing is an activity in which users'/subjects' actions on objects are monitored in order to
verify that security policies are enforced; audit data can also be used as an investigation tool.
Advantages of Auditing
Note: A security professional should be able to assess an environment and its security goals,
know what actions should be audited, and know what is to be done with that information after it
is captured, without wasting too much disk space, CPU power, and staff time.
What to Audit?
System-level events
o System performance
o Logon attempts (successful and unsuccessful)
o Logon ID
o Date and time of each logon attempt
o Lockouts of users and terminals
o Use of administration utilities
o Devices used
o Functions performed
o Requests to alter configuration files
Application-level events
o Error messages
o Files opened and closed
o Modifications of files
o Security violations within application
User-level events
o Identification and authentication attempts
o Files, services, and resources used
o Commands initiated
o Security violations
Ethical hacking
An ethical hacker is a computer and networking expert who systematically attempts to penetrate
a computer system or network on behalf of its owners for the purpose of finding security
vulnerabilities that a malicious hacker could potentially exploit.
Ethical hackers use the same methods and techniques to test and bypass a system's defenses as
their less-principled counterparts, but rather than taking advantage of any vulnerabilities found,
they document them and provide actionable advice on how to fix them so the organization can
improve its overall security.
The purpose of ethical hacking is to evaluate the security of a network or system's infrastructure.
It entails finding and attempting to exploit any vulnerabilities to determine whether unauthorized
access or other malicious activities are possible. Vulnerabilities tend to be found in poor or
improper system configuration, known and unknown hardware or software flaws, and
operational weaknesses in processes or technical countermeasures.
Any organization that has a network connected to the Internet or provides an online service
should consider subjecting it to a penetration test. Various standards such as the Payment Card
Industry Data Security Standard require companies to conduct penetration testing from both an
internal and external perspective on an annual basis and after any significant change in the
infrastructure or applications. Many large companies, such as IBM, maintain employee teams of
ethical hackers, while there are plenty of firms that offer ethical hacking as a service. Trustwave
Holdings, Inc., has an Ethical Hacking Lab for attempting to exploit vulnerabilities that may be
present in ATMs, point-of-sale devices and surveillance systems. There are various organizations
that provide standards and certifications for consultants that conduct penetration testing
including:
Ethical hacking is a proactive form of information security and is also known as penetration
testing, intrusion testing and red teaming. An ethical hacker is sometimes called a legal or white
hat hacker and its counterpart a black hat, a term that comes from old Western movies, where the
"good guy" wore a white hat and the "bad guy" wore a black hat. The term "ethical hacker" is
frowned upon by some security professionals who see it as a contradiction in terms and prefer
the name "penetration tester."
SYSTEMS SECURITY
Classification
An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.
Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.
The Business Model for Information Security enables security professionals to examine security
from a systems perspective, creating an environment where security can be managed holistically,
allowing actual risks to be addressed.
The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, and Confidential.
In the government sector, labels such as: Unclassified, Unofficial, Protected,
Confidential, Secret, Top Secret and their non-English equivalents.
In cross-sectorial formations, the Traffic Light Protocol, which consists of: White, Green,
Amber, and Red.
All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification assigned to a particular information asset
should be reviewed periodically to ensure the classification is still appropriate for the
information and to ensure the security controls required by the classification are in place and
are being followed correctly.
People errors
In general, errors and accidents in computer systems may be classified as people errors,
procedural errors, software errors, electromechanical problems, and "dirty data" problems.
Procedural errors
Procedural errors: some spectacular computer failures have occurred because someone didn't
follow procedures. Consider the two-and-a-half-hour shutdown of NASDAQ, the nation's second
largest stock market. NASDAQ is so automated that it likes to call itself "the stock market for
the next 100 years." In July 1994, NASDAQ was shut down by an effort, ironically, to make the
computer system more user-friendly. Technicians were phasing in new software, adding
technical improvements a day at a time. A few days into this process, technicians tried to add
more features to the software, flooding the data storage capability of the computer system; the
resulting shutdown shortened the trading day.
Software errors
Software errors: we are forever hearing about “software glitches” or “software bugs.” A
software bug is an error in a program that causes it to malfunction.
An example of a somewhat small error is the one a school employee in Newark, New Jersey,
made in coding the school system's master scheduling program. When 1,000 students and 90
teachers showed up for the start of school at Central High School, half the students had
incomplete schedules or no schedules at all. Some classrooms had no teachers while others had
four instead of one.
Especially with complex software, there are always bugs, even after the system has been
thoroughly tested and “debugged”. However, there comes a point in the software development
process where debugging must end. That is, the probability of the bugs disrupting the system is
considered to be so low that it is not worth searching further for them.
Electromechanical problems
Electromechanical problems: mechanical systems, such as printers, and electrical systems,
such as circuit boards, don't always work. They may be faultily constructed, get dirty or
overheated, wear out, or become damaged in some other way. Power failures can shut a system
down, and power surges can burn out equipment.
Whatever the reason, whether electromechanical failure or another problem, computer downtime
is expensive. A survey of about 450 information system executives from Fortune 1000
companies found that companies on average suffer nine four-hour computer system failures a
year. Each failure cost the company an average of $3,330,000. Because of these failures,
companies were unable to deliver services and lost productivity through idle time.
Physical security
Physical security is the protection of personnel, hardware, programs, networks, and data from
physical circumstances and events that could cause serious losses or damage to an enterprise,
agency, or institution. This includes protection from fire, natural disasters, burglary, theft,
vandalism, and terrorism
Physical security is often overlooked (and its importance underestimated) in favor of more
technical and dramatic issues such as hacking, viruses, Trojans, and spyware. However, breaches
of physical security can be carried out with little or no technical knowledge on the part of an
attacker. Moreover, accidents and natural disasters are a part of everyday life, and in the long
term, are inevitable.
There are three main components to physical security. First, obstacles can be placed in the way
of potential attackers and sites can be hardened against accidents and environmental disasters.
Such measures can include multiple locks, fencing, walls, fireproof safes, and water sprinklers.
Second, surveillance and notification systems can be put in place, such as lighting, heat sensors,
smoke detectors, intrusion detectors, alarms, and cameras. Third, methods can be implemented to
apprehend attackers (preferably before any damage has been done) and to recover quickly from
accidents, fires, or natural disasters.
Logical security consists of software safeguards for an organization's systems, including user
identification and password access, authentication, access rights, and authority levels. These
measures ensure that only authorized users are able to perform actions or access
information on a network or a workstation. It is a subset of computer security.
User IDs, also known as logins, user names, logons or accounts, are unique personal
identifiers for agents of a computer program or network that is accessible by more than
one agent. These identifiers are based on short strings of alphanumeric characters, and are
either assigned or chosen by the users.
Authentication is the process used by a computer program, computer, or network to
attempt to confirm the identity of a user. Blind credentials (anonymous users) have no
identity, but are allowed to enter the system. The confirmation of identities is essential to
the concept of access control, which gives access to the authorized and excludes the
unauthorized.
Token Authentication
Token Authentication comprises security tokens, which are small devices that authorized users of
computer systems or networks carry to help verify that the person logging in to a computer
or network system is actually authorized. They can also store cryptographic keys and biometric
data. The most popular type of security token (RSA Security's SecurID) displays a number which
changes every minute. Users are authenticated by entering a personal identification number and the
number on the token. The token contains a time of day clock and a unique seed value, and the
number displayed is a cryptographic hash of the seed value and the time of day. The computer
which is being accessed also contains the same algorithm and is able to match the number by
matching the user’s seed and time of day. Clock error is taken into account, and values a few
minutes off are sometimes accepted. Another similar type of token (Cryptogram) can produce a
value each time a button is pressed. Other security tokens can connect directly to the computer
through USB, Smart card or Bluetooth ports, or through special purpose interfaces. Cell phones
and PDA's can also be used as security tokens with proper programming.
Password Authentication
Password Authentication uses secret data to control access to a particular resource. Usually, the
user attempting to access the network, computer or computer program is queried on whether they
know the password or not, and is granted or denied access accordingly. Passwords are either
created by the user or assigned, similar to usernames. However, once assigned a password, the
user usually is given the option to change the password to something of his/her choice.
Depending on the restrictions of the system or network, the user may change his/her password to
any alphanumeric sequence. Usually, limitations to password creation include length restrictions,
a requirement of a number, uppercase letter or special character, or not being able to use the past
four or five changed passwords associated with the username. In addition, the system may force
a user to change his/her password after a given amount of time.
Two-Way Authentication
Two-Way Authentication involves both the user and the system or network convincing each other
that they know the shared password without transmitting this password over any communication
channel. This is done by using the password as the key to transmit a randomly generated value
(a challenge); each side proves to the other that it can correctly process the challenge, and
thereby that it knows the password.
Guest Accounts
Guest accounts, or anonymous logins, are set up so that multiple users can log in to the account
at the same time without a password. Users are sometimes asked to type a username. This
account has very limited access, and is often only allowed to access special public files. Usually,
anonymous accounts have read access rights only for security purposes.
Logical security protects computer software by discouraging unauthorized user access through user
identifications, passwords, authentication, biometrics, and smart cards. Physical security prevents
and discourages attackers from entering a building by installing fences, alarms, cameras, security
guards and dogs, electronic access control, intrusion detection and administration access
controls. The difference between logical security and physical security is logical security protects
access to computer systems and physical security protects the site and everything located within
the site.
DATA/SOFTWARE SECURITY
If security holes are found as a result of vulnerability analysis, a vulnerability disclosure may be
required. The person or organization that discovers the vulnerability, or a responsible industry
body such as the Computer Emergency Readiness Team (CERT), may make the disclosure. If
the vulnerability is not classified as a high level threat, the vendor may be given a certain amount
of time to fix the problem before the vulnerability is disclosed publicly.
The third stage of vulnerability analysis (identifying potential threats) is sometimes performed by
a white hat using ethical hacking techniques. Using this method to assess vulnerabilities, security
experts deliberately probe a network or system to discover its weaknesses. This process provides
guidelines for the development of countermeasures to prevent a genuine attack.
A vulnerability is a system susceptibility or flaw. Many vulnerabilities are documented in the
Common Vulnerabilities and Exposures (CVE) database, and vulnerability management is the
cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities as they
are discovered. An exploitable vulnerability is one for which at least one working attack or
"exploit" exists.
To secure a computer system, it is important to understand the attacks that can be made against
it, and these threats can typically be classified into one of the categories below:
Denial-of-service attack
Denial of service attacks are designed to make a machine or network resource unavailable to its
intended users. Attackers can deny service to individual victims, such as by deliberately entering
a wrong password enough consecutive times to cause the victim account to be locked, or they
may overload the capabilities of a machine or network and block all users at once. While a
network attack from a single IP address can be blocked by adding a new firewall rule, many
forms of Distributed denial of service (DDoS) attacks are possible, where the attack comes from
a large number of points, and defending against them is much more difficult. Such attacks can originate from
the zombie computers of a botnet, but a range of other techniques are possible including
reflection and amplification attacks, where innocent systems are fooled into sending traffic to the
victim.
Direct-access attacks
An unauthorized user gaining physical access to a computer is often able to directly download
data from it. They may also compromise security by making operating system modifications,
installing software worms, keyloggers, or covert listening devices. Even when the system is
protected by standard security measures, these may be bypassed by booting another
operating system or tool from a CD-ROM or other bootable media. Disk encryption and Trusted
Platform Module are designed to prevent these attacks.
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private conversation or communication,
typically between hosts on a network.
Spoofing
Spoofing of user identity describes a situation in which one person or program successfully
masquerades as another by falsifying data.
Tampering
Tampering describes a malicious modification of products. So-called "Evil Maid" attacks and
security services planting of surveillance capability into routers are examples.
Privilege escalation
Privilege escalation describes a situation where an attacker with some level of restricted access is
able to, without authorization, elevate their privileges or access level. So for example a standard
computer user may be able to fool the system into giving them access to restricted data; or even
to "become root" and have full unrestricted access to a system.
Phishing
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit
card details. Phishing is typically carried out by email spoofing or instant messaging, and it often
directs users to enter details at a fake website whose look and feel are almost identical to the
legitimate one.
Clickjacking
Clickjacking, also known as "UI redress attack or User Interface redress attack", is a malicious
technique in which an attacker tricks a user into clicking on a button or link on another webpage
while the user intended to click on the top level page. This is done using multiple transparent or
opaque layers. The attacker is basically "hijacking" the clicks meant for the top level page and
routing them to some other irrelevant page, most likely owned by someone else. A similar
technique can be used to hijack keystrokes. By carefully crafting a combination of style sheets,
iframes, buttons, and text boxes, an attacker can lead a user into believing that they are typing the password
or other information on some authentic webpage while it is being channeled into an invisible
frame controlled by the attacker.
Social engineering
Social engineering aims to convince a user to disclose secrets such as passwords, card numbers,
etc. by, for example, impersonating a bank, a contractor, or a customer.
A state of computer "security" is the conceptual ideal, attained by the use of the three processes:
threat prevention, detection, and response. These processes are based on various policies and
system components, which include the following:
User account access controls and cryptography can protect system files and data,
respectively.
Firewalls are by far the most common prevention systems from a network security
perspective as they can (if properly configured) shield access to internal network
services, and block certain kinds of attacks through packet filtering. Firewalls can be either
hardware- or software-based.
Intrusion Detection System (IDS) products are designed to detect network attacks in-
progress and assist in post-attack forensics, while audit trails and logs serve a similar
function for individual systems.
"Response" is necessarily defined by the assessed security requirements of an individual
system and may cover the range from simple upgrade of protections to notification of
legal authorities, counter-attacks, and the like. In some special cases, a complete
destruction of the compromised system is favored, as it may happen that not all the
compromised resources are detected.
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit
procedure. A firewall can be defined as a way of filtering network data between a host or a
network and another network, such as the Internet, and can be implemented as software running
on the machine, hooking into the network stack (or, in the case of most UNIX-based operating
systems such as Linux, built into the operating system kernel) to provide real time filtering and
blocking. Another implementation is a so-called physical firewall which consists of a separate
machine filtering network traffic. Firewalls are common amongst machines that are permanently
connected to the Internet.
However, relatively few organisations maintain computer systems with effective detection
systems, and fewer still have organized response mechanisms in place. As a result, as Reuters
points out: "Companies for the first time report they are losing more through electronic theft of
data than physical stealing of assets". The primary obstacle to effective eradication of cybercrime
could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it
is basic evidence gathering by using packet capture appliances that puts criminals behind bars.
Reducing vulnerabilities
While formal verification of the correctness of computer systems is possible, it is not yet
common. Operating systems that have been formally verified include seL4 and SYSGO's PikeOS,
but these make up a very small percentage of the market.
Cryptography, properly implemented, is now virtually impossible to break directly. Breaking it
requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the
transmission), or some other extra cryptanalytic information.
Social engineering and direct computer access (physical) attacks can only be prevented by non-
computer means, which can be difficult to enforce, relative to the sensitivity of the information.
Even in a highly disciplined environment, such as in military organizations, social engineering
attacks can still be difficult to foresee and prevent.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches
and updates, using a security scanner or/and hiring competent people responsible for security.
The effects of data loss/damage can be reduced by careful backing up and insurance.
Security by design
Security by design, or alternately secure by design, means that the software has been designed
from the ground up to be secure. In this case, security is considered as a main feature.
The principle of least privilege, where each part of the system has only the privileges that
are needed for its function. That way even if an attacker gains access to that part, they
have only limited access to the whole system.
Automated theorem proving to prove the correctness of crucial software subsystems.
Code reviews and unit testing, approaches to make modules more secure where formal
correctness proofs are not possible.
Defense in depth, where the design is such that more than one subsystem needs to be
violated to compromise the integrity of the system and the information it holds.
Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-
safe for the equivalent in safety engineering). Ideally, a secure system should require a
deliberate, conscious, knowledgeable and free decision on the part of legitimate
authorities in order to make it insecure.
Audit trails tracking system activity, so that when a security breach occurs, the
mechanism and extent of the breach can be determined. Storing audit trails remotely,
where they can only be appended to, can keep intruders from covering their tracks.
Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept
as short as possible when bugs are discovered.
Security architecture
The Open Security Architecture organization defines IT security architecture as "the design
artifacts that describe how the security controls (security countermeasures) are positioned, and
how they relate to the overall information technology architecture. These controls serve the
purpose to maintain the system's quality attributes: confidentiality, integrity, availability,
accountability and assurance services".
Key attributes of a security architecture include:
The relationship of different components and how they depend on each other.
The determination of controls based on risk assessment, good practice, finances, and
legal matters.
The standardization of controls.
USB dongles are typically used in software licensing schemes to unlock software
capabilities, but they can also be seen as a way to prevent unauthorized access to a
computer or other device's software. The dongle, or key, essentially creates a secure
encrypted tunnel between the software application and the key. The principle is that an
encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides
a stronger measure of security, since it is harder to hack and replicate the dongle than to
simply copy the native software to another machine and use it. Another security
application for dongles is to use them for accessing web-based content such as cloud
software or Virtual Private Networks (VPNs). In addition, a USB dongle can be
configured to lock or unlock a computer.
Trusted platform modules (TPMs) secure devices by integrating cryptographic
capabilities onto access devices, through the use of microprocessors, or so-called
computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to
detect and authenticate hardware devices, preventing unauthorized network and data
access.
Computer case intrusion detection refers to a push-button switch which is triggered when
a computer case is opened. The firmware or BIOS is programmed to show an alert to the
operator when the computer is booted up the next time.
Drive locks are essentially software tools to encrypt hard drives, making them
inaccessible to thieves. Tools exist specifically for encrypting external drives as well.
Disabling USB ports is a security option for preventing unauthorized and malicious
access to an otherwise secure computer. Infected USB dongles connected to a network
from a computer inside the firewall are considered by Network World as the most
common hardware threat facing computer networks.
One use of the term "computer security" refers to technology that is used to implement secure
operating systems. Much of this technology is based on science developed in the 1980s and used
to produce what may be some of the most impenetrable operating systems ever. Though still
valid, the technology is in limited use today, primarily because it imposes some changes to
system management and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel technology that can guarantee that
certain security policies are absolutely enforced in an operating environment. An example of
such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling
of special microprocessor hardware features, often involving the memory management unit, to a
special correctly implemented operating system kernel. This forms the foundation for a secure
operating system which, if certain critical parts are designed and implemented correctly, can
ensure the absolute impossibility of penetration by hostile elements. This capability is enabled
because the configuration not only imposes a security policy, but in theory completely protects
itself from corruption. Ordinary operating systems, on the other hand, lack the features that
assure this maximal level of security. The design methodology to produce such secure systems is
precise, deterministic and logical.
Systems designed with such methodology represent the state of the art of computer security
although products using such security are not widely known. In sharp contrast to most kinds of
software, they meet specifications with verifiable certainty comparable to specifications for size,
weight and power. Secure operating systems designed this way are used primarily to protect
national security information, military secrets, and the data of international financial institutions.
These are very powerful security tools and very few secure operating systems have been certified
at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to
"unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS
LAN). The assurance of security depends not only on the soundness of the design strategy, but
also on the assurance of correctness of the implementation, and therefore there are degrees of
security strength defined for COMPUSEC. The Common Criteria quantifies security strength of
products in terms of two components, security functionality and assurance level (such as EAL
levels), and these are specified in a Protection Profile for requirements and a Security Target for
product descriptions. None of these ultra-high-assurance secure general-purpose operating
systems has been produced in decades or certified under the Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security
functions that are implemented robustly enough to protect DoD and DoE classified information.
Medium assurance suggests it can protect less valuable information, such as income tax
information.
Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a
domain for its own execution, and capable of protecting application code from malicious
subversion, and capable of protecting the system from subverted code, then high degrees of
security are understandably not possible. While such secure operating systems are possible and
have been implemented, most commercial systems fall in a 'low security' category because they
rely on features not supported by secure operating systems (like portability, and others). In low
security operating environments, applications must be relied on to participate in their own
protection. There are 'best effort' secure coding practices that can be followed to make an
application more resistant to malicious subversion.
Some common languages, such as C and C++, are vulnerable to many such defects (see Seacord,
"Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of
these defects, but are still prone to code/command injection and other software defects which
facilitate subversion.
Another bad coding practice occurs when an object is deleted during normal operation yet the
program neglects to update any of the associated memory pointers, potentially causing system
instability when that location is referenced again. This is called dangling pointer, and the first
known exploit for this particular problem was presented in July 2007. Before this publication the
problem was known but considered to be academic and not practically exploitable.
Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically
achievable, insofar as code (ideally, read-only) and data (generally read/write) tend to have
some form of defect.
Within computer systems, two of many security models capable of enforcing privilege separation
are access control lists (ACLs) and capability-based security. Using ACLs to confine programs
has been proven to be insecure in many situations, such as if the host computer can be tricked
into indirectly allowing restricted file access, an issue known as the confused deputy problem. It
has also been shown that the promise of ACLs of giving access to an object to only one person
can never be guaranteed in practice. Both of these problems are resolved by capabilities. This
does not mean practical flaws exist in all ACL-based systems, but only that the designers of
certain utilities must take responsibility to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems, while commercial OSs
still use ACLs. Capabilities can, however, also be implemented at the language level, leading to
a style of programming that is essentially a refinement of standard object-oriented design. An
open source project in the area is the E language.
The most secure computers are those not connected to the Internet and shielded from any
interference. In the real world, the most secure systems are operating systems where security is
not an add-on.
Response to breaches
Responding forcefully to attempted security breaches (in the manner that one would for
attempted physical security breaches) is often very difficult for a variety of reasons:
Identifying attackers is difficult, as they are often in a different jurisdiction to the systems
they attempt to breach, and operate through proxies, temporary anonymous dial-up
accounts, wireless connections, and other anonymizing procedures which make
backtracking difficult and are often located in yet another jurisdiction. If they
successfully breach security, they are often able to delete logs to cover their tracks.
The sheer number of attempted attacks is so large that organisations cannot spend time
pursuing each attacker (a typical home user with a permanent (e.g., cable modem)
connection will be attacked at least several times per day, so more attractive targets
could be presumed to see many more). Note however, that most of the sheer bulk of these
attacks is made by automated vulnerability scanners and computer worms.
Law enforcement officers are often unfamiliar with information technology, and so lack
the skills and interest in pursuing attackers. There are also budgetary constraints. It has
been argued that the high cost of technology, such as DNA testing, and improved
forensics mean less money for other kinds of law enforcement, so the overall rate of
criminals not getting dealt with goes up as the cost of the technology increases. In
addition, the identification of attackers across a network may require logs from various
points in the network and in many countries, the release of these records to law
enforcement (with the exception of being voluntarily surrendered by a network
administrator or a system administrator) requires a search warrant and, depending on the
circumstances, the legal proceedings required can be drawn out to the point where the
records are either regularly destroyed, or the information is no longer relevant.
The following steps can help protect a computer against Trojans, spyware, and other malware:
1. Protect your computer with strong security software and keep it updated. McAfee
Total Protection provides proven PC protection from Trojans, hackers, and spyware. Its
integrated anti-virus, anti-spyware, firewall, anti-spam, anti-phishing, and backup
technologies work together to combat today’s advanced multi-faceted attacks. It scans
disks, email attachments, files downloaded from the web, and documents generated by
word processing and spreadsheet programs.
2. Use a security conscious Internet service provider (ISP) that implements strong anti-
spam and anti-phishing procedures. The SpamHaus organization lists the current top-10
worst ISPs in this category—consider this when making your choice.
3. Enable automatic Windows updates, or download Microsoft updates regularly, to keep
your operating system patched against known vulnerabilities. Install patches from other
software manufacturers as soon as they are distributed. A fully patched computer behind
a firewall is the best defense against Trojan and spyware installation.
4. Use great caution when opening attachments. Configure your anti-virus software to
automatically scan all email and instant message attachments. Make sure your email
program doesn’t automatically open attachments or automatically render graphics, and
ensure that the preview pane is turned off. Never open unsolicited emails, or attachments
that you’re not expecting—even from people you know.
5. Be careful when using P2P file sharing. Trojans hide within file-sharing programs
waiting to be downloaded. Use the same precautions when downloading shared files that
you do for email and instant messaging. Avoid downloading files with the extensions
.exe, .scr, .lnk, .bat, .vbs, .dll, .bin, and .cmd.
6. Use security precautions for your PDA, cell phone, and Wi-Fi devices. Viruses and
Trojans arrive as an email/IM attachment, are downloaded from the Internet, or are
uploaded along with other data from a desktop. Cell phone viruses and mobile phishing
attacks are in the beginning stages, but will become more common as more people access
mobile multimedia services and Internet content directly from their phones. Mobile anti-virus
software for selected devices is available for free with some McAfee PC products. Always use
a PIN code on your cell phone, and never install or download mobile software from an
untrusted source.
7. Configure your instant messaging application correctly. Make sure it does not open
automatically when you fire up your computer.
8. Beware of spam-based phishing schemes. Don’t click on links in emails or IM.
9. Back up your files regularly and store the backups somewhere besides your PC. If you
fall victim to a virus attack, you can recover photos, music, movies, and personal
information like tax returns and bank statements.
10. Stay aware of current virus news by checking sites like McAfee Labs Threat Center.
Today we use our computers to do so many things. We go online to search for information, shop,
bank, do homework, play games, and stay in touch with family and friends. As a result, our
computers contain a wealth of personal information about us. This may include banking and
other financial records, and medical information - information that we want to protect. If your
computer is not protected, identity thieves and other fraudsters may be able to get access and
steal your personal information. Spammers could use your computer as a "zombie drone" to send
spam that looks like it came from you. Malicious viruses or spyware could be deposited on your
computer, slowing it down or destroying files.
By using safety measures and good practices to protect your home computer, you can protect
your privacy and your family. The following tips are offered to help you lower your risk while
you're online.
Install a firewall
A firewall is a software program or piece of hardware that blocks hackers from entering and
using your computer. Hackers search the Internet the way some telemarketers automatically dial
random phone numbers. They send out pings (calls) to thousands of computers and wait for
responses. Firewalls prevent your computer from responding to these random calls. A firewall
blocks communications to and from sources you don't permit. This is especially important if you
have a high-speed Internet connection, like DSL or cable.
Some operating systems have built-in firewalls that may be shipped in the "off" mode. Be sure to
turn your firewall on. To be effective, your firewall must be set up properly and updated
regularly. Check your online "Help" feature for specific instructions.
Install anti-virus software
Anti-virus software protects your computer from viruses that can destroy your data, slow down
or crash your computer, or allow spammers to send email through your account. Anti-virus
protection scans your computer and your incoming email for viruses, and then deletes them. You
must keep your anti-virus software updated to cope with the latest "bugs" circulating the Internet.
Most anti-virus software includes a feature to download updates automatically when you are
online. In addition, make sure that the software is continually running and checking your system
for viruses, especially if you are downloading files from the Web or checking your email. Set
your anti-virus software to check for viruses when you first turn on your computer. You should
also give your system a thorough scan at least twice a month.
Install anti-spyware software
Spyware is software installed without your knowledge or consent that can monitor your online
activities and collect personal information while you surf the Web. Some kinds of spyware,
known as keyloggers, record everything you key in, including passwords and financial information.
Spyware protection is included in some anti-virus software programs. Check your anti-virus
software documentation for instructions on how to activate the spyware protection features. You
can buy separate anti-spyware software programs. Keep your anti-spyware software updated and
run it regularly.
To avoid spyware in the first place, download software only from sites you know and trust.
Piggybacking spyware can be an unseen cost of many "free" programs. Don't click on links in
pop-up windows or in spam email.
Hackers are constantly trying to find flaws or holes in operating systems and browsers. To
protect your computer and the information on it, put the security settings in your system and
browser at medium or higher. Check the "Tools" or "Options" menus for how to do this. Update
your system and browser regularly, taking advantage of automatic updating when it's available.
Windows Update is a service offered by Microsoft. It will download and install software updates
to the Microsoft Windows Operating System, Internet Explorer, Outlook Express, and will also
deliver security updates to you. Patching can also be run automatically for other systems, such as
Macintosh Operating System.
Protect your computer from intruders by choosing passwords that are hard to guess. Use strong
passwords with at least eight characters, a combination of letters, numbers and special characters.
Don't use a word that can easily be found in a dictionary. Some hackers use programs that can try
every word in the dictionary. Try using a phrase to help you remember your password, using the
first letter of each word in the phrase. For example, HmWc@w2 - How much wood could a
woodchuck chuck. Protect your password the same way you would the key to your home. After
all, it is a "key" to your personal information.
If you use a wireless network in your home, be sure to take precautions to secure it against
hackers. Encrypting wireless communications is the first step. Choose a wireless router with an
encryption feature and turn it on. WPA encryption is considered stronger than WEP. Your
computer, router, and other equipment must use the same encryption. If your router enables
identifier broadcasting, disable it. Note the SSID name so you can connect your computers to the
network manually. Hackers know the pre-set passwords of this kind of equipment. Be sure to
change the default identifier on your router and the pre-set administrative password. Turn off
your wireless network when you're not using it.
Many consumers enjoy sharing digital files, such as music, movies, photos, and software. File-
sharing software that connects your computer to a network of computers is often available for
free. File-sharing can pose several risks. When connected to a file-sharing network, you may
allow others to copy files you didn't intend to share. You might download a virus or bit of
spyware that makes your computer vulnerable to hackers. You might also break the law by
downloading material that is copyright protected.
When shopping online, check out the Web site before entering your credit card number or other
personal information. Read the privacy policy and look for opportunities to opt out of
information sharing. (If there is no privacy policy posted, beware! Shop elsewhere.) Learn how
to tell when a Web site is secure. Look for "https" in the address bar or an unbroken padlock icon
at the bottom of the browser window. These are signs that your information will be encrypted or
scrambled, protecting it from hackers as it moves across the Internet.
Don't let your children risk your family's privacy. Make sure they know how to use the Internet
safely. For younger children, install parental control software that limits the Web sites kids can
visit. But remember - no software can substitute for parental supervision.
TRANSMISSION SECURITY
Transmission security (TRANSEC) is the process of securing data transmissions from being
infiltrated, exploited or intercepted by an individual, application or device. TRANSEC secures
data as it travels over a communication medium. It is generally implemented in military and
government organization networks and devices, such as radar and radio communication
equipment.
In the two preceding chapters we examined ways in which to keep your data safe, mainly from
within an organization. I discussed the best ways to keep hackers out of your intranet and how to
protect actual data from viruses and human error as well as the physical security of your software
and hardware. Now that you've secured your tools and applications physically and have taken all
precautions internally to keep data safe, it's time to consider how safe your data is during
transmission. This transmission from one computer to another could be within your LAN, within
your intranet, or over the Internet.
This chapter's topic, secure transmission, explores the security risks involved with data
transmission, such as eavesdropping and decrypting. It discusses why and how to establish
secure channels as well as ways to prevent or foil attacks on these secure channels. It's aimed
primarily at anyone who is trying to design a fully secure system of computers and data or for
anyone interested in encrypting data for transmission. Any individual involved with transmitting
sensitive data-whether in a business that exchanges confidential information, either inside its
corporate headquarters or with customers, or in an organization that exchanges any sensitive data
between just two computers-should not skip this chapter. This includes banks; corporations with
offices in different geographical locations that share proprietary information, regardless of
whether it's public or private; or individuals doing business on the Internet, including selling
products and conducting business transactions.
All transmissions can be intercepted. And the cautious user looks at all transmissions as if they
will be intercepted. You can minimize the risks of transmission interception, but you can never,
under any circumstances, completely rule it out. After all, it is people who design and put wires
in their place, and people can get to them. Accessing wires is somewhat comparable, although
much more difficult, to accessing a transmission sent over airwaves, as on a CB radio. For
example, as a ham, you may have a message intended only for other hams. Although hams are
the main communicators on these frequencies, anyone with the right radio equipment can tune in
and listen, so it's likely your message will be received and heard by other listeners who pick up
the frequency, whether you want them to hear it or not.
Similar risks occur with cellular phones, even though most transmission takes place over wire
and not air. One widely publicized interception was a private call between Prince Charles and
Camilla Parker Bowles that was picked up off the air, recorded, and later made public.
Any computer with access to the physical network wire or in the vicinity of over-air
transmissions, however, could be instructed not to ignore the signals intended for other
computers. This is the essence of electronic eavesdropping.
Information is considered intercepted when someone other than the intended recipient receives
the information. Data can be intercepted in many ways, such as electronic eavesdropping or by
using the recipient's password. It can occur anywhere, including in a chat room or through an e-
mail exchange.
The tools required to read the transmission depend on how the information is intercepted. If an
intruder is stealing transmissions at the most basic level (stealing the data packets straight off the
wire or out of the air), the interloper will need something that translates electronic signals from
voltage changes to the numbers and letters that those changes represent. Computers for which the
transmission is intended do this automatically, because they are expecting the signal and already
know its characteristics, how to decode it, and what to do with it. A much simpler method would
be intercepting a message by just looking over someone's shoulder to read what they have
written. Again, the legitimate user already has a context in which to interpret the on-screen
information. The snooper, however, still has to interpret the message, and this isn't always so
simple.
Sniffing Devices
There are troubleshooting programs and devices designed to analyze LAN traffic. These are
commonly referred to as packet sniffers, because they are created to "sniff" packets of data for
the network engineer. As mentioned in the preceding section, all transmissions are broadcast
over all the wires. When one computer wants to communicate with another, it sends out an
electrical signal through the network, which could be copper wire, fiber optic cable, or air.
The nice thing about LANs is that the systems administrator can use a sniffer to tap into the wire
to examine it. A systems administrator should occasionally examine these lines to check on the
raw material going over the LAN. This is where packet sniffers are helpful. Packet sniffers will
instruct your computer to look at every signal over the wire or only signals that meet certain
criteria. This allows the systems administrator to analyze and actually read electrical signals.
However, anyone with malicious intent also can use packet sniffers for analyzing and reading
network traffic.
Now, you might think there are users out there maliciously using packet sniffers to read data
worldwide, continuously. It's true that there may be many users with malicious intent snooping
around networks, but it is not as simple as just purchasing a packet sniffer. There are devices-
generally referred to as internetworking devices and more specifically referred to as routers and
bridges-that actually filter the electrical signals sent out as data packets. These devices filter
signals logically, which means that any data passing through a bridge or router must be intended
to go through that bridge or router; the destination of the data must be on the other side of the
internetworking device to get through the filter. If the destination of the data is not on the other
side of the filter, the internetworking device won't pass the signal; and if it doesn't pass the
signal, someone on the other side is unable to sniff the information, as shown in Figure 16.2.
Anytime you have a network that requires any sort of logical divisions, you need an
internetworking device. If you are connected to the Internet, you have an internetworking device.
If your local network spans a large physical distance, you have some sort of internetworking
device.
Figure 16.2: This sniffer cannot smell packets on the other side of the router.
Spoofing attacks, in which a computer sends packets under a faked address, are generally
difficult to carry out because of how information is transmitted from computer to computer.
When information is transmitted, it must follow a route based on your address. If you are using a
fake address, the information returning to you will look for that fake address instead, so the
replies never find their way back, as Figure 16.3 shows.
Figure 16.3: Spoofed packets reach their destination but not their origin.
A drawback of a spoof attack from inside the company is that if a computer on the Internet at any
time detects any other computer on the Internet with the same Internet address, both computers
will complain. In this case, if someone is spoofing you by pretending to be you and your
computer is on or being monitored, the trick would be detected easily because your computer
will tell you that there is another computer on the network with the same address.
Still another drawback of a spoofing attack is that every network interface on any computer has a
unique identifying number. Anyone trying to spoof your IP address on a local network could
disable the computer he or she is spoofing, avoiding the earlier mentioned conflict. This would
fail, however, if any other computer on the network were using the Address Resolution Protocol
(ARP). ARP maps Internet addresses to the unique hardware number assigned to a
network card. Therefore, turning off your computer would eliminate the IP conflict, but the
interface card number mismatch would require either stealing the network card, making a special
one, or adjusting the ARP on the third computer.
Attacks in which individuals pretend to be another user can occur on several levels. The attacker
can pretend that his or her network interface is one that it isn't by manufacturing a network card
with a fake address. The user then might pretend to have the Internet address of another
computer and thus steal that computer's transmission or create transmissions under the guise of
the impersonated computer. A user could also pretend to be a different person by stealing that
person's username and password in one of about a billion ways. In addition, a user could steal
information simply by gaining access to a computer whose data was not protected against direct
physical intrusion.
At the most basic level transmission occurs over wires or in the air; every electrical signal travels
one way or the other. Transmission is more secure over wire because an eavesdropper or hacker
must be physically near the wire, whereas an interception of an air transmission can occur
anywhere in reach of the signal.
An attempt to intercept a transmission traveling via fiber by tapping into the cable would be
more easily detected than a tap into copper wire, because the tapper could easily damage the
fiber and interrupt the light signal, betraying the intrusion.
Encryption
There are two aspects to consider when planning for transmission security. The first aspect,
discussed in the preceding paragraph, is how transmissions are physically sent (that is, over wire
or air). The impossibility of preventing physical interception should now be clear. The second
aspect of secure transmission relates to the content that is being transmitted. Securing the content
of the message is done through encryption.
Encryption involves transforming messages so that they are readable only by the intended
recipients. Encryption is the process of translating plain text into ciphertext. Human-readable
information intended for transmission is plain text, whereas ciphertext is the text that is actually
transmitted. At the other end, decryption is the process of translating ciphertext back into plain
text. (Figure 16.4 demonstrates the process.) An encryption algorithm is the series of steps that a
computer takes to turn plain text into ciphertext. A key is a piece of information, usually a
number, that allows the sender to encode a message for one receiver only; another key allows
the receiver to decode messages sent to him or her.
Figure 16.4: Plain text is encrypted to produce ciphertext. Ciphertext is decrypted to produce
plain text. Keys are used for both encryption and decryption.
Now that you have the basic encryption jargon down, let's look at why and how encryption is
essential for secure transmissions.
As you've learned by now, your transmissions can have only so much physical security. It is
reasonable to assume that at some point someone may intercept your transmissions. Whether you
expect an interception or whether you just generally suspect that interceptions may occur, you
should transmit your information in a format that is useless to any interceptors. At the simplest
level, this means when transmitting a message to someone, you use a coded message or slang
(nicknames) that no one else understands. When Ulysses S. Grant captured Vicksburg during the
Civil War, he sent a coded but predetermined message to Abraham Lincoln that read "The father
of waters flows unvexed to the sea," meaning that the Union now owned the whole Mississippi
river. Perhaps a good plan at the time, but still, Grant and Lincoln (or their advisers/confidantes)
had to communicate a predetermined message and the message's meaning. A more recent
example of a coded message might involve the use of nicknames. For instance, you and your
sister give nicknames to family members whom you discuss unfavorably. Should a malicious
family member decide to intercept a transmission, you would hope he wouldn't understand which
family members you and your sister refer to in your messages. The obvious drawback of this
coded message, like the Grant-Lincoln message, is that you and the recipient must establish a
system of code before you begin transmitting messages.
Another rather simple form of encryption is commonly known as private key or symmetric
encryption. It's called private key encryption because each party must know before the message
is sent how to interpret the message. For example, spies in the movies always have a sequence of
statements that they exchange to be sure of each other's identity, like "the sun is shining" must be
followed by "the ice is still slippery." This is an example of encrypting so that only the person
for whom a message is intended will understand it.
Other systems have been developed so that information can be encrypted in a general way.
Again, using history as an example, one encryption method is commonly referred to as Caesar's
code. According to history, Caesar would send messages that were encoded by replacing each
letter in the message with the letter three places higher in the alphabet (A was replaced by D, B
by E, and so on). The recipient just had to change the letters back to find out what the message
said. An enemy who intercepted the message and did not know the method of encoding it would
be unable to decipher it. Clearly though, this encoding method is not terribly difficult to break.
This is called private key encryption because the method of encryption must be kept quiet.
Anyone who knows the method could decode the message. It also is called symmetric because
the same key is used to both encrypt and decrypt the message. Other private key methods have
been devised to be more difficult to break.
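To make the idea concrete, here is a minimal Python sketch of Caesar's code; the three-place shift matches the historical description above, and the function name is our own:

import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    # Replace each letter with the one 'shift' places up the alphabet,
    # wrapping from Z back to A; other characters pass through unchanged.
    out = []
    for ch in text.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("ATTACK AT DAWN", 3)   # DWWDFN DW GDZQ
plain = caesar(secret, -3)             # the same key, applied in reverse, decrypts

Notice that one key (the shift of 3) serves both directions; that is exactly what makes the scheme symmetric, and why the key must stay private.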
The Data Encryption Standard (DES) is a private key system that the U.S. government adopted
as its standard method of encryption (its 56-bit key is now considered too short, and it has since
been superseded by AES). An even more secure private key method is called a one-time pad. A
one-time pad involves sheets of paper with random numbers on them: these numbers are used to
transform the message, and each number or sequence of numbers is used only once. The
recipient of the message has an identical pad to use to decrypt the message. One-time pads are
foolproof for anyone without a copy of the pad; mathematicians have in fact proven that a
correctly used one-time pad is impossible to break.
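The pad idea is easy to sketch in software. This hypothetical Python fragment uses XOR as the transformation and the standard secrets module as the source of randomness:

import secrets

def apply_pad(data, pad):
    # XOR each message byte with the matching pad byte; applying the
    # same pad a second time undoes the transformation exactly.
    return bytes(m ^ p for m, p in zip(data, pad))

message = b"MEET AT NOON"
pad = secrets.token_bytes(len(message))  # random, as long as the message, used once

ciphertext = apply_pad(message, pad)
assert apply_pad(ciphertext, pad) == message

The unbreakability holds only if the pad is truly random, at least as long as the message, kept secret, and never reused.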
The drawbacks to private key systems, however, are twofold. First, anyone who learns the
method of encryption and obtains the key (the number, or sequence of numbers, used as the
secret input to the cipher) can read the messages. Second, keys must be exchanged before
transmission with any recipient or potential recipient of your message. So, to exchange keys you
need a secure method of transmission, but essentially what you've done is create a need for yet
another secure method of transmission.
To overcome the drawbacks of private key systems, a number of mathematicians have invented
public key systems. Unknown until about 30 years ago, public key systems were developed from
some very subtle insights about the mathematics of large numbers and how they relate to the
power of computers. Public key means that anyone can publish his or her method of encryption
and public key; only the matching secret key, which is never shared, can decrypt the messages.
Figure 16.5: Private key encryption uses one key to go both ways. Public key encryption uses
one key to encrypt (the public key) and one key to decrypt (the secret key).
Public key methods vary, but one of the most common, and also free, is PGP (pretty good
privacy). This is a public key encryption method that allows you to exchange messages with
anyone that will send you his or her key. When you receive a key from someone, your PGP
software can use that key to encode a message that only that person can interpret. The PGP
method also allows you to encode a signature that only can be decoded using your public key,
ensuring that it was you who sent the message. There are many free software packages that allow
users to encode e-mail and other files they send. These software packages also will generate a
public key for you. The software, along with the source codes, is available for almost all
common operating systems.
Public key encryption works because users can send any message to any person without first
meeting them or exchanging secret keys or secret encryption schemes. This obviously makes an
extremely powerful tool in commerce for transmission of confidential customer information
between buyers and sellers. In addition, public key encryption is extremely secure because
breaking it is purely a matter of computing time: given enough time, someone could decipher
your message, but with commonly used methods even an entire nation of hackers with the most
powerful computers would take many years to decipher a single encrypted message.
Now that I've told you about what many in the world of computer security consider the most
secure method of transmission, I must tell you that there are times when public key encryption
doesn't work. When the method used for encryption isn't secure, the message isn't secure.
Because the methods of encryption are usually public, anyone who is interested in finding a hole
has all the information necessary to find any holes. Holes often are discovered in methods
previously thought to be secure. The fact that the algorithm is public makes the method more
secure over the long term but less secure over the short term. In the long term all the flaws will
be discovered and fixed, but over the short term flaws will be discovered and perhaps exploited.
A second insecurity of public key methods in general is that public key encryption won't work
when a recipient has no method of authenticating the sender. If someone sends you his or her
public key, you can use that to encode a message for that person only-but it doesn't mean they
are who they say they are.
Public key also doesn't work if your private keys are compromised. Keeping your private key
secure is essential to the security of the system. Remember that the security of a public key
system depends on no one being able to get your private key by knowing your public key. Your
private key is what you use to decode messages sent to you and to prove your identity to others
to whom you send messages. If someone is able to gain possession of your private key, that
person could read your messages and forge messages from you.
Encryption has often involved making a choice between public and private key security methods.
Public key encryption involves a heavy computing load, meaning that transmission with a public
key takes more time and resources. Private key systems are less cumbersome but also less secure
and less versatile. To overcome the drawbacks of both security methods, users have combined
public and private key systems, such as an exchange of DES keys using a public system and then
using those keys for the private DES system. Remember that private key systems can be stronger
because it is possible to make an unbreakable private key system. A public key system is not
theoretically unbreakable; it's just too difficult to do it in real life. The weak point in a private
key system is the exchange of keys, so the very secure public key method can be used to
exchange keys, and then the completely secure private key system can be used to do the actual
transmission. A second advantage of the combination is cost: public key systems require a big
commitment of computing power for every message, whereas private key encryption is far less
computing intensive and therefore cheaper and more efficient for the bulk of the transmission.
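The combination can be sketched with the third-party Python cryptography package (installed with pip install cryptography); here RSA stands in for the public key exchange and Fernet, an AES-based symmetric cipher, for the fast bulk transmission. The message and names are illustrative only:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The receiver publishes a public key; the private half never leaves them.
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: wrap a fresh symmetric session key under the public key...
session_key = Fernet.generate_key()
wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)
# ...then do the heavy lifting with the cheap symmetric cipher.
bulk = Fernet(session_key).encrypt(b"the actual, possibly very long, transmission")

# Receiver: unwrap the session key, then decrypt the bulk data quickly.
plaintext = Fernet(receiver_key.decrypt(wrapped_key, oaep)).decrypt(bulk)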
This combination likely will continue and become more common in the future, but it's unlikely
that most systems will become public key. As computing resources advance to make public key
encryption easier, the resources for cracking those keys also advance. This means that keys will
become longer while the calculations will become bigger.
Human history is full of spy stories about stolen information; these stories are never about how
someone used a computer to get the information. Of the many recent incidents of breaches of
national security-Aldrich Ames, who gave details of espionage operations; the Walkers, who
sold Navy code books; the Rosenbergs, who gave away atomic secrets-almost none involved
strictly computer-based breaches. The reason this rarely occurs is that all the data is handled by
humans-they're the ones who put data in computers-and humans have far less strict security than
computers do.
Client/Server Issues
A group known as the Computer Emergency Response Team (CERT) at Carnegie Mellon
University makes it their business to find security holes in the Internet and then to make the
public aware of these holes. CERT especially concerns itself with computer-Internet connections
using TCP/IP protocol and maintains a list of Internet-related security holes. To find the
information about CERT, look for their home page at http://www.cert.org/.
Reading information about holes and keeping abreast of security issues will give you information
about old holes, including what holes have been discovered, allowing you to plug your system.
Usually hackers are aware of old holes and search systems for those holes, creating havoc on
private or public networks. Exploiting unplugged known holes is overwhelmingly more common
than finding a new, undiscovered hole. After an intruder has used a hole to eavesdrop on your
transmissions, that person can use any information you transmit. A hacker could sell your
marketing plans, reschedule your meetings, steal product orders, or provide your customers with
inappropriate or wrong information. Most users don't keep themselves up-to-date on security
holes, exposing themselves to holes anyone else, including hackers, might know about.
In a way, anyone setting up a server or client is creating his or her own security hole. By its
nature, a Web server or a file server is a machine that invites other computers to visit and use its
resources; this basis itself is insecure. The challenge now is to prevent people from using
anything but the resources you have set up for them to access. On the client side, you are always
asking for people to be interactive. A good example is Java. With Java, the user's computer asks
the server for an executable file (an applet) to run locally. This means your computer is
specifically taking direction from
another computer. Suppose that the server directs your computer to reconfigure its own hard
drive; this is an example of a security hole. This could happen inadvertently if you have an
incompetent programmer who has written a Java application that damages the computer, or it
could be malicious intent. Although both Java and JavaScript have extensive safeguards, there
are still lingering doubts about how secure they truly are. Never dismiss the inadvertent and
never overemphasize the malicious; they are both equally dangerous.
Almost all network computing involves one of two types of transmission: file transfer or
interactive transmission. File transfer involves one computer transferring a block of data and
expecting nothing in return other than acknowledgment of reception. Interactive transmission
involves two computers that have meaningful transmissions flowing in both directions. With file
transmission, only the file to be transferred must be encrypted. Anyone who intercepted the
transfer would only know that something had been transferred. Because only that file must be
encrypted and the file must be ready before transfer, encryption can take place at any time before
transfer. Interactive transmission, however, involves spontaneous messages, so encryption must
happen in real time on both ends.
File Transmission
In practice, there are several types of file transmissions most users perform, including the
transmission of files through FTP (file transfer protocol), submitting forms by a Web server, and
sending e-mail.
Using encryption in these cases is simple. Many shareware PGP programs exist to allow a user to
encrypt a file. Other stronger methods exist for purchase, including products made by RSA
security. The advantage of using these programs is that the encryption can be tested before the
file is sent, ensuring its usefulness.
Interactive Transmission
To use any computer system over a network interactively, users must overcome two security
exposures. First, users must authenticate themselves, and this exposes the authentication process
to interception. Anyone sending out his or her password over the network is often sending that
password out in clear text, which means anyone eavesdropping can pick up the password and
username and use them. Stolen password and username combinations are the most common
problem of interactive transmission. The other problem occurs while the user is using the system.
The information being typed in is most likely going out in plain text, which can be intercepted.
There are a few systems designed to limit the security risk in using a remote system interactively.
One method is called Kerberos, shown in Figure 16.6. When a user logs into a workstation, that
workstation authenticates the user so that the user's password is never sent over the network in
any form. That workstation then contacts the Kerberos server, which issues the user a ticket; that
ticket contains encrypted information used to authenticate the user of other network computers.
It's secure because the username and password are never transmitted over the network. The local
machine does the entire authentication and then uses a secure method of transmission, the
encrypted ticket, to prove the user's identity to other computers on the network.
Figure 16.6: Two computers using Kerberos for authentication require a third computer as a
Kerberos server.
With a Kerberos server, the password never travels over the network. The user is authenticated locally, and all the
exchanges with the network are encrypted and completed. However, a drawback is that every
machine you want to send information to or any applications or services you wish to use must be
"Kerberized" so that the machine will accept your credentials. A second drawback is that if the
Kerberos server is ever compromised-that is, if an unauthorized person ever gains access to the
Kerberos server-then the integrity of the entire system is compromised.
If you are interacting a lot across the network, that information is insecure. With Kerberos, the
transmission between the machines is not encrypted, just the authentication process is. So
someone couldn't use passwords to gain access; but if all they wanted was to look at the
information you are sending, they could do so. For example, if you log into a financial system
and type in account numbers and financial data, an eavesdropper could get this information
without actually getting on the system.
Secure RPC (Remote Procedure Call) is another method of reducing network security exposure.
The difference between RPC and Kerberos is that after you authenticate yourself to the local
machine, which has your private key stored on it, all your transmission across the network is
encrypted. You can then authenticate yourself to other machines and transmit all your
transactions over a secure channel. Like Kerberos, the main drawback is that any machines you
want to interact with must be equipped with the proper decrypting software, which is a hassle.
Also, because RPC is a public key encryption method, you take a performance hit because all the
encryption and decryption must be done before sending out anything across the network, which
takes a lot of time and computational power.
The final encrypted transmission method is SSL (secure sockets layer). SSL is a method of
encrypting all the communications between computers. It is used to encrypt and decrypt
communications between a Web browser and a Web server. Whenever you use URLs beginning
with https://, you're using SSL. Support for SSL is built into all modern Web browsers (it originated with Netscape). SSL
uses technology based on the commercially available public key encryption products of RSA,
Inc. SSL itself is an open standard, and the algorithms are free to all. SSL libraries can be used to
encrypt all traffic among computers, because the encryption occurs at a level that makes it
transparent to both the user and any programs he or she is running.
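From a program's point of view the encryption really is transparent. This minimal sketch, using only Python's standard library and an illustrative host name, opens an SSL/TLS-protected connection exactly as described:

import socket, ssl

context = ssl.create_default_context()  # also verifies the server's certificate
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        # Everything sent from here on is encrypted on the wire.
        print(tls.version())  # e.g. TLSv1.3
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")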
Summary
When it comes to security, secure data transmission fills out the final third of the security
equation, right behind (or before, depending on how you look at it) security of data storage and
security of the physical technology and its location. Assuming you've satisfied the first
two-thirds of the security equation, before setting out to secure your data during transmission,
first determine the value of that data and then spend accordingly to secure it. Valuable data with
little or no security can prove as costly as low-value data weighed down with unnecessary
security.
After determining the value of your data, consider the most appropriate options for
transmitting data and then explore the various encryption methods necessary for protecting your
specific data transmissions. And, finally, I can't reiterate enough that a technical solution is never
the whole solution. Data originates from individuals, not from computers, so implementing
strong security policies and procedures is as important as choosing all the physical and technical
barriers to your data.
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic
keys for both encryption of plaintext and decryption of cipher text. The keys may be identical or
there may be a simple transformation to go between the two keys. The keys, in practice,
represent a shared secret between two or more parties that can be used to maintain a private
information link. This requirement that both parties have access to the secret key is one of the
main drawbacks of symmetric key encryption, in comparison to public-key encryption.
Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that it is a multiple of the block size. Blocks of 64 bits have been commonly
used. The Advanced Encryption Standard (AES) algorithm approved by NIST in
December 2001 uses 128-bit blocks.
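The block-and-padding behaviour can be sketched with the third-party cryptography package; AES in CBC mode is used here, and the key size and message are illustrative:

import os, secrets
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.padding import PKCS7

key = secrets.token_bytes(16)  # a 128-bit AES key
iv = os.urandom(16)            # fresh initialization vector for each message

# Pad the plaintext up to a multiple of the 128-bit block size...
padder = PKCS7(128).padder()
padded = padder.update(b"block ciphers work on fixed-size blocks") + padder.finalize()

# ...then encrypt it block by block.
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()
assert len(ciphertext) % 16 == 0  # always a whole number of 16-byte blocks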
Implementations
Examples of popular symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish,
CAST5, RC4, 3DES, Skipjack, Safer+/++ (Bluetooth), and IDEA.
Symmetric ciphers are commonly used to achieve other cryptographic primitives than just
encryption.
Encrypting a message does not guarantee that the message cannot be changed while encrypted.
Hence a message authentication code is often added to a ciphertext to ensure that changes to the
ciphertext will be noticed by the receiver. Message authentication codes can be constructed from
symmetric ciphers (e.g. CBC-MAC).
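For illustration, here is a hash-based MAC (HMAC, from Python's standard library) rather than the cipher-based CBC-MAC named above; the construction differs, but the goal of letting the receiver notice any change is the same:

import hmac, hashlib, secrets

shared_key = secrets.token_bytes(32)  # secret known to sender and receiver
message = b"transfer 100 to account 42"

# The sender computes a tag and transmits it alongside the ciphertext.
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The receiver recomputes the tag; compare_digest avoids timing leaks.
valid = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())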
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving
additional parties.
Another application is to build hash functions from block ciphers. See one-way compression
function for descriptions of several such methods.
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's
construction makes it possible to build invertible functions from other functions that are
themselves not invertible.
Key generation
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly
always used to generate the symmetric cipher session keys. However, lack of randomness in
those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks
in the past. Therefore, it is essential that an implementation uses a source of high entropy for its
initialization.
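In Python, for instance, session keys should be drawn from the operating system's entropy pool via the secrets module; the deterministic random module must never be used for key material:

import secrets

session_key = secrets.token_bytes(32)  # 256 bits from the OS entropy source
# By contrast, the random module is a seeded pseudorandom generator meant
# for simulations; its output is predictable and therefore unfit for keys.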
Asymmetric encryption
A cryptographic system that uses two keys -a public key known to everyone and a private or
secret key known only to the recipient of the message. When John wants to send a secure
message to Jane, he uses Jane's public key to encrypt the message. Jane then uses her private key
to decrypt it. An important element of the public key system is that the public and private keys are
related in such a way that only the public key can be used to encrypt messages and only the
corresponding private key can be used to decrypt them. Moreover, it is virtually impossible to
deduce the private key if you know the public key.
Public-key systems, such as Pretty Good Privacy (PGP), are becoming popular for transmitting
information via the Internet. They are extremely secure and relatively simple to use. The only
difficulty with public-key systems is that you need to know the recipient's public key to encrypt a
message for him or her. What's needed, therefore, is a global registry of public keys, which is
one of the promises of the new LDAP technology.
Because of the computational complexity of asymmetric encryption, it is typically used only for
short messages, most often the transfer of a symmetric encryption key. That symmetric key is
then used to encrypt the rest of the potentially long conversation, since symmetric encryption
and decryption are based on simpler algorithms and are much faster.
Message authentication involves hashing the message to produce a "digest," and encrypting the
digest with the private key to produce a digital signature. Thereafter anyone can verify this
signature by
(1) Computing the hash of the message,
(2) Decrypting the signature with the signer's public key, and
(3) Comparing the computed digest with the decrypted digest. Equality between the digests
confirms the message is unmodified since it was signed, and that the signer, and no one else,
intentionally performed the signature operation — presuming the signer's private key has
remained known only to the signer. The security of such a procedure depends on a hash algorithm of
such quality that it is computationally impossible to alter or find a substitute message that
produces the same digest - but studies have shown that even with the MD5 and SHA-1
algorithms, producing an altered or substitute message is not impossible. The current hashing
standard for encryption is SHA-2. The message itself can also be used in place of the digest.
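The three verification steps can be exercised with the cryptography package. In the sketch below, sign() and verify() perform the hashing and digest comparison internally, using SHA-256, a member of the SHA-2 family; the message is illustrative:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"pay 100 shillings to Jane"
signature = signer.sign(message, pss, hashes.SHA256())  # hash, then sign the digest

try:
    # Anyone holding the public key can check the signature.
    signer.public_key().verify(signature, message, pss, hashes.SHA256())
    print("digests match: message unmodified and signer authenticated")
except InvalidSignature:
    print("signature rejected")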
Public-key cryptography finds application in, amongst others, the IT security discipline
information security. Information security (IS) is concerned with all aspects of protecting
electronic information assets against security threats. Public-key cryptography is used as a
method of assuring the confidentiality, authenticity and non-repudiability of electronic
communications and data storage.
Public-key cryptography is often used to secure electronic communication over an open
networked environment such as the internet, without relying on a covert channel even for key
exchange. Open networked environments are susceptible to a variety of communication security
problems, such as man-in-the-middle attacks and other security threats. Security properties
required for communication typically include that the communication being sent must not be
readable during transit (preserving confidentiality), the communication must not be modified
during transit (preserving the integrity of the communication), the communication must
originate from an identified party (sender authenticity), and non-repudiation or non-denial of the
sending of the communication. Combining public-key cryptography with an Enveloped Public
Key Encryption (EPKE) method allows for the secure sending of a communication over an open
networked environment.
In contrast, symmetric-key algorithms – variations of which have been used for thousands of
years – use a single secret key, which must be shared and kept private by both the sender and the
receiver, and which is used for both encryption and decryption. To use a symmetric encryption scheme,
the sender and receiver must securely share a key in advance.
Because symmetric key algorithms are nearly always much less computationally intensive than
asymmetric ones, it is common to exchange a key using a key-exchange algorithm, then transmit
data using that key and a symmetric key algorithm. PGP and the SSL/TLS family of schemes use
this procedure, and are thus called hybrid cryptosystems.
Description
Two of the best-known uses of public-key cryptography are:
Public-key encryption, in which a message is encrypted with a recipient's public key. The
message cannot be decrypted by anyone who does not possess the matching private key,
who is thus presumed to be the owner of that key and the person associated with the
public key. This is used in an attempt to ensure confidentiality.
Digital signatures, in which a message is signed with the sender's private key and can be
verified by anyone who has access to the sender's public key. This verification proves
that the sender had access to the private key, and therefore is likely to be the person
associated with the public key. This also ensures that the message has not been tampered
with, as any manipulation of the message will result in changes to the encoded message
digest, which otherwise remains unchanged between the sender and receiver.
An analogy to public-key encryption is that of a locked mail box with a mail slot. The mail slot is
exposed and accessible to the public – its location (the street address) is, in essence, the public
key. Anyone knowing the street address can go to the door and drop a written message through
the slot. However, only the person who possesses the key can open the mailbox and read the
message.
An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The
message can be opened by anyone, but the presence of the unique seal authenticates the sender.
A central problem with the use of public-key cryptography is confidence/proof that a particular
public key is authentic, in that it is correct and belongs to the person or entity claimed, and has
not been tampered with or replaced by a malicious third party. The usual approach to this
problem is to use a public-key infrastructure (PKI), in which one or more third parties – known
as certificate authorities – certify ownership of key pairs. PGP, in addition, supports a
decentralized "web of trust" in which users vouch for the authenticity of one another's keys.
Practical considerations
Enveloped Public Key Encryption (EPKE) is the method of applying public-key cryptography
and ensuring that an electronic communication is transmitted confidentially, has the contents of
the communication protected against being modified (communication integrity) and cannot be
denied from having been sent (non-repudiation). This is often the method used when securing
communication on an open networked environment such as the internet, by making use of the
Transport Layer Security (TLS) or Secure Sockets Layer (SSL) protocols.
EPKE consists of a two-stage process that includes both Public Key Encryption (PKE) and a
digital signature. Both Public Key Encryption and digital signatures make up the foundation of
Enveloped Public Key Encryption (these two processes are described in full in their own
sections).
Every participant in the communication has their own unique pair of keys. The first key
that is required is a public key and the second key that is required is a private key.
Each person's own private and public keys must be mathematically related where the
private key is used to decrypt a communication sent using a public key and vice versa.
Some well-known asymmetric encryption algorithms are based on the RSA
cryptosystem.
The private key must be kept absolutely private by the owner, though the public key can
be published in a public directory such as with a certification authority.
To send a message using EPKE, the sender of the message first signs the message using their
own private key; this ensures non-repudiation of the message. The sender then encrypts the
digitally signed message using the receiver's public key, thus applying a digital envelope to the
message. This step ensures confidentiality during the transmission of the message. The receiver
of the message then uses their private key to decrypt the message, removing the digital
envelope, and then uses the sender's public key to verify the sender's digital signature. At this
point, if the message has been unaltered during transmission, the message will be clear to the
receiver.
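The whole EPKE round trip might look like the following sketch (again with the cryptography package). Because a single RSA block is too small to hold a message plus its 256-byte signature, the signed message travels inside a symmetric envelope whose session key is itself enveloped under the receiver's public key; all names and the message are illustrative:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: sign first (non-repudiation), then envelope (confidentiality).
message = b"buy 50 shares"
signature = sender.sign(message, pss, hashes.SHA256())  # 256 bytes for a 2048-bit key
session = Fernet.generate_key()
envelope = Fernet(session).encrypt(message + signature)
wrapped = receiver.public_key().encrypt(session, oaep)

# Receiver: open the envelope with their private key, then verify the signer.
blob = Fernet(receiver.decrypt(wrapped, oaep)).decrypt(envelope)
body, sig = blob[:-256], blob[-256:]
sender.public_key().verify(sig, body, pss, hashes.SHA256())  # raises InvalidSignature if forged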
Due to the computationally complex nature of RSA-based asymmetric encryption algorithms,
encrypting a large document or file for transmission can take a long time. To speed up the
process, instead of applying the digital signature to the entire document or file, the sender can
hash it with a cryptographic hash function and then digitally sign the generated hash value,
signing a small digest rather than the whole document.
Note: The sender and receiver do not usually carry out the process mentioned above manually,
but rather rely on sophisticated software to complete the EPKE process automatically.
The goal of Public Key Encryption (PKE) is to ensure that the communication being sent is kept
confidential during transit.
To send a message using PKE, the sender of the message uses the public key of the receiver to
encrypt the contents of the message. The encrypted message is then transmitted electronically to
the receiver and the receiver can then use their own matching private key to decrypt the message.
The encryption process of using the receiver's public key is useful for preserving the
confidentiality of the message, as only the receiver has the matching private key to decrypt the
message. Therefore, the sender of the message cannot decrypt the message once it has been
encrypted using the receiver's public key. However, PKE does not address the problem of non-
repudiation, as the message could have been sent by anyone that has access to the receiver’s
public key.
Digital signatures
The goal of a digital signature scheme is to ensure that the sender of the communication that is
being sent is known to the receiver and that the sender of the message cannot repudiate a
message that they sent. Therefore, the purpose of digital signatures is to ensure the non-
repudiation of the message being sent. This is useful in a practical setting where a sender wishes
to make an electronic purchase of shares and the receiver wants to be able to prove who
requested the purchase. Digital signatures do not provide confidentiality for the message being
sent.
The message is signed using the sender's private signing key. The digitally signed message is
then sent to the receiver, who can then use the sender's public key to verify the signature.
Certification authority
In order for Enveloped Public Key Encryption to be as secure as possible, there needs to be a
"gatekeeper" of public and private keys, or else anyone could create key pairs and masquerade as
the intended sender of a communication, proposing them as the keys of the intended sender. This
digital key "gatekeeper" is known as a certification authority. A certification authority is a trusted
A postal analogy
An analogy that can be used to understand the advantages of an asymmetric system is to imagine
two people, Alice and Bob, who are sending a secret message through the public mail. In this
example, Alice wants to send a secret message to Bob, and expects a secret reply from Bob.
With a symmetric key system, Alice first puts the secret message in a box, and locks the box
using a padlock to which she has a key. She then sends the box to Bob through regular mail.
When Bob receives the box, he uses an identical copy of Alice's key (which he has somehow
obtained previously, maybe by a face-to-face meeting) to open the box, and reads the message.
Bob can then use the same padlock to send his secret reply.
In an asymmetric key system, Bob and Alice have separate padlocks. First, Alice asks Bob to
send his open padlock to her through regular mail, keeping his key to himself. When Alice
receives it she uses it to lock a box containing her message, and sends the locked box to Bob.
Bob can then unlock the box with his key and read the message from Alice. To reply, Bob must
similarly get Alice's open padlock to lock the box before sending it back to her.
The critical advantage in an asymmetric key system is that Bob and Alice never need to send a
copy of their keys to each other. This prevents a third party – perhaps, in this example, a corrupt
postal worker that will open unlocked boxes – from copying a key while it is in transit, allowing
the third party to spy on all future messages sent between Alice and Bob. So, in the public key
scenario, Alice and Bob need not trust the postal service as much. In addition, if Bob were
careless and allowed someone else to copy his key, Alice's messages to Bob would be
compromised, but Alice's messages to other people would remain secret, since the other people
would be providing different padlocks for Alice to use.
Another kind of asymmetric key system, called a three-pass protocol, requires neither party to
even touch the other party's padlock (or key); Bob and Alice have separate padlocks. First, Alice
puts the secret message in a box, and locks the box using a padlock to which only she has a key.
She then sends the box to Bob through regular mail. When Bob receives the box, he adds his
own padlock to the box, and sends it back to Alice. When Alice receives the box with the two
padlocks, she removes her padlock and sends it back to Bob. When Bob receives the box with
only his padlock on it, Bob can then unlock the box with his key and read the message from
Alice. Note that, in this scheme, the order of decryption is NOT the same as the order of
encryption – this is only possible if commutative ciphers are used. A commutative cipher is one
in which the order of encryption and decryption is interchangeable, just as the order of
multiplication is interchangeable (i.e., A*B*C = A*C*B = C*B*A). This method is secure for
certain choices of commutative ciphers, but insecure for others (e.g., a simple XOR). For example,
let E1() and E2() be two encryption functions, and let M be the message. Alice encrypts the
message using E1() and sends E1(M) to Bob. Bob then encrypts it again as E2(E1(M)) and sends
it back to Alice. Alice now removes her own layer from E2(E1(M)) using E1(), leaving E2(M);
when she sends this to Bob, he can decrypt the message using E2() and recover M.
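The commutativity, and the XOR pitfall just mentioned, can be verified in a few lines of Python (keys and message are arbitrary):

import secrets

M = int.from_bytes(b"HELLO", "big")  # the message, viewed as a number
a = secrets.randbits(40)             # Alice's padlock; only she knows it
b = secrets.randbits(40)             # Bob's padlock; only he knows it

t1 = M ^ a           # pass 1: Alice -> Bob, locked with a
t2 = t1 ^ b          # pass 2: Bob -> Alice, locked with a and b
t3 = t2 ^ a          # pass 3: Alice removes her lock; only b remains
assert t3 ^ b == M   # Bob unlocks with his own key and reads M

# The insecurity: an eavesdropper who saw all three passes recovers M too.
assert t1 ^ t2 ^ t3 == M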
Not all asymmetric key algorithms operate in this way. In the most common, Alice and Bob each
own two keys, one for encryption and one for decryption. In a secure asymmetric key encryption
scheme, the private key should not be deducible from the public key. This makes possible
public-key encryption, since an encryption key can be published without compromising the
security of messages encrypted with that key.
In other schemes, either key can be used to encrypt the message. When Bob encrypts a message
with his private key, only his public key will successfully decrypt it, authenticating Bob's
authorship of the message. In the alternative, when a message is encrypted with the public key,
only the private key can decrypt it. In this arrangement, Alice and Bob can exchange secret
messages with no prior secret agreement, each using the other's public key to encrypt, and each
using his own to decrypt.
Weaknesses
Among symmetric key encryption algorithms, only the one-time pad can be proven to be secure
against any adversary – no matter how much computing power is available. However, there is no
public-key scheme with this property, since all public-key schemes are susceptible to a "brute-
force key search attack". Such attacks are impractical if the amount of computation needed to
succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential
attackers. In many cases, the work factor can be increased by simply choosing a longer key. But
other algorithms may have much lower work factors, making resistance to a brute-force attack
irrelevant. Some special and specific algorithms have been developed to aid in attacking some
public key encryption algorithms – both RSA and ElGamal encryption have known attacks that
are much faster than the brute-force approach. These factors have changed dramatically in recent
decades, both with the decreasing cost of computing power and with new mathematical
discoveries.
Aside from the resistance to attack of a particular key pair, the security of the certification
hierarchy must be considered when deploying public key systems. Some certificate authority –
usually a purpose-built program running on a server computer – vouches for the identities
assigned to specific private keys by producing a digital certificate. Public key digital certificates
are typically valid for several years at a time, so the associated private keys must be held
securely over that time. When a private key used for certificate creation higher in the PKI server
hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is
possible, making any subordinate certificate wholly insecure.
Major weaknesses have been found for several formerly promising asymmetric key algorithms.
The 'knapsack packing' algorithm was found to be insecure after the development of a new
attack. Recently, some attacks based on careful measurements of the exact amount of time it
takes known hardware to encrypt plain text have been used to simplify the search for likely
decryption keys (so-called timing or side-channel attacks).
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-
the-middle" attack, in which the communication of public keys is intercepted by a third party
(the "man in the middle") and then modified to provide different public keys instead. Encrypted
messages and responses must also be intercepted, decrypted, and re-encrypted by the attacker
using the correct public keys for different communication segments, in all instances, so as to
avoid suspicion. This attack may seem to be difficult to implement in practice, but it is not
impossible when using insecure media (e.g., public networks, such as the Internet or wireless
forms of communications) – for example, a malicious staff member at Alice or Bob's Internet
Service Provider (ISP) might find it quite easy to carry out. In the earlier postal analogy, Alice
would have to have a way to make sure that the lock on the returned packet really belongs to Bob
before she removes her lock and sends the packet back. Otherwise, the lock could have been put
on the packet by a corrupt postal worker pretending to be Bob, so as to fool Alice.
One approach to prevent such attacks involves the use of a certificate authority, a trusted third
party responsible for verifying the identity of a user of the system. This authority issues a
tamper-resistant, non-spoofable digital certificate for the participants. Such certificates are signed
data blocks stating that this public key belongs to that person, company, or other entity. This
approach also has its weaknesses – for example, the certificate authority issuing the certificate
must be trusted to have properly checked the identity of the key-holder, must ensure the
correctness of the public key when it issues a certificate, must be secure from computer piracy,
and must have made arrangements with all participants to check all their certificates before
protected communications can begin. Web browsers, for instance, are supplied with a long list of
"self-signed identity certificates" from PKI providers – these are used to check the bona fides of
the certificate authority and then, in a second step, the certificates of potential communicators.
An attacker who could subvert any single one of those certificate authorities into issuing a
certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if
the certificate scheme were not used at all. In an alternate scenario rarely discussed, an attacker
who penetrated an authority's servers and obtained its store of certificates and keys (public and
private) would be able to spoof, masquerade, decrypt, and forge transactions without limit.
Despite its theoretical and potential problems, this approach is widely used. Examples include
SSL and its successor, TLS, which are commonly used to provide security for web browser
transactions (for example, to securely send credit card details to an online store).
Duplicate Packets
Duplicate packets are commonly observed network behaviour. If a sending host concludes that a
packet was not transmitted correctly because of packet loss, it may retransmit that packet. If the
receiving host already got the first copy, the second one arriving is a duplicate packet.
Connection-oriented protocols such as TCP detect duplicate packets and ignore them
completely.
For most networks duplicate packets are typical behaviour; for example, they occur whenever
the sending side transmitted a packet correctly but believes it was not received at all.
Troubleshooting
If the network is configured correctly, there is not much that can be done about duplicate
packets, as this is somewhat "intended" behaviour.
Alternative routing provides two different cables from the local exchange to your site, so you can
protect against cable failure as your service will be maintained on the alternative route.
With diverse routing, you can protect not only against cable failure but also against local
exchange failure as there are two separate routes from two exchanges to your site.
Alternate routing: the ability to use another transmission line if the regular line is busy.
Introduction to firewalls
Generally, firewalls are configured to protect against unauthenticated interactive logins from the
outside world. This helps prevent hackers from logging into machines on your network. More
sophisticated firewalls block traffic from the outside to the inside, but permit users on the inside
to communicate a little more freely with the outside.
Firewalls are essential since they provide a single block point, where security and auditing can be
imposed. Firewalls provide an important logging and auditing function; often, they provide
summaries to the administrator about what type/volume of traffic has been processed through it.
This is an important benefit: Providing this block point can serve the same purpose on your
network as an armed guard does for your physical premises.
Firewalls generally fall into three categories:
Packet filters
Stateful inspection
Proxies
These three categories, however, are not mutually exclusive, as most modern firewalls have a
mix of abilities that may place them in more than one of the three. For more information and
detail on each category, see the NIST Guidelines on firewalls and firewall policy.
One way to compare firewalls is to look at the Transmission Control Protocol/Internet Protocol
(TCP/IP) layers that each is able to examine. TCP/IP communications are composed of four
layers; they work together to transfer data between hosts. When data transfers across networks, it
travels from the highest layer through intermediate layers to the lowest layer; each layer adds
more information. Then the lowest layer sends the accumulated data through the physical
network; the data next moves upward, through the layers, to its destination. Simply put, the data
a layer produces is encapsulated in a larger container by the layer below it. The four TCP/IP
layers, from highest to lowest, are the application, transport, internet and link layers.
Firewall implementation
The firewall remains a vital component in any network security architecture, and today's
organizations have several types to choose from. It's essential that IT professionals identify the
type of firewall that best suits the organization's network security needs.
Once selected, one of the key questions that shapes a protection strategy is "Where should the firewall be placed?" There are three common firewall topologies: the bastion host, the screened subnet and the multi-homed firewall. The next decision to be made, after the topology is chosen, is where to place individual firewall systems within it.
Remember that firewall configurations do change quickly and often, so it is difficult to keep on
top of routine firewall maintenance tasks. Firewall activity, therefore, must be continuously
audited to help keep the network secure from ever-evolving threats.
Network layer firewalls generally make their decisions based on the source address, destination
address and ports in individual IP packets. A simple router is the traditional network layer
firewall, since it is not able to make particularly complicated decisions about what a packet is
actually talking to or where it actually came from.
One important distinction many network layer firewalls possess is that they route traffic directly
through them, which means in order to use one, you either need to have a validly assigned IP
address block or a private Internet address block. Network layer firewalls tend to be very fast and
almost transparent to their users.
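As an illustration of this kind of decision making, here is a small Python sketch of a toy packet filter (the rule table and addresses are invented for illustration); note that it sees only addresses and ports, which is exactly the limitation described above:

import ipaddress

# Toy packet filter: allow or deny based only on addresses and ports,
# as a network layer firewall does. Rules are checked in order; the
# first match wins, and the final rule is a default deny.
RULES = [
    # (source prefix, destination prefix, destination port, action)
    ("0.0.0.0/0", "10.0.0.5/32", 25,   "allow"),   # inbound mail server
    ("0.0.0.0/0", "0.0.0.0/0",   None, "deny"),    # default deny
]

def filter_packet(src, dst, dport):
    for src_net, dst_net, port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(filter_packet("198.51.100.7", "10.0.0.5", 25))   # allow
print(filter_packet("198.51.100.7", "10.0.0.9", 80))   # deny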
Application layer firewalls are hosts that run proxy servers, which permit no traffic directly
between networks, and they perform elaborate logging and examination of traffic passing
through them. Since proxy applications are simply software running on the firewall, it is a good
place to do logging and access control. Application layer firewalls can be used as network
address translators, since traffic goes in one side and out the other after having passed through an
application that effectively masks the origin of the initiating connection.
However, run-of-the-mill network firewalls can't properly defend applications. As Michael Cobb
explains, application layer firewalls offer Layer 7 security on a more granular level, and may
even help organizations get more out of existing network devices.
In some cases, having an application in the way may impact performance and make the firewall
less transparent. Older application layer firewalls that are still in use are not particularly
transparent to end users and may require some user training. However, more modern application
layer firewalls are often totally transparent. Application layer firewalls tend to provide more
detailed audit reports and tend to enforce more conservative security models than network layer
firewalls.
Future firewalls will likely combine some characteristics of network layer firewalls and application layer firewalls. It is likely that network layer firewalls will become increasingly aware of the information going through them, while application layer firewalls have already become increasingly low-level and transparent.
Proxy firewalls
Proxy firewalls offer more security than other types of firewalls, but at the expense of speed and
functionality, as they can limit which applications the network supports. Why are they more
secure? Unlike stateful firewalls or application layer firewalls, which allow or block network
packets from passing to and from a protected network, traffic does not flow through a proxy. Instead, computers establish a connection to the proxy, which serves as an intermediary and initiates a new network connection on behalf of the request. This prevents direct connections
between systems on either side of the firewall and makes it harder for an attacker to discover
where the network is, because they don't receive packets created directly by their target system.
Proxy firewalls also provide comprehensive, protocol-aware security analysis for the protocols
they support. This allows them to make better security decisions than products that focus purely
on packet header information.
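To make the "no direct connection" point concrete, here is a minimal illustrative Python sketch of a TCP relay acting as a proxy (addresses and names are placeholders; a real proxy firewall also performs protocol-aware inspection, which is omitted here):

import socket
import threading

# Sketch of a TCP proxy: the client talks only to the proxy, and the
# proxy opens its own separate connection to the real server, so no
# packet travels directly between client and server.
def pipe(src, dst):
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client, server_addr):
    upstream = socket.create_connection(server_addr)  # proxy's own connection
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

def run_proxy(listen_port, server_addr):
    with socket.socket() as listener:
        listener.bind(("0.0.0.0", listen_port))
        listener.listen()
        while True:
            client, _ = listener.accept()
            threading.Thread(target=handle, args=(client, server_addr),
                             daemon=True).start()

# run_proxy(8080, ("internal-server.example", 80))  # placeholder addresses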
A new category of network security products -- called unified threat management (UTM) --
promises integration, convenience and protection from pretty much every threat out there; these
are especially valuable for enterprise use. As Mike Rothman explains, the evolution of UTM
technology and vendor offerings makes these products even more valuable to enterprises.
Security expert Karen Scarfone defines UTM products as firewall appliances that not only guard
against intrusion but also perform content filtering, spam filtering, application control, Web
content filtering, intrusion detection and antivirus duties; in other words, a UTM device
combines functions traditionally handled by multiple systems. These devices are designed to
combat all levels of malicious activity on the computer network.
An effective UTM solution delivers a network security platform comprised of robust and fully
integrated security and networking functions along with other features, such as security
management and policy management by a group or user. It is designed to protect against next
generation application layer threats and offers a centralized management through a single
console, all without impairing the performance of the network.
Convenience and ease of installation are the two key advantages of unified threat management
security appliances. There is also much less human intervention required to install and configure
them appliances. Other advantages of UTM are listed below:
Reduced complexity: The integrated all-in-one approach simplifies not only product selection but also product integration and ongoing support.
Some of the leading UTM solution providers are Check Point, Cisco, Dell, Fortinet, HP, IBM
and Juniper Networks.
UTM products are not the right solution for every environment. Many organizations already have
a set of point solutions installed that, combined, provide network security capabilities similar to
what UTMs offer, and there can be substantial costs involved in ripping and replacing the existing technology to install a UTM replacement. There are also advantages to using the individual
products together, rather than a UTM. For instance, when individual point products are
combined, the IT staff is able to select the best product available for each network security
capability; a UTM can mean having to compromise and acquire a single product that has
stronger capabilities in some areas and weaker ones in others.
Another important consideration when evaluating UTM solutions is the size of the organization
in which it would be installed. The smallest organizations might not need all the network security
features of a UTM. There is no need for a smaller firm to tax its budget with a UTM if many of
its functions aren't needed. On the other hand, a UTM may not be right for larger, more cyber-
dependent organizations either, since these often need a level of scalability and reliability in their
network security that UTM products might not support (or at least not support as well as a set of
point solutions). Also, a UTM system creates a single point of failure for most or all network
security capabilities; UTM failure could conceivably shut down an enterprise, with a catastrophic
effect on company security. How much an enterprise is willing to rely on a UTM is a question
that must be asked, and answered.
Transport Layer Security (TLS) is a protocol that ensures privacy between communicating
applications and their users on the Internet. When a server and client communicate, TLS ensures
that no third party may eavesdrop or tamper with any message. TLS is the successor to the
Secure Sockets Layer (SSL).
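As a brief illustration of TLS in practice, the Python standard library's ssl module can wrap an ordinary socket in TLS; this sketch uses example.com as a placeholder host:

import socket
import ssl

# Opening a TLS-protected connection. The default context verifies the
# server's certificate and host name, which is what protects the
# session against eavesdropping and tampering by third parties.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])   # the server certificate's subject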
IP has the task of delivering packets from the source host to the destination host solely based on
the IP addresses in the packet headers. For this purpose, IP defines packet structures that
encapsulate the data to be delivered. It also defines addressing methods that are used to label the
datagram with source and destination information.
The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the
Internet. Its successor is Internet Protocol Version 6 (IPv6).
Function
The Internet Protocol is responsible for addressing hosts and for routing datagrams (packets)
from a source host to a destination host across one or more IP networks. For this purpose, the
Internet Protocol defines the format of packets and provides an addressing system that has two functions: identifying hosts and providing a logical location service.
Datagram construction
Each datagram has two components: a header and a payload. The IP header is tagged with the source IP address, the destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation.
IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into networks and subnetworks, involving the designation of network or routing prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network.
IP routing is also common in local networks. For example, many Ethernet switches support IP multicast operations. These switches use IP addresses and Internet Group Management Protocol to control multicast routing but use MAC addresses for the actual routing.
The design of the Internet protocols is based on the end-to-end principle. The network
infrastructure is considered inherently unreliable at any single network element or transmission
medium and assumes that it is dynamic in terms of availability of links and nodes. No central
monitoring or performance measurement facility exists that tracks or maintains the state of the
network. For the benefit of reducing network complexity, the intelligence in the network is
purposely mostly located in the end nodes of data transmission. Routers in the transmission path
forward packets to the next known, directly reachable gateway matching the routing prefix for
the destination address.
As a consequence of this design, the Internet Protocol only provides best effort delivery and its
service is characterized as unreliable. In network architectural language, it is a connectionless
protocol, in contrast to connection-oriented modes of transmission. Various error conditions may
occur, such as data corruption, packet loss, duplication and out-of-order delivery. Because
routing is dynamic, meaning every packet is treated independently, and because the network
maintains no state based on the path of prior packets, different packets may be routed to the same
destination via different paths, resulting in out-of-order sequencing at the receiver.
Internet Protocol Version 4 (IPv4) provides safeguards to ensure that the IP packet header is
error-free. A routing node calculates a checksum for a packet. If the checksum is bad, the routing
node discards the packet. The routing node does not have to notify either end node, although the
Internet Control Message Protocol (ICMP) allows such notification. By contrast, in order to
increase performance, and since current link layer technology is assumed to provide sufficient
error detection, the IPv6 header has no checksum to protect it.
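To illustrate the header checksum just described, here is a short Python sketch of the Internet checksum algorithm (the one's-complement sum of 16-bit words used for IPv4 headers); the sample header bytes are illustrative only:

# Internet checksum over an IPv4 header: sum the header as 16-bit
# words, fold any carries back in, then take the one's complement.
# A router recomputes this and discards the packet on a mismatch.
def ip_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:                    # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A sample 20-byte header with its checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("4500003c1c464000400600000a000001c0a80001")
print(hex(ip_checksum(hdr)))   # the value to place in the checksum field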
All error conditions in the network must be detected and compensated by the end nodes of a
transmission. The upper layer protocols of the Internet protocol suite are responsible for
resolving reliability issues. For example, a host may cache network data to ensure correct
ordering before the data is delivered to an application.
The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and the Internet Control Message Protocol (ICMP) disregard MTU size, thereby forcing IP to fragment oversized datagrams.
Internet Protocol Version 4 (IPv4) is the fourth revision of the IP and a widely used protocol in
data communication over different kinds of networks. IPv4 is a connectionless protocol used in
packet-switched layer networks, such as Ethernet. It provides the logical connection between
network devices by providing identification for each device. There are many ways to configure
IPv4 with all kinds of devices - including manual and automatic configurations - depending on
the network type.
IPv4 is based on the best-effort model. This model guarantees neither delivery nor avoidance of
duplicate delivery; these aspects are handled by the upper layer transport.
IPv4 is defined and specified in IETF publication RFC 791. It is used in the packet-switched link layer in the OSI model.
IPv4 uses 32-bit addresses for Ethernet communication in five classes, named A, B, C, D and E.
Classes A, B and C have a different bit length for addressing the network host. Class D addresses
are reserved for multicasting, while class E addresses are reserved for future use.
Class A has subnet mask 255.0.0.0 or /8, B has subnet mask 255.255.0.0 or /16 and class C has
subnet mask 255.255.255.0 or /24. For example, with a /16 subnet mask, the network
192.168.0.0 may use the address range of 192.168.0.0 to 192.168.255.255. Network hosts can
take any address from this range; however, address 192.168.255.255 is reserved for broadcast
within the network.
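These masks and ranges can be checked with Python's standard ipaddress module, as in this small sketch:

import ipaddress

# Exploring a /16 network such as the example above.
net = ipaddress.ip_network("192.168.0.0/16")
print(net.netmask)             # 255.255.0.0
print(net.num_addresses)       # 65536 addresses in the range
print(net.broadcast_address)   # 192.168.255.255, reserved for broadcast
print(ipaddress.ip_address("192.168.42.7") in net)   # True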
The maximum number of host addresses IPv4 can assign to end users is 2^32. IPv6 presents a standardized solution to overcome IPv4's limitations. Because of its 128-bit address length, it can define up to 2^128 addresses.
Internet Protocol Version 6 (IPv6) is an Internet Protocol (IP) used for carrying data in packets
from a source to a destination over various networks. IPv6 is the enhanced version of IPv4 and
can support very large numbers of nodes as compared to IPv4. It allows for 2^128 possible node, or address, combinations.
IPv6 (Internet Protocol version 6) is a set of specifications from the Internet Engineering Task
Force (IETF) that's essentially an upgrade of IP version 4 (IPv4). The basics of IPv6 are similar
to those of IPv4 -- devices can use IPv6 as source and destination addresses to pass packets over
a network, and tools like ping work for network testing as they do in IPv4, with some slight
variations.
The most obvious improvement in IPv6 over IPv4 is that IP addresses are lengthened from 32
bits to 128 bits. This extension anticipates considerable future growth of the Internet and
provides relief for what was perceived as an impending shortage of network addresses. IPv6 also
supports auto-configuration to help correct most of the shortcomings in version 4, and it has
integrated security and mobility features.
Key features of IPv6 include the following:
Supports source and destination addresses that are 128 bits (16 bytes) long.
Requires IPSec support.
Uses Flow Label field to identify packet flow for QoS handling by router.
Allows hosts, but not routers, to fragment packets.
Doesn't include a checksum in the header.
Uses a link-local scope all-nodes multicast address.
Does not require manual configuration or DHCP.
Uses host address (AAAA) resource records in DNS to map host names to IPv6
addresses.
Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6
addresses to host names.
Requires support for a 1280-byte packet size (without fragmentation).
Moves optional data to IPv6 extension headers.
Uses Multicast Neighbor Solicitation messages to resolve IP addresses to link-layer
addresses.
Uses Multicast Listener Discovery (MLD) messages to manage membership in local
subnet groups.
Uses ICMPv6 Router Solicitation and Router Advertisement messages to determine the
IP address of the best default gateway.
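As a small illustration of the 128-bit addressing and the AAAA records mentioned in the list above, Python's ipaddress and socket modules can be used as follows (the host name is a placeholder; the lookup raises socket.gaierror if the name has no AAAA record):

import ipaddress
import socket

# An IPv6 address is 128 bits wide; .exploded shows all eight groups.
addr = ipaddress.ip_address("2001:db8::1")   # documentation prefix
print(addr.version)    # 6
print(addr.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001

# Resolve a host name to its IPv6 addresses (an AAAA lookup):
for info in socket.getaddrinfo("example.com", None, socket.AF_INET6):
    print(info[4][0])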
The routers I use as examples are the Cisco Linksys Smart Wi-Fi AC 1750HD Video Pro
EA6500 and the Netgear N600 Wireless Dual Band Gigabit Router (WNDR3700)—with
Netgear's new Genie management software. Management software varies from router to router,
but most of the settings presented here can be found in just about all consumer wireless routers,
especially those made in the last three years.
Step 1: WPA2
I think it's common networking knowledge that there really is no excuse to use any encryption method other than WPA2. Just about all modern wireless clients support it; only the very oldest wireless devices do not.
Changing the admin password is usually done in the "System" or "Administration" area of the interface. Changing the SSID's passphrase is typically under "Wireless Settings." By the way, don't reuse the sample password set here; that's just a router for testing, and my home router has a much stronger password. For some good advice on creating passwords, give "Password Protection: How to Create Strong Passwords" a read.
Newer router interfaces are getting fancier. The most recent interface on the Cisco Linksys routers shows all of this information plus an icon of the type of client that's connected (a picture of a bridge, a NAS, a computer and so on). I've also met with vendors who are releasing cloud-based management tools for their routers.
But the competitive edge and other benefits of mobility can be lost if smartphones and tablet PCs
are not adequately protected against mobile device security threats. While the market shows no
sign of slowing, IT organizations identify security as one of their greatest concerns about
extending mobility. The purpose of this Learning Guide is to help assuage some of those
concerns by arming you with knowledge of mobile device security threats and how to implement
protection measures.
Mobile devices face a number of threats that pose a significant risk to corporate data. Like
desktops, smartphones and tablet PCs are susceptible to digital attacks, but they are also highly
vulnerable to physical attacks given their portability. Here is an overview of the various mobile
device security threats and the risks they pose to corporate assets.
Mobile malware – Smartphones and tablets are susceptible to worms, viruses, Trojans and
spyware similarly to desktops. Mobile malware can steal sensitive data, rack up long distance
phone charges and collect user data. High-profile mobile malware infections are few, but that is
likely to change. In addition, attackers can use mobile malware to carry out targeted attacks
against mobile device users.
Eavesdropping – Carrier-based wireless networks have good link-level security but lack end-to-
end upper-layer security. Data sent from the client to an enterprise server is often unencrypted,
allowing intruders to eavesdrop on users’ sensitive communications.
Unauthorized access – Users often store login credentials for applications on their mobile
devices, making access to corporate resources only a click or tap away. In this manner
unauthorized users can easily access corporate email accounts and applications, social media
networks and more.
Theft and loss – Couple mobile devices’ small form factor with PC-grade processing power and
storage, and you have a high risk for data loss. Users store a significant amount of sensitive
corporate data–such as business email, customer databases, corporate presentations and business
plans–on their mobile devices. It only takes one hurried user to leave their iPhone in a taxicab for
a significant data loss incident to occur.
Unlicensed and unmanaged applications – Unlicensed applications can expose your company to legal costs. But whether or not applications are licensed, they must be updated regularly to fix security vulnerabilities.
Encrypting data at rest and in motion helps prevent data loss and successful eavesdropping
attempts on mobile devices. Carrier networks have good encryption of the airlink, but the rest of
the value chain between the client and enterprise server remains open unless explicitly managed.
Contemporary tablet PCs and smartphones can secure Web and email with SSL/TLS, Wi-Fi with
WPA2 and corporate data with mobile VPN clients. The primary challenge facing IT
organizations is ensuring proper configuration and enforcement, as well as protecting credentials
and configurations to prevent reuse on unauthorized devices.
Data at rest can be protected with self-protecting applications that store email messages, contacts
and calendars inside encrypted containers. These containers separate business data from personal
data, making it easier to wipe business data should the device become lost or stolen.
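As a hedged illustration of the encrypted-container idea (this is not any specific mobile product; it uses the third-party Python cryptography package), authenticated symmetric encryption of data at rest can look like this:

# Sketch: protecting business data at rest with authenticated symmetric
# encryption (Fernet, from the third-party 'cryptography' package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice derived from and protected by
box = Fernet(key)             # device credentials, never stored in clear

token = box.encrypt(b"customer database excerpt")
print(box.decrypt(token))     # b'customer database excerpt'
# Destroying the key alone is enough to render the container unreadable,
# which is what makes a remote wipe of business data fast and reliable.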
Authentication and authorization controls help protect against unauthorized access to mobile devices and the data on them. Ideally, Craig Mathias, principal with advisory firm Farpoint Group, says IT organizations should implement two-factor authentication on mobile devices, which requires users to prove their identity using something they know, like a password, and a second factor, such as a fingerprint. In addition to providing robust authentication and authorization, Mathias
says two-factor authentication can also be used to drive a good encryption implementation.
Unfortunately, two-factor authentication technology is not yet widely available in mobile
devices. Until then, IT organizations should require users to use native device-level
authentication (PIN, password).
WPA (sometimes referred to as the draft IEEE 802.11i standard) became available in 2003. The
Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the
more secure and complex WPA2. WPA2 became available in 2004 and is common shorthand for
the full IEEE 802.11i (or IEEE 802.11i-2004) standard.
A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA and WPA2
security to be bypassed and effectively broken in many situations. WPA and WPA2 security
implemented without using the Wi-Fi Protected Setup feature are unaffected by the security vulnerability.
Stands for "Wi-Fi Protected Access." WPA is a security protocol designed to create secure
wireless (Wi-Fi) networks. It is similar to the WEP protocol, but offers improvements in the way
it handles security keys and the way users are authorized.
For an encrypted data transfer to work, both systems on the beginning and end of a data transfer
must use the same encryption/decryption key. While WEP provides each authorized system with
the same key, WPA uses the temporal key integrity protocol (TKIP), which dynamically changes
the key that the systems use. This prevents intruders from creating their own encryption key to
match the one used by the secure network.
WPA also implements something called the Extensible Authentication Protocol (EAP) for authorizing users. Instead of authorizing computers based solely on their MAC address, WPA can use several other methods to verify each user's identity.
More notes
Wi-Fi Protected Access (WPA) is a security standard for users of computers equipped with Wi-
Fi wireless connection. It is an improvement on and is expected to replace the original Wi-Fi
security standard, Wired Equivalent Privacy (WEP). WPA provides more sophisticated data
encryption than WEP and also provides user authentication (WEP's user authentication is
considered insufficient). WEP is still considered useful for the casual home user, but insufficient
for the corporate environment where the large flow of messages can enable eavesdroppers to
discover encryption keys more quickly.
WPA's encryption method is the Temporal Key Integrity Protocol (TKIP). TKIP addresses the
weaknesses of WEP by including a per-packet mixing function, a message integrity check, an
extended initialization vector, and a re-keying mechanism. WPA provides "strong" user
authentication based on 802.1x and the Extensible Authentication Protocol (EAP). WPA depends
on a central authentication server such as RADIUS to authenticate each user.
Wi-Fi Protected Access is a subset of and will be compatible with IEEE 802.11i (sometimes
referred to as WPA2), a security standard under development. Software updates that will allow
both server and client computers to implement WPA are expected to become widely available
during 2003. Access points (see hot spots) can operate in mixed WEP/WPA mode to support
both WEP and WPA clients. However, mixed mode effectively provides only WEP-level
security for all users. Home users of access points that use only WPA can operate in a special
home-mode in which the user need only enter a password to be connected to the access point.
The password will trigger authentication and TKIP encryption.
The establishment, maintenance and continuous update of an ISMS provide a strong indication
that a company is using a systematic approach for the identification, assessment and management
of information security risks.
Different methodologies have been proposed to manage IT risks, each of them divided in
processes and steps.
According to Risk IT, IT risk encompasses not only the negative impact of operations and service delivery, which can bring destruction or reduction of the value of the organization, but also the benefit- or value-enabling risk associated with missed opportunities to use technology to enable or enhance business, and IT project management aspects such as overspending or late delivery with adverse business impact.
Because risk is strictly tied to uncertainty, Decision theory should be applied to manage risk as a
science, i.e. rationally making choices under uncertainty.
Generally speaking, risk is the product of likelihood times impact (Risk = Likelihood * Impact).
The measure of an IT risk can be determined as a product of threat, vulnerability and asset values:
Risk = Threat * Vulnerability * Asset Value
A more current risk management framework for IT risk is the TIK framework:
Risk = ((Vulnerability * Threat) / Counter Measure) * Asset Value at Risk
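A worked numeric sketch of both formulas, using invented figures purely for illustration:

# Invented example figures: threat and vulnerability rated on a 0-1
# scale, countermeasure strength as a divisor, asset value in dollars.
threat = 0.5
vulnerability = 0.4
countermeasure = 2.0
asset_value_at_risk = 100_000

simple_risk = threat * vulnerability * asset_value_at_risk
tik_risk = ((vulnerability * threat) / countermeasure) * asset_value_at_risk

print(simple_risk)   # 20000.0
print(tik_risk)      # 10000.0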
Definitions
"Risk management is the process of identifying vulnerabilities and threats to the information
resources used by an organization in achieving business objectives, and deciding what
countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the
information resource to the organization."
There are two things in this definition that may need some clarification. First, the process of risk
management is an ongoing iterative process. It must be repeated indefinitely. The business
environment is constantly changing, and new threats and vulnerabilities emerge every day. Second,
the choice of countermeasures (controls) used to manage risks must strike a balance between
productivity, cost, effectiveness of the countermeasure, and the value of the informational asset
being protected.
The head of an organizational unit must ensure that the organization has the capabilities needed
to accomplish its mission. These mission owners must determine the security capabilities that
their IT systems must have to provide the desired level of mission support in the face of real
world threats. Most organizations have tight budgets for IT security; therefore, IT security
spending must be reviewed as thoroughly as other management decisions. A well-structured risk
management methodology, when used effectively, can help management identify appropriate
controls for providing the mission-essential security capabilities.
Risk management in the IT world is a complex, multifaceted activity, with many relationships to other complex activities.
The American National Information Assurance Training and Education Center defines risk in the
IT field as:
1. The total process to identify, control, and minimize the impact of uncertain events. The
objective of the risk management program is to reduce risk and obtain and maintain DAA
approval. The process facilitates the management of security risks by each level of
management throughout the system life cycle. The approval process consists of three
elements: risk analysis, certification, and approval.
2. An element of managerial science concerned with the identification, measurement,
control, and minimization of uncertain events. An effective risk management program
encompasses the following four phases:
1. Risk assessment, as derived from an evaluation of threats and vulnerabilities.
2. Management decision.
3. Control implementation.
4. Effectiveness review.
3. The total process of identifying, measuring, and minimizing uncertain events affecting
AIS resources. It includes risk analysis, cost benefit analysis, safeguard selection,
security test and evaluation, safeguard implementation, and systems review.
4. The total process of identifying, controlling, and eliminating or minimizing uncertain events that may affect system resources. It includes risk analysis, cost benefit analysis,
selection, implementation and test, security evaluation of safeguards, and overall security
review.
Some organizations have, and many others should have, a comprehensive Enterprise risk
management (ERM) in place. The four objectives categories addressed, according to Committee
of Sponsoring Organizations of the Treadway Commission (COSO) are:
Strategy - high-level goals, aligned with and supporting the organization's mission
Operations - effective and efficient use of resources
Reporting - reliability of reporting
Compliance - compliance with applicable laws and regulations
According to the Risk IT framework by ISACA, IT risk is transversal to all four categories. The IT
risk should be managed in the framework of Enterprise risk management: Risk appetite and Risk
sensitivity of the whole enterprise should guide the IT risk management process. ERM should
provide the context and business objectives to IT risk management.
The term methodology means an organized set of principles and rules that drive action in a
particular field of knowledge. A methodology does not describe specific methods; nevertheless, it does specify several processes that need to be followed. These processes constitute a generic framework. They may be broken down into sub-processes, they may be combined, or their sequence may change.
Due to the probabilistic nature and the need for cost benefit analysis, IT risks are managed following a process that, according to NIST SP 800-30, can be divided into the following steps:
1. risk assessment,
2. risk mitigation, and
3. evaluation and assessment.
Effective risk management must be totally integrated into the Systems Development Life Cycle.
Context establishment
This step is the first step in the ISO/IEC 27005 framework. Most of the elementary activities are foreseen as the first sub-process of risk assessment according to NIST SP 800-30. This step implies the acquisition of all relevant information about the organization, and the determination of the basic criteria, purpose, scope and boundaries of risk management activities and of the organization in charge of risk management activities. The purpose is usually compliance with legal requirements and the provision of evidence of due diligence supporting an ISMS that can be certified. The scope can be, for example, an incident reporting plan or a business continuity plan.
Criteria include the risk evaluation, risk acceptance and impact evaluation criteria; these are conditioned by the organization and its context.
In establishing the scope and boundaries, the organization should be studied: its mission, its values, its structure, its strategy, its locations and its cultural environment. The constraints (budgetary, cultural, political and technical) of the organization are to be collected and documented as a guide for the next steps.
The setup of the organization in charge of risk management is foreseen as partially fulfilling the requirement to provide the resources needed to establish, implement, operate, monitor, review, maintain and improve an ISMS.
1. Assets
In very general terms, an asset can be defined as anything that could be of value or importance to
the entity.
In information security, the ISO/IEC 27005 standard distinguishes between:
• Primary assets, including:
Processes and activities
Information
• Supporting assets, including:
Equipment
Software
Networks
Personnel
Premises
Organisational support
This is of course a very general definition that, while common to all methods, translates into a
range of practical applications.
2. Asset damage
Clearly, risks (and their consequences) differ depending on what type of damage occurs.
Different categories of assets will be damaged in different ways, and while it is easy to list the
ways in which information can be damaged (by being lost, tampered with or exposed, among
other things), few standard classifications exist for processes or certain support-related assets.
Defining threat
The ISO/IEC 27000 series of standards on risk related to information systems refers to the idea of "threat", which is not really defined, except to say that "a threat has the potential to harm assets such as information, processes, and systems and therefore organizations".
One might assume that a threat is similar to the “cause” mentioned above, but it is in fact quite
different: threats can apply to a wide range of aspects, particularly:
• Events or actions that can lead to the occurrence of a risk (for example an accident, fire, media theft, etc.),
• Actions or methods of action that make the occurrence of risk possible without causing it (for example abuse of privilege, illegal access rights or identity theft),
• Effects related to, and which indicate, undetermined causes (for example the saturation of an information system),
• Behaviour (for example unauthorized use of equipment) that is not in itself an event that leads to the occurrence of risk.
These examples show that a threat is not strictly linked to the cause of a risk, but it does make
defining typologies of risk possible using a list of typical threats.
Defining vulnerability
The term vulnerability is sometimes used in risk analysis, but more widely in the domain of information systems security.
If we take the example of a typed or handwritten document, where the threat would be rain or
storms in general, possible vulnerabilities would be:
• that the ink is not waterproof,
• that the paper is water-sensitive,
• that the material it is written on is degradable.
Often, it is more useful to think of vulnerabilities in terms of security controls and their potential
shortcomings.
Then, vulnerability is defined as a shortcoming or flaw in a security system that could be used by
a threat to strike a targeted system, object or asset.
In the example above, the exploited vulnerability was a lack of protection against storms.
From here, vulnerability branches out in many directions, as every security system has
weaknesses and any solution intended to reduce vulnerability is vulnerable itself.
If we go back to the example of the document made of degradable material, an initial solution is storage away from storms.
• Resulting vulnerabilities:
Faulty plumbing systems within the building,
Inadequate or poorly executed storage procedures,
Activation of fire protection sprinklers, etc.
When examining the notion of vulnerability, it may be useful to keep in mind that these two approaches (vulnerability as an intrinsic weakness of an asset, and vulnerability as a shortcoming in a security control) are not the same.
By using these general concepts, several definitions of risk are possible, and are in fact proposed by different risk management methods. At the same time, they are compatible with standard-setting documents.
Risk analysis
Security in any system should be commensurate with its risks. However, the process to
determine which security controls are appropriate and cost effective is quite often a complex and
sometimes a subjective matter. One of the prime functions of security risk analysis is to put this
process onto a more objective basis. There are a number of distinct approaches to risk analysis.
However, these essentially break down into two types: quantitative and qualitative.
Quantitative risk analysis
This approach employs two fundamental elements: the probability of an event occurring and the likely loss should it occur.
Quantitative risk analysis makes use of a single figure produced from these elements. This is
called the 'Annual Loss Expectancy (ALE)' or the 'Estimated Annual Cost (EAC)'. This is
calculated for an event by simply multiplying the potential loss by the probability.
The problems with this type of risk analysis are usually associated with the unreliability and
inaccuracy of the data. Probability can rarely be precise and can, in some cases, promote
complacency. In addition, controls and countermeasures often tackle a number of potential
events and the events themselves are frequently interrelated. Notwithstanding the drawbacks, a
number of organisations have successfully adopted quantitative risk analysis.
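A worked sketch of the ALE calculation, with invented figures for illustration:

# Annual Loss Expectancy: the potential loss from an event multiplied
# by the annual probability of that event. Figures are invented.
potential_loss = 250_000      # loss in dollars if the event occurs
annual_probability = 0.02     # expected about twice per hundred years

ale = potential_loss * annual_probability
print(ale)                    # 5000.0 dollars per year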
Qualitative risk analysis
This is by far the most widely used approach to risk analysis. Probability data is not required; only the estimated potential loss is used. Most qualitative risk analysis methodologies make use of a number of interrelated elements:
THREATS
These are things that can go wrong or that can 'attack' the system. Examples might include fire or
fraud. Threats are ever present for every system.
VULNERABILITIES
These make a system more prone to attack by a threat, or make an attack more likely to have some success or impact. For example, for fire, a vulnerability would be the presence of flammable materials (e.g. paper).
CONTROLS
These are the countermeasures for vulnerabilities. There are four types: deterrent, preventative, corrective and detective.
Risk Management is a recurrent activity that deals with the analysis, planning, implementation, control and monitoring of implemented measures and the enforced security policy. In contrast, Risk Assessment is executed at discrete points in time (e.g. once a year, on demand, etc.) and, until the performance of the next assessment, provides a temporary view of assessed risks while parameterizing the entire Risk Management process. This view of the relationship of Risk Management to Risk Assessment is adopted from OCTAVE.
Risk assessment receives as input the output of the previous step, Context establishment; the output is the list of assessed risks, prioritized according to risk evaluation criteria. The process can be divided into the following steps:
Risk analysis, further divided into:
o Risk identification
o Risk estimation
o Risk evaluation
The Code of practice for information security management (ISO/IEC 27002) recommends the following be examined during a risk assessment:
security policy,
organization of information security,
asset management,
human resources security,
physical and environmental security,
communications and operations management,
access control,
information systems acquisition, development and maintenance, (see Systems
Development Life Cycle)
information security incident management,
business continuity management, and
regulatory compliance.
Risk identification
Risk identification states what could cause a potential loss; the following are to be identified:
assets, primary (i.e. business processes and related information) and supporting (i.e. hardware, software, personnel, site, organization structure)
threats
existing and planned security measures
Risk estimation
There are two methods of risk assessment in the information security field: qualitative and quantitative.
Qualitative risk assessment (a three-to-five-step evaluation scale, from Very High to Low) is performed when the organization requires a risk assessment be performed in a relatively short time or to meet a small budget, when a significant quantity of relevant data is not available, or when the persons performing the assessment don't have the sophisticated mathematical, financial, and risk
assessment expertise required. Qualitative risk assessment can be performed in a shorter period
of time and with less data. Qualitative risk assessments are typically performed through
interviews of a sample of personnel from all relevant groups within an organization charged with
the security of the asset being assessed. Qualitative risk assessments are descriptive versus
measurable. Usually a qualitative classification is done first, followed by a quantitative evaluation of the highest risks, which is compared to the costs of security measures.
Risk estimation has as input the output of risk analysis and can be split in the following steps:
assessment of the consequences through the valuation of assets
assessment of the likelihood of the incident (through threat and vulnerability valuation)
assign values to the likelihood and consequence of the risks
The output is the list of risks with value levels assigned. It can be documented in a risk register. During risk estimation there are generally three values for a given asset, one for the loss of each of the CIA properties: Confidentiality, Integrity and Availability.
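A minimal sketch of the qualitative estimation step: rating likelihood and consequence on a simple scale and combining them through a lookup matrix into a risk register entry. The scale and matrix below are illustrative, not mandated by ISO 27005:

# Illustrative 3x3 qualitative risk matrix: likelihood and consequence
# are each rated Low/Medium/High and combined into a risk level.
MATRIX = {
    ("Low", "Low"): "Low",          ("Low", "Medium"): "Low",
    ("Low", "High"): "Medium",      ("Medium", "Low"): "Low",
    ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("High", "Low"): "Medium",      ("High", "Medium"): "High",
    ("High", "High"): "High",
}

register = []   # the risk register: one entry per assessed risk

def estimate(risk_name, likelihood, consequence):
    level = MATRIX[(likelihood, consequence)]
    register.append((risk_name, likelihood, consequence, level))
    return level

print(estimate("Laptop theft", "Medium", "High"))   # High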
Risk mitigation
Risk mitigation, the second process according to SP 800-30, the third according to ISO 27005 of
risk management, involves prioritizing, evaluating, and implementing the appropriate risk-
reducing controls recommended from the risk assessment process. Because the elimination of all
risk is usually impractical or close to impossible, it is the responsibility of senior management
and functional and business managers to use the least-cost approach and implement the most
appropriate controls to decrease mission risk to an acceptable level, with minimal adverse impact
on the organization’s resources and mission.
Risk avoidance describes any action where ways of conducting business are changed to avoid
any risk occurrence. For example, the choice of not storing sensitive information about customers can be an avoidance of the risk that customer data will be stolen.
The residual risks, i.e. the risk remaining after risk treatment decisions have been taken, should be
estimated to ensure that sufficient protection is achieved. If the residual risk is unacceptable, the
risk treatment process should be iterated.
NIST SP 800 30 framework
Risk mitigation is a systematic methodology used by senior management to reduce mission risk.
Risk mitigation can be achieved through any of the following risk mitigation options:
Risk Assumption. To accept the potential risk and continue operating the IT system or to
implement controls to lower the risk to an acceptable level
Risk Avoidance. To avoid the risk by eliminating the risk cause and/or consequence
(e.g., forgo certain functions of the system or shut down the system when risks are
identified)
Address the greatest risks and strive for sufficient risk mitigation at the lowest cost, with minimal impact on other mission capabilities: this is the suggestion contained in NIST SP 800-30.
Risk communication
Risk communication is a horizontal process that interacts bi-directionally with all other processes of risk management. Its purpose is to establish a common understanding of all aspects of risk among all of the organization's stakeholders. Establishing a common understanding is important,
since it influences decisions to be taken. The Risk Reduction Overview method is specifically
designed for this process. It presents a comprehensible overview of the coherence of risks,
measures and residual risks to achieve this common understanding.
Early integration of security in the SDLC enables agencies to maximize return on investment in
their security programs, through:
Early identification and mitigation of security vulnerabilities and misconfigurations,
resulting in lower cost of security control implementation and vulnerability mitigation;
A risk assessment framework establishes the rules for what is assessed, who needs to be involved,
the terminology used in discussing risk, the criteria for quantifying, qualifying, and comparing
degrees of risk, and the documentation that must be collected and produced as a result of
assessments and follow-on activities. The goal of a framework is to establish an objective
measurement of risk that will allow an organization to understand business risk to critical
information and assets both qualitatively and quantitatively. In the end, the risk assessment
framework provides the tools necessary to make business decisions regarding investments in
people, processes, and technology to bring risk to an acceptable level.
How does a company know which framework is the best fit for its needs? We'll provide an
overview of the general structure and approach to risk assessment, draw a comparison of the
frameworks, and offer some guidance for experimentation and selection of an appropriate
framework.
Asset-based assessments
All risk assessment methods require organizations to select an asset as the object of the
assessment. Generally speaking, assets can be people, information, processes, systems or applications. However, frameworks differ in how strictly they require organizations to follow a particular discipline in identifying what constitutes an asset. For example, CMU's original OCTAVE framework allowed an organization to select any item previously described as the asset to be assessed, whereas the most recent methodology in the OCTAVE series, Allegro, requires assets to be information.
There are advantages and disadvantages associated with any definition of asset. For example, if
an asset is a system or application, the assessment team will need to include all information
owners affected by the system. On the other hand, if the asset is information, the scope of the
assessment would need to include all systems and applications that affect the information.
Framework terminology
Risk assessment frameworks establish the meaning of terms to get everyone on the same page.
Here are some terms used in most frameworks.
Actors, motives, access: These terms describe who is responsible for the threat, what might
motivate the actor or attacker to carry out an attack, and the access that is necessary to perpetrate
an attack or carry out the threat. Actors may be a disgruntled employee, a hacker from the
Internet, or simply a well-meaning administrator who accidentally damages an asset. The access
required to carry out an attack is important in determining how large a group may be able to
realize a threat. The larger the attacking community (e.g., all users on the Internet versus a few
trusted administrators), the more likely an attack can be attempted.
Asset owners: Owners have the authority to accept risk. Owners must participate in risk
assessment and management as they are ultimately responsible for allocating funding for controls
or accepting the risk resulting from a decision not to implement controls.
Asset custodians: A person or group responsible for implementing and maintaining the systems
and security controls that protect an asset. This is typically an IT entity.
Impact: The business ramifications of an asset being compromised. The risk assessment team
needs to understand and document the degree of damage that would result if the confidentiality,
integrity, or availability of an asset is lost. The terms impact, business impact, and inherent risk
are usually used to describe, in either relative or monetary terms, how the business would be
affected by the loss. It's important to note that impact assumes the threat has been realized;
impact is irrespective of the likelihood of compromise.
Information asset: An abstract logical grouping of information that is, as a unit, valuable to an organization. Assets have owners that are responsible for protecting the value of the asset.
Risk magnitude or risk measurement criteria: The product of likelihood and the impact described
above. If we consider likelihood a probability value (less than 1) and impact a value of high,
medium, or low, the risk magnitude can be "calculated" and compared to risks of various threats
on particular assets.
Security requirements: The qualities of an asset that must be protected to retain its value.
Depending on the asset, different degrees of confidentiality, integrity, and availability must be
protected. For example, confidentiality and integrity of personal identifying information may be
critical for a given environment while availability may be less of a concern.
All risk assessment methods require a risk assessment team to clearly define the scope of the
asset, the business owner of the asset, and those people responsible for the technology and
particularly the security controls for the asset. The asset defines the scope of the assessment and
the owners and custodians define the members of the risk assessment team.
NIST's approach allows the asset to be a system, application, or information, while OCTAVE is
more biased toward information and OCTAVE Allegro requires the asset to be information.
Regardless of what method you choose, this step must define the boundaries and contents of the
asset to be assessed.
2. Analyze impact
The next step is to understand both the dimensions and magnitude of the business impact to
the organization, assuming the asset was compromised. The dimensions of compromise are
confidentiality, integrity, and availability while the magnitude is typically described as low,
medium, or high corresponding to the financial impact of the compromise.
It's important to consider the business impact of a compromise in absence of controls to avoid
the common mistake of assuming that a compromise could not take place because the controls
are assumed to be effective. The exercise of analyzing the value or impact of asset loss can help
determine which assets should undergo risk assessment. This step is mostly the responsibility of
the business team, but technical representatives can profit by hearing the value judgments of the
business.
The output of this step is a document (typically a form) that describes the business impact in
monetary terms or, more often, a graded scale for compromise of the confidentiality, integrity,
and availability of the asset.
3. Identify threats
Identify the various ways an asset could be compromised that would have an impact on the business. Threats involve people exploiting weaknesses or vulnerabilities, intentionally or unintentionally, in ways that result in a compromise. This process typically starts at a high level, looking at general areas of concern (e.g., a competitor gaining access to proprietary plans stored in a database) and progressing to more detailed analysis (e.g., gaining unauthorized access through a remote access method). The idea is to list the most common combinations of actors or perpetrators and paths that might lead to the compromise of an asset (e.g., application interfaces, storage systems, remote access, etc.). These combinations are called threat scenarios.
The assessment team uses this list later in the process to determine whether these threats are
effectively defended against by technical and process controls. The output of this step is the list
of threats described in terms of actors, access path or vector, and the associated impact of the
compromise.
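One lightweight way to record the output of this step is a structured entry per scenario. This Python dataclass sketch (field names invented for illustration) captures the actor, access vector and impact described above:

# Recording threat scenarios as structured entries: actor, access
# vector, and the impact of a successful compromise.
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    actor: str     # who carries out the threat
    vector: str    # access path used to reach the asset
    impact: str    # which CIA property is harmed, and how

scenarios = [
    ThreatScenario("competitor", "database application interface",
                   "confidentiality: proprietary plans disclosed"),
    ThreatScenario("external attacker", "remote access method",
                   "integrity: unauthorized transactions"),
]
for s in scenarios:
    print(f"{s.actor} via {s.vector} -> {s.impact}")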
4. Investigate vulnerabilities
Use the list of threats and analyze the technical components and business processes for flaws that
might facilitate the success of a threat. The vulnerabilities may have been discovered in separate
design and architecture reviews, penetration testing, or control process reviews. Use these
vulnerabilities to assemble or inform the threat scenarios described above. For example, a general threat scenario may be defined as a skilled attacker from the Internet, motivated by financial reward, gaining access to an account withdrawal function; a known vulnerability in a Web application may make that threat more likely. This information is used in the later stage of
likelihood determination.
This step is designed to allow the assessment team to determine the likelihood that a vulnerability
can be exploited by the actor identified in the threat scenario. The team considers factors such as
the technical skills and access necessary to exploit the vulnerability in rating the vulnerability
exploit likelihood from low to high. This will be used in the likelihood calculation later to
determine the magnitude of risk.
5. Analyze controls
Look at the technical and process controls surrounding an asset and consider their effectiveness
in defending against the threats defined earlier. Technical controls like authentication and
authorization, intrusion detection, network filtering and routing, and encryption are considered in
this phase of the assessment. It's important, however, not to stop there. Business controls like
reconciliation of multiple paths of transactions, manual review and approval of activities, and
audits can often be more effective in preventing or detecting attacks or errors than technical
controls. The multi-disciplinary risk assessment team is designed to bring both types of controls
into consideration when determining the effectiveness of controls.
At the conclusion of this step, the assessment team documents the controls associated with the
asset and their effectiveness in defending against the particular threats.
6. Determine likelihood
After identifying a particular threat, developing scenarios describing how the threat may be
realized, and judging the effectiveness of controls in preventing exploitation of a vulnerability,
use a "formula" to determine the likelihood of an actor successfully exploiting a vulnerability
and circumventing known business and technical controls to compromise an asset.
The team needs to consider the motivation of the actor, the likelihood of being caught (captured
in control effectiveness), and the ease with which the asset may be compromised, then come up
with a measure of overall likelihood, from low to high.
7. Calculate risk magnitude
The calculation of risk magnitude, or residual risk, combines the business impact of compromise of the asset (considered at the start of the assessment), diminished as appropriate for the particular threat scenario under consideration (e.g., a particular attack may only affect confidentiality and not integrity), with the likelihood of the threat succeeding. The result is
a measure of the risk to the business of a particular threat. This is typically expressed as one of
three or four values (low, medium, high, and sometimes severe).
This measure of risk is the whole point of the risk assessment. It serves as a guide to the business
as to the importance of addressing the vulnerabilities or control weaknesses that allow the threat
to be realized. Ultimately, the risk assessment forces a business decision to treat or accept risk.
Anyone reading a risk assessment method for the first time will probably get the impression that
they describe a clean and orderly stepwise process that can be sequentially executed. However,
you'll find that you need to repeatedly return to earlier steps when information in later steps helps
to clarify the real definition of the asset, which actors may be realistically considered in a threat
scenario, or what the sensitivity of a particular asset is. It often takes an organization several
attempts to get used to the idea that circling back to earlier steps is a necessary and important
part of the process.
Over the years, many risk frameworks have been developed and each has its own advantages and
disadvantages. In general, they all require organizational discipline to convene a multi-
disciplinary team, define assets, list threats, evaluate controls, and conclude with an estimate of
the risk magnitude.
OCTAVE, probably the most well-known of the risk frameworks, comes in three sizes. The
original, full-featured version is a heavyweight process with substantial documentation meant for
large organizations. OCTAVE-S is designed for smaller organizations where the multi-
disciplinary group may be represented by fewer people, sometimes exclusively technical folks
with knowledge of the business. The documentation burden is lower and the process is lighter
weight.
One of the benefits of the OCTAVE series is that each of the frameworks provides templates for
worksheets to document each step in the process. These can either be used directly or customized
for a particular organization.
The NIST framework, described in NIST Special Publication 800-30, is a general one that can be
applied to any asset. It uses slightly different terminology than OCTAVE, but follows a similar
structure. It doesn't provide the wealth of forms that OCTAVE does, but is relatively
straightforward to follow. Its brevity and focus on more concrete components (e.g., systems)
makes it a good candidate for organizations new to risk assessment. Furthermore, because it's
defined by NIST, it's approved for use by government agencies and organizations that work with
them.
ISACA's COBIT and the ISO 27001 and 27002 are IT management and security frameworks that
require organizations to have a risk management program. Both offer, but don't require, their own versions of risk management frameworks: COBIT has Risk IT and ISO has ISO 27005:2008.
They recommend repeatable methodologies and specify when risk assessments should take
place. The ISO 27000 series is designed to deal with security, while COBIT encompasses all of
IT; consequently, the risk assessments required by each correspond to those scopes. In other
words, risk assessment in COBIT -- described in RISK IT -- goes beyond security risks and
includes development, business continuity and other types of operational risk in IT, whereas ISO
27005 concentrates on security exclusively.
ISO 27005 follows a similar structure to NIST but defines terms differently. The framework
includes steps called context establishment, risk identification and estimation, in which threats,
vulnerabilities and controls are considered, and a risk analysis step that discusses and documents
threat likelihood and business impact. ISO 27005 includes annexes with forms and examples,
but like other risk frameworks, it's up to the organization implementing it to evaluate or quantify
risk in ways that are relevant to its particular business.
Organizations that do not have a formal risk assessment methodology would do well to review
the risk assessment requirements in ISO 27001 and 27002 and consider the 27005 or NIST
approach. The ISO standards provide a good justification for formal risk assessments and outline
requirements, while the NIST document provides a good introduction to a risk assessment
framework.
A resource (physical or logical) can have one or more vulnerabilities that can be exploited
by a threat agent in a threat action. The result can potentially compromise the Confidentiality,
Integrity or Availability properties of resources (potentially different from the vulnerable one)
belonging to the organization and other involved parties (customers, suppliers).
The so-called CIA triad is the basis of Information Security.
An attack is active when it attempts to alter system resources or affect their operation, and so
compromises Integrity or Availability. A "passive attack" attempts to learn or make use of
information from the system but does not affect system resources, and so compromises
Confidentiality.
A Threat is a potential for violation of security, which exists when there is a circumstance,
capability, action, or event that could breach security and cause harm. That is, a threat is a
possible danger that might exploit a vulnerability. A threat can be either "intentional" (i.e.,
intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the
possibility of a computer malfunctioning, or the possibility of an "act of God" such as an
earthquake, a fire, or a tornado).
A set of policies concerned with information security management, the information security
management system (ISMS), has been developed to manage, according to risk management
principles, the countermeasures needed to carry out a security strategy established in line with
the rules and regulations applicable in a country.
Overview
When you incorporate security features into your application's design, implementation, and
deployment, it helps to have a good understanding of how attackers think. By thinking like
attackers and being aware of their likely tactics, you can be more effective when applying
countermeasures. This chapter uses the following terms:
Asset. A resource of value such as the data in a database or on the file system, or a
system resource
Threat. A potential occurrence — malicious or otherwise — that may harm an asset
Vulnerability. A weakness that makes a threat possible
Attack (or exploit). An action taken to harm an asset
Countermeasure. A safeguard that addresses a threat and mitigates risk
This chapter also identifies a set of common network, host, and application level threats, and the
recommended countermeasures to address each one. The chapter does not contain an exhaustive
list of threats, but it does highlight many top threats. With this information and knowledge of
how an attacker works, you will be able to identify additional threats. You need to know the
threats that are most likely to impact your system to be able to build effective threat models.
These threat models are the subject of "Threat Modeling."
Become familiar with specific threats that affect the network, host, and application.
The threats are unique for the various parts of your system, although the attacker's goals
may be the same.
Use the threats to identify risk. Then create a plan to counter those threats.
Apply countermeasures to address vulnerabilities. Countermeasures are summarized
in this chapter. Use Part III, "Building Secure Web Applications," and Part IV, "Securing
Your Network, Host, and Application," of this guide for countermeasure implementation
details.
When you design, build, and secure new systems, keep the threats in this chapter in
mind. The threats exist regardless of the platform or technologies that you use.
Anatomy of an Attack
By understanding the basic approach used by attackers to target your Web application, you will
be better equipped to take defensive measures because you will know what you are up against.
The basic steps in attacker methodology are summarized below and illustrated in Figure 2.1:
Survey and assess
Exploit and penetrate
Escalate privileges
Maintain access
Deny service
Survey and Assess
Surveying and assessing the potential target are done in tandem. The first step an attacker usually
takes is to survey the potential target to identify and assess its characteristics. These
characteristics may include its supported services and protocols together with potential
vulnerabilities and entry points. The attacker uses the information gathered in the survey and
assess phase to plan an initial attack.
For example, an attacker can detect a cross-site scripting (XSS) vulnerability by testing to see if
any controls in a Web page echo user input back in the output.
Exploit and Penetrate
Having surveyed a potential target, the next step is to exploit and penetrate. If the network and
host are fully secured, your application (the front gate) becomes the next channel for attack.
For an attacker, the easiest way into an application is through the same entrance that legitimate
users use — for example, through the application's logon page or a page that does not require
authentication.
Escalate Privileges
After attackers manage to compromise an application or network, perhaps by injecting code into
an application or creating an authenticated session with the operating system, they immediately
attempt to escalate privileges. Specifically, they look for administration privileges provided by
accounts that are members of the Administrators group. They also seek out the high level of
privileges offered by the local system account.
Using least privileged service accounts throughout your application is a primary defense against
privilege escalation attacks. Also, many network level privilege escalation attacks require an
interactive logon session.
Maintain Access
Having gained access to a system, an attacker takes steps to make future access easier and to
cover his or her tracks. Common approaches for making future access easier include planting
back-door programs or using an existing account that lacks strong protection. Covering tracks
typically involves clearing logs and hiding tools. As such, audit logs are a primary target for the
attacker.
Log files should be secured, and they should be analyzed on a regular basis. Log file analysis can
often uncover the early signs of an attempted break-in before damage is done.
Deny Service
Attackers who cannot gain access often mount a denial of service attack to prevent others from
using the application. For other attackers, the denial of service option is their goal from the
outset. An example is the SYN flood attack, where the attacker uses a program to send a flood of
TCP SYN requests to fill the pending connection queue on the server. This prevents other users
from establishing network connections.
While there are many variations of specific attacks and attack techniques, it is useful to think
about threats in terms of what the attacker is trying to achieve. This changes your focus from the
identification of every specific attack — which is really just a means to an end — to focusing on
the end results of possible attacks.
STRIDE
Threats faced by the application can be categorized based on the goals and purposes of the
attacks. A working knowledge of these categories of threats can help you organize a security
strategy so that you have planned responses to threats. STRIDE is the acronym used at Microsoft
to categorize different threat types. STRIDE stands for:
Spoofing. Spoofing is attempting to gain access to a system by using a false identity. This
can be accomplished using stolen user credentials or a false IP address. After the attacker
successfully gains access as a legitimate user or host, elevation of privileges or abuse
using authorization can begin.
Tampering. Tampering is the unauthorized modification of data, for example as it flows
over a network between two computers.
Repudiation. Repudiation is the ability of users (legitimate or otherwise) to deny that
they performed specific actions or transactions. Without adequate auditing, repudiation
attacks are difficult to prove.
Information disclosure. Information disclosure is the unwanted exposure of private
data. For example, a user views the contents of a table or file he or she is not authorized
to open, or monitors data passed in plaintext over a network. Some examples of
information disclosure vulnerabilities include the use of hidden form fields, comments
embedded in Web pages that contain sensitive details, and weak exception handling that
reveals internal system details to the client.
Denial of service. Denial of service is the process of making a system or application
unavailable to legitimate users, for example by flooding a server with requests until it
can no longer respond.
Elevation of privilege. Elevation of privilege occurs when a user with limited privileges
assumes the identity of a privileged user to gain privileged access to an application or
system.
The following summarizes the countermeasures for each STRIDE threat:
Spoofing user identity: Use strong authentication. Do not store secrets (for example, passwords)
in plaintext. Do not pass credentials in plaintext over the wire. Protect authentication cookies
with Secure Sockets Layer (SSL).
Tampering with data: Use data hashing and signing. Use digital signatures. Use strong
authorization. Use tamper-resistant protocols across communication links. Secure
communication links with protocols that provide message integrity.
Repudiation: Create secure audit trails. Use digital signatures.
Information disclosure: Use strong authorization. Use strong encryption. Secure communication
links with protocols that provide message confidentiality. Do not store secrets (for example,
passwords) in plaintext.
Denial of service: Use resource and bandwidth throttling techniques. Validate and filter input.
Elevation of privilege: Follow the principle of least privilege and use least privileged service
accounts to run processes and access resources.
The primary components that make up your network infrastructure are routers, firewalls, and
switches. They act as the gatekeepers guarding your servers and applications from attacks and
intrusions. An attacker may exploit poorly configured network devices. Common network-level
vulnerabilities and threats include:
Information gathering
Sniffing
Spoofing
Session hijacking
Denial of service
a) Information Gathering
Network devices can be discovered and profiled in much the same way as other types of systems.
Attackers usually start with port scanning. After they identify open ports, they use banner
grabbing and enumeration to detect device types and to determine operating system and
application versions. Armed with this information, an attacker can target known vulnerabilities
in devices that have not been updated with security patches.
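To make this step concrete, the following minimal Python sketch shows the idea behind banner
grabbing: connect to a port and read whatever the service announces. The address below is a
placeholder from the TEST-NET range; probes like this should only ever be run against systems
you are authorized to test.

import socket

def grab_banner(host, port, timeout=3.0):
    # Many services (FTP, SMTP, SSH) announce their software and version
    # on connect; that first line is what banner grabbing collects.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""  # some services wait for the client to speak first

# print(grab_banner("192.0.2.10", 21))  # placeholder address, FTP port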
b) Sniffing
Sniffing or eavesdropping is the act of monitoring traffic on the network for data such as
plaintext passwords or configuration information. With a simple packet sniffer, an attacker can
easily read all plaintext traffic. Also, attackers can crack packets encrypted by lightweight
hashing algorithms and can decipher the payload that you considered to be safe. The sniffing of
packets requires a packet sniffer in the path of the server/client communication.
Countermeasures to help prevent sniffing include:
Use strong physical security and proper segmenting of the network. This is the first step
in preventing traffic from being collected locally.
Encrypt communication fully, including authentication credentials. This prevents sniffed
packets from being usable to an attacker. SSL and IPSec (Internet Protocol Security) are
examples of encryption solutions.
c) Spoofing
Spoofing is a means to hide one's true identity on the network. To create a spoofed identity, an
attacker uses a fake source address that does not represent the actual address of the packet.
Spoofing may be used to hide the original source of an attack or to work around network access
control lists (ACLs) that are in place to limit host access based on source address rules.
Countermeasures to prevent spoofing include:
Filter incoming packets that appear to come from an internal IP address at your
perimeter.
Filter outgoing packets that appear to originate from an invalid local IP address.
d) Session Hijacking
Also known as man in the middle attacks, session hijacking deceives a server or a client into
accepting the upstream host as the actual legitimate host. Instead the upstream host is an
attacker's host that is manipulating the network so the attacker's host appears to be the desired
destination.
e) Denial of Service
Denial of service denies legitimate users access to a server or services. The SYN flood attack is a
common example of a network level denial of service attack. It is easy to launch and difficult to
track. The aim of the attack is to send more requests to a server than it can handle. The attack
exploits a potential vulnerability in the TCP/IP connection establishment mechanism and floods
the server's pending connection queue.
Host threats are directed at the system software upon which your applications are built. This
includes Windows 2000, Microsoft Windows Server 2003, Internet Information Services (IIS),
and the .NET Framework. Top host threats include viruses, Trojan horses, and worms; foot
printing; password cracking; denial of service; arbitrary code execution; and unauthorized access.
Viruses, Trojan Horses, and Worms
A virus is a program that is designed to perform malicious acts and cause disruption to your
operating system or applications. A Trojan horse resembles a virus except that the malicious
code is contained inside what appears to be a harmless data file or executable program. A worm
is similar to a Trojan horse except that it self-replicates from one server to another. Worms are
difficult to detect because they do not regularly create files that can be seen. They are often
noticed only when they begin to consume system resources, because the system slows down or
the execution of other programs halts. The Code Red worm is one of the most notorious to afflict
IIS; it relied upon a buffer overflow vulnerability in a particular ISAPI filter.
Although these three threats are actually attacks, together they pose a significant threat to Web
applications, the hosts these applications live on, and the network used to deliver these
applications. The success of these attacks on any system is possible through many vulnerabilities
such as weak defaults, software bugs, user error, and inherent vulnerabilities in Internet
protocols.
Countermeasures that you can use against viruses, Trojan horses, and worms include:
Stay current with the latest operating system service packs and software patches.
Block all unnecessary ports at the firewall and host.
Disable unused functionality including protocols and services.
Harden weak, default configuration settings.
Foot printing
Examples of foot printing are port scans, ping sweeps, and NetBIOS enumeration that can be
used by attackers to glean valuable system-level information to help prepare for more significant
attacks. The type of information potentially revealed by foot printing includes account details,
operating system and other software versions, server names, and database schema details.
Password Cracking
If the attacker cannot establish an anonymous connection with the server, he or she will try to
establish an authenticated connection. For this, the attacker must know a valid username and
password combination. If you use default account names, you are giving the attacker a head start.
Then the attacker only has to crack the account's password. The use of blank or weak passwords
makes the attacker's job even easier.
Denial of Service
Denial of service can be attained by many methods aimed at several targets within your
infrastructure. At the host, an attacker can disrupt service by brute force against your application,
or an attacker may know of a vulnerability that exists in the service your application is hosted in
or in the operating system that runs your server.
Countermeasures to prevent denial of service include:
Configure your applications, services, and operating system with denial of service in
mind.
Stay current with patches and security updates.
Harden the TCP/IP stack against denial of service.
Make sure your account lockout policies cannot be exploited to lock out well known
service accounts.
Make sure your application is capable of handling high volumes of traffic and that
thresholds are in place to handle abnormally high loads.
Review your application's failover functionality.
Use an IDS that can detect potential denial of service attacks.
Arbitrary Code Execution
If an attacker can execute malicious code on your server, the attacker can either compromise
server resources or mount further attacks against downstream systems. The risks posed by
arbitrary code execution increase if the server process under which the attacker's code runs is
over-privileged. Common vulnerabilities include weak IIS configuration and unpatched servers
that allow path traversal and buffer overflow attacks, both of which can lead to arbitrary code
execution.
Unauthorized Access
Inadequate access controls could allow an unauthorized user to access restricted information or
perform restricted operations. Common vulnerabilities include weak IIS Web access controls,
including Web permissions and weak NTFS permissions.
The top application-level threats, by category, include:
Input validation: buffer overflow; cross-site scripting; SQL injection; canonicalization.
Authentication: network eavesdropping; brute force attacks; dictionary attacks; cookie replay;
credential theft.
Input Validation
Input validation is a security issue if an attacker discovers that your application makes unfounded
assumptions about the type, length, format, or range of input data. The attacker can then supply
carefully crafted input that compromises your application.
When network and host level entry points are fully secured, the public interfaces exposed by
your application become the only source of attack. The input to your application is a means both
to probe your system and to execute code on an attacker's behalf. Does your application
blindly trust input? If it does, your application may be susceptible to the following:
Buffer overflows
Cross-site scripting
SQL injection
Canonicalization
The following section examines these vulnerabilities in detail, including what makes these
vulnerabilities possible.
Buffer Overflows
Buffer overflow vulnerabilities can lead to denial of service attacks or code injection. A denial of
service attack causes a process crash; code injection alters the program execution address to run
the attacker's injected code. Consider the following unmanaged C++ fragment:
char szBuffer[10];
// Unsafe: input is copied into the fixed-size buffer with no length check,
// so any input longer than nine characters overruns szBuffer.
strcpy(szBuffer, pszInput);
// A bounded copy avoids the overrun, for example:
// strncpy(szBuffer, pszInput, sizeof(szBuffer) - 1); szBuffer[9] = '\0';
Managed .NET code is not susceptible to this problem because array bounds are automatically
checked whenever an array is accessed. This makes the threat of buffer overflow attacks on
managed code much less of an issue. It is still a concern, however, especially where managed
code calls unmanaged APIs or COM objects.
Perform thorough input validation. This is the first line of defense against buffer
overflows. Although a bug may exist in your application that permits expected input to
reach beyond the bounds of a container, unexpected input will be the primary cause of
this vulnerability. Constrain input by validating it for type, length, format and range.
When possible, limit your application's use of unmanaged code, and thoroughly inspect
the unmanaged APIs to ensure that input is properly validated.
Inspect the managed code that calls the unmanaged API to ensure that only appropriate
values can be passed as parameters to the unmanaged API.
Use the /GS flag to compile code developed with the Microsoft Visual C++®
development system. The /GS flag causes the compiler to inject security checks into the
compiled code. This is not a fail-proof solution or a replacement for your specific
validation code; it does, however, protect your code from commonly known buffer
overflow attacks. For more information, see the .NET Framework Product documentation
http://msdn.microsoft.com/en-us/library/8dbf701c(VS.71).aspx and Microsoft
Knowledge Base article 325483 "WebCast: Compiler Security Checks: The –GS
compiler switch."
An attacker can exploit a buffer overflow vulnerability to inject code. With this attack, a
malicious user exploits an unchecked buffer in a process by supplying a carefully constructed
malicious payload that overwrites the program's execution path.
The attacker's code usually ends up running under the process security context. This emphasizes
the importance of using least privileged process accounts. If the current thread is impersonating,
the attacker's code ends up running under the security context defined by the thread
impersonation token. The first thing an attacker usually does is call the RevertToSelf API to
revert to the process level security context that the attacker hopes has higher privileges.
Make sure you validate input for type and length, especially before you call unmanaged code
because unmanaged code is particularly susceptible to buffer overflows.
Cross-Site Scripting (XSS)
An XSS attack can cause arbitrary code to run in a user's browser while the browser is connected
to a trusted Web site. The attack targets your application's users and not the application itself, but
it uses your application as the vehicle for the attack.
Because the script code is downloaded by the browser from a trusted site, the browser has no
way of knowing that the code is not legitimate. Internet Explorer security zones provide no
defense. Because the attacker's code has access to the cookies associated with the trusted site
that are stored on the user's local computer, a user's authentication cookies are typically the
target of attack.
To initiate the attack, the attacker must convince the user to click on a carefully crafted
hyperlink, for example, by embedding a link in an email sent to the user or by adding a malicious
link to a newsgroup posting. The link points to a vulnerable page in your application that echoes
the unvalidated input back to the browser in the HTML output stream. For example, consider the
following two links.
www.yourwebapplication.com/logon.aspx?username=bob
www.yourwebapplication.com/logon.aspx?username=<script>alert('hacker code')</script>
If the Web application takes the query string, fails to properly validate it, and then returns it to
the browser, the script code executes in the browser. The preceding example displays a harmless
pop-up message. With the appropriate script, the attacker can easily extract the user's
authentication cookie, post it to his site, and subsequently make a request to the target Web site
as the authenticated user.
Countermeasures to prevent XSS include:
Perform thorough input validation. Your applications must ensure that input from query
strings, form fields, and cookies are valid for the application. Consider all user input as
possibly malicious, and filter or sanitize for the context of the downstream code. Validate
all input for known valid values and then reject all other input. Use regular expressions to
validate input data received via HTML form fields, cookies, and query strings.
Use HTMLEncode and URLEncode functions to encode any output that includes user
input. This converts executable script into harmless HTML.
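The text names the ASP.NET HTMLEncode and URLEncode functions; the idea is language-
neutral. As a sketch, Python's standard html.escape function plays the same role here (the
greeting page is a hypothetical example):

import html

def render_greeting(username):
    # Encoding turns <, >, &, and quotes into HTML entities, so an
    # injected <script> tag is displayed as text rather than executed.
    return "<p>Welcome, " + html.escape(username) + "</p>"

print(render_greeting("bob"))
print(render_greeting("<script>alert('hacker code')</script>"))
# The second call renders the script tag as inert text:
# <p>Welcome, &lt;script&gt;alert(&#x27;hacker code&#x27;)&lt;/script&gt;</p>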
SQL Injection
A SQL injection attack exploits vulnerabilities in input validation to run arbitrary commands in
the database. It can occur when your application uses input to construct dynamic SQL statements
to access the database. It can also occur if your code uses stored procedures that are passed
strings that contain unfiltered user input. Using the SQL injection attack, the attacker can execute
arbitrary commands in the database. The issue is magnified if the application uses an over-
privileged account to connect to the database. In this instance it is possible to use the database
server to run operating system commands and potentially compromise other servers, in addition
to being able to retrieve, manipulate, and destroy data.
Your application may be susceptible to SQL injection attacks when you incorporate unvalidated
user input into database queries. Particularly susceptible is code that constructs dynamic SQL
statements with unfiltered user input, for example a query concatenated directly from a form
field such as txtuid:
query = "SELECT * FROM Users WHERE UserName = '" + txtuid.Text + "'"
Attackers can inject SQL by terminating the intended SQL statement with a single quote
character, following it with a semicolon to begin a new command, and then executing the
command of their choice. Consider the following character string entered into the txtuid field:
'; DROP TABLE Customers --
This results in a statement of the following form being submitted to the database for execution:
SELECT * FROM Users WHERE UserName = ''; DROP TABLE Customers --'
This deletes the Customers table, assuming that the application's login has sufficient permissions
in the database (another reason to use a least privileged login in the database). The double dash
(--) comments out the remainder of the intended SQL statement, including the trailing quote.
Note The semicolon is not actually required; SQL Server will execute two commands separated
by spaces.
Other, more subtle tricks can be performed. Supplying this input to the txtuid field:
' OR 1=1 --
Because 1=1 is always true, the attacker retrieves every row of data from the Users table.
Countermeasures to prevent SQL injection include:
Perform thorough input validation. Your application should validate its input prior to
sending a request to the database.
Use parameterized stored procedures for database access to ensure that input strings are
not treated as executable statements. If you cannot use stored procedures, use SQL
parameters when you build SQL commands.
Use least privileged accounts to connect to the database.
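As an illustration of the parameterized-query countermeasure, here is a minimal Python sketch
using the standard sqlite3 module (the Users table and input values are illustrative); the same
placeholder idea applies to SQL parameters in other data access libraries:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT, Role TEXT)")
conn.execute("INSERT INTO Users VALUES ('bob', 'user')")

def find_user(user_input):
    # The ? placeholder passes user_input to the database as pure data,
    # never as SQL text, so injection strings are matched literally.
    cur = conn.execute("SELECT * FROM Users WHERE UserName = ?", (user_input,))
    return cur.fetchall()

print(find_user("bob"))          # [('bob', 'user')]
print(find_user("' OR 1=1 --"))  # [] - the injection is treated as a name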
Canonicalization
Canonicalization refers to the process by which different forms of input resolve to the same
standard name (the canonical name). Code is particularly susceptible to canonicalization issues
if it makes security decisions based on the name of a resource that is passed to the program as
input. Files, paths, and URLs are resource types that are vulnerable to canonicalization because
in each case there are many different ways to represent the same name. For example, a single
file could be represented as:
c:\temp\somefile.dat
somefile.dat
c:\temp\subdir\..\somefile.dat
..\somefile.dat
Countermeasures to address canonicalization issues include:
Avoid using file names as input where possible and instead use absolute file paths that
cannot be changed by the end user.
Make sure that file names are well formed (if you must accept file names as input) and
validate them within the context of your application. For example, check that they are
within your application's directory hierarchy.
Ensure that the character encoding is set correctly to limit how input can be represented.
Check that your application's Web.config has set the requestEncoding and
responseEncoding attributes on the <globalization> element.
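The first two countermeasures can be sketched in Python using the standard os.path functions;
the base directory below is a placeholder for your application's file area:

import os

BASE_DIR = os.path.realpath("/var/www/app/files")  # hypothetical file area

def safe_open(requested_name):
    # Canonicalize first: realpath resolves "..", symlinks, and redundant
    # separators, yielding the single standard form of the name.
    candidate = os.path.realpath(os.path.join(BASE_DIR, requested_name))
    # Then make the security decision on the canonical name only.
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise ValueError("path escapes the application directory")
    return open(candidate, "rb")

# safe_open("somefile.dat")        # allowed
# safe_open("../../etc/passwd")    # rejected: resolves outside BASE_DIR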
Authentication
Top threats that exploit authentication vulnerabilities include:
Network eavesdropping
Brute force attacks
Dictionary attacks
Cookie replay attacks
Credential theft
Network Eavesdropping
If authentication credentials are passed in plaintext from client to server, an attacker armed with
rudimentary network monitoring software on a host on the same network can capture traffic and
obtain user names and passwords.
Countermeasures to prevent network eavesdropping include:
Use authentication mechanisms that do not transmit the password over the network, such
as Kerberos protocol or Windows authentication.
Make sure passwords are encrypted (if you must transmit passwords over the network) or
use an encrypted communication channel, for example with SSL.
Brute Force Attacks
Brute force attacks rely on computational power to crack hashed passwords or other secrets
secured with hashing and encryption. To mitigate the risk, use strong passwords. Additionally,
consider throttling or locking out repeated failed logon attempts.
Dictionary Attacks
This attack is used to obtain passwords. Most password systems do not store plaintext passwords
or encrypted passwords. They avoid encrypted passwords because a compromised key leads to
the compromise of all passwords in the data store. Lost keys mean that all passwords are
invalidated.
Most user store implementations hold password hashes (or digests). Users are authenticated by
re-computing the hash based on the user-supplied password value and comparing it against the
hash value stored in the database. If an attacker manages to obtain the list of hashed passwords, a
brute force attack can be used to crack the password hashes.
With the dictionary attack, an attacker uses a program to iterate through all of the words in a
dictionary (or multiple dictionaries in different languages) and computes the hash for each word.
The resultant hash is compared with the value in the data store. Weak passwords such as
"Yankees" (a favorite team) or "Mustang" (a favorite car) will be cracked quickly. Stronger
passwords such as "?You'LlNevaFiNdMeyePasSWerd!", are less likely to be cracked.
Note Once the attacker has obtained the list of password hashes, the dictionary attack can be
performed offline and does not require interaction with the application.
Countermeasures to prevent dictionary attacks include:
Use strong passwords that are complex, are not regular words, and contain a mixture of
upper case, lower case, numeric, and special characters.
Store non-reversible password hashes in the user store. Also combine a salt value (a
cryptographically strong random number) with the password hash.
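As a sketch of the salted-hash countermeasure, the following Python example uses PBKDF2
from the standard hashlib module; PBKDF2 is one common choice of deliberately slow, salted
hash, though the text does not prescribe a particular algorithm:

import hashlib, hmac, os

def hash_password(password):
    # A fresh random salt means identical passwords produce different
    # hashes, so precomputed dictionaries of hashes become useless.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("?You'LlNevaFiNdMeyePasSWerd!")
print(verify_password("Yankees", salt, stored))                       # False
print(verify_password("?You'LlNevaFiNdMeyePasSWerd!", salt, stored))  # True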
Cookie Replay Attacks
With this type of attack, the attacker captures the user's authentication cookie using monitoring
software and replays it to the application to gain access under a false identity.
Credential Theft
If your application implements its own user store containing user account names and passwords,
compare its security to the credential stores provided by the platform, for example, a Microsoft
Active Directory® directory service or Security Accounts Manager (SAM) user store. Browser
history and cache also store user login information for future use. If the terminal is accessed by
someone other than the user who logged on, and the same page is hit, the saved login will be
available.
Authorization
Based on user identity and role membership, authorization to a particular resource or service is
either allowed or denied. Top threats that exploit authorization vulnerabilities include:
Elevation of privilege
Disclosure of confidential data
Data tampering
Luring attacks
Elevation of Privilege
When you design an authorization model, you must consider the threat of an attacker trying to
elevate privileges to a powerful account such as a member of the local administrators group or
the local system account. By doing this, the attacker is able to take complete control over the
application and local machine. For example, with classic ASP programming, calling the
RevertToSelf API from a component might cause the executing thread to run as the local system
account with the most power and privileges on the local machine.
The main countermeasure that you can use to prevent elevation of privilege is to use least
privileged process, service, and user accounts.
Disclosure of Confidential Data
The disclosure of confidential data can occur if sensitive data can be viewed by unauthorized
users. Confidential data includes application specific data such as credit card numbers, employee
details, and financial records, together with application configuration data such as service
account credentials and database connection strings. To prevent the disclosure of confidential
data, use the following countermeasures:
Perform role checks before allowing access to the operations that could potentially reveal
sensitive data.
Use strong ACLs to secure Windows resources.
Use standard encryption to store sensitive data in configuration files and databases.
Data Tampering
Data tampering is the unauthorized modification of data. Countermeasures include:
Use strong access controls to protect data in persistent stores to ensure that only
authorized users can access and modify the data.
Use role-based security to differentiate between users who can view data and users who
can modify data.
Luring Attacks
A luring attack occurs when an entity with few privileges is able to have an entity with more
privileges perform an action on its behalf.
To counter the threat, you must restrict access to trusted code with the appropriate authorization.
Using .NET Framework code access security helps in this respect by authorizing calling code
whenever a secure resource is accessed or a privileged operation is performed.
Configuration Management
Administration interfaces are often provided through additional Web pages or separate Web
applications that allow administrators, operators, and content developers to manage site content
and configuration. Administration interfaces such as these should be available only to restricted
and authorized users. Malicious users able to access a configuration management function can
potentially deface the Web site, access downstream systems and databases, or take the
application out of action altogether by corrupting configuration data.
Because of the sensitive nature of the data maintained in configuration stores, you should ensure
that the stores are adequately secured.
Lack of auditing and logging of changes made to configuration information threatens the ability
to identify when changes were made and who made those changes. When a breaking change is
made, whether by an honest operator error or by a malicious change to grant privileged access,
action must first be taken to correct the change; preventive measures should then be applied to
stop similar changes from going unnoticed in the future.
If application and service accounts are granted access to change configuration information on the
system, they may be manipulated to do so by an attacker. The risk of this threat can be mitigated
by adopting a policy of using least privileged service and application accounts. Be wary of
granting accounts the ability to modify their own configuration information unless explicitly
required by design.
Sensitive Data
Sensitive data is subject to a variety of threats. Attacks that attempt to view or modify sensitive
data can target persistent data stores and networks. Top threats to sensitive data include access
to sensitive data in storage, network eavesdropping, and data tampering.
Access to Sensitive Data in Storage
You must secure sensitive data in storage to prevent a user — malicious or otherwise — from
gaining access to and reading the data. Countermeasures include:
Use restricted ACLs on the persistent data stores that contain sensitive data.
Store encrypted data.
Use identity and role-based authorization to ensure that only the user or users with the
appropriate level of authority are allowed access to sensitive data. Use role-based security
to differentiate between users who can view data and users who can modify data.
Network Eavesdropping
The HTTP data for a Web application travels across networks in plaintext and is subject to
network eavesdropping attacks, where an attacker uses network monitoring software to capture
and potentially modify sensitive data.
Data Tampering
Data tampering refers to the unauthorized modification of data, often as it is passed over the
network.
One countermeasure to prevent data tampering is to protect sensitive data passed across the
network with tamper-resistant protocols such as hashed message authentication codes (HMACs).
An HMAC provides message integrity in the following way:
1. The sender uses a shared secret key to create a hash based on the message payload.
2. The sender transmits the hash along with the message payload.
3. The receiver uses the shared key to recalculate the hash based on the received message
payload. The receiver then compares the new hash value with the transmitted hash value.
If they are the same, the message cannot have been tampered with.
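The three steps above can be sketched in a few lines of Python with the standard hmac module
(the shared key and message below are illustrative placeholders):

import hashlib, hmac

SHARED_KEY = b"replace-with-a-strong-random-key"  # assumption: pre-shared secret

def sign(payload):
    # Step 1: the sender computes a keyed hash over the message payload.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload, received_mac):
    # Step 3: the receiver recomputes the HMAC over what arrived and
    # compares it, in constant time, with the transmitted value.
    return hmac.compare_digest(sign(payload), received_mac)

message = b"Place 10 orders"
mac = sign(message)                      # Step 2: sent along with the message
print(verify(message, mac))              # True  - message unmodified
print(verify(b"Place 100 orders", mac))  # False - tampering is detected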
Session Management
Session management for Web applications is an application layer responsibility. Session security
is critical to the overall security of the application. Top session management threats include:
Session hijacking
Session replay
Man in the middle
Session Hijacking
A session hijacking attack occurs when an attacker uses network monitoring software to capture
the authentication token (often a cookie) used to represent a user's session with an application.
With the captured cookie, the attacker can spoof the user's session and gain access to the
application. The attacker has the same level of privileges as the legitimate user.
Countermeasures to prevent session hijacking include:
Use SSL to create a secure communication channel and only pass the authentication
cookie over an HTTPS connection.
Implement logout functionality to allow a user to end a session that forces authentication
if another session is started.
Make sure you limit the expiration period on the session cookie if you do not use SSL.
Although this does not prevent session hijacking, it reduces the time window available to
the attacker.
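As a small illustration of these countermeasures, the following Python sketch uses the standard
http.cookies module to build a session cookie that is sent only over HTTPS, is hidden from page
script, and expires quickly. The token value is a placeholder; real tokens must be strong random
values generated server-side.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"  # placeholder token value
cookie["session"]["secure"] = True     # send only over HTTPS connections
cookie["session"]["httponly"] = True   # hide from page script, limiting theft via XSS
cookie["session"]["max-age"] = 900     # short lifetime narrows the hijack window
print(cookie.output())  # emits the corresponding Set-Cookie header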
Session Replay
Session replay occurs when a user's session token is intercepted and submitted by an attacker to
bypass the authentication mechanism. For example, if the session token is in plaintext in a cookie
or URL, an attacker can sniff it. The attacker then posts a request using the hijacked session
token.
Man in the Middle Attacks
A man in the middle attack occurs when the attacker intercepts messages sent between you and
your intended recipient. The attacker then changes your message and sends it to the original
recipient. The recipient receives the message, sees that it came from you, and acts on it. When
the recipient sends a message back to you, the attacker intercepts it, alters it, and returns it to
you. You and your recipient never know that you have been attacked.
Countermeasures to prevent man in the middle attacks include:
Use cryptography. If you encrypt the data before transmitting it, the attacker can still
intercept it but cannot read it or alter it. If the attacker cannot read it, he or she cannot
know which parts to alter. If the attacker blindly modifies your encrypted message, then
the original recipient is unable to successfully decrypt it and, as a result, knows that it has
been tampered with.
Use Hashed Message Authentication Codes (HMACs). If an attacker alters the message,
the recalculation of the HMAC at the recipient fails and the data can be rejected as
invalid.
Cryptography
Most applications use cryptography to protect data and to ensure it remains private and
unaltered. Top threats surrounding your application's use of cryptography include poor key
generation or key management and checksum spoofing.
Poor Key Generation or Key Management
Attackers can decrypt encrypted data if they have access to the encryption key or can derive the
encryption key. Attackers can discover a key if keys are managed poorly or if they were
generated in a non-random fashion.
Countermeasures to address the threat of poor key generation and key management include:
Use built-in encryption routines that include secure key management. Data Protection
application programming interface (DPAPI) is an example of an encryption service
provided on Windows 2000 and later operating systems where the operating system
manages the key.
Use strong random key generation functions and store the key in a restricted location —
for example, in a registry key secured with a restricted ACL — if you use an encryption
mechanism that requires you to generate or manage the key.
Encrypt the encryption key using DPAPI for added security.
Expire keys regularly.
Checksum Spoofing
Do not rely on hashes alone to provide data integrity for messages sent over networks. Hashes
such as the Secure Hash Algorithm (SHA1) and the Message Digest algorithm (MD5) can be
intercepted and changed. Consider the following message, shown with an appended base64-
encoded Message Authentication Code (MAC); the message text is illustrative:
Message: Place 10 orders
Hash: T0mUNdEQh13IO9oTcaP4FYDX6pU=
If an attacker intercepts the message by monitoring the network, the attacker could update the
message and recompute the hash (guessing the algorithm that you used). For example, the
message could be changed to:
Message: Place 100 orders
Hash: oEDuJpv/ZtIU7BXDDNv17EAHeAU=
When the recipients process the message, they run the plaintext ("Place 100 orders") through the
hashing algorithm and recompute the hash. The hash they calculate will be equal to whatever the
attacker computed, so the tampering goes undetected.
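To see why an unkeyed hash offers no protection here, consider the short Python sketch below:
an attacker can recompute a plain SHA1 hash over the altered message just as easily as the
sender computed the original. The printed values are illustrative and unrelated to the keyed
values shown above.

import base64, hashlib

def plain_hash(message):
    # An unkeyed hash: anyone who intercepts the message, including an
    # attacker, can recompute a matching value after altering the text.
    return base64.b64encode(hashlib.sha1(message).digest()).decode()

print(plain_hash(b"Place 10 orders"))   # value the sender appends
print(plain_hash(b"Place 100 orders"))  # value the attacker appends instead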
To counter this attack, use a MAC or HMAC. The Message Authentication Code Triple Data
Encryption Standard (MACTripleDES) algorithm computes a MAC, and HMACSHA1 computes
an HMAC. Both use a key to produce a checksum. With these algorithms, an attacker needs to
know the key to generate a checksum that would compute correctly at the receiver.
Parameter Manipulation
Parameter manipulation attacks are a class of attack that relies on the modification of the
parameter data sent between the client and Web application. This includes query strings, form
fields, cookies, and HTTP headers. Top parameter manipulation threats include query string
manipulation, form field manipulation, cookie manipulation, and HTTP header manipulation.
Query String Manipulation
Users can easily manipulate the query string values passed by HTTP GET from client to server
because they are displayed in the browser's URL address bar. If your application relies on query
string values to make security decisions, or if the values represent sensitive data such as
monetary amounts, the application is vulnerable to attack.
Countermeasures to prevent query string manipulation include:
Avoid using query string parameters that contain sensitive data or data that can influence
the security logic on the server. Instead, use a session identifier to identify the client and
store sensitive items in the session store on the server.
Choose HTTP POST instead of GET to submit forms.
Encrypt query string parameters.
Form Field Manipulation
The values of HTML form fields are sent in plaintext to the server using the HTTP POST
protocol. This may include visible and hidden form fields. Form fields of any type can be easily
modified and client-side validation routines bypassed. As a result, applications that rely on form
field input values to make security decisions on the server are vulnerable to attack.
Cookie Manipulation
Cookies are susceptible to modification by the client. This is true of both persistent and memory-
resident cookies. A number of tools are available to help an attacker modify the contents of a
memory-resident cookie. Cookie manipulation is the attack that refers to the modification of a
cookie, usually to gain unauthorized access to a Web site.
While SSL protects cookies over the network, it does not prevent them from being modified on
the client computer. To counter the threat of cookie manipulation, encrypt and use an HMAC
with the cookie.
HTTP Header Manipulation
HTTP headers pass information between the client and the server. The client constructs request
headers while the server constructs response headers. If your application relies on request
headers to make a decision, your application is vulnerable to attack.
Do not base your security decisions on HTTP headers. For example, do not trust the HTTP
Referrer to determine where a client came from because this is easily falsified.
Exception Management
Exceptions that are allowed to propagate to the client can reveal internal implementation details
that make no sense to the end user but are useful to attackers. Applications that do not use
exception handling or implement it poorly are also subject to denial of service attacks. Top
exception handling threats include revealing sensitive implementation details to the client and
denial of service.
Revealing Implementation Details
One of the important features of the .NET Framework is that it provides rich exception details
that are invaluable to developers. If the same information is allowed to fall into the hands of an
attacker, it can greatly help the attacker exploit potential vulnerabilities and plan future attacks.
The type of information that could be returned includes platform versions, server names, SQL
command strings, and database connection strings.
Countermeasures to help prevent internal implementation details from being revealed to the
client include:
Use exception handling throughout your application's code base.
Handle and log exceptions that are allowed to propagate to the application boundary.
Return generic, harmless error messages to the client.
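These countermeasures can be sketched as a simple wrapper, shown here in Python;
handle_request and do_work are hypothetical names standing in for your real request-processing
code:

import logging

logger = logging.getLogger("app")

def handle_request(do_work):
    try:
        return do_work()
    except Exception:
        # Full detail (type, message, stack trace) goes to the server log
        # for the operators and developers...
        logger.exception("unhandled error while processing request")
        # ...while the client receives only a generic, harmless message.
        return "An error occurred. Please try again later."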
Denial of Service
Attackers will probe a Web application, usually by passing deliberately malformed input. They
often have two goals in mind. The first is to cause exceptions that reveal useful information and
the second is to crash the Web application process. This can occur if exceptions are not properly
caught and handled.
Auditing and Logging
Auditing and logging should be used to help detect suspicious activity such as foot printing or
possible password cracking attempts before an exploit actually occurs. It can also help deal with
the threat of repudiation. It is much harder for a user to deny performing an operation if a series
of synchronized log entries on multiple servers indicate that the user performed that transaction.
The issue of repudiation is concerned with a user denying that he or she performed an action or
initiated a transaction. You need defense mechanisms in place to ensure that all user activity can
be tracked and recorded.
Audit and log activity on the Web server and database server, and on the application
server as well, if you use one.
Log key events such as transactions and login and logout events.
Do not use shared accounts since the original source cannot be determined.
System and application-level auditing is required to ensure that suspicious activity does not go
undetected.
Your log files must be well-protected to ensure that attackers are not able to cover their tracks.
Analysis
The analysis phase consists of impact analysis, threat analysis and impact scenarios.
The impact analysis determines the recovery requirements for each critical function.
Recovery requirements consist of the following information:
The business requirements for recovery of the critical function, and/or
The technical requirements for recovery of the critical function
The impact of an epidemic can be regarded as purely human, and may be alleviated with
technical and business solutions. However, if people behind these plans are affected by the
disease, then the process can stumble.
During the 2002–2003 SARS outbreak, some organizations grouped staff into separate teams,
and rotated the teams between primary and secondary work sites, with a rotation frequency equal
to the incubation period of the disease. The organizations also banned face-to-face intergroup
contact during business and non-business hours. The split increased resiliency against the threat
of quarantine measures if one person in a team was exposed to the disease.
Impact scenarios
After identifying the applicable threats, impact scenarios are considered to support the
development of a business recovery plan. Business continuity testing plans may document
scenarios for each identified threat and impact scenario. More localized impact scenarios – for
example, loss of a specific floor in a building – may also be documented. The BC plans should
reflect the requirements to recover the business from the widest possible damage. The risk
assessment should cater to developing impact scenarios that are applicable to the business or the
premises it operates in. For example, it might not be logical to consider a tsunami in the Middle
East region, since the likelihood of such a threat is negligible.
Recovery requirements
After the analysis phase, business and technical recovery requirements precede the solutions
phase. Asset inventories allow for quick identification of deployable resources. For an office-
based, IT-intensive business, the plan requirements may cover desks, human resources,
applications, data, manual workarounds, computers and peripherals. Other business
environments, such as production, distribution, warehousing etc. will need to cover these
elements, but likely have additional issues.
The robustness of an emergency management plan is dependent on how much money an
organization or business can place into the plan. The organization must balance realistic
feasibility with the need to properly prepare. It is often estimated that every $1 put into an
emergency management plan prevents $7 of loss.
Solution design
The solution design phase identifies the most cost-effective disaster recovery solution that meets
two main requirements from the impact analysis stage. For IT purposes, this is commonly
expressed as the minimum application and data requirements and the time in which the minimum
application and application data must be available.
Implementation
The implementation phase involves policy changes, material acquisitions, staffing and testing.
Tabletop exercises
Tabletop exercises typically involve a small number of people and concentrate on a specific
aspect of a BCP. They can easily accommodate complete teams from a specific area of a
business.
Another form involves a single representative from each of several teams. Typically, participants
work through a simple scenario and then discuss specific aspects of the plan. For example, a fire
discovered out of working hours.
The exercise consumes only a few hours and is often split into two or three sessions, each
concentrating on a different theme.
Medium exercises
A medium exercise is conducted within a "Virtual World" and brings together several
departments, teams or disciplines. It typically concentrates on multiple BCP aspects, prompting
interaction between teams. The scope of a medium exercise can range from a few teams from
one organisation co-located in one building to multiple teams operating across dispersed
locations. The environment needs to be as realistic as practicable and team sizes should reflect a
realistic situation. Realism may extend to simulated news broadcasts and websites.
A medium exercise typically lasts a few hours, though it can extend over several days. Medium
exercises typically involve a "Scenario Cell" that adds pre-scripted "surprises" throughout the
exercise.
Maintenance
The biannual or annual maintenance cycle of a BCP manual is broken down into three periodic
activities:
Confirmation of information in the manual, roll out to staff for awareness and specific
training for critical individuals.
Testing and verification of technical solutions established for recovery operations.
Testing and verification of organization recovery procedures.
Issues found during the testing phase often must be reintroduced to the analysis phase.
Information/targets
The BCP manual must evolve with the organization. Activating the call tree verifies the
notification plan's efficiency as well as contact data accuracy. Like most business procedures,
business continuity planning has its own jargon. Organisation-wide understanding of business
continuity jargon is vital, and glossaries are available. Types of organisational changes that
should be identified and updated in the manual include:
Staffing
Important clients
Vendors/suppliers
Organization structure changes
Company investment portfolio and mission statement
Communication and transportation infrastructure such as roads and bridges
Technical
Specialized technical resources must be maintained. Checks include:
Virus definition distribution
Application security and service patch distribution
Hardware operability
Application operability
Data verification
Data application
Team Descriptions
1. Business Continuity Management Team
a) Organization Support Teams
b) Damage Assessment/Salvage Team
c) Transportation Team
d) Physical Security Team
e) Public Information Team
f) Insurance Team
g) Telecommunication Team
Full Backup
A full backup is a method of backup where all the files and folders selected for the backup are
backed up. When subsequent backups are run, the entire list of files and folders is backed up
again. The advantage of this method is that restores are fast and easy, as the complete list of files
is stored each time. The disadvantage is that each backup run is time consuming, as the entire
list of files is copied again. Full backups also take up much more storage space than incremental
or differential backups.
Incremental backup
Incremental backup is a backup of all changes made since the last backup, whether that was a
full or an incremental backup. One full backup is done first, and subsequent backup runs capture
just the changes made since the last backup. The result is a much faster backup than a full backup
for each backup run. Storage space used is the lowest of the three methods, but restores are the
slowest, because the last full backup and every subsequent incremental backup must be restored
in order.
Differential backup
Differential backup is a backup of all changes made since the last full backup. With differential
backups, one full backup is done first and subsequent backup runs are the changes made since
the last full backup. The result is a much faster backup than a full backup for each backup run.
Storage space used is much less than for a full backup but more than with incremental backups.
Restores are slower than with a full backup but usually faster than with incremental backups.
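To make the distinction concrete, here is a toy Python sketch of the idea behind all three
methods, selecting files by modification time. The paths are placeholders, and real backup
products use more robust change tracking (change journals, archive bits), so treat this purely as
an illustration:

import os, shutil

def backup_changed_files(source, dest, since):
    # Copy every file under source whose modification time is newer than
    # 'since'. With since=0 this is a full backup; with since set to the
    # time of the last run it behaves like an incremental backup; with
    # since set to the time of the last FULL backup it behaves like a
    # differential backup.
    for root, _dirs, files in os.walk(source):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > since:
                rel = os.path.relpath(src, source)
                target = os.path.join(dest, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(src, target)  # copy2 preserves timestamps

# backup_changed_files("/data", "/backups/full", since=0)
# backup_changed_files("/data", "/backups/run2", since=last_full_backup_time)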
Mirror Backup
Mirror backups are as the name suggests a mirror of the source being backed up. With mirror
backups, when a file in the source is deleted, that file is eventually also deleted in the mirror
backup. Because of this, mirror backups should be used with caution: a file that is deleted by
accident or through a virus will also disappear from the mirror backup.
Full PC Backup
In this backup, it is not individual files that are backed up but entire images of the computer's
hard drives. With a full PC backup, you can restore the hard drives to their exact state as of when
the backup was done. Not only can work documents, pictures, videos and audio files be restored,
but the operating system, hardware drivers, system files, registry, programs, emails and so on
can also be restored.
Local Backup
Local backups are any kind of backup where the storage medium is kept close at hand or in the
same building as the source. It could be a backup done on a second internal hard drive, an
attached external hard drive, CD/DVD-ROM or Network Attached Storage (NAS). Local
backups protect digital content from hard drive failures and virus attacks. They also provide
protection from accidental mistakes or deletes. Since the backups are always close at hand they
are fast and convenient to restore.
Offsite Backup
When the backup storage media is kept at a different geographic location from the source, this is
known as an offsite backup. The backup may be done locally at first but once the storage
medium is brought to another location, it becomes an offsite backup. Examples of offsite backup
include taking the backup media or hard drive home, to another office building or to a bank safe
deposit box.
Besides the same protection offered by local backups, offsite backups provide additional
protection from theft, fire, floods and other natural disasters. Putting the backup media in the
next room, by contrast, would not count as an offsite backup, as it offers no additional protection
against these events.
Online Backup
These are backups that are ongoing or done continuously or frequently to a storage medium that
is always connected to the source being backed up. Typically the storage medium is located
offsite and connected to the backup source by a network or Internet connection. It does not
involve human intervention to plug in drives and storage media for backups to run. Many
commercial data centers now offer this as a subscription service to consumers. The storage data
centers are located away from the source being backed up and the data is sent from the source to
the storage data center securely over the Internet.
Remote Backup
Remote backups are a form of offsite backup with a difference being that you can access, restore
or administer the backups while located at your source location or other location. You do not
need to be physically present at the backup storage facility to access the backups. For example,
putting your backup hard drive at your bank safe deposit box would not be considered a remote
backup. You cannot administer it without making a trip to the bank. Online backups are usually
considered remote backups as well.
Cloud Backup
This term is often used interchangeably with Online Backup and Remote Backup. It is where
data is backed up to a service or storage facility connected over the Internet. With the proper
login credentials, that backup can then be accessed or restored from any other computer with
Internet Access.
FTP Backup
This is a kind of backup where the backup is done via FTP (File Transfer Protocol) over the
Internet to an FTP Server. Typically the FTP Server is located in a commercial data center away
from the source data being backed up. When the FTP server is located at a different location, this
is another form of offsite backup.
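A minimal FTP backup can be sketched with Python's standard ftplib module. FTP_TLS is used
here because plain FTP sends credentials and data in plaintext; the host, credentials, and path are
placeholders:

import os
from ftplib import FTP_TLS

def ftp_backup(host, user, password, local_path):
    # FTP_TLS encrypts the control channel, and prot_p() upgrades the
    # data channel too, so neither credentials nor file contents travel
    # in plaintext across the Internet.
    with FTP_TLS(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.prot_p()
        with open(local_path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(local_path), f)

# ftp_backup("ftp.example.com", "backup", "secret", "/backups/archive.zip")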
Backup Strategy
A backup strategy or backup policy is essentially a set of procedures that you prepare and
implement to protect your important digital content from hard drive failures, virus attacks and
other events or disasters.
The following are features to aim for when designing your backup strategy:
1. What To Backup
The first step in planning your backup strategy is identifying what needs to be backed up.
Identify the files and folders that you cannot afford to lose. This involves going through your
documents, databases, pictures, videos, music and program setup or installation files. Some of
these media, like pictures and videos, may be irreplaceable. Others, like documents and
databases, may be tedious or costly to recover from hard copies. These are the files and folders
that need to be in your backup plan.
2. Where to Backup to
This is another fundamental consideration in your backup plan. In light of some content being
irreplaceable, the backup strategy should protect against all events. Hence a good backup
strategy should employ a combination of local and offsite backups.
Local backups are needed for their lower cost, allowing you to back up a huge amount of data.
Local backups are also useful for their very fast restore speed, allowing you to get back online in
minimal time. Offsite backups are needed for their wider scope of protection from major
disasters or catastrophes not covered by local backups.
3. When to Backup
Frequency: How often you back up your data is the next major consideration when planning your
backup policy. Some folders are fairly static and do not need to be backed up very often. Other
folders are frequently updated and should correspondingly have a higher backup frequency, such
as once a day or more.
Your decision regarding backup frequency should be based on a worst-case scenario. For
example, if tragedy struck just before the next backup was scheduled to run, how much data
would you lose since the last backup? How long would it take, and how much would it cost, to
re-key that lost data?
Backup Start Time: You would typically want to run your backups when there’s minimal usage
on the computers. Backups may consume some computer resources that may affect performance.
Also, files that are open or in use may not get backed up.
So if the first hour on a business day morning is your busiest time, you would not want your
computer doing its backups then. If you always shut down or put your computer in sleep or
hibernate mode at the end of a work day, maybe your lunch time would be a better time to
schedule a backup. Just leave the computer on but logged-off when you go out for lunch.
Since servers are usually left running 24 hours, overnight backups for servers are a good choice.
4. Backup Types
Many backup software offer several backup types like Full Backup, Incremental Backup and
Differential backup. Each backup type has its own advantages and disadvantages. Full backups
are useful for projects, databases or small websites where many different files(text, pictures,
videos etc.) are needed to make up the entire project and you may want to keep different
versions of the project.
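As an illustration of the idea behind incremental backups (copying only files changed since the previous run), here is a minimal Python sketch. It assumes a simple timestamp file records when the last backup ran; that convention and the paths shown are invented for this example.

import os
import shutil
import time

def incremental_backup(src_dir, dst_dir, stamp_file="last_backup.txt"):
    # Read the time of the previous run; 0 means "back up everything".
    last_run = 0.0
    if os.path.exists(stamp_file):
        with open(stamp_file) as f:
            last_run = float(f.read())
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_run:  # changed since last backup
                rel = os.path.relpath(src, src_dir)
                dst = os.path.join(dst_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves file timestamps
    with open(stamp_file, "w") as f:
        f.write(str(time.time()))

incremental_backup("C:/Users/me/Documents", "E:/backup/Documents")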
As part of your backup plan, you also need to decide if you want to apply any compression to
your backups. For example, when backing up to an online service, you may want to apply
compression to save on storage cost and upload bandwidth. You may also want to apply
compression when backing up to storage devices with limited space like USB thumb drives.
If you are backing up very private or sensitive data to an offsite service, some backup tools and
services also offer support for encryption. Encryption is a good way to protect your content
should it fall into malicious hands. When applying encryption, always ensure that you remember
your encryption key or passphrase; you will not be able to restore the backup without it.
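As a sketch of what backup encryption can look like, the example below uses the third-party Python "cryptography" package (pip install cryptography); the file names are placeholders, and real backup tools implement their own key handling.

from cryptography.fernet import Fernet

# Generate a key ONCE and store it safely, separate from the backup
# itself; without this key the backup cannot be restored.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("backup.tar.gz", "rb") as f:
    encrypted = cipher.encrypt(f.read())
with open("backup.tar.gz.enc", "wb") as f:
    f.write(encrypted)

# Restoring reverses the process; it fails without the correct key.
restored = cipher.decrypt(encrypted)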
A backup is only worth doing if it can be restored when you need it most. It is advisable to
periodically test your backup by attempting to restore it. Some backup utilities offer a validation
option for your backups. While this is a welcome feature, it is still a good idea to test your
backup with an actual restore once in a while.
Simply copying and pasting files and folders to another drive would be considered a backup.
However the aim of a good backup plan is to set it up once and leave it to run on its own. You
would check up on it occasionally but the backup strategy should not depend on your ongoing
interaction for it to continue backing up. A good backup plan would incorporate the use of good
quality, proven backup software utilities and backup services.
Hot Site:
A Hot Site can be defined as a backup site which is up and running continuously. A Hot Site
allows a company to continue normal business operations within a very short period of time
after a disaster. A Hot Site can be configured in a branch office, a data center or even in the
cloud, and it must be online and available immediately.
A hot site must be equipped with all the necessary hardware, software, network and Internet
connectivity. Data is regularly backed up or replicated to the hot site so that it can be made fully
operational in a minimal amount of time in the event of a disaster at the original site. The hot site
must be located far away from the original site, to prevent the disaster from affecting the hot site
as well.
Hot sites are essentially mirrors of your datacenter infrastructure. The backup site is populated
with servers, cooling, power, and office space (if applicable). The most important feature offered
from a hot site is that the production environment(s) are running concurrently with your main
datacenter. This syncing allows for minimal impact and downtime to business operations. In the
event of a significant outage event to your main datacenter, the hot site can take the place of the
impacted site immediately. However, this level of redundancy does not come cheap, and
businesses will have to weigh the costs and benefits (a cost-benefit analysis, or CBA) of hot site
utilization.
Warm Site:
A Warm Site is another type of backup site, not as fully equipped as a Hot Site. A Warm Site is
configured with power, phone, network, etc., and may have servers and other resources, but it is
not ready for immediate switch-over. The time to switch over from the disaster-affected site to a
Warm Site is longer than that of a Hot Site, but the lower cost is the attraction.
A warm site is the middle ground of the two disaster recovery options. Warm sites offer office
space/datacenter space and will have some pre-installed server hardware. The difference between
a hot site and a warm site is that while the hot site provides a mirror of the production datacenter
and its environment(s), a warm site will contain only servers ready for the installation of
production environments. Warm sites make sense for aspects of the business that are not critical
but require a level of redundancy (e.g., administrative roles). A CBA conducted on whether to
use a warm site versus a hot site should include the downtime associated with the software
loading and configuration requirements for engineering.
Unplanned outages can severely risk a business' ability to generate revenue, and service clients.
A disaster recovery site can help mitigate the impact of those outages on production systems.
Business owners need only add this detail to their disaster recovery plans to ensure collective
peace-of-mind in the event of an emergency.
Cold Site:
A Cold Site contains even fewer facilities than a Warm Site. A Cold Site will take more time
than a Warm Site or a Hot Site to switch operations to, but it is the cheapest option. A Cold Site
may contain tables, chairs, bathrooms and basic technical facilities, but it will require days or
even weeks to set up properly and start operating.
A cold site is essentially office or datacenter space without any server-related equipment
installed. The cold site provides power, cooling, and/or office space which waits in the event of a
significant outage to the main work site or datacenter. The cold site will require extensive
support from engineering and IT personnel to get all necessary servers and equipment migrated
and functional. Cold sites are the cheapest cost-recovery option for businesses to utilize
An information technology disaster recovery plan (IT DRP) should be developed in conjunction
with the business continuity plan. Priorities and recovery time objectives for information
technology should be developed during the business impact analysis. Technology recovery
strategies should be developed to restore hardware, applications and data in time to meet the
needs of the business recovery.
Businesses large and small create and manage large volumes of electronic information or data.
Much of that data is important. Some data is vital to the survival and continued operation of the
business. The impact of data loss or corruption from hardware failure, human error, hacking or
malware could be significant. A plan for data backup and restoration of electronic information is
essential.
Recovery strategies
Recovery strategies should be developed for Information technology (IT) systems, applications
and data. This includes networks, servers, desktops, laptops, wireless devices, data and
connectivity. Priorities for IT recovery should be consistent with the priorities for recovery of
business functions and processes that were developed during the business impact analysis. IT
resources required to support time-sensitive business functions and processes should also be
identified. The recovery time for an IT resource should match the recovery time objective for the
business function or process that depends on the IT resource.
Recovery strategies should anticipate the loss of one or more of the following system
components:
· Computer room environment (secure computer room with climate control, conditioned
and backup power supply, etc.)
· Hardware (networks, servers, desktop and laptop computers, wireless devices and
peripherals)
· Connectivity to a service provider (fiber, cable, wireless, etc.)
· Software applications (electronic data interchange, electronic mail, enterprise resource
management, office productivity, etc.)
· Data and restoration
Some business applications cannot tolerate any downtime. They utilize dual data centers capable
of handling all data processing needs, which run in parallel with data mirrored or synchronized
between the two centers. This is a very expensive solution that only larger companies can afford.
However, there are other solutions available for small to medium sized businesses with critical
business applications and data to protect.
Many businesses have access to more than one facility. Hardware at an alternate facility can be
configured to run similar hardware and software applications when needed. Assuming data is
backed up off-site or data is mirrored between the two sites, data can be restored at the alternate
site and processing can continue.
There are vendors that can provide “hot sites” for IT disaster recovery. These sites are fully
configured data centers with commonly used hardware and software products. Subscribers may
provide unique equipment or software either at the time of disaster or store it at the hot site ready
for use.
Data streams, data security services and applications can be hosted and managed by vendors.
This information can be accessed at the primary business site or any alternate site using a web
browser. If an outage is detected at the client site by the vendor, the vendor automatically holds
data until the client’s system is restored. These vendors can also provide data filtering and
detection of malware threats, which enhance cyber security.
Identify critical software applications and data and the hardware required to run them. Using
standardized hardware will help to replicate and reimage new hardware. Ensure that copies of
program software are available to enable re-installation on replacement equipment. Prioritize
hardware and software restoration.
Document the IT disaster recovery plan as part of the business continuity plan. Test the plan
periodically to make sure that it works.
Program policies establish the security program. They provide its form and character. The
sections that make up a program policy include purpose, scope, responsibilities, and compliance.
Following are the basic components of a security policy:
Purpose includes the objectives of the program, such as:
· Improved recovery times
· Reduced costs or downtime due to loss of data
· Reduction in errors for both system changes and operational activities
· Regulatory compliance
· Management of overall confidentiality, integrity, and availability
Scope provides guidance on whom and what are covered by the policy. Coverage may
include:
· Facilities
· Lines of business
· Employees or departments
· Technology
· Processes
Responsibilities for the implementation and management of the policy are assigned in this
section. Organizational units or individuals are potential assignment candidates.
Policy Implementation
After gaining management support and sign off, implementation planning begins. The roll out of
a new policy includes the following activities:
1. Ensure everyone is aware of the new policy. Post it on your Intranet, send notification email,
or perform whatever other mass distribution actions work well within your organization.
2. Discuss the content of the policy at management and staff meetings. It's important during
these discussions to include a review of the intended results of following the policy. This
helps your organization's employees see the standards and guidelines from the proper
perspective.
3. Conduct training sessions. Training should occur at three levels - management, general staff,
and technical staff.
Management training is intended to educate managers about their role in enforcement
and compliance activities. It should include a "big picture" view of where the policy fits
in the overall security program.
General staff training is provided to all staff levels in the organization. In addition to
making employees aware of the contents of the policy, it should also address any
questions about how the objectives, standards, and guidelines will impact day to day
operation of the business. Staff training should always precede any attempts to sanction
an employee for failure to follow a security policy.
Technical staff training is typically provided for the IS staff. The focus of this training is
how the new policy affects existing system or network configurations and baselines.
4. Develop supporting standards, guidelines, procedures and baselines.
5. Implement a user awareness program.
The components of a security policy will vary by organization based on size, services offered,
technology, and available revenue. Here are some of the typical elements included in a security
policy.
Security Definition – All security policies should include a well-defined security vision for the
organization. The security vision should be clear and concise and convey to the readers the intent
of the policy.
Enforcement – This section should clearly identify how the policy will be enforced and how
security breaches and/or misconduct will be handled.
The Chief Information Officer (CIO) and the Information Systems Security Officer (ISSO)
typically have the primary responsibility for implementing the policy and ensuring compliance.
However, you should have a member of senior management, preferably the top official,
implement and embrace the policy. This gives you the enforcement clout and much needed ‘buy-
in’.
This section may also include procedures for requesting short-term exceptions to the policy. All
exceptions to the policy should be reviewed and approved, or denied, by the Security Officer.
Senior management should not be given the flexibility to overrule decisions. Otherwise, your
security program will be full of exceptions that will lend themselves toward failure.
User Access to Computer Resources - This section should identify the roles and
responsibilities of users accessing resources on the organization’s network. This should include
information such as:
· Procedures for obtaining network access and resource-level permissions;
· Policies prohibiting personal use of organizational computer systems;
· Passwords;
· Procedures for using removable media devices;
· Procedures for identifying applicable e-mail standards of conduct;
· Specifications for both acceptable and prohibited Internet usage;
· Guidelines for applications;
· Restrictions on installing applications and hardware;
· Procedures for remote access;
· Guidelines for use of personal machines to access resources (remote access);
· Procedures for account termination;
· Procedures for routine auditing;
· Procedures for threat notification; and
· Security awareness training.
Depending on the size of an organization’s network, a more detailed listing may be required for
the connected Wide Area Networks (WAN), other Local Area Networks (LAN), Extranets, and
Virtual Private Networks (VPN). Some organizations may require that other connected (via
LAN, WAN, VPN) or trusted agencies meet the terms and conditions identified in the
organization’s security policy before they are granted access. This is done for the simple reason
that your security policy is only as good as its weakest link. For example, if Company ‘A’ has a
rigid security policy and Company ‘B’ has a substandard policy and wants to partner with
Company ‘A’, Company ‘B’ may request a network connection to Company ‘A’ (behind the
firewall). If Company ‘A’ allows this without validating Company ‘B’s’ security policy, then
Company ‘A’ can be compromised by exploits launched from Company ‘B’. When developing a
security policy, one should take situations such as this very seriously and develop appropriate
safeguards.
Security Profiles - A good security policy should also include information that identifies how
security profiles will be applied uniformly across common devices (e.g., servers, workstations,
routers, switches, firewalls, proxy servers, etc.). The policy should reference applicable standards
and procedures for locking down devices. Those standards may include security checklists to
follow when adding and/or reconfiguring devices.
New devices come shipped with a default configuration for ease of deployment, which also
ensures compatibility with most architectures. This is very convenient for the vendor, but a
nightmare for security professionals. An assessment needs to be completed to determine what
services are necessary on which devices to meet the organizational needs and requirements. All
other services should be turned off and/or removed and documented in the corresponding
standard operating procedure.
For example, if your agency does not have a need to host Internet or Intranet based applications
then do not install Microsoft IIS. If you have a need to host HTML services, but do not have a
requirement for allowing FTP, then disable it.
Another tip to consider is that you should be logging all successful and failed logon attempts. A
hacker may be trying several accounts to log on to your network. If you see several failed logon
attempts in a row and then no activity, does this mean the hacker gave up, or did he
“successfully” log on?
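A simple illustration of why such logging matters: the sketch below counts failed logons per account in a log file and flags accounts that exceed a threshold. The whitespace-separated log format (timestamp, user, SUCCESS/FAILED) is hypothetical; real audit logs vary by platform.

from collections import Counter

def suspicious_users(log_path, threshold=5):
    # Count FAILED entries per user in a log whose assumed format is:
    # <timestamp> <user> <SUCCESS|FAILED>
    failures = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) >= 3 and parts[2] == "FAILED":
                failures[parts[1]] += 1
    return [user for user, count in failures.items() if count >= threshold]

print(suspicious_users("logon.log"))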
E-mail – An email usage policy is a must. Several viruses, Trojans, and malware use email as
the vehicle to propagate themselves throughout the Internet; several of the more recent worms
have spread in exactly this way.
Internet – The World Wide Web was the greatest invention, but the worst nightmare from a
security standpoint. The Internet is the pathway in which vulnerabilities are manifested. The
black-hat community typically launches their ‘zero day’ and old exploits on the Internet via IRC
chat rooms, through Instant Messengers, and free Internet email providers (Hotmail, yahoo, etc.).
Therefore, the Internet usage policy should restrict access to these types of sites and should
clearly identify what, if any, personal use is authorized. Moreover, software should be employed
to filter out many of the forbidden sites, including pornographic sites, chat rooms, free web-based
email services (Hotmail, Yahoo, etc.), personals, etc. There are several Internet content filtering
applications available that maintain a comprehensive database of forbidden URLs.
The following sections are provided for additional information.
Back-up and Recovery – A comprehensive back-up and recovery plan is critical to mitigating
incidents. You never know when a natural or other disaster may occur. For example, take the
9/11 incident. What would have happened if there were no off-site storage locations for the
companies in the World Trade Center?
Answer: All data would have been permanently lost! Back-ups are your key to the past.
Organizations must have effective back-up and recovery plans that are established through a
comprehensive risk assessment of all systems on the network. Your back-up procedures may be
different for a number of systems on your network. For example, your budget and payroll system
will have different back-up requirements than a miscellaneous file server.
You may be required to restore from a tape back-up if the system crashes, you get hacked, you
upgrade hardware, and/or files get inadvertently deleted. You should be prepared. Your back-up
and recovery policy (a separate document) should stand on its own, but be reflected in the
security policy. At a minimum, your back-up recovery plan should include:
· Back-up schedules;
· Identification of the type of tape back-up (full, differential, etc.)
· The type of equipment used;
· Tape storage location (on and off-site);
· Tape labeling convention;
· Tape rotation procedures;
· Testing restorations; and
· Checking log files.
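As one way of putting “testing restorations” into practice, the Python sketch below compares SHA-256 checksums of an original file and its restored copy; the file paths shown are placeholders.

import hashlib

def sha256_of(path):
    # Hash the file in chunks so large backups do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

if sha256_of("payroll.db") == sha256_of("restored/payroll.db"):
    print("Restore verified: checksums match")
else:
    print("WARNING: restored file differs from the original")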
Intrusion detection tools help in the detection and mitigation of access attempts into your
network. You need to decide through the risk assessment process whether to implement
network-based or host-based intrusion detection systems (IDS), or a combination of both.
Additional standard operating procedures should be derived from the policy to specifically
address intrusion detection processes and procedures. Following are some examples of IDS
products:
· ISS - (http://www.iss.com)
· Cisco - (http://www.cisco.com/warp/public/cc/pd/sqsw/sqidsz/)
· Snort - (http://www.linuxsecurity.com/feature_stories/usingsnort.html)
· Zone Alarm – (http://www.zonealram.com)
Remote Access - Dial-up access to your network will represent one of your greatest risks. Your
policy should identify the procedures that one must follow in order to be granted dial-up access.
You also need to address whether or not personal machines will be allowed to access your
organization’s resources.
The whole issue of remote access causes heartburn for security officials. You can lock down
your perimeter, but all it takes is one remote access client dialing into the network (behind the
firewall) who has been compromised while surfing the Internet, with a Trojan ready and willing
to start looking for other unsuspecting prey. The next thing you know, your network has been
compromised.
Following are some examples to include in your policy:
· Install and configure a personal firewall on remote client machines (for example, Norton or
BlackIce Defender);
· Ensure antivirus software, service packs and security patches are maintained and up-to-date;
· Ensure modems are configured not to auto-answer;
· Ensure file sharing is disabled;
· If not using tokens or PKI certificates, ensure usernames and passwords are encrypted;
· If possible, push policies from the server to client machines; and
· Prohibit organizational machines from being configured to access personal Internet
Service Provider accounts.
Auditing - All security programs should be audited on a routine and random basis to assess their
effectiveness. The security officer must be given the authority, in writing, by the head of the
organization to conduct audits of the program. If not, he or she could be subject to legal action
for malicious conduct. Random and scheduled audits should be conducted and may include:
· Password auditing using password cracking utilities such as LC3 (Windows) and
PWDump (Unix and Windows);
· Auditing the user accounts database for active old accounts (persons who have left the
agency).
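The second audit item can be partly automated. The hedged sketch below flags accounts with no logon within a given period; the account records here are invented, and in practice they would come from a directory service export (e.g., Active Directory or LDAP).

from datetime import datetime, timedelta

# Invented sample records standing in for a directory service export.
accounts = [
    {"user": "jdoe", "last_logon": datetime(2015, 1, 10)},
    {"user": "asmith", "last_logon": datetime(2016, 3, 2)},
]

def stale_accounts(records, max_age_days=90):
    cutoff = datetime.now() - timedelta(days=max_age_days)
    # Accounts unused past the cutoff are candidates for disabling.
    return [r["user"] for r in records if r["last_logon"] < cutoff]

print(stale_accounts(accounts))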
Awareness Training - Security Awareness training for organizational staff must be performed
to ensure a successful program. Training should be provided at different levels for staff,
executives, system administrators, and security officers.
Additionally, staff should be retrained on a periodic basis (e.g., every two years).
A process should be in place for training newly hired staff within a certain time period. Staff
completing training should be required to sign a written certification statement. This signed
statement helps the security officer and management enforce the organization’s security policies.
Trained staff can help alleviate some of the security burden from security officers.
Trained staff can and often do provide advanced notification of suspicious events encountered on
their machines which could prevent a worm or other Trojan from propagating throughout the
entire network.
An Information Security Policy (ISP) is a set of rules enacted by an organization to ensure that
all users of the networks and IT structure within the organization’s domain abide by the
prescriptions regarding the security of data stored digitally within the boundaries of the
organization’s authority.
An ISP governs the protection of information, which is one of the many assets a corporation
needs to protect. The present writing will discuss some of the most important aspects a person
should take into account when contemplating developing an ISP. Applying logical
rationalization, one could say that a policy can be as broad as its creators want it to be: basically,
everything from A to Z in terms of IT security, and even more. For that reason, the emphasis
here is placed on a few key elements, but you should make a mental note of the liberty of thought
organizations have when they forge their own guidelines.
Scope
ISP should address all data, programs, systems, facilities, other tech infrastructure, users of
technology and third parties in a given organization, without exception.
An organization that strives to compose a working ISP needs to have well-defined objectives
concerning security and a strategy on which management has reached an agreement. Any
existing dissonances in this context may render the information security policy project
dysfunctional. The most important thing a security professional should remember is that
knowing the security management practices will allow him to incorporate them into the
documents he is entrusted to draft; that is a guarantee of completeness, quality and workability.
Simplification of policy language is one thing that may smooth away the differences and
guarantee consensus among management staff. Consequently, ambiguous expressions are to be
avoided. Beware also of the correct meaning of terms or common words. For instance, “must”
expresses non-negotiability, whereas “should” denotes a certain level of discretion. Ideally, the
policy should be briefly formulated and to the point. Redundancy in the policy’s wording (e.g.,
pointless repetition in writing) should be avoided as well, as it would make documents
long-winded and hard to read.
Establishing how management views IT security is one of the first steps when a person intends
to enforce new rules in this department. Furthermore, a security professional should make sure
that the ISP has equal institutional gravity with other policies enacted within the corporation. In
cases where an organization has a sizeable structure, policies may differ and therefore be
segregated in order to define the dealings in the intended subset of the organization.
Donn Parker, one of the pioneers in the field of IT security, expanded the classic threefold
paradigm of confidentiality, integrity and availability by suggesting also “authenticity” and
“utility”.
Typically, a security policy has a hierarchical pattern. This means that junior staff are usually
bound not to share the small amount of information they have unless explicitly authorized.
Conversely, a senior manager may have enough authority to make a decision about what data
can be shared and with whom, which means that they are not tied down by the same information
security policy terms. So logic demands that the ISP address every basic position in the
organization with specifications that clarify their authoritative status.
Access to company’s network and servers, whether or not in the physical sense of the word,
should be via unique logins that require authentication in the form of either passwords,
biometrics, ID cards, or tokens etc. Monitoring on all systems must be implemented to record
logon attempts (both successful ones and failures) and exact date and time of logon and logoff.
Data can have different value. Gradations in the value index may impose separation and specific
handling regimes/procedures for each kind. An information classification system can therefore
focus protection on data that has significant importance for the organization, and leave out
insignificant information that would otherwise overburden the organization’s resources. A data
classification policy may arrange the entire set of information as follows:
1. High Risk Class– data protected by state and federal legislation (the Data Protection Act,
HIPAA, FERPA) as well as financial, payroll, and personnel (privacy requirements) are
included here.
2. Confidential Class – the data in this class does not enjoy the privilege of being under the
wing of law, but the data owner judges that it should be protected against unauthorized
disclosure.
3. Class Public – This information can be freely distributed.
Data owners should determine both the data classification and the exact measures a data
custodian needs to take to preserve data integrity in accordance with that level.
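One simple way a data custodian’s handling regimes can be made explicit is a classification-to-controls table, as in the Python sketch below; the specific controls shown are illustrative assumptions, not mandates.

# Illustrative mapping from the three classes above to handling controls.
HANDLING = {
    "high risk": {"encrypt_at_rest": True, "offsite_backup": True,
                  "access": "named individuals only"},
    "confidential": {"encrypt_at_rest": True, "offsite_backup": True,
                     "access": "need-to-know"},
    "public": {"encrypt_at_rest": False, "offsite_backup": False,
               "access": "unrestricted"},
}

def controls_for(classification):
    # Return the handling regime a data custodian must apply.
    return HANDLING[classification.lower()]

print(controls_for("High Risk"))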
Movement of data
Sharing IT security policies with staff is a critical step. Making them read and sign to
acknowledge a document does not necessarily mean that they are familiar with and understand
the new policies. A training session would engage employees in positive attitude to information
security, which will ensure that they get a notion of the procedures and mechanisms in place to
protect the data, for instance, levels of confidentiality and data sensitivity issues. Such
awareness training should touch on a broad scope of vital topics: how to collect/use/delete data,
maintain data quality, records management, confidentiality, privacy, appropriate utilization of IT
systems, correct usage of social networking, etc. A small test at the end is perhaps a good idea.
General considerations in this direction lean towards responsibility of persons appointed to carry
out the implementation, education, incident response, user access reviews, and periodic updates
of an ISP.
Prevention of theft, information know-how and industrial secrets that could benefit competitors
are among the most cited reasons why a business may want to employ an ISP to defend its digital
assets and intellectual rights.
An ISP may also be supported by related documents, such as: a Virus Protection Procedure, an
Intrusion Detection Procedure, a Remote Work Procedure, Technical Guidelines, Audit,
Employee Requirements, Consequences for Non-compliance, Disciplinary Actions, Terminated
Employees, Physical Security of IT, References to Supporting Documents and so on.
Out of carelessness mostly, many organizations, without giving it much thought, choose to
download IT policy samples from a website and copy/paste this ready-made material in an
attempt to readjust their objectives and policy goals to a mould that is usually crude and offers
too broad-spectrum protection. Understandably, if the fit is not quite right, the dress will
eventually slip off.
A high-grade ISP can make the difference between a growing business and a successful one.
Improved efficiency, increased productivity, clarity of the objectives each entity has,
understanding what IT and data should be secured and why, identifying the type and levels of
security required, and defining the applicable information security best practices are reasons
enough to back up this statement. To put a period to this topic in simple terms: if you want to
lead a prosperous company in today’s digital era, you certainly need to have a good information
security policy.
The successful development of any company depends on correctly formulated strategic purposes
and the methods of reaching them. It is customary to assume that financial indices, for example,
are the primary measures of that success.
Problems
The development and growth of enterprises are tightly connected with growth in the company’s
IT infrastructure, whose complexity and scale are constantly increasing, generating new forms of
threats, vulnerabilities and risks that influence the activity of the organization.
The appearance of Information Security problems leads to both financial and reputational
losses. An important task of management is to avoid these threats, to minimize risks and to
ensure the proper level of IT infrastructure safety.
The Information Security Policy is inseparably connected with the development of the company
and its strategic planning; it determines the general principles and order of providing Information
Security in the enterprise. The Information Security Policy is tightly integrated with the work of
the enterprise at every stage of its existence. All solutions undertaken in the enterprise must
consider its requirements.
An effective guarantee of the required level of Information Security is possible only with a
formalized approach to the fulfillment of measures for the protection of information. The main
purpose of an Information Security Policy is to create a unified system of views and
understanding of the purposes, tasks and principles by which Information Security is provided.
The package of documents on providing Information Security includes several types of
documents. The specific drafts of the necessary documents are determined during the inspection
of the customer’s existing level of Information Security, its organizational structure and business
processes.
A program to increase staff awareness is a complex of educational measures that makes it
possible to reach the required level of understanding of the importance of, and need for,
fulfilling the requirements of Information Security.
Control of risks is a continuous process that ensures the development, estimation and
minimization of Information Security threats directed toward the organization’s assets. A risk
assessment makes it possible:
· to obtain an up-to-date picture of the organization’s level of Information Security at the
current moment;
· to determine the most vulnerable places in the Information Security system;
· to determine the cost justification of expenditures for guaranteeing Information
Security;
· to minimize expenses on Information Security.
The basic stages of risk control include:
· the construction of a business process interaction model for the purpose of isolating the
organization’s most critical assets;
· the construction of an intruder model and a model of threats;
· the estimation of the likelihood of threats being realized against the critical organization
assets;
· the development of measures for reducing the risks of threats;
· the development of a plan for risk reduction;
· the estimation of the residual risk after the introduction of mitigation measures.
3. Control of vulnerabilities
Existing software is imperfect, and new vulnerabilities appear constantly. Frequently, after the
detection of such vulnerabilities, malware appears that allows criminals to use the vulnerabilities
for theft, distortion of information or denial of service to critical systems. Control of
vulnerabilities makes it possible to minimize risks and to decrease the losses arising from
destructive software or the actions of criminals.
Internal information security threats include threats from company employees, both intentional
(fraud, theft, confidential data corruption or destruction, industrial espionage and so on) and
unintentional (changes to or destruction of information caused by an employee’s poor
qualifications or carelessness), as well as failures in the software or hardware used to process
and store information.
Companies are offered the following services to help reduce their internal information security
threats:
This service envisages designing a comprehensive system to control and counteract leaks of
confidential information:
· on data transmission channels – content filter systems (Internet, e-mail, ICQ, P2P);
· at employees’ workstations – the control of information-carrying media (USB devices –
flash drives, external hard drives), print queues, and access to network resources.
This system allows the company to establish centralized control management and conduct
effective countermeasures. It also helps the company collect the necessary evidence of security
incidents. At the same time, the system remains completely transparent to its users.
This represents a package of organizational and technical measures aimed at preventing the
compromise, theft, modification or destruction of confidential information by internal security
intruders and third parties. One such service is described below.
This involves the design of a centralized application-oriented, server and firmware vulnerability
management system. This system helps to provide real-time and effective responses to any
emerging information system vulnerabilities, which in turn helps to reduce the risk of these
vulnerabilities being attacked by malicious software or intruders of local computer networks and
workstations.
External security threats include threats that emerge from the external environment.
Point lane offers the following information security solutions for protection against external
threats:
This package of measures includes the implementation of the following corporate systems:
· The deployment of intrusion detection and intrusion prevention systems (IDS/IPS).
These systems are firmware packages that analyze traffic for the signature of an attack
and then automatically react to block it;
· The organization of cross-network firewalls, and the design of Internet or inter-branch
access systems.
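As a toy illustration of the signature matching mentioned in the first item above, the Python sketch below checks traffic payloads against known attack patterns; real IDS/IPS products such as Snort use far richer rule languages, and the signatures here are made up.

import re

# Made-up signatures; real rule sets are vendor- or community-maintained.
SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|OR)\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def inspect(payload):
    # Return the names of any attack signatures the payload matches.
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(inspect("GET /index.php?id=1' OR 1=1 --"))  # ['sql_injection']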
Introduction
The security methodology described in this document is designed to help security professionals
develop a strategy to protect the availability, integrity, and confidentiality of data in an
organization's information technology (IT) system. It will be of interest to information resource
managers, computer security officials, and administrators, and of particular value to those trying
to establish computer security policies. The methodology offers a systematic approach to this
important task and, as a final precaution, also involves establishing contingency plans in case of
a disaster.
Data in an IT system is at risk from various sources—user errors and malicious and non-
malicious attacks. Accidents can occur and attackers can gain access to the system and disrupt
services, render systems useless, or alter, delete, or steal information.
An IT system may need protection for one or more of the following aspects of data:
confidentiality, integrity, and availability.
Security administrators need to decide how much time, money, and effort needs to be spent in
order to develop the appropriate security policies and controls. Each organization should analyze
its specific needs and determine its resource and scheduling requirements and constraints.
Computer systems, environments, and organizational policies are different, making each
organization’s computer security services and strategy unique. However, the principles of good
security remain the same, and this document focuses on those principles.
Although a security strategy can save the organization valuable time and provide important
reminders of what needs to be done, security is not a one-time activity. It is an integral part of the
system lifecycle. The activities described in this document generally require either periodic
updating or appropriate revision. These changes are made when configurations and other
conditions and circumstances change significantly or when organizational regulations and
policies require changes. This is an iterative process. It is never finished and should be revised
and tested periodically.
Establishing an effective set of security policies and controls requires using a strategy to
determine the vulnerabilities that exist in our computer systems and in the current security
policies and controls that guard them. The current status of computer security policies can be
determined by reviewing the list of documentation that follows. The review should take notice of
areas where policies are lacking as well as examine documents that exist:
Assessing an organization's security needs also includes determining its vulnerabilities to known
threats. This assessment entails recognizing the types of assets that an organization has, which
will suggest the types of threats it needs to protect itself against. Following are examples of some
typical asset/threat situations:
The security administrator of a bank knows that the integrity of the bank's information is
a critical asset and that fraud, accomplished by compromising this integrity, is a major
threat. Fraud can be attempted by inside or outside attackers.
The security administrator of a Web site knows that supplying information reliably (data
availability) is the site's principal asset. The threat to this information service is a denial
of service attack, which is likely to come from an outside attacker.
A law firm security administrator knows that the confidentiality of its information is an
important asset. The threat to confidentiality is intrusion attacks, which might be
launched by inside or outside attackers.
A security administrator in any organization knows that the integrity of information on
the system could be threatened by a virus attack. A virus could be introduced by an
employee copying games to his work computer or by an outsider in a deliberate attempt
to disrupt business functions.
Listing the threats (and most organizations will have several) helps the security administrator to
identify the various methods, tools, and techniques that can be used in an attack. Methods can
range from viruses and worms to password and e-mail cracking. It is important that
administrators update their knowledge of this area on a continual basis, because new methods,
tools, and techniques for circumventing security measures are constantly being devised.
For each method, the security plan should include a proactive strategy as well as a reactive
strategy.
The proactive or pre-attack strategy is a set of steps that helps to minimize existing security
policy vulnerabilities and develop contingency plans. Determining the damage that an attack will
cause on a system and the weaknesses and vulnerabilities exploited during this attack helps in
developing the proactive strategy.
The reactive strategy or post-attack strategy helps security personnel to assess the damage
caused by the attack, repair the damage or implement the contingency plan developed in the
proactive strategy, document and learn from the experience, and get business functions running
as soon as possible.
The last element of a security strategy, testing and reviewing the test outcomes, is carried out
after the reactive and proactive strategies have been put into place. Performing simulation attacks
on a test or lab system makes it possible to assess where the various vulnerabilities exist and
adjust security policies and controls accordingly.
These tests should not be performed on a live production system because the outcome could be
disastrous. Yet, the absence of labs and test computers due to budget restrictions might preclude
simulating attacks. In order to secure the necessary funds for testing, it is important to make
management aware of the risks and consequences of an attack as well as the security measures
that can be taken to protect the system, including testing procedures. If possible, all attack
scenarios should be physically tested and documented to determine the best possible security
policies and controls to be implemented.
Certain events, such as natural disasters like floods and lightning, cannot be tested, although a
simulation will help. For example, simulate a fire in the server room that has resulted in all the
servers being damaged and lost. This scenario can be useful for testing the responsiveness of
administrators and security personnel, and for ascertaining how long it will take to get the
organization functional again.
Testing and adjusting security policies and controls based on the test results is an iterative
process. It is never finished and should be evaluated and revised periodically so that
improvements can be implemented.
Good practice calls for forming an incident response team. The incident response team should be
involved in the proactive efforts of the security professional. These efforts provide knowledge
that the organization can use before and during incidents.
After the security administrator and incident response team have completed these proactive
functions, the administrator should hand over the responsibility for handling incidents to the
incident response team. This does not mean that the security administrator should not continue to
be involved or be part of the team, but the administrator may not always be available and the
team should be able to handle incidents on its own. The team will be responsible for responding
to incidents such as viruses, worms, or other malicious code; intrusions; hoaxes; and natural
disasters.
The following section discusses a methodology for defining a computer security strategy that can
be used to implement security policies and controls to minimize possible attacks and threats. The
methods can be used for all types of attacks on computer systems, whether they are malicious,
non-malicious or natural disasters, and can thus be re-used repeatedly for different attack
scenarios. The methodology is based on the various types of threats, methods of attack, and
vulnerabilities discussed in "Security Threats." The following flow chart outlines the
methodology.
Flowchart 1
The first phase of the methodology outlined in Flowchart 1 is to determine the attacks that can be
expected and ways of defending against these attacks. It is impossible to prepare against all
attacks; therefore, prepare for the most likely attacks that the organization can expect. It is
always better to prevent or minimize attacks than to repair the damage after an attack has already
occurred.
Consider all of the possible threats that cause attacks on systems. These will include malicious
attackers, non-malicious threats, and natural disasters. The figure below classifies the various
threats to systems.
Threats such as ignorant or careless employees and natural disasters do not involve motives or
goals; therefore no predetermined methods, tools, or techniques are used to launch an attack.
Almost all of these attacks or security infiltrations are internally generated; rarely will they be
initiated by someone outside of the organization.
For these types of threats, security personnel need to implement separate proactive and reactive
strategies, following the guidelines in Flowchart 1.
In order to launch an attack, a malicious attacker needs a method, tool or technique to exploit
various vulnerabilities in systems, security policies, and controls. A malicious attacker can use
different methods to launch the same attack. Therefore, the defense strategy must be customized
for each type of method used in each type of threat. Again, it is important that security
professionals keep current on the various methods, tools, and techniques used by attackers. A
detailed discussion of these can be found in "Security Threats."
Proactive Strategy
The proactive strategy is a set of predefined steps that should be taken to prevent attacks before
they occur. The steps include looking at how an attack could possibly affect or damage the
computer system and the vulnerabilities it exploits (steps 1 and 2). The knowledge gained in
these assessments can help in implementing security policies that will control or minimize the
attacks. These are the three steps of the proactive strategy:
1. Determine the damage that the attack will cause.
2. Determine the vulnerabilities and weaknesses that the attack exploits.
3. Minimize those vulnerabilities and weaknesses, and develop contingency plans in case
the security controls are penetrated.
Following these steps to analyze each type of attack has a side benefit; a pattern will begin to
emerge, because many factors will overlap for different attacks. This pattern can be helpful in
determining the areas of vulnerability that pose the greatest risk to the enterprise. It is also
necessary to take note of the cost of losing data versus the cost of implementing security
controls. Weighing the risks and the costs are part of a system risk analysis, and are discussed in
the white paper "Security Planning."
Security policies and controls will not, in every case, be completely effective in eliminating
attacks. For this reason it is necessary to develop contingency and recovery plans in the event
that security controls are penetrated.
Possible damages can run the gamut from minor computer glitches to catastrophic data loss.
The damage caused to the system will depend on the type of attack. Use a test or lab
environment to clarify the damages resulting from different types of attacks, if possible. This will
enable security personnel to see the physical damage caused by an experimental attack. Not all
attacks cause the same damage. Here are some examples of tests to run:
Simulate an e-mail virus attack on the lab system, and see what damage was caused and
how to recover from the situation.
Use social engineering to acquire a username and password from an unsuspecting
employee and observe whether he or she complies.
Simulate what would happen if the server room burned down. Measure the production
time lost and the time taken to recover.
It is also a good idea to involve the incident response team mentioned earlier, because a team is
more likely than an individual to spot all of the different types of damage that have occurred.
If the vulnerabilities that a specific attack exploits can be discovered, current security policies
and controls can be altered or new ones implemented to minimize these vulnerabilities.
Determining the type of attack, threat, and method makes it easier to discover existing
vulnerabilities. This can be verified by an actual test.
Following is a list of possible vulnerabilities. These represent just a few of the many that exist
and include examples in the areas of physical, data, and network security.
Physical Security:
Data Security:
What access controls, integrity controls, and backup procedures are in place to limit
attacks?
Are there privacy policies and procedures that users must comply with?
What data access controls (authorization, authentication, and implementation) are there?
What user responsibilities exist for management of data and applications?
Have direct access storage device management techniques been defined? What is their
impact on user file integrity?
Are there procedures for handling sensitive data?
Network Security:
What kinds of access controls (Internet, wide area network connections, etc.) are in
place?
Minimizing the security system's vulnerabilities and weaknesses that were determined in the
previous assessment is the first step in developing effective security policies and controls. This is
the payoff of the proactive strategy. By minimizing vulnerabilities, security personnel can
minimize both the likelihood of an attack and its effectiveness if one does occur. Be careful not
to implement controls that are too stringent, because the availability of information could then
become a problem. There must be a careful balance between security controls and access to
information.
Information should be as freely available as possible to authorized users.
A contingency plan is an alternative plan that should be developed in case an attack penetrates
the system and damages data or any other assets with the result of halting normal business
operations and hurting productivity. The plan is followed if the system cannot be restored in a
timely manner. Its ultimate goal is to maintain the availability, integrity and confidentiality of
data—it is the proverbial "Plan B."
There should be a plan per type of attack and/or per type of threat. Each plan consists of a set of
steps to be taken in the event that an attack breaks through the security policies. The contingency
plan should:
Address who must do what, when, and where to keep the organization functional.
Be rehearsed periodically to keep staff up-to-date with current contingency steps.
Cover restoring from backups.
Discuss updating virus software.
Cover moving production to another location or site.
Various evaluation tasks should be carried out to develop a contingency plan. Draw up a detailed
document outlining the findings of those tasks.
Reactive Strategy
A reactive strategy is implemented when the proactive strategy for the attack has failed. The
reactive strategy defines the steps that must be taken after or during an attack. It helps to identify
the damage that was caused and the vulnerabilities that were exploited in the attack, determine
why it took place, repair the damage that was caused by it, and implement a contingency plan if
one exists. Both the reactive and proactive strategies work together to develop security policies
and controls to minimize attacks and the damage caused during them.
The incident response team should be included in the steps taken during or after the attack to
help assess it and to document and learn from the event.
Determine the damage that was caused during the attack. This should be done as swiftly as
possible so that restore operations can begin. If it is not possible to assess the damage in a timely
manner, a contingency plan should be implemented so that normal business operations and
productivity can continue.
To determine the cause of the damage, it is necessary to understand what resources the attack
was aimed at and what vulnerabilities were exploited to gain access or disrupt services. Review
system logs and audit trails to help establish this.
It is very important that the damage be repaired as quickly as possible in order to restore normal
business operations and any data lost during the attack. The organization's disaster recovery
plans and procedures (discussed in "Security Planning") should cover the restore strategy. The
incident response team should also be available to handle the restore and recovery process and to
provide guidance on the recovery process.
It is important that once the attack has taken place, it is documented. Documentation should
cover all aspects of the attack that are known, including: the damage that is caused (hardware,
software, data loss, loss in productivity), the vulnerabilities and weaknesses that were exploited
during the attack, the amount of production time lost, and the procedures taken to repair the
damage. Documentation will help to modify proactive strategies for preventing future attacks or
minimizing damages.
If a contingency plan already exists, it can be implemented to save time and to keep business
operations functioning correctly. If no contingency plan exists, develop an appropriate plan
based on the documentation from the previous step.
The second major step in the security strategy is to review the findings established in the first
step (Predicting the Attack). After the attack or after defending against it, review the attack's
outcome with respect to the system. The review should include: loss in productivity, data or
hardware lost, and time taken to recover. Also document the attack and, if possible, track where
the attack originated from, what methods were used to launch the attack and what vulnerabilities
were exploited. Do simulations in a test environment to gain the best results.
If policies exist for defending against an attack that has taken place, they should be reviewed and
checked for their effectiveness. If no policies exist, new ones must be drawn up to minimize or
prevent future attacks.
If the policy's effectiveness is not up to standard, the policy should be adjusted accordingly.
Updates to policies must be coordinated by the relevant managerial personnel, security officer,
administrators, and the incident response team. All policies should comply with the
Examples
An employee, John Doe, does not want to lose any information that he has saved to his hard disk.
He wants to make a backup of this information, so he copies it to his home folder on the server
that happens to also be the company's main application server. The home folders on the server
have no disk quotas defined for the users. John's hard drive has 6.4 Gigabytes of information and
the server has 6.5 Gigabytes of free space. The application server stops responding to updates
and requests because it is out of disk space. The result is that users are denied the applications
server services and productivity stops. Below is the methodology that should have taken place
before John decided to back up his hard drive to his home folder.
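A hedged sketch of the missing safeguard in this example: before copying a large backup to a shared server folder, check free space and a per-user quota. The paths and quota value below are hypothetical.

import os
import shutil

QUOTA_BYTES = 2 * 1024**3  # assumed 2 GB per-user home folder quota

def safe_to_copy(src_file, dest_dir, margin=1024**3):
    size = os.path.getsize(src_file)
    free = shutil.disk_usage(dest_dir).free
    # Refuse the copy if it would breach the quota or leave the server
    # with less than `margin` bytes of headroom.
    return size <= QUOTA_BYTES and free - size >= margin

print(safe_to_copy("hard_disk_backup.img", "/srv/home/jdoe"))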
Jane Doe writes viruses and hacks into systems as a hobby. Jane releases a new virus that will
disrupt e-mail systems throughout the world.
An employee, Bob Roberts, works for a company that designs space ships. Bob is contacted by
the competition and is offered a large amount of money to steal information on his company's
designs.
Company XYZ does not have fire protection and detection systems in their server room. An
administrator of the company's computer systems leaves a couple of manuals lying on the air-
conditioner. During the night the air conditioner overheats and starts a fire that burns down the
server room and a couple of offices.
The company’s management must have confidence that its business is protected and able to
prevent any attempt to steal information – whether these attempts come from outside the
company or from its own staff.
The uncontrolled use of the Internet and portable storage media, as well as the inability to
monitor the information coming off the company's printers, vastly increases the chances that
strategically important information will be stolen and transferred to competitors. It also
increases the chances that the business might grind to a halt because its key asset – information
– has been destroyed.
Information security (IS) auditing involves the study and assessment of the current state of the
organization's information resources and corporate systems, checking them for conformance to
the standards and requirements specified by the client.
The following are the main types of information security auditing services:
Expert audit
Penetration test
Web security audit
Comprehensive audit
Preparation for ISO certification
1. Preparation – the first, and one of the most important, steps is proper forensic case
preparation. This can include: understanding local law and legal issues (which can
determine the tools and procedures that may or may not be used), understanding the
assignment (what we are asked to do), reconnaissance of the number and type of
computers and operating systems that will be involved, preparing the team, checking
equipment, and much more.
2. Collection – from a technical point of view, three types of digital evidence collection
model can be distinguished. The first is on-site acquisition, in which a binary copy of the
hard drives is made and the originals are left in place. The second is collecting the
evidence and taking it to the lab, where the acquisition is made. The third is live
forensics, in which evidence is collected from powered-on computers.
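As a toy illustration of the binary (bit-for-bit) copy mentioned above, the Python sketch below copies a raw device to an image file. Real acquisitions use a hardware write blocker and dedicated imaging tools (for example dd or commercial imagers); the device path here is illustrative, and reading it requires administrator privileges.

    # Toy bit-for-bit copy of a source device to an evidence image file.
    CHUNK = 1 << 20  # read in 1 MiB chunks

    with open("/dev/sdb", "rb") as src, open("evidence.img", "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)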
It is hard to point to worldwide standards in computer forensics; the reason for this is the
differences between legal systems, although there are efforts to change this. Many organizations
and institutions publish their own best practices. Below is a sample set of best practices from
computer-forensics-recruiter.com:
Whenever possible, do not examine the original media. Write protect the original, copy it,
and examine only the copy.
Use write blocking technology to preserve the original while it is being copied.
Computer forensic examiners must meet minimum proficiency standards.
Examination results should be reviewed by a supervisor and peer reviewed on a regular
schedule.
All hardware and software should be tested to ensure they produce accurate and reliable
results.
Forensic examiners must observe the highest ethical standards.
Forensic examiners must remain objective at all times.
Forensic examiners must strictly observe all legal restrictions on their examinations.
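One routine way to demonstrate that a working copy is identical to the original, consistent with the practices above, is to compare cryptographic hashes of both. A minimal sketch follows, with hypothetical file paths.

    # Verify that a forensic working copy matches the original via SHA-256.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    original = sha256_of("/evidence/original.img")      # hypothetical paths
    working = sha256_of("/evidence/working_copy.img")
    print("Match" if original == working else "MISMATCH: copy is not identical")

Recording both hash values in the case notes allows a later reviewer to re-verify the copy independently.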
Incident handling
(See the attached presentation.)
Host-based evidence includes logs, records, documents, and any other information that is found
on a system and not obtained from network-based nodes.
For example, host-based information might be a system backup that harbors evidence at a
specific period in time. Host-based data collection efforts should include gathering information
in two different manners:
live data collection
forensic duplication
In some cases, the evidence that is required to understand an incident is ephemeral (temporary or
fleeting) or lost when the victim/relevant system is powered down. This volatile data can provide
critical information when attempting to understand the nature of an incident. Therefore, the first
step of data collection is the collection of any volatile information from a host before this
information is lost. The volatile data provides a “snapshot” of a system at the time you respond.
You record the following volatile information (a minimal collection sketch follows the list):
The system date and time
The applications currently running on the system
The currently established network connections
The currently open sockets (ports)
The applications listening on the open sockets
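A minimal sketch of collecting this volatile information is shown below, using the third-party psutil library; it is illustrative only, and on many systems listing all connections requires administrator privileges.

    # Snapshot of volatile data: time, running processes, and network activity.
    import datetime
    import psutil  # third-party: pip install psutil

    def volatile_snapshot():
        print("System time:", datetime.datetime.now().isoformat())
        print("Running applications:")
        for p in psutil.process_iter(["pid", "name", "username"]):
            print(f"  {p.info['pid']:>6}  {p.info['name']}  ({p.info['username']})")
        print("Network connections, open sockets and listening applications:")
        for c in psutil.net_connections(kind="inet"):
            laddr = f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else "-"
            raddr = f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else "-"
            print(f"  {c.status:<12} {laddr:<22} -> {raddr:<22} pid={c.pid}")

    if __name__ == "__main__":
        volatile_snapshot()

In a real response the output would be written to removable media or sent over the network, not to the compromised host's own disk.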
In-depth response
This goes beyond obtaining merely the volatile data. The CSIRT obtains enough additional
information from the target/victim system to determine a valid response strategy. Nonvolatile
information such as log files is collected to help understand the nature of the incident.
Seek Legal Counsel: Before continuing, we wish to note the importance of legal guidance in
responding to a possible data breach. Your legal obligations in the event of exposing patient
medical records differ dramatically from your obligations in the event of revealing a partner
company's business plans or your customers' credit card numbers. A prompt call to an attorney
who specializes in privacy and data security issues is critical. Preferably, your business would
have prepared a data breach response plan under legal guidance in advance, helping avoid the
possibility of early missteps. Nothing in this section should be interpreted as a substitute for
legal advice.
Seek Technical Help: Specialists such as Elysium Digital with experience in assisting firms
facing a possible breach can be retained to investigate. The goals of the technical investigation
are:
Preserve the Evidence: The success of the investigation depends on the quality of the available
evidence. To foil a potential investigation, attackers may delete files or perform other
modifications to cover their tracks. By using or modifying a system after a breach, you may
inadvertently destroy evidence of actions that a forensic investigation could otherwise uncover.
Turn off your server(s) (just pull out the power plug)
Swap all hard drives out of the affected servers
Use a properly-trained forensic consultant to create court-defensible forensic images of
server hard drives.
Rebuild a secured system on new drives
Create forensically-sound images of backup media, network monitoring details (such as
network logging, router/firewall logs, or intrusion detection systems), and all relevant log
files, as these may contain evidence of the attack over time
Document and preserve copies of your network layout and configuration at the time of
the attack, including network topology and the configuration of any routers and firewalls.
Be careful how images are collected, and always use forensic specialists for this task. For all of
their other invaluable skills, IT departments often are not aware of the specific steps that enable
a preservation effort to stand up in court. Forensic images are perfect copies of the entire
contents of a storage drive, including deleted and fragmentary data that cannot be captured by
doing an ordinary file copy. GHOST and similar backup tools do not capture forensically-sound
images. If in doubt about how to collect forensic images, do not hesitate to call Elysium for free
advice.
If you cannot remove the hard drives and cannot immediately call in a forensic investigator, you
should attempt to back up as much of the system as possible before modifying the system to
secure it. A complete copy of all data would be ideal, but at a minimum, you should preserve
originals of any modified files and take care to ensure that your preservation process retains
metadata such as creation and last-modified dates. You should also store copies of any system,
application, server, FTP, database, and other logs as soon as possible. Even if an attacker has
modified log files, they still may contain useful information. Preservation of backups and logs is
particularly urgent if they may be deleted or overwritten as time passes.
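As a minimal sketch of preserving metadata alongside content, the following hypothetical Python snippet records each file's size, timestamps, and hash to a preservation log before any remediation touches the files. Note that it captures the stat() metadata before reading the content, since reading a file can itself update its last-accessed time.

    # Record file metadata and a content hash before remediation begins.
    import csv, hashlib, os, time

    def record_metadata(paths, out_csv="preservation_log.csv"):
        with open(out_csv, "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["path", "size_bytes", "modified", "accessed", "sha256"])
            for path in paths:
                st = os.stat(path)  # capture metadata before touching content
                with open(path, "rb") as fh:
                    digest = hashlib.sha256(fh.read()).hexdigest()
                fmt = "%Y-%m-%d %H:%M:%S"
                w.writerow([path, st.st_size,
                            time.strftime(fmt, time.localtime(st.st_mtime)),
                            time.strftime(fmt, time.localtime(st.st_atime)),
                            digest])

    record_metadata(["/var/log/auth.log"])  # hypothetical log path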
In addition, you should document any changes that you make (system settings, accounts, firewall
settings, etc.) and any remediation steps that you undertake. If these changes can be
independently documented or verified via log files, copies of files, etc., you should also preserve
evidence supporting the changes so that others can verify them later. For example, you could
take a screenshot of the configuration screen or back up the configuration files both before and
after a change is made. Among other benefits, this evidence demonstrates your remediation
process to any interested parties who may assess your efforts to mitigate the breach.
When working with a specialist such as Elysium on the response to a data breach, bear in mind
that the specialist has a duty to provide independent analysis.
Having an outsider dig through your systems following a suspected data breach may be
intimidating. If you believe that you have already patched any vulnerabilities, you may be
tempted to simply move on. However, an investigation can be a critical step, even when not
legally required. An investigation may settle questions regarding the data accessed and provide
confidence in any remediation steps, including confirming that the attacker has not left any “back
doors” in your system to maintain access for future attacks. The investigation also may uncover
ways to reduce future risk, preempting the need to repeat an unpleasant process. Experienced
investigators understand that this review may be unpleasant, and they attempt to perform their
work objectively and professionally. Preservation of evidence can make this process as smooth
and painless as possible, helping you to achieve the goal of protecting both your data and the
trust you have built with your customers and clients.
Even though data privacy is high on the security agenda these days, security has other important
goals including protecting corporate data and intellectual property (IP) as well as controlling
fraud. The relative importance of this protection will always be driven by the likelihood of
attack, coupled with the value of the information or product that is lost. Stakes are clearly highest
for organizations performing transactions in untrusted locations and over the Internet, those
whose competitive position is driven by the data they own, or manufacturers of high-value
products, particularly in outsourced facilities. Fraud in these environments can take many forms.
Risks
Products and services from Thales e-Security can help many different types of organizations
reduce the risk of fraud and theft of intellectual property. Cryptography can play a vital role in
ensuring the confidentiality of information, particularly as it is exposed in hostile environments,
and can be used to verify the integrity and authenticity of almost any form of electronic
document or message. In some cases cryptographic protection, particularly in the form of
encryption, can be easily deployed in a completely transparent way. Network level encryption
using the Datacryptor family of encryption platforms can be used to protect virtually any form of
backbone network connection and is particularly valuable in protecting virtual private networks
(VPNs) to remote manufacturing or logistics locations.
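As a small, generic illustration of the kind of protection described (not the Datacryptor product itself), the sketch below uses the Python cryptography package's Fernet construction, which provides both confidentiality and integrity: decryption fails if the ciphertext has been tampered with.

    # Authenticated encryption of a document: confidentiality plus integrity.
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    key = Fernet.generate_key()  # in practice, keys live in an HSM or key manager
    f = Fernet(key)

    token = f.encrypt(b"quarterly design documents")
    plaintext = f.decrypt(token)  # raises InvalidToken if the data was altered
    assert plaintext == b"quarterly design documents"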
Looking beyond even key management, organizations also need to protect the application
processes that actually use those keys, for example to approve the issuance of an embedded
digital ID for a manufactured device, approve the loading of secure firmware, sign a
transaction, or count a vote. In remote and often untrusted locations these processes can be
made secure only through advanced levels of physical and logical security. The CodeSafe
capability of nShield HSMs enables high-tech manufacturers and software providers to create
tamper-resistant processes that protect their critical processes, business models, and intellectual
property, reducing the risk of abuses and counterfeiting. With CodeSafe, organizations can
secure sensitive processes (such as identity management or metering) behind a physically
tamper-resistant barrier. As a result, manufacturers can be more confident in their ability to
outsource securely, while software providers can maximize revenue by enforcing license
agreements through secure metering capabilities.
Infringement of trademarks and copyrights can be criminal offences, as well as being actionable
in civil law. A range of criminal provisions are set out in the relevant Acts, and other offences
such as those under the Fraud Act 2006 may also be applied. These criminal offences are most
often associated with organized crime groups who are dealing for profit in fake branded goods or
pirated products. However, these offences can also occur in legitimate business, for example
where infringing goods or content are handled in the course of trade.
Criminal IP offences are also known as “IP crime” or “counterfeiting” and “piracy”.
Counterfeiting can be defined as the manufacture, importation, distribution and sale of products
which falsely carry the trade mark of a genuine brand without permission and for gain or loss to
another. Piracy, which includes copying, distribution, importation etc. of infringing works, does
not always require direct profits from sales - wider and indirect benefits may be enough along
with inflicting financial loss onto the rights holder. For example possession of an infringing copy
of a work protected by copyright in the course of your business may be a criminal offence under
section 107 (1)(c) of the Copyright, Designs and Patents Act 1988.
Not all cases that fall within the criminal law provisions will be dealt with as criminal offences,
and in many cases business-to-business disputes are tackled under the civil law. Further
information is available on the relevant law and in guides to the offences.
“Infringement” is a legal term for an act that breaks a law. IP rights are infringed when a
product, creation or invention protected by IP laws is exploited, copied or otherwise used
without the proper authorization or permission from the person who owns those rights or their
representative.
All of these acts will constitute a civil infringement but some copyright and trade mark
infringements may also be a criminal offence such as the sale of counterfeits including clothing.
Trading standards are primarily responsible for enforcing the criminal IP laws, with support from
the police, and with investigative assistance from the IP rights owners. Private criminal
investigations and prosecutions may also be launched by the right owners in some cases.
Criminal IP offences may be taking place in your workplace in a variety of ways. These include:
employees selling copies of protected works or supplying fake goods within the working
environment
company servers and equipment being used to make available (i.e. uploading) infringing
content to the internet with the knowledge of management
using the work intranet to offer for sale infringing products to colleagues
external visitors entering your premises, to sell counterfeit and pirated items
Not only can IP crime make you and your business liable to a potential fine of up to £50,000 and
a custodial sentence of up to 10 years; counterfeiting and piracy can also affect your business
security and reputation, threaten your IT infrastructure and risk the health and safety of your
staff and consumers.
IP rights infringement, and in particular IP crime, threatens legitimate businesses and their staff,
and undermines consumer confidence. Your business may face a number of risks if you do not
take appropriate steps to tackle IP crime within your working environment.
Failure to address the problem could leave you and your business liable and at risk to criminal
and/or civil action. Under civil law you may be subject to court action and have to pay damages.
Criminal action may lead to unlimited fines, or a custodial sentence (which could be up to a
maximum of 10 years). You may also be vulnerable to threats from computer viruses and
malware.
You need to think not only about the way your business is conducted, but also about the
behaviour of your staff, whose actions at work may incur liability for the organisation as
a whole.
Activities which result in IP rights being infringed can give rise to both civil and criminal
liability. In some cases these activities may relate to something done directly by the business.
In other instances they may relate to an independent action of a member of staff at work.
There are many security risks to a business from IP crime. These include the infiltration of
viruses and malware which can aid identity theft, threaten system security and slow down IT
networks.
Good businesses attract respect and the trust of future partners. Adverse publicity relating to any
civil or criminal court action could affect how other businesses view you and how they choose to
deal with you.
IP crime can impact on the productivity of your business. Resource implications, such as staff
neglecting work tasks to carry out illegal activities, and IT system failure due to malware
problems, can have a detrimental effect.
IP rights are unfamiliar to many and can be complicated. One item can be protected by a number
of different IP rights, which can be infringed in different ways. A music CD, for example, will
have copyright in the music, so-called “mechanical” rights in the recording and design rights in
the cover, and well-known brands often register their names as trade marks.
In order to protect your business and avoid serious legal and security risks, it is important to
understand where infringement can occur and to take preventative steps. To assist in identifying
instances where IP rights infringement can occur, a range of activities and examples have been
identified, and advice is available on steps to help you deal with an IP rights infringement in
your business.
A business can infringe the IP rights of others by not having the correct license to support the
activities that take place within the business.
Staff infringing IP rights at work can impact productivity, put your systems at risk from malware
and put you and your business at risk of legal liability for their actions.
Letting traders onto your premises to sell items to your staff could leave your business facing
legal liability. It can also compromise your site security plans.
There are many more potential problem areas, therefore it is vital that you and your business
understand how these problems might arise, so you can take steps to avoid them.
The needs of businesses will vary. What is right for a factory unit or a small office may not suit
larger more complex organisations. The common thread is that doing nothing is not a sensible
option given the risks it can pose for you and your business. Whether your business is small or
large, preventative steps will help to safeguard you and your business, but once infringing activities
have been identified, a fast and effective response is essential. You therefore need to be prepared,
even if you are not currently aware of any such problems in your business.
Clear processes and procedures will help you to embed respect for IP with managers and staff,
creating the right company ethos and ensuring that you identify potential problem areas and
manage them properly.
Staff and managers need to understand what IP is, how IP rights can be infringed and the risks
this can pose - both for them and for the business. Staff in corporate functions, such as Human
Resources (HR), Information Technology (IT), finance and procurement have a particularly
important role to play in spreading information and good practice.
Guidance is available on the procedures and processes you and your business can adopt to
prevent infringement occurring. Information includes: HR policies, license management and
processes for site visits. Advice on what to do if you identify any criminal IP offences relating to
IP rights infringement taking place in your business is also covered.
Practical tools have been developed to help you educate staff and management about the
importance of IP and how to comply with the relevant law. These include sample slide packs to
help raise awareness and improve understanding.
5. Civil infringement
The infringement of an IP right is a civil matter in the case of patents, trademarks, designs and
copyright. In the case of trademarks and copyright the act may also constitute a criminal IP
offence.
There are many potential problem areas; therefore it is vital that you and your business take
action to avoid these problems. Advice and guidance on dealing with IP rights infringement is
available.
It is important that you and your business take preventative steps to avoid infringing the IP rights
of others by seeking permission - which usually means obtaining a license for the activity.
If you are believed to be infringing IP rights, the owner may wish to take action through the civil
courts; other methods can also be used, such as mediation, the use of “cease and desist” letters or
by seeking to use other services in resolving disputes.
6. Copyright infringement
Copyright owners generally have the right to authorize or prohibit any of the following things in
relation to their works:
Copying the work in any way. For example, photocopying, reproducing a printed page by
handwriting, typing or scanning into a computer, or making a copy of recorded music
Issuing copies of the work to the public
Renting or lending copies of the work to the public. However, some lending of copyright
works falls within the Public Lending Right Scheme and this lending does not infringe
copyright
Performing, showing or playing the work in public. Obvious examples are performing
plays and music, playing sound recordings and showing films or videos in public. Letting
a broadcast be seen or heard in public also involves performance of music and other
copyright material contained in the broadcast
Broadcasting the work or other communication to the public by electronic transmission.
This includes putting copyright material on the internet or using it in an on demand
service where members of the public choose the time that the work is sent to them
Making an adaptation of the work, such as by translating a literary or dramatic work,
transcribing a musical work or converting a computer program into a different computer
language or code
Copyright is infringed when any of the above acts are done without permission, whether directly
or indirectly and whether the whole or a substantial part of a work is used, unless what is done
falls within the scope of exceptions to copyright permitting certain minor uses.
Copyright is essentially a private right, so decisions about how to enforce it (that is, what to do
when your copyright work is used without your permission) are generally for you to take.
7. Patent infringement
The owner of a patent can take legal action against you and claim damages if you infringe their
patent.
Patent applicants have to provide a full description of the invention. You can ask for an opinion
to check if what you want to do would infringe a particular patent. If it would infringe, you may
be able to agree terms with the owner, or even buy the patent from them.
If you are infringing get professional advice quickly from a patent attorney or solicitor, because
the owner can sue you.
There are two basic types of defence if someone claims you are infringing their patent:
You are not infringing – what you are doing does not infringe their patent claims, or
The patent is invalid – you can take legal action to challenge the validity of the patent. If
you win, their patent may be cancelled (revoked).
The loser usually has to pay both sides’ costs, so think hard before starting legal action. If
someone intends to sue you for infringement, you can try to reach agreement with them on using
their patent. Get professional advice from a patent attorney or solicitor, but do not do or say
anything yourself.
8. Design infringement
By registering a design the proprietor obtains the exclusive right for 25 years (provided renewal
fees are paid every 5 years) to make, offer, put on the market, import or export the design, or
stock the product for the above purposes.
These rights are infringed by a third party who does any of the above with the design, for
commercial gain.
The Intellectual Property Office (IPO) cannot advise you on whether your design would infringe
an existing design. If you are concerned that you may be infringing, you may wish to obtain
professional advice from a patent attorney, trade mark attorney or a solicitor.
If you are infringing you should be aware that the owner may be able to sue you. The legal
practitioner may also be able to advise you on agreeing, if it is possible, some form of terms
between you and the owner of the registered design (such as licensing the right to use the design
or buying it from them).
There are two basic types of defence if someone claims you are infringing their design:
You are not infringing – what you are doing does not infringe their design, or
The registration is invalid – you can take legal action to challenge the validity of the
registered design.
Get professional advice. You may be able to get a court order to force the infringer to cease
trading. You should then consider whether to negotiate or to take legal action for compensation.
However, infringement actions must be taken to the High Court of England and Wales, the High
Court of Northern Ireland or the Court of Session in Scotland. The IPO does not handle such
actions.
9. Trade mark infringement
If you use an identical or similar trade mark for identical or similar goods and services to a
registered trade mark - you may be infringing the registered mark if your use creates a likelihood
of confusion on the part of the public. This includes the case where because of the similarities
between the marks the public are led to the mistaken belief that the trade marks, although
different, identify the goods or services of one and the same trader.
Where the registered mark has a significant reputation, infringement may also arise from the use
of the same or a similar mark which, although not causing confusion, damages or takes unfair
advantage of the reputation of the registered mark. This can occasionally arise from the use of
the same or similar mark for goods or services which are dissimilar to those covered by the
registration of the registered mark.
There is no available remedy for trade mark infringement if the earlier trade mark is
unregistered. Some unregistered trade marks may be protected under the common law, and this
is known as passing off. However, whether or not they are protected will depend on the
particular circumstances of the case.
Get legal advice. There may be a number of potential courses of action or defenses open to you,
but this will very much depend on the particular circumstances of your case.
Some traders who think they may be infringing an earlier trade mark choose to cease trading
under the offending sign; others choose to approach the earlier trade mark owner and attempt to
negotiate a way forward that suits both parties, which may include a co-existence agreement.
If you decide that you are not infringing, or you have a good defence, you may decide to stand
your ground or even to sue the trade mark holder for making unjustified threats. In the worst case
scenario, you may have to change your trade mark and re-brand your products or services.
Get legal advice as the most suitable course of action will depend on the particular circumstances
of your case.
One potential option open to you is to write to the infringer. However you must be satisfied that
the earlier trade mark that you own and the activities of the infringer justify this. This is because
the law also protects traders from unjustifiable threats of trade mark infringement.
You may be able to negotiate a settlement which suits both parties, which may involve a co-
existence agreement. Another option is that you may be able to get a court order to force the
infringer to cease trading and pay compensation for damages. However, infringement actions
must be taken to the High Court or, in Scotland, the Court of Session. The IPO does not handle
such actions.
A coexistence agreement is a legal agreement whereby two parties agree to trade in the same or
similar market using an identical or similar trade mark.
The agreement is drawn up between parties and sets the parameters for each to use their trade
mark without the fear of infringement or legal action from the other(s).
The coexistence agreement sets the terms and conditions the parties have agreed, allowing each
other to undertake their respective business activities.
The specific details of a coexistence agreement are a matter only for the parties involved to
negotiate and the IPO cannot become a party to the negotiations.
Many groups of copyright owners are represented by a collecting society. A collecting society
will be able to agree licenses with users on behalf of owners and will collect any royalties the
owners are owed. In many cases a collecting society will offer a blanket license for all the works
by owners it represents, for example for music to be played in a shop or restaurant.
There are many collecting societies who operate for various types of copyright material:
printed material
artistic works and characters
broadcast material
TV listings
film
The Copyright Tribunal is an independent tribunal established by the Copyright, Designs and
Patents Act 1988. Its main role is to adjudicate in commercial licensing disputes between
collecting societies and users of copyright material in their business. It does not deal with
copyright infringement cases or with criminal “piracy” of copyright works. Copyright
infringement can be dealt with in the civil courts such as the High Court (Chancery Division),
the Intellectual Property Enterprise Court and certain county courts where there is also a
Chancery District Registry. Criminal matters are dealt with in the criminal courts. Where parties
are unable to reach agreement in commercial licensing disputes they might also wish to consider,
as an alternative to the Copyright Tribunal, mediation services.
Legal professionals who specialize in IP are useful in helping you to understand, obtain and
defend your IP rights. Details of professionals in your area can be obtained from the relevant
professional bodies.
If you have concerns about, or are aware of, any person who may be involved in IP crime, you
may report this through your local trading standards services (the leading authority enforcing IP
legislation), via the Citizens Advice Bureau, or through the anonymous reporting systems of the
charity CrimeStoppers and Action Fraud.
People involved with IP crime are often involved with other types of crime, such as benefit
fraud, drugs and people trafficking. It is therefore important to report any instance of IP crime
that you are aware of to the enforcement authorities.
Technology can be a double-edged sword. It can be the source of many benefits but it can also
create new opportunities for invading your privacy, and enabling the reckless use of that
information in a variety of decisions about you.
Information rights and obligations: What information rights do individuals and organizations
possess with respect to themselves? What can they protect?
Property rights and obligations: How will traditional intellectual property rights be protected
in a digital society in which tracing and accounting for ownership is difficult and ignoring such
property rights is so easy?
Accountability and control: Who can and will be held accountable and liable for the harm
done to individual and collective information and property rights?
System quality: What standards of data and system quality should we demand to protect
individual rights and the safety of society?
Ethical choices are decisions made by individuals who are responsible for the consequences of
their actions. Responsibility is a key element and means that you accept the potential costs,
duties, and obligations for the decisions you make. Accountability is a feature of systems and
social institutions and means mechanisms are in place to determine who took responsible action,
and who is responsible. Liability is a feature of political systems in which a body of laws is in
place that permits individuals to recover the damages done to them by other actors, systems, or
organizations. Due process is a related feature of law-governed societies and is a process in
which laws are known and understood, and there is an ability to appeal to higher authorities to
ensure that the laws are applied correctly.
Privacy is the claim of individuals to be left alone, free from surveillance or interference from
other individuals or organizations, including the state. Most American and European privacy law
is based on a regime called Fair Information Practices (FIP) first set forth in a report written in
1973 by a federal government advisory committee (U.S. Department of Health, Education, and
Welfare, 1973).
In Europe, privacy protection is much more stringent than in the United States. Unlike the United
States, European countries do not allow businesses to use personally identifiable information
without consumers’ prior consent. Informed consent can be defined as consent given with
knowledge of all the facts needed to make a rational decision.
Working with the European Commission, the U.S. Department of Commerce developed a safe
harbor framework for U.S. firms. A safe harbor is a private self-regulating policy and
enforcement mechanism that meets the objectives of government regulators and legislation but
does not involve government regulation or enforcement.
Internet technology has posed new challenges for the protection of individual privacy.
Information sent over this vast network of networks may pass through many different computer
systems before it reaches its final destination. Each of these systems is capable of monitoring,
capturing, and storing communications that pass through it.
Trade Secrets
Any intellectual work product – a formula, device, pattern, or compilation of data – used for a
business purpose can be classified as a trade secret, provided it is not based on information in the
public domain.
Copyright
Copyright is a statutory grant that protects creators of intellectual property from having their
work copied by others for any purpose during the life of the author plus an additional 70 years
after the author’s death.
Patents
A patent grants the owner an exclusive monopoly on the ideas behind an invention for 20 years.
The congressional intent behind patent law was to ensure that inventors of new machines,
devices, or methods receive the full financial and other rewards of their labor and yet make
widespread use of the invention possible by providing detailed diagrams for those wishing to use
the idea under license from the patent’s owner.
Lower-level employees may be empowered to make minor decisions, but the key policy
decisions may be as centralized as in the past.
Computer abuse is the commission of acts involving a computer that may not be illegal but that
are considered unethical. The popularity of the Internet and e-mail has turned one form of
computer abuse – spamming – into a serious problem for both individuals and businesses. Spam
is junk e-mail sent by an organization or individual to a mass audience of Internet users who
have expressed no interest in the product or service being marketed.
Reengineering work is typically hailed in the information systems community as a major benefit
of new information technology. It is much less frequently noted that redesigning business
processes could potentially cause millions of mid-level managers and clerical workers to lose
their jobs. One economist has raised the possibility that we will create a society run by a small
“high tech elite of corporate professionals…in a nation of permanently unemployed” (Rifkin,
1993). Careful planning and sensitivity to employee needs can help companies redesign work to
minimize job losses.
Several studies have found that certain ethnic and income groups in the United States are less
likely to have computers or online Internet access even though computer ownership and Internet
access have soared in the past five years. A similar digital divide exists in U.S. schools, with
schools in high-poverty areas less likely to have computers, high-quality educational technology
programs, or Internet access available for their students.
The most common occupational disease today is repetitive stress injury (RSI). RSI occurs when
muscle groups are forced through repetitive actions often with high-impact loads (such as tennis)
or tens of thousands of repetitions under low-impact loads (such as working at a computer
keyboard).
The single largest source of RSI is computer keyboards. The most common kind of computer-
related RSI is carpal tunnel syndrome (CTS), in which pressure on the median nerve through the
wrist’s bony structure, called a carpal tunnel, produces pain. Millions of workers have been
diagnosed with carpal tunnel syndrome. Computer vision syndrome (CVS) refers to any
eyestrain condition related to display screen use in desktop computers, laptops, e-readers, smart-
phones, and hand-held video games. Its symptoms, which are usually temporary, include
headaches, blurred vision, and dry and irritated eyes.
The newest computer-related malady is technostress, which is stress induced by computer use.
Its symptoms include aggravation, hostility toward humans, impatience, and fatigue.
Technostress is thought to be related to high levels of job turnover in the computer industry, high
levels of early retirement from computer-intense occupations, and elevated levels of drug and
alcohol abuse.
Summary
Technology can be a double-edged sword. It can be the source of many benefits but it can also
create new opportunities for invading your privacy, and enabling the reckless use of that
information in a variety of decisions about you. The computer has become a part of our lives –
personally as well as socially, culturally, and politically. It is unlikely that the issues and our
choices will become easier as information technology continues to transform our world. The
growth of the Internet and the information economy suggests that all the ethical and social issues
we have described will be heightened further as we move into the first digital century.
“To become the absolute best place to work, communication and collaboration will be important,
so we need to be working side-by-side,” Mayer wrote in a memo to employees. “Speed and
quality are often sacrificed when [employees] work from home.”
Just three days prior to the memo’s distribution, Nicholas Bloom, professor of economics,
published a study called “Does Working from Home Work? Evidence from a Chinese
Experiment.” The study found that employees who worked from home enjoyed a 13 percent
increase in productivity compared to their office-bound peers; it has since been extensively
cited in articles contesting Mayer’s approach.
“It’s very far out there to say that no one [at all] can work at home,” Bloom said. “I can see two
reasons [for the extreme action]. One [is] to ‘reset’ everything and reconnect people to the office.
The second is that it’s a cheap way to downsize—to make people quit.”
I don’t know about the motivation “to make people quit,” but I can sympathize with the idea of
getting people to reconnect. We have become too dependent on electronic devices in our lives –
smartphones, tablets, laptops, and desktops. It seems as though few people want to talk directly
with others. We vent our feelings through the impersonal world of Facebook and Twitter and say
things – sometimes hurtful things – that we would not say in person and to someone’s face.
I like to analyze ethical issues using a utilitarian analysis, which calls for evaluating benefits and
harms of alternative actions. Starting with the benefits, telecommuting supports alternative
lifestyles, especially for two-wage-earner families with children. The quality of life can improve
through work-life balance decisions, and children benefit by having a parent around at times
when the child would otherwise be in day care.
Telecommuting also opens up opportunities for the disabled to be more productive members of
the workforce by utilizing the skills they have developed. In other words there is the human
element of telecommuting that seems to be more important than the fact there is little “face
time.” Face-to-face meetings still can occur when needed through advance planning.
A variety of concerns have also been raised about telecommuting.
Researchers have found that while teleworkers have more flexibility to manage work and life, it
also creates distractions, particularly when a home office is not clearly defined. Experts say
working from the kitchen table is not the best as distractions in the home can make it difficult to
focus on work.
From a corporate culture standpoint, certain positions are more conducive to telework, including
professional specialty positions; executive, administrative and managerial roles; and sales and
administrative support, including clerical work. The services industry employs more
telecommuters than any other industry. Teleworkers and employers both agree it takes discipline
to telecommute. Tact and communications skills are important because of the loss of face-to-face
contact with clients, coworkers and bosses. This is especially true with email because the intent
of messages can be lost in the interpretation without the ability to see the nonverbal cues.
From an ethical viewpoint, a conclusion can be drawn that teleworking has advantages for
employers and employees for the reasons stated earlier. As technology continues to improve and
the need to reduce costs remains at the forefront of improved profits and earnings, more
companies will begin to look for ways to implement telework programs. Companies that have
succeeded use telework as a competitive advantage to recruit and retain the best talent, as a cost
efficiency measure to improve profit margins, and as a cost effective way to do business.
Yahoo's policy may be bucking a trend in this regard and it might cost them good employees in
the long run.
Public-sector agencies, like their counterparts in the private sector, are embracing the idea of
telework. President Barack Obama signed the Telework Enhancement Act of 2010, which
required Federal agencies to improve their use of telework as a strategic management tool. As
early as 2003, the federal government began experimenting with telework when 130 employees
from nine federal departments and agencies participated in a free telecenter program offered by
the General Services Administration. The GSA surveyed the workers after a 60-day pilot
program and found 75 percent of those that participated chose to continue teleworking.
It has been said that ethics is all about what you do when no one is looking. This applies to
telecommuting in particular since issues of supervision and what one does while working on a
job create challenges for those who monitor behavior.
Code of Ethics
I acknowledge:
Standard of Conduct
Keep my personal knowledge up-to-date and ensure that proper expertise is available
when needed.
Share my knowledge with others and present factual and objective information to
management to the best of my ability.
Accept full responsibility for work that I perform.
Not misuse the authority entrusted to me.
Not misrepresent or withhold information concerning the capabilities of equipment,
software or systems.
Make every effort to ensure that I have the most current knowledge and that the proper
expertise is available when needed.
Avoid conflicts of interest and ensure that my employer is aware of any potential conflicts.
Present a fair, honest, and objective viewpoint.
Protect the proper interests of my employer at all times.
Protect the privacy and confidentiality of all information entrusted to me.
Not misrepresent or withhold information that is germane to the situation.
Not attempt to use the resources of my employer for personal gain or for any purpose
without proper approval.
Not exploit the weakness of a computer system for personal gain or personal satisfaction.
The Computer Ethics Institute, a nonprofit organization whose mission is to advance technology
by ethical means, defined the following ethical values in 1992 as a guide to computer ethics:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people's computer work.
3. Thou shalt not snoop around in other people's computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy or use proprietary software for which you have not paid.
7. Thou shalt not use other people's computer resources without authorization or proper
compensation.
8. Thou shalt not appropriate other people's intellectual output.
9. Thou shalt think about the social consequences of the program you are writing or the
system you are designing.
10. Thou shalt always use a computer in ways that ensure consideration and respect for your
fellow humans.
The types of activity taking place on the Net span a wide range of services. What are the
implications of this range of services for the ethics debate?
1. The Internet is not one network but many – indeed it is a network of networks. It does not
provide one type of service offering but many – and this range will increase. These
services have many different characteristics and the ethics debate has to take account of
this – how we approach chat rooms may not be how we approach newsgroups,
especially where children are concerned.
2. The Internet has many actors with different interests. Infrastructure companies like Cisco
or Oracle may have little or no involvement in content. Microsoft may start by ‘simply’
providing a browser (Explorer) and then go into the portal business (MSN). Not all
Internet Service Providers (ISPs) provide access to all newsgroups and most chat rooms
are not hosted by ISPs. If one is attempting to bring a sense of ethics to the Internet in any
particular instance, it is essential to know who has the control and the responsibility.
3. There is still a poor sense of understanding of the issues. On the one hand, those who
campaign for more ‘control’ of the Internet often have little understanding of the
technological complexities. Typically they do not know how newsgroups and chat rooms
are hosted, and many politicians do not know the difference between a newsgroup and a
chat room.
In considering whether there is a place for ethics on the Internet, we need to have understanding
of what such a grand word as ‘ethics’ means in this context. I suggest that it means four things:
This means that the World Wide Web is not the wild Web, but instead a place where
values in the broadest sense should take a part in shaping content and services. This is
recognition that the Internet is not something apart from civil society, but increasingly a
fundamental component of it.
This means that we do not invent a new set of values for the Internet but, for all the
practical problems, endeavor to apply the law which we have evolved for the physical
space to the world of cyberspace. These laws might cover issues like child pornography,
race hate, libel, copyright and consumer protection.
This means recognizing that, while originally most Internet users were white, male
Americans, now the Internet belongs to all. As a pervasively global phenomenon, it
cannot be subject to one set of values like a local newspaper or national television station;
somehow we have to accommodate a multiplicity of value systems.
This means recognizing that users of the Internet – and even non-users – are entitled to
have a view on how it works. At the technical level, this is well understood – bodies like
the Internet Engineering Task Force (IETF), the Internet Corporation for Assigned
Names and Numbers (ICANN) and the World Wide Web Consortium (W3C) endeavor to
understand and reflect user views. However, at no level do we have similar mechanisms
for capturing user opinions on content and access to it.
In seeking to apply a sense of ethics to cyberspace, there are some major problems but also some
useful solutions.
Jurisdictional competence: Laws are nation-based but cyberspace is global. How does
one apply up to 170 separate and different legal systems to the Internet?
Technological complexities: The Internet is a complex technical network and one cannot
simply apply ‘old’ regulatory conventions from the worlds of publishing or broadcasting.
The ‘geeks’ vs the ‘suits’: As many Internet-related companies have grown, there is
now an internal tension between the old-timers, with their vast technical knowledge, and
the newer, commercially focused managers.
We need to give Internet users more relevant information. This should start at the point at
which one purchases a PC or other Internet-enabled device. There should then be further
information in both appropriate physical places – like school rooms – and relevant
cyberspaces – like child-focused chat rooms.
We need a more informed debate through education and awareness campaigns. We
cannot leave the terrain to civil libertarian ‘purists’, who too often see the Internet as a
space that should be beyond all regulation.
Under professional codes of conduct such as the AICPA’s Code of Professional Conduct, a
member will be considered to have knowingly misrepresented facts when he or she knowingly:
a. Makes, or permits or directs another to make, materially false and misleading entries in
an entity’s financial statements or records; or
b. Fails to correct an entity’s financial statements or records that are materially false and
misleading when he or she has the authority to record an entry; or
c. Signs, or permits or directs another to sign, a document containing materially false and
misleading information.
The following are examples of situations in which a member’s objectivity could be viewed as
impaired:
A member has been asked to perform litigation services for the plaintiff in connection
with a lawsuit filed against a client of the member's firm.
A member has provided tax or personal financial planning (PFP) services for a married
couple who are undergoing a divorce, and the member has been asked to provide the
services for both parties during the divorce proceedings.
In connection with a PFP engagement, a member plans to suggest that the client invest in
a business in which he or she has a financial interest.
A member provides tax or PFP services for several members of a family who may have
opposing interests.
A member has a significant financial interest, is a member of management, or is in a
position of influence in a company that is a major competitor of a client for which the
member performs management consulting services.
A member serves on a city's board of tax appeals, which considers matters involving
several of the member's tax clients.
A member has been approached to provide services in connection with the purchase of
real estate from a client of the member's firm.
A member refers a PFP or tax client to an insurance broker or other service provider,
which refers clients to the member under an exclusive arrangement to do so.
A member recommends or refers a client to a service bureau in which the member or
partner(s) in the member's firm hold material financial interest(s).
Where a member and his or her supervisor disagree about the preparation of financial statements
or the recording of transactions:
1. The member should consider whether (a) the entry or the failure to record a transaction in
the records, or (b) the financial statement presentation or the nature or omission of
disclosure in the financial statements, as proposed by the supervisor, represents the use of
an acceptable alternative and does not materially misrepresent the facts. If, after
appropriate research or consultation, the member concludes that the matter has
authoritative support and/or does not result in a material misrepresentation of fact, the
member need do nothing further.
A member or a member’s firm may be requested by a client:
1. To perform tax or consulting services engagements that involve acting as an advocate for
the client.
2. To act as an advocate in support of the client's position on accounting or financial
reporting issues, either within the firm or outside the firm with standard setters,
regulators, or others.
Services provided or actions taken pursuant to such types of client requests are professional
services [ET section 92.11] governed by the Code of Professional Conduct and shall be
performed in compliance with Rule 201, General Standards [ET section 201.01], Rule 202,
Compliance With Standards [ET section 202.01], and Rule 203, Accounting Principles [ET
section 203.01], and interpretations thereof, as applicable. Furthermore, in the performance of
any professional service, a member shall comply with rule 102 [ET section 102.01], which
requires maintaining objectivity and integrity and prohibits subordination of judgment to others.
When performing professional services requiring independence, a member shall also comply
with rule 101 [ET section 101.01] of the Code of Professional Conduct.
Moreover, there is a possibility that some requested professional services involving client
advocacy may appear to stretch the bounds of performance standards, may go beyond sound and
reasonable professional practice, or may compromise credibility, and thereby pose an
unacceptable risk of impairing the reputation of the member or the member’s firm with respect
to independence, integrity, and objectivity. In such circumstances, the member and the member’s
firm should consider whether it is appropriate to perform the service.