SSE Co-3


Session-20

Code Analysis
Software Security problems - Overview
• Nearly all attacks on software applications have one fundamental cause: The software is
not secure owing to defects in its design, coding, testing, and operations.
• A vulnerability is a software defect that an attacker can exploit.
• Discovering and eliminating bugs during code analysis takes care of roughly half of the
problem when tackling software security.
• Defects typically fall into one of two categories: bugs and flaws.

• A bug is a problem introduced during software implementation. Most bugs can be easily
discovered and corrected.
• Examples include buffer overflows, race conditions, unsafe system calls, and incorrect input validation.
• A flaw is a problem at a much deeper level. Flaws are more subtle, typically originating
in the design and being instantiated in the code.
• Examples include compartmentalization problems in design, error-handling problems, and broken or illogical access control.
Common Security Bugs and Attack Strategies
Input Validation
• Trusting user and parameter input is a frequent source of security
problems. Attacks that take advantage of little to no input validation
include cross-site scripting, illegal pointer values, integer overflows, and
DNS cache poisoning.
• Inadequate input validation can lead to buffer overflows and SQL injection defects.

Exceptions
• Exceptions are events that disrupt the normal flow of code.
• Programming languages may use a mechanism called an exception
handler to deal with unexpected events such as a divide-by-zero attempt,
violation of memory protection, or a floating-point arithmetic error.
Common Security Bugs and Attack Strategies Cont….
Buffer Overflows
• Buffer overflows are a leading method used to exploit software by remotely
injecting malicious code into a target application.
• The root cause of buffer overflow problems is that commonly used programming
languages such as C and C++ are inherently unsafe.
• No bounds checks on array and pointer references are carried out, meaning that a
developer must check the bounds or risk encountering problems.

SQL Injection
• SQL injection is currently the principal technique used by attackers to take
advantage of non-validated input defects to pass SQL commands through an
application for execution by a database.
• The security model used by many applications assumes that a SQL query is a
trusted command. In this case, the defect lies in the software's construction of a
dynamic SQL statement based on user input.
Common Security Bugs and Attack Strategies Cont….
Race Conditions
• Race conditions take on many forms but can be characterized as
scheduling dependencies between multiple threads that are not properly
synchronized, causing an undesirable timing of events.
Race conditions fall into three main categories:
• Infinite loops, which cause a program to never terminate or never return
from some flow of logic or control
• Deadlocks, which occur when the program is waiting on a resource
without some mechanism for timeout or expiration and the resource or
lock is never released.
• Resource collisions, which represent failures to synchronize access to
shared resources, often resulting in resource corruption or privilege
escalations
Source Code Review
• Source code review for security ranks high on the list of sound practices
intended to enhance software security.
• Structured design and code inspections, as well as peer review of source
code, can produce substantial improvements in software security.
• The reviewers meet one-on-one with developers and review code
visually to determine whether it meets previously established secure
code development criteria.
• Reviewers consider coding standards and use code review checklists as
they inspect code comments, documentation, the unit test plan, and the
code's compliance with security requirements.
• Unit test plans detail how the code will be tested to demonstrate that it
meets security requirements and design/coding standards intended to
reduce design flaws and implementation bugs.
Static Code Analysis Tools
• Static analysis tools look for a fixed set of patterns or rules in the code in a
manner similar to virus-checking programs.
• More advanced tools allow new rules to be added to the rulebase, but the tool will never find a problem for which no rule has been written.
Some examples of problems detected by static code analyzers:
• Syntax problems
• Unreachable code
• Unconditional branches into loops
• Undeclared variables
• Uninitialized variables
• Parameter type mismatches
• Uncalled functions and procedures
• Variables used before initialization
• Non-usage of function results
• Possible array bound errors
Metric Analysis
• Metric analysis produces a quantitative measure of the degree to which
the analyzed code possesses a given attribute. An attribute is a
characteristic or a property of the code.
Example:
When considered separately, "lines of code" and "number of security
breaches" are two distinct measures that provide very little business
meaning because there is no context for their values. A metric made up as
"number of breaches/lines of code" provides a more interesting relative
value. A comparative metric like this can be used to compare and contrast a
given system's "security defect density" against a previous version or similar
systems and thus provide management with useful data for decision making
Qualitative Software Metric Classification
1.Absolute
• Absolute metrics are numerical values that represent a characteristic of
the code, such as the probability of failure, the number of references to
a particular variable in an application, or the number of lines of code.
Absolute metrics do not involve uncertainty. There can be one and only
one correct numerical representation of a given absolute metric.
2.Relative
• Relative metrics provide a numeric representation of an attribute that
cannot be precisely measured, such as the degree of difficulty in testing
for buffer overflows
Session 21
Coding Practice
Coding Practice
• Coding practices typically describe
Methods
Techniques
Processes
Tools
Runtime libraries
that can prevent or limit exploits against vulnerabilities.
• Secure coding requires an understanding of programming errors that
commonly lead to software vulnerabilities.
• Secure coding can benefit from the proper use of software development
tools, including compilers.
• Compilers typically have options that allow increased or specific
diagnostics to be performed on code during compilation.
• Resolving these warnings can improve the security of the deployed
software system.
Most Vulnerabilities caused by Programming Errors

• 64% of the vulnerabilities in the NVD in 2004 were due to programming errors
– 51% of those were due to classic errors like buffer overflows, cross-site scripting, and injection flaws
– Heffley/Meunier (2004): Can Source Code Auditing Software Identify Common Vulnerabilities and Be Used to Evaluate Software Security?
• Cross-site scripting and SQL injection were at the top of the statistics (CVE, Bugtraq) in 2006
• "We wouldn't need so much network security if we didn't have such bad software security"
--Bruce Schneier
Unexpected Integer Values

• An unexpected value is not one you would expect to get using pencil and paper.
• Unexpected values are a common source of software vulnerabilities (even when this behavior is correct).

Fun With Integers

char x, y;
x = -128;   /* SCHAR_MIN where char is a signed 8-bit type */
y = -x;     /* 128 does not fit in a signed char; typically wraps to -128 */

if (x == y) puts("1");           /* prints "1": x and y are both -128 */
if ((x - y) == 0) puts("2");     /* prints "2": -128 - (-128) == 0 */
if ((x + y) == 2 * x) puts("3"); /* prints "3": both sides are -256 after promotion to int */
if (x != -y) puts("4");          /* prints "4": -y promotes to int 128, x is -128 */

On a typical implementation with an 8-bit signed char, all four lines print: x compares equal to y, and yet x does not compare equal to -y.
CERT Secure Coding Standard
CERT Vulnerability Analysis
CERT Secure Coding Initiative
Work with software developers and software development
organizations to eliminate vulnerabilities resulting from coding
errors before they are deployed.
– Reduce the number of vulnerabilities to a level where they
can be handled by computer security incident response
teams (CSIRTs)
– Decrease remediation costs by eliminating vulnerabilities
before software is deployed

• Advance the state of the practice in secure coding
• Identify common programming errors that lead to software vulnerabilities
• Establish standard secure coding practices
• Educate software developers
CERT Secure Coding Standards

• Identify coding practices that can be used to improve the security of software systems under development
Specific objectives include
– avoiding undefined behaviour
– avoiding implementation defined behaviour
– improving clarity for review and maintenance
– providing a consistent style across a program or set of
programs
– avoiding common programmer errors
– incorporating good practice, particularly with regard to
‘future proofing’
Community Development Process

• Secure coding standards development is a community effort
• Rules are solicited from the community
• Candidate rules and recommendations are published at www.securecoding.cert.org
• Threaded discussions are used for public vetting
• Candidate coding practices are moved into a secure coding standard when consensus is reached
Priorities and Levels

• L1 (P12-P27): high-severity, likely, inexpensive-to-repair flaws
• L2 (P6-P9): medium-severity, probable, medium-cost-to-repair flaws
• L3 (P1-P4): low-severity, unlikely, expensive-to-repair flaws
Best Practices in Coding - Summary
In a mobile phone, can you insert the SIM card the wrong way? You cannot, because the SIM card is designed to prevent it. The highest level of maturity is to give no opportunity to make mistakes.

• Some basic issues in every development team
– fundamental discipline issues in coding
– number of years of experience does not match delivery
– by the time they actually become good coders, they are promoted to leads
– 70% of all employees have 0-4 years of experience
• In India, a senior developer means 3-5 years of experience; in the USA, a senior developer means 6-10 years of experience. There is a huge gap in expectation and quality
• Coding is not rocket science
• The rework time in coding is the real killer
• 20-25% of software development time is spent on rework - that is, fixing earlier mistakes
Session-22

Software Security Testing


Software Security Testing
• The goal of security testing is to ensure that the software being tested is
robust and continues to function in an acceptable manner even in the
presence of a malicious attack.
• Security test activities are primarily performed to demonstrate that a
system meets its security requirements and to identify and minimize the
number of security vulnerabilities in the software before the system goes
into production.
• Testing is laborious, time-consuming, and expensive, so the choice of
testing approaches should be based on the risks to the software and the
system.
• An effective testing approach balances efficiency and effectiveness to
identify the greatest number of critical defects for the least cost.
Contrasting Software Testing and Software Security Testing
• Security-related bugs can differ from traditional software bugs in a
number of ways. Users do not normally try to search out software bugs.
• Malicious attackers do search for security-related vulnerabilities in an
intelligent and deliberate manner.
• One important difference between security testing and other testing
activities is that the security test engineer needs to emulate an intelligent
attacker.
• Malicious attackers are known to script successful attacks and distribute
exploit scripts throughout their communities. This proliferation of
attacker knowledge can cause problems for a large number of users,
whereas a hard-to-find software bug typically causes problems for only a
few users.
Contrasting Software Testing and Software Security Testing Cont…

• Security testing differs from traditional software testing in that it emphasizes what an application should not do rather than what it should do.
• Many security requirements, such as "An attacker should never be able to take control of the application," would be regarded as untestable in a traditional software development setting. Many security requirements, however, can be neither refined nor dropped even if they are untestable.
• Many traditional software bugs can have security implications. Buggy behavior is almost by definition unforeseen behavior, and as such it presents an attacker with the opportunity for a potential exploit.

Ninety Percent Right

Finding 90 percent of a software program's vulnerabilities does not necessarily make the software less vulnerable; it merely reduces the cost of future fixes and increases the odds of finding the remaining problems before attackers do. The result is that secure software development is intrinsically more challenging than traditional software development. Given this fact, security testing needs to address these unique considerations and perspectives to the extent possible and practical.
Security Testing Methods
Two common methods for testing whether software has met its security
requirements are functional security testing and risk-based security
testing

• Functional testing is meant to ensure that software behaves as specified


and so is largely based on demonstrating that requirements defined in
advance during requirements engineering are satisfied at an acceptable
level.

• Risk-based testing probes specific risks that have been identified through
risk analysis. The next two sections discuss how functional and risk-based
testing can be used to enhance confidence in the software's security.
Functional Testing
• Functional testing usually means testing the system's adherence to its
functional requirements.
"When a specific thing happens, then the software should respond in a
certain way."
This way of specifying a requirement is convenient for the tester, who can
exercise the "if" part of the requirement and then confirm that the software
behaves as it should.
• A common software development practice is to ensure that every
requirement can be mapped to a specific software artifact meant to
implement that requirement.
• Testing a mitigation measure is not enough to guarantee that the
corresponding risk has truly been eliminated, and this caveat is especially
important to keep in mind when the risk in question is a severe one.
Risk-Based Testing
• Risk-based testing addresses negative requirements, which state what a
software system should not do.
• Tests for negative requirements can be developed in a number of ways.
• They should be derived from a risk analysis, which should encompass not
only the high-level risks identified during the design process but also low-
level risks derived from the software itself.
Defining Tests for Negative Requirements
• A mature test organization typically has a set of test templates that
outline the test techniques to be used for testing against specific risks and
requirements in specific types of software modules.
• Another way to derive test scenarios from past experience is to use
incident reports. Incident reports can simply be bug reports, but in the
context of security testing they can also be forensic descriptions of
successful intruder activity
Session-23

Security Testing Considerations


Throughout the SDLC
Basics of Security Testing with SDLC Integration

• Cyberpunks break into computer systems to steal, change, or destroy information as a form of cyber-terrorism. These persons are sneaky enough to take advantage of hidden vulnerabilities in a web application. This is why security testing is required.

• What is Security Testing?
Security testing is a process performed to uncover and expose flaws in the security mechanisms of a software application.
It is carried out alongside regression testing to ensure that the application enforces its safety measures and protects itself from
 Loopholes
 Data breaches
 Unforeseen actions that can exploit the web application or software.

• Activities related to testing take place throughout the software life cycle, not just
after coding is complete.
• Preparations for security testing can begin even before the planned
software system has definite requirements and before a risk analysis has
been conducted.
• During the requirements phase, test planning focuses on outlining how
each requirement can and will be tested.
• A security risk analysis (as discussed in Chapters 4 and 7) is an integral
part of secure software development, and it should drive requirements
derivation and system design as well as security testing.
• Risks identified during this phase may inspire additional requirements
that call for features to mitigate those risks.
Integrate Security Controls Across SDLC Phases

• Security Awareness
• Security Requirements
• Secure by Design
• Secure Implementation
• Security Testing
• Security Review & Response
• Secure Deployment
• Security Maintenance
Security in the SDLC
• Essential that security is embedded in all stages of the SDLC
 Requirements definition
 Design
 Development
 Testing
 Maintenance
• The software development process can be expected to go more smoothly
if these security measures are defined early in the SDLC, when they can
be more easily implemented.
Testing Libraries and Executable Files
• Libraries need special attention in security testing, because components found in a library might
eventually be reused in ways that are not anticipated in the current system design.
Example
• A buffer overflow in a particular library function might seem to pose little risk because attackers cannot control any of the data processed by that function.
• However, that function might later be reused in a way that makes it accessible to outside attackers.

• Libraries may be reused in future software development projects even if such reuse was not planned during the design of the current system.
• In many development projects, unit testing is closely followed by a test effort that focuses on
libraries and executable files.
• Coverage analysis measures the degree to which the source code has been fully tested, including
 Statements
 Conditions
 Paths
 Entry/exit conditions
Security Testing Types
• Unit testing, where individual classes, methods, functions, or other relatively small components are tested.

• Functional testing, where software is tested for adherence to requirements.

• Integration testing, where the goal is to test whether software components work together as they should.

• System testing, where the entire system is under test.

Unit Testing
Unit testing is usually the first stage of testing that a software artifact goes through.
Involves exercising
 individual functions,
 Methods
 Classes
 stubs.
White-box testing is typically very effective in validating design decisions and assumptions.
White-box testing is effective in finding programming errors and implementation errors.
It focuses on analyzing
 data flows
 control flows
 information flows
 coding practices
 exception and error handling
Session-24

Security Testing Considerations


Throughout the SDLC_Session2
Functional testing

• Functional testing is meant to ensure that software behaves as it should. Therefore, it is largely based on software requirements.
• This means testing the system's adherence to its functional requirements.
Example
• If security requirements state that the length of any user input must be checked, then functional testing is part of the process of determining whether this requirement was implemented and whether it works correctly.
• There are two major aspects of security testing in terms of functional testing:
Testing security functionality to ensure that it works and testing the subsystem in light of malicious attack.
Security testing is motivated by probing undocumented assumptions and areas of particular complexity to determine how a program can be broken.
Integration Testing
• Integration testing focuses on a collection of subsystems, which may contain many
executable components.
• Numerous software bugs are known to appear only because of the way
components interact, and the same is true for security bugs as well.
• Integration errors often arise when one subsystem makes unjustified assumptions
about other subsystems.

Example
• An integration error can occur if the calling function and the called function
each assume that the other is responsible for bounds checking and neither one
actually does the check.

During security testing, it is especially important to determine that data flows and control flows cannot be influenced by a potential attacker.
System Testing
• System testing means testing the system as a whole.
• All the modules/components are integrated in order to verify if the system works as
expected or not.
• System testing is done after integration testing. This plays an important role in
delivering a high-quality product.
• Different types of system testing are stress testing and penetration testing
• Stress testing is relevant to security because software performs differently when it is
under stress.
Example
when one component is disabled because of insufficient resources, other
components may compensate in insecure ways.
 An executable that crashes may leave sensitive information in places that are
accessible to attackers.
• Penetration testing assesses how an attacker is likely to try to subvert a system.
• The term "penetration testing" refers to testing the security of a computer system by simulating attacks against it.
Network-based penetration testing
1. Target acquisition - The test engineer identifies legitimate test targets.
 Performed using a combination of manual and automated approaches.
 In which the person responsible for the system under test provides a starting list of network addresses
and the test engineer uses software tools to look for additional computers in the network vicinity.
2. Inventory - The test engineer uses a set of tools to conduct an inventory of available network services to be
tested.

3. Probe - The test engineer probes the available targets to determine whether they are susceptible to
compromise.

4. Penetrate - Each identified vulnerability (or potential vulnerability) is exploited in an attempt to penetrate
the target system.
 The level of invasiveness involved in exploiting a vulnerability can influence this step dramatically.
5. Host-based assessment - This step is typically carried out for any system that is successfully
penetrated.
 It enables the test engineer to identify vulnerabilities that provide additional vectors of attack, including
those that provide the ability to escalate privileges once the system is compromised.
6. Continue - The test engineer obtains access on any of the systems where identified vulnerabilities were
exploited and continues the testing process from the network location(s) of each compromised system.
Black-Box Testing
Black-box testing uses methods that do not require access to source code.
Black-box testing focuses on the externally visible behavior of the software, such as
requirements, protocol specifications, APIs, or even attempted attacks.
Within the security test arena, black-box testing is normally associated with activities that
occur during the pre-deployment test phase (system test) or on a periodic basis after the system
has been deployed.
Black-box test activities almost universally involve the use of tools, which typically focus on
specific areas such as
 Network security
 Database security
 Security subsystems
 Web application security.

The key secure coding practices

• Using sound and proven secure coding practices to reduce software defects introduced during implementation.
• Performing source code review using static code analysis tools, metric analysis, and manual review to minimize implementation-level security bugs.
Summary
SDLC Phases Security Processes

Requirements Security analysis to check abuse/misuse cases and requirement gathering in order to
identify compliance and regulatory risks involved with provision of alternatives.

Design High level risk assessment on Functional specification. Need to document those
functional assumptions and security areas of application. Develop test plan including
security tests. Security functional requirement and sec design considerations.

Coding and Unit Testing Develop Security controls and secure code, Covering Session management,
Authentication and Error handling. Static and Dynamic tools Testing and Security white
box testing.

Integration Testing Black Box Testing, security and regression testing, secure coding, automated tests, threat analysis.
System Testing Black Box Testing and Vulnerability scanning

Implementation Penetration Testing, Vulnerability Scanning, Secure migration from dev to production

Support Impact analysis of Patches


Session -25
Security Failures
Security Failure
• A failure—an externally observable event—occurs when a system
does not deliver its expected service as specified or desired.
• An error is an internal state that may lead to failure if the system
does not handle the situation correctly.
• A fault is the cause of an error.

Example,
In a buffer overflow, the error might be in a functional
component that does not check the size of user input.
An attacker could exploit that error by sending an input stream
that is larger than available storage and that includes executable
code.

• The functional error can be leveraged into a security failure.


Biggest Security Testing Failures

• How can a software application fail if it was designed and created to serve the sole purpose of meeting its objective tasks?
Because it wasn't meticulously tested for bugs, issues, and security loopholes!
• Security failures are not just a missed shot; they disrupt our information and data on a large scale.
• In 2016, Yahoo suffered a massive security breach, with hackers stealing the data of nearly one billion people.
• 2017 saw the worst of times. As per CNBC, 918 cyber breaches exploited 1.9 billion data records in just the first half of 2017, a drastic increase of 164% compared to 2016.
Various Scenarios of Security Failures
• Significant number of security vulnerabilities have been associated with
errors in functional components rather than in security functions.
• Try to put a system into a state that was not anticipated during
development.
Lead to a system crash and hence a denial of service.
Let the attacker bypass the authentication and authorization
controls and access normally protected information.
• Attacks have also exploited errors in parts of the system
That were not fully analyzed during development or
That were poorly configured during deployment
• Attacks can also create an unlikely collection of circumstances
That were not considered in the design and
Exploit aspects of an interface that developers have left exposed.
Categories of Errors
• To aid in the analysis of security failures, errors can be
categorized according to their occurrence in these five
system elements
• Specific interface. An interface controls access to a
service or component.
Interfaces that fail to validate input often crop up on published
vulnerability lists.
• Component-specific integration. Assembly problems
often arise because of conflicts in the design assumptions
for the components.
• Architecture integration mechanisms
Integration of a third party tool into the system
Increased risk that the dynamic integration mechanisms could
be misused or exploited
Categories of Errors
• System behavior. A system's behavior is not the simple sum of the behaviors of its individual components.
Components may individually meet all specifications, but when they are aggregated into a system, the unanticipated interactions among components can lead to unacceptable system behavior.
The technical problems for this category of errors can be significantly more challenging to solve.

• Operations and usage. Operational errors are also a frequent


source of system failures
An operator installing a corrupted top-level domain name
server (DNS) database at Network Solutions
Attacker Behavior
• Modeling attacker behavior presents significant
challenges in software failure analysis.
Example 1
• Model the work processes so as to generate authentication
and authorization requirements.
• Have to model an active agent — the attacker
Example 2
• Social engineering exploits
• An attacker typically tries to convince users or administrators
to take an action
• The attacker might try to impersonate a user and convince
help-desk personnel
SESSION-26
Security Analysis
Need for Security Analysis
• As applications continue to be a primary target for attacks, software security analysis has become a critical tool for protecting organizations from a broad range of threats.

• A security analysis of an application can help to identify and remediate
Vulnerabilities
Flaws

• Security breaches can cost millions in
Damage to data
Reputation
Business opportunities.
Security Analysis
• Security analysis has to be carried out with two perspectives
Functional perspective
Attacker's perspective

Functional perspective
• The functional perspective identifies the importance of an issue to
the business functionality of the system.
• It is a component of a risk assessment

Attacker's perspective
• The attacker's perspective considers the opportunities that business
usage and the specifics of a technology create.

Example
• Features that make it easy to configure Web services could also be used by attackers to configure those services to support their objectives.
Security Analysis – Web Services - Functional
Perspective
• Web services are often deployed to support business
requirements for the integration of geographically distributed
systems.

• By "functional," we mean the added value from the organizational perspective.

• The functional perspective of web services includes the capability of dynamically exchanging information without having to hardwire the mechanics of that exchange into each system.
Security Analysis – Web Services - Functional
Perspective
• Information systems and their operations have become increasingly
decentralized and heterogeneous.

• Business processes are distributed among far-flung business divisions, suppliers, partners, and customers.

• Each participant has its own special needs for technology and automation.

• The demand for a high degree of interoperability among disparate information systems has never been greater.

• Participants continually modify their systems in response to new or changing business requirements.
Security Analysis – Web Services Attacker's
Perspective
• Assumption - Web services have been implemented using the Simple
Object Access Protocol (SOAP).
• SOAP is an XML-based protocol that lets applications exchange
information.
• Shirey's model [Shirey 1994], which categorizes threats in terms of their
impact as disclosure, deception, disruption, and usurpation
• Further category of threats
service-level threats
• common to most distributed systems
message-level threats
• affect Web services XML messages

Web services are designed to support interchanges among diverse systems


Security Analysis – Web Services Attacker's
Perspective
Two main risk factors

Distributed systems risks

Malicious input attacks such as SQL injection

Because the system is distributed on a network

Message risks

Risks to the document and data


Security Analysis – Identity Management - Functional Perspective
• The functional perspective identifies the importance of an issue to
the business functionality of the system.
• It is a component of a risk assessment.
• Identity management (IM) is an administrative system that deals
with the creation, maintenance, and use of digital identities.
• IM includes the business processes associated with
 Organizational governance
 The supporting computing infrastructure
 Specific applications
 Database management systems
• Organizational governance policies can be implemented consistently
across that diverse collection of identities and access control mechanisms.
Security Analysis – Identity Management - Functional Perspective
• Technical challenges
• Access control
Consume identity data from the principal to make and
enforce access control decisions
• Audit and reporting
Used to record, track, and trace identity information
throughout systems
• Identity mapping services
Transform identities in a variety of ways so that a principal on
one system can be mapped to a principal on another system
• Domain provisioning services
The organizational identity and authorization information
must be mapped to each domain.
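The identity mapping service described above can be sketched minimally. The domain names, principals, and mapping table here are invented for illustration; a real service would back this with a directory or federation protocol rather than an in-memory table.

```python
# Hypothetical mapping table: (source domain, principal) -> identity
# in a partner domain. All names are invented for illustration.
identity_map = {
    ("corp.example", "jsmith"): ("partner.example", "john.smith"),
    ("corp.example", "akumar"): ("partner.example", "a.kumar"),
}

def map_principal(domain: str, principal: str, target_domain: str):
    """Return the principal's identity in target_domain, or None if unmapped."""
    mapped = identity_map.get((domain, principal))
    if mapped and mapped[0] == target_domain:
        return mapped[1]
    # Unmapped principals get no identity in the target domain (fail closed).
    return None

print(map_principal("corp.example", "jsmith", "partner.example"))  # john.smith
print(map_principal("corp.example", "eve", "partner.example"))     # None
```

Failing closed on unmapped principals mirrors the provisioning requirement: authorization information must be explicitly mapped into each domain before access is possible.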
Security Analysis – Identity Management - Attacker's Perspective
• Identity information leakage can occur when
Identity providers supply more information than is necessary
to perform the functional task
Identity information is not protected when it is transmitted
across domain boundaries
• Emerging technologies such as Web services and federated
identity have direct implications for identity information leakage.
An objective for federated identity is to enable a portable
identity by sharing identity information among normally
autonomous security domains.
• Directory services that replicate identity information at the data level
can also create exposure by replicating more identity information than
is required by dependent systems.
Shirey’s Threat Categories
DISCLOSURE OF INFORMATION
• Attack tactics The XML-based messages may be passed without
encryption (in the clear) and may contain valuable business information,
but an attacker may be more interested in gaining knowledge about the
system to craft attacks against the service directly and the system in
general. A security vulnerability in the service registry might let the
attacker identify the system's data types and operations. That same
information could be extracted from messages sent in the clear.
Messages may also contain valuable information such as audit logs and
may lead to identity spoofing and replay attacks, which use message
contents to create a new message that might be accepted.
• Mitigations Authentication and authorization mechanisms may be used
to control access to the registry. There are no centralized access control
mechanisms that can protect the XML messages, but message-level
mechanisms such as encryption and digital signatures can be used.
DECEPTION
• Attack tactics An attack can try to spoof the identity of the service requester
by sending a well-formed message to the service provider. The identity of
the service provider could also be spoofed. XML messages are passed
without integrity protection by default. Without integrity protection, an
attacker could tamper with the XML message to execute code or gain
privileges and information on service requesters and providers.
• Mitigations Web services provide a number of integrity and authentication
mechanisms that can mitigate deception. For example, WS-Security defines
how to include X.509, Kerberos, and username and password security
information in the XML message to support end-to-end authentication.
Message integrity is supported through digital signatures and message origin
authentication.
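The message-integrity mitigation above can be illustrated with a simplified sketch. WS-Security itself uses XML Signature with X.509 certificates or Kerberos tickets; an HMAC over a pre-shared key is only a stand-in for that machinery (the key and message below are invented), but it shows the same property: any tampering with the message invalidates the authentication tag.

```python
import hashlib
import hmac

# Stand-in for the requester/provider credential; in WS-Security this role
# is played by X.509 or Kerberos material, not a raw shared key.
SECRET = b"demo-shared-key"

def sign(message: bytes) -> str:
    """Compute a message origin authentication tag over the payload."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the received message."""
    return hmac.compare_digest(sign(message), tag)

msg = b"<order><item>widget</item><qty>5</qty></order>"
tag = sign(msg)

assert verify(msg, tag)              # untampered message is accepted
tampered = msg.replace(b"5", b"500")
assert not verify(tampered, tag)     # altered quantity is detected
```

Without such integrity protection, the tampering on the last line would be invisible to the service provider.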
DISRUPTION
• Attack tactics An attacker could execute a denial of service at the network level
against a Web service. Messages could also be used for a denial-of-service attack.
For example, an attacker could send a specially formed XML message that forces
the application into an infinite loop that consumes all available computing
resources. The receipt of a large volume of malformed XML messages may exceed
logging capabilities.
• Mitigations A network-level denial of service is mitigated in a similar fashion to a
Web application denial of service —that is, by using routers, bandwidth
monitoring, and other hardware to identify and protect against service disruption.
Mitigation of message-level disruptions depends on validating the messages, but
that mitigation can be tricky, because the target of such attacks is that mitigation
component. One tactic would be to encapsulate message validation in a service
that is applied before messages are passed to applications.
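A minimal sketch of that validation-service tactic, assuming a simple policy of a size cap plus a well-formedness check (the 64 KB limit is an invented policy value): oversized or malformed messages are rejected at the boundary instead of reaching the application.

```python
import xml.etree.ElementTree as ET

MAX_MESSAGE_BYTES = 64 * 1024  # assumed policy limit, for illustration

def accept_message(raw: bytes) -> bool:
    """Gate a message before it is passed to application code.

    Rejects oversized payloads outright, then checks well-formedness,
    so malformed XML is dropped at the validation service rather than
    consuming resources in the application itself.
    """
    if len(raw) > MAX_MESSAGE_BYTES:
        return False
    try:
        ET.fromstring(raw)
    except ET.ParseError:
        return False
    return True

print(accept_message(b"<ping/>"))   # True: small and well-formed
print(accept_message(b"<ping>"))    # False: not well-formed
print(accept_message(b"<a>" + b"x" * (1 << 20) + b"</a>"))  # False: too large
```

Checking the size before parsing matters: the whole point of the slide's caveat is that the validator itself can be the target, so it should do the cheapest rejection first.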
USURPATION
• Attack tactics An attacker may usurp command of a system by elevating his or her privileges. One way to do so is to exploit
the service registry to redirect service requests, change security policy, and perform other privileged operations. XML
messages may be used to propagate viruses that contain malicious code to steal data, usurp privileges, drop and alter
tables, edit user privileges, and alter schema information.
• Mitigations When service registries are used in Web services, they become a central organizing point for a large amount of
sensitive information about services. The service registry (and communication to and from the service registry) should be
hardened to the highest degree of assurance that is feasible in the system. Vulnerability analysis of source code pays
particular attention to system calls to privileged modules in the operating system. The service registry can affect policy,
runtime, and locale for other services and hence is analogous in importance to the operating system. Therefore particular
attention must be paid to how service requesters access the service registry. At the message level, vendors are beginning
to realize the significant threat that viruses, when attached and posted with XML documents, may pose to the
environment. For systems that may have XML or binary attachments, virus protection services should be deployed to scan
XML and binary messages for viruses in a similar fashion to email messages—that is, before the messages are processed for
normal business operations.
Session-27
System Complexity
System Complexity
• Today, managing security can be a complex endeavor.
• The growing complexity of networks and business requirements for
innovation and rapid delivery of services and applications require a
new approach to managing security.
• Software complexity is a natural byproduct of the functional complexity that
the code is attempting to enable.
• With multiple system interfaces and complex requirements, the complexity of
software systems sometimes grows beyond control, rendering applications
and portfolios overly costly to maintain and risky to enhance.
• Software complexity can run rampant in delivered projects, leaving behind
bloated, cumbersome applications.
“The software needed to run all of Google’s Internet services—from Google Search to
Gmail to Google Maps—spans some 2 billion lines of code.” – Metz

“By comparison, Microsoft’s Windows operating system—one of the most complex
software tools ever built for a single computer, a project under development since the
1980s—is likely in the realm of 50 million lines.”
System Complexity Drivers and Security
• Satisfying business requirements increasingly depends on
integrating and extending existing systems.
• Security risk assessments are affected by
Unanticipated risks
Reduced visibility
The wider spectrum of possible failures
• Factors that affect security in the software
development process are
Less development freedom
Changing goals
The importance of incremental development
Unanticipated Risks
• The dynamic nature of the operational environment raises
software risks that are typically not addressed in current systems.
• Interoperability across multiple systems may involve resolving
conflicting risk profiles and associated risk mitigations.
• Change becomes increasingly difficult to control, and changes
might invalidate the existing security analysis.
• It’s easy to think that with enough testing a program won’t fail.
• But “normal accident” theory holds that, as a system gets more
complex, its chances of failure increase no matter how careful you
are with all the requisite components, because of unexpected
interactions between them.

“From our infrastructure to our privacy, our software suffers from ‘software sucks’
syndrome, which doesn’t sound as important as a Big Mean Attack of Cyberterrorists.”
- Tufekci
Reduced Visibility
• Security visibility is the ability to deliver an unobstructed
view into the operation of security controls, making the
pertinent information easy to see and therefore easy to
manage.
• When we talk about visibility, we’re essentially talking
about having a complete picture of your software
security posture.
• The most effective threat management will have an
integrated, advanced visual dashboard, which shows
How devices are configured
Any attack in process (or about to happen)
Noncompliance with policy
Any other associated risk
Reduced Visibility
• Testing the subsets of such systems is not sufficient to establish
confidence in the fully networked system.
It is much more difficult to distinguish an attacker-induced
error from a nonmalicious event.
• Business requirements may increase the need for interoperability
with less than fully trusted systems.
The security architect cannot have the in-depth knowledge
that existing techniques often assume.
Wider spectrum of failures
• The cause of a system security failure may be not a single
event but rather a combination of events that individually
would be considered insignificant.
• Vulnerabilities derive from interactions among multiple
system components.
• The probability of some adverse combination of events
occurring can increase as system size increases.
• Security risks are associated with high-impact, low-
probability events.
• An unexpected pattern of usage might overload a shared
resource and lead to a denial of service.
System failures
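One common defense against an unexpected usage pattern overloading a shared resource is to ration access to it. The token-bucket limiter below is a generic sketch (the rate and burst values are invented), not a technique the slides prescribe: callers beyond the sustained rate are rejected explicitly instead of being allowed to exhaust the resource.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter for a shared resource (illustrative)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one request if a token is available; otherwise refuse it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 8 back-to-back requests against a bucket that allows 5.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # only the burst of 5 is admitted
```

Turning resource exhaustion into a bounded, explicit refusal is exactly the shift from a high-impact failure to a manageable one that the slide describes.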
Partitioning Security Analysis
• Application size and complexity are the underlying cause of numerous
security vulnerabilities in code.
• To mitigate the risks arising from such vulnerabilities, various techniques
have been used to isolate the execution of sensitive code
From the rest of the application
From other software on the platform (e.g., the operating system)
• The system complexity associated with business system integration
requirements expands the spectrum of development, user, and system
management failures.
• There are two perspectives for the analysis of work processes that span
multiple systems:
The first perspective focuses on the global work process.
The second perspective is that of a service provider.
System Complexity and Security - Mitigations
• One approach to simplifying security across multiple systems is to share
essential security services such as user authentication and authorization.
• An essential design task for a large system is delegating the
responsibilities for meeting security requirements. Delegation of
responsibilities goes beyond system components and includes users and
system management.
• Business integration requirements and the use of technologies such as
Web services to support the integration of distributed systems can affect
the delegation of responsibilities.
• A risk for security is that it is typically treated as a separate concern,
with responsibility being assigned to different parts of the organization
that often function independently. That isolation becomes even more
problematic as the scope and scale of systems expand.
• Guidance on risk assessment and security concerns for COTS and legacy
systems is defined in BSI30 and BSI31.
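The first mitigation above—sharing essential security services across systems—can be sketched as a single authorization service that every integrated system consults. The policy table, user names, and system names are invented for illustration; the point is that the policy lives in one place instead of being re-implemented per system.

```python
# Hypothetical central policy: (user, system, action) -> granted.
# All entries are invented for illustration.
POLICY = {
    ("alice", "billing", "read"): True,
    ("alice", "billing", "write"): False,
    ("bob", "inventory", "read"): True,
}

def authorize(user: str, system: str, action: str) -> bool:
    """Shared authorization decision used by every integrated system.

    Default deny: anything not explicitly granted is refused, so a new
    or misconfigured system cannot silently widen access.
    """
    return POLICY.get((user, system, action), False)

# Each system delegates its decision instead of keeping its own rules.
print(authorize("alice", "billing", "read"))    # True
print(authorize("alice", "billing", "write"))   # False
print(authorize("mallory", "billing", "read"))  # False: unknown principal
```

Centralizing the decision also addresses the isolation risk on this slide: responsibility for security is delegated explicitly rather than scattered across independently functioning parts of the organization.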
Conflicting or Changing Goals Complexity
• Conflicting goals occur when desired product quality
attributes or customer values conflict with one another.
• There may be conflicts between portability and
performance requirements.
• Conflicts frequently arise between security and
ease-of-use requirements.
• Conflicting goals affect both the developer and the
project manager.
• An application might need to support multiple security
protocols.

The existence of unanticipated hard-to-solve problems, and of conflicts and changes in
requirements, is often just a recognition that our understanding of the problem domain
and the tradeoffs among requirements or design options is incomplete when a
development project is initiated.