SSE Co-3
Code Analysis
Software Security problems - Overview
• Nearly all attacks on software applications have one fundamental cause: The software is
not secure owing to defects in its design, coding, testing, and operations.
• A vulnerability is a software defect that an attacker can exploit.
• Discovering and eliminating bugs during code analysis takes care of roughly half of the
problem when tackling software security.
• Defects typically fall into one of two categories: bugs and flaws.
• A bug is a problem introduced during software implementation. Most bugs can be easily
discovered and corrected.
• Examples include buffer overflows, race conditions, unsafe system calls, and incorrect input validation.
• A flaw is a problem at a much deeper level. Flaws are more subtle, typically originating
in the design and being instantiated in the code.
• Examples include compartmentalization problems in design, error-handling problems, and broken or illogical access control.
Common Security Bugs and Attack Strategies
Input Validation
• Trusting user and parameter input is a frequent source of security problems. Attacks that take advantage of little or no input validation include cross-site scripting, illegal pointer values, integer overflows, and DNS cache poisoning.
• Inadequate input validation can also lead to buffer overflows and SQL injection defects.
Exceptions
• Exceptions are events that disrupt the normal flow of code.
• Programming languages may use a mechanism called an exception handler to deal with unexpected events such as a divide-by-zero attempt, a violation of memory protection, or a floating-point arithmetic error.
Common Security Bugs and Attack Strategies Cont….
Buffer Overflows
• Buffer overflows are a leading method used to exploit software by remotely
injecting malicious code into a target application.
• The root cause of buffer overflow problems is that commonly used programming
languages such as C and C++ are inherently unsafe.
• No bounds checks on array and pointer references are carried out, meaning that a
developer must check the bounds or risk encountering problems.
SQL Injection
• SQL injection is currently the principal technique used by attackers to take
advantage of non-validated input defects to pass SQL commands through an
application for execution by a database.
• The security model used by many applications assumes that a SQL query is a
trusted command. In this case, the defect lies in the software's construction of a
dynamic SQL statement based on user input.
Common Security Bugs and Attack Strategies Cont….
Race Conditions
• Race conditions take on many forms but can be characterized as
scheduling dependencies between multiple threads that are not properly
synchronized, causing an undesirable timing of events.
Race conditions fall into three main categories:
• Infinite loops, which cause a program to never terminate or never return
from some flow of logic or control
• Deadlocks, which occur when the program is waiting on a resource
without some mechanism for timeout or expiration and the resource or
lock is never released.
• Resource collisions, which represent failures to synchronize access to
shared resources, often resulting in resource corruption or privilege
escalations
Source Code Review
• Source code review for security ranks high on the list of sound practices
intended to enhance software security.
• Structured design and code inspections, as well as peer review of source
code, can produce substantial improvements in software security.
• The reviewers meet one-on-one with developers and review code
visually to determine whether it meets previously established secure
code development criteria.
• Reviewers consider coding standards and use code review checklists as
they inspect code comments, documentation, the unit test plan, and the
code's compliance with security requirements.
• Unit test plans detail how the code will be tested to demonstrate that it
meets security requirements and design/coding standards intended to
reduce design flaws and implementation bugs.
Static Code Analysis Tools
• Static analysis tools look for a fixed set of patterns or rules in the code in a
manner similar to virus-checking programs.
• More advanced tools allow new rules to be added to the rulebase, but the tool will never find a problem for which no rule has been written.
Some examples of problems detected by static code analyzers:
• Syntax problems
• Unreachable code
• Unconditional branches into loops
• Undeclared variables
• Uninitialized variables
• Parameter type mismatches
• Uncalled functions and procedures
• Variables used before initialization
• Non-usage of function results
• Possible array bound errors
Metric Analysis
• Metric analysis produces a quantitative measure of the degree to which
the analyzed code possesses a given attribute. An attribute is a
characteristic or a property of the code.
Example:
When considered separately, "lines of code" and "number of security
breaches" are two distinct measures that provide very little business
meaning because there is no context for their values. A metric such as
"number of breaches / lines of code" provides a more interesting relative
value. A comparative metric like this can be used to compare and contrast a
given system's "security defect density" against a previous version or similar
systems, and thus provides management with useful data for decision making.
Qualitative Software Metric Classification
1. Absolute
• Absolute metrics are numerical values that represent a characteristic of
the code, such as the probability of failure, the number of references to
a particular variable in an application, or the number of lines of code.
Absolute metrics do not involve uncertainty. There can be one and only
one correct numerical representation of a given absolute metric.
2. Relative
• Relative metrics provide a numeric representation of an attribute that
cannot be precisely measured, such as the degree of difficulty in testing
for buffer overflows.
Session 21
Coding Practice
• Coding practices typically describe
Methods
Techniques
Processes
Tools
Runtime libraries
that can prevent or limit exploits against vulnerabilities.
• Secure coding requires an understanding of programming errors that
commonly lead to software vulnerabilities.
• Secure coding can benefit from the proper use of software development
tools, including compilers.
• Compilers typically have options that allow increased or specific
diagnostics to be performed on code during compilation.
• Resolving these warnings can improve the security of the deployed
software system.
Most Vulnerabilities Are Caused by Programming Errors
if (x == y) puts("1");
if ((x - y) == 0) puts("2");
if ((x + y) == 2 * x) puts("3");
if (x != -y) puts("4");
CERT Secure Coding Standard
CERT Vulnerability Analysis
CERT Secure Coding Initiative
Work with software developers and software development
organizations to eliminate vulnerabilities resulting from coding
errors before they are deployed.
– Reduce the number of vulnerabilities to a level where they
can be handled by computer security incident response
teams (CSIRTs)
– Decrease remediation costs by eliminating vulnerabilities
before software is deployed
Development of the standards is a community effort. Candidate rules and
recommendations are published at www.securecoding.cert.org, with threaded
discussions for public vetting.
Rules and recommendations are prioritized into levels:
• L1 (P12-P27): high-severity, likely, inexpensive-to-repair flaws
• L2 (P6-P9): medium-severity, probable, medium-cost-to-repair flaws
• L3 (P1-P4): low-severity, unlikely, expensive-to-repair flaws
Best Practices in Coding - Summary
In a mobile phone, can you insert the SIM card the wrong way? You cannot, because the SIM card
is designed to prevent it. The highest level of maturity is to give no opportunity to make mistakes.
• Risk-based testing probes specific risks that have been identified through
risk analysis. The next two sections discuss how functional and risk-based
testing can be used to enhance confidence in the software's security.
Functional Testing
• Functional testing usually means testing the system's adherence to its
functional requirements.
"When a specific thing happens, then the software should respond in a
certain way."
This way of specifying a requirement is convenient for the tester, who can
exercise the "if" part of the requirement and then confirm that the software
behaves as it should.
• A common software development practice is to ensure that every
requirement can be mapped to a specific software artifact meant to
implement that requirement.
• Testing a mitigation measure is not enough to guarantee that the
corresponding risk has truly been eliminated, and this caveat is especially
important to keep in mind when the risk in question is a severe one.
Risk-Based Testing
• Risk-based testing addresses negative requirements, which state what a
software system should not do.
• Tests for negative requirements can be developed in a number of ways.
• They should be derived from a risk analysis, which should encompass not
only the high-level risks identified during the design process but also low-
level risks derived from the software itself.
Defining Tests for Negative Requirements
• A mature test organization typically has a set of test templates that
outline the test techniques to be used for testing against specific risks and
requirements in specific types of software modules.
• Another way to derive test scenarios from past experience is to use
incident reports. Incident reports can simply be bug reports, but in the
context of security testing they can also be forensic descriptions of
successful intruder activity
Session-23
• Activities related to testing take place throughout the software life cycle, not just
after coding is complete.
• Preparations for security testing can begin even before the planned
software system has definite requirements and before a risk analysis has
been conducted.
• During the requirements phase, test planning focuses on outlining how
each requirement can and will be tested.
• A security risk analysis (as discussed in Chapters 4 and 7) is an integral
part of secure software development, and it should drive requirements
derivation and system design as well as security testing.
• Risks identified during this phase may inspire additional requirements
that call for features to mitigate those risks.
Integrate Security Controls Across SDLC Phases
• Security controls should be integrated across all life-cycle phases: Security
Awareness, Security Requirements, Secure by Design, Security Testing, Secure
Deployment, and Security Maintenance.
Security in the SDLC
• Essential that security is embedded in all stages of the SDLC
Requirements definition
Design
Development
Testing
Maintenance
• The software development process can be expected to go more smoothly
if these security measures are defined early in the SDLC, when they can
be more easily implemented.
Testing Libraries and Executable Files
• Libraries need special attention in security testing, because components found in a library might
eventually be reused in ways that are not anticipated in the current system design.
Example
• A buffer overflow in a particular library function might seem to pose little risk because
attackers cannot control any of the data processed by that function.
• Later, however, the function might be reused in a way that makes it accessible to outside attackers.
• Libraries may be reused in future software development projects even if such reuse was not
planned during the design of the current system.
• In many development projects, unit testing is closely followed by a test effort that focuses on
libraries and executable files.
• Coverage analysis measures the degree to which the source code has been fully tested, including
Statements
Conditions
Paths
Entry/exit conditions
Security Testing Types
• Unit testing, where individual classes, methods, functions, or other
relatively small components are tested.
Example
• An integration error can occur if the calling function and the called function
each assume that the other is responsible for bounds checking and neither one
actually does the check.
During security testing, it is especially important to determine that data flows and
control flows cannot be influenced by a potential attacker.
System Testing
• System testing means testing the system as a whole.
• All the modules/components are integrated in order to verify if the system works as
expected or not.
• System testing is done after integration testing. This plays an important role in
delivering a high-quality product.
• Different types of system testing are stress testing and penetration testing
• Stress testing is relevant to security because software performs differently when it is
under stress.
Example
when one component is disabled because of insufficient resources, other
components may compensate in insecure ways.
An executable that crashes may leave sensitive information in places that are
accessible to attackers.
• Penetration testing assesses how an attacker is likely to try to subvert a system.
• The term "penetration testing" refers to testing the security of a computer system
Network-based penetration testing
1. Target acquisition - The test engineer identifies legitimate test targets.
Performed using a combination of manual and automated approaches.
The person responsible for the system under test provides a starting list of network addresses,
and the test engineer uses software tools to look for additional computers in the network vicinity.
2. Inventory - The test engineer uses a set of tools to conduct an inventory of available network services to be
tested.
3. Probe - The test engineer probes the available targets to determine whether they are susceptible to
compromise.
4. Penetrate - Each identified vulnerability (or potential vulnerability) is exploited in an attempt to penetrate
the target system.
The level of invasiveness involved in exploiting a vulnerability can influence this step dramatically.
5. Host-based assessment - This step is typically carried out for any system that is successfully
penetrated.
It enables the test engineer to identify vulnerabilities that provide additional vectors of attack, including
those that provide the ability to escalate privileges once the system is compromised.
6. Continue - The test engineer obtains access on any of the systems where identified vulnerabilities were
exploited and continues the testing process from the network location(s) of each compromised system.
Black-Box Testing
Black-box testing uses methods that do not require access to source code.
Black-box testing focuses on the externally visible behavior of the software, such as
requirements, protocol specifications, APIs, or even attempted attacks.
Within the security test arena, black-box testing is normally associated with activities that
occur during the pre-deployment test phase (system test) or on a periodic basis after the system
has been deployed.
Black-box test activities almost universally involve the use of tools, which typically focus on
specific areas such as
Network security
Database security
Security subsystems
Web application security.
White-box testing, in contrast, performs source code review using static code analysis tools, metric
analysis, and manual review to minimize implementation-level security bugs.
Summary
SDLC Phases and Security Processes
• Requirements - Security analysis to check abuse/misuse cases; requirements gathering to
identify compliance and regulatory risks and to provide alternatives.
• Design - High-level risk assessment of the functional specification; document functional
assumptions and security areas of the application; develop a test plan that includes
security tests; security functional requirements and secure design considerations.
• Coding and Unit Testing - Develop security controls and secure code covering session
management, authentication, and error handling; static and dynamic tool testing and
security white-box testing.
• Integration Testing - Black-box testing, security and regression testing, secure coding,
automated tests, threat analysis.
• System Testing - Black-box testing and vulnerability scanning.
• Implementation - Penetration testing, vulnerability scanning, secure migration from
development to production.
Example
In a buffer overflow, the error might be in a functional component that does not
check the size of user input. An attacker could exploit that error by sending an
input stream that is larger than the available storage and that includes
executable code.
Functional perspective
• The functional perspective identifies the importance of an issue to
the business functionality of the system.
• It is a component of a risk assessment
Attacker's perspective
• The attacker's perspective considers the opportunities that business
usage and the specifics of a technology create.
Example
• Features that make Web services easy to configure could also be used by
attackers to configure those services to support their objectives.
Security Analysis – Web Services - Functional
Perspective
• Web services are often deployed to support business
requirements for the integration of geographically distributed
systems, with each participant having its own special needs for technology and
automation.
Message risks
Directory services that replicate identity information at the data level can also create
exposure by replicating more identity information than is required for dependent
systems.
Shirey’s Threat Categories
DISCLOSURE OF INFORMATION
• Attack tactics The XML-based messages may be passed without
encryption (in the clear) and may contain valuable business information,
but an attacker may be more interested in gaining knowledge about the
system to craft attacks against the service directly and the system in
general. A security vulnerability in the service registry might let the
attacker identify the system's data types and operations. That same
information could be extracted from messages sent in the clear.
Messages may also contain valuable information such as audit logs and
may lead to identity spoofing and replay attacks, which use message
contents to create a new message that might be accepted.
• Mitigations Authentication and authorization mechanisms may be used
to control access to the registry. There are no centralized access control
mechanisms that can protect the XML messages, but message-level
mechanisms such as encryption and digital signatures can be used.
DECEPTION
• Attack tactics An attack can try to spoof the identity of the service requester
by sending a well-formed message to the service provider. The identity of
the service provider could also be spoofed. XML messages are passed
without integrity protection by default. Without integrity protection, an
attacker could tamper with the XML message to execute code or gain
privileges and information on service requesters and providers.
• Mitigations Web services provide a number of integrity and authentication
mechanisms that can mitigate deception. For example, WS-Security defines
how to include X.509, Kerberos, and username and password security
information in the XML message to support end-to-end authentication.
Message integrity is supported through digital signatures and message origin
authentication.
DISRUPTION
• Attack tactics An attacker could execute a denial of service at the network level
against a Web service. Messages could also be used for a denial-of-service attack.
For example, an attacker could send a specially formed XML message that forces
the application into an infinite loop that consumes all available computing
resources. The receipt of a large volume of malformed XML messages may exceed
logging capabilities.
• Mitigations A network-level denial of service is mitigated in a similar fashion to a
Web application denial of service —that is, by using routers, bandwidth
monitoring, and other hardware to identify and protect against service disruption.
Mitigation of message-level disruptions depends on validating the messages, but
that mitigation can be tricky, because the target of such attacks is that mitigation
component. One tactic would be to encapsulate message validation in a service
that is applied before messages are passed to applications.
USURPATION
• Attack tactics An attacker may usurp command of a system by elevating his or her privileges. One way to do so is to exploit
the service registry to redirect service requests, change security policy, and perform other privileged operations. XML
messages may be used to propagate viruses that contain malicious code to steal data, usurp privileges, drop and alter
tables, edit user privileges, and alter schema information.
• Mitigations When service registries are used in Web services, they become a central organizing point for a large amount of
sensitive information about services. The service registry (and communication to and from the service registry) should be
hardened to the highest degree of assurance that is feasible in the system. Vulnerability analysis of source code pays
particular attention to system calls to privileged modules in the operating system. The service registry can affect policy,
runtime, and locale for other services and hence is analogous in importance to the operating system. Therefore particular
attention must be paid to how service requesters access the service registry. At the message level, vendors are beginning
to realize the significant threat that viruses, when attached and posted with XML documents, may pose to the
environment. For systems that may have XML or binary attachments, virus protection services should be deployed to scan
XML and binary messages for viruses in a similar fashion to email messages—that is, before the messages are executed for
normal business operations
SESSION_27
System Complexity
System Complexity
“From our infrastructure to our privacy, our software suffers from ‘software sucks’
syndrome, which doesn’t sound as important as a Big Mean Attack of Cyberterrorists,”
- Tufekci
Reduced Visibility
• Security visibility is the ability to deliver an unobstructed
view into the operation of security controls, making the
pertinent information easy to see and therefore manage.
• When we talk about visibility, we’re essentially talking
about having a complete picture of your software
security posture.
• The most effective threat management will have an
integrated, advanced visual dashboard, which shows:
How devices are configured
Any attack in progress or about to happen
Noncompliance with policy
Any other associated risk.
Reduced Visibility
• Testing the subsets of such systems is not sufficient to establish
confidence in the fully networked system.
It is much more difficult to distinguish an attacker-induced
error from a nonmalicious event.
• Business requirements may increase the need for interoperability
with less than fully trusted systems.
The security architect cannot have the in-depth knowledge
that existing techniques often assume.
Wider spectrum of failures
• The cause of a system security failure may not be a single
event, but rather a combination of events that individually
would be considered insignificant.
• Vulnerabilities derive from interactions among multiple
system components.
• The probability of some adverse combination of events
occurring can increase as system size increases.
• Security risks are associated with high-impact, low-
probability events.
• An unexpected pattern of usage might overload a shared
resource and lead to a denial of service
Partitioning Security Analysis
• Application size and complexity are the underlying cause of numerous
security vulnerabilities in code.
• To mitigate the risks arising from such vulnerabilities, various techniques
have been used to isolate the execution of sensitive code:
From the rest of the application
From other software on the platform (e.g., the operating system)