Software Testing Types and Methodologies
What is a Test Case?
A test case is a standard document used in testing to check whether the software works as per the requirements. It comprises a collection of conditions that must be verified to confirm that the actual outcomes produced by the software match the expected ones.
A test case typically includes the following sections:
• Functionality Name,
• Test Case,
• Name of Tester,
• Test Scenario,
• Test Case Description,
• Test Steps,
• Preconditions,
• Test Priority,
• Test Data,
• Expected Outcome,
• Test Settings,
• Actual Outcome,
• Test Status,
• Comments.
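The sections above can be sketched as a structured record. Below is a minimal illustration in Python; all field names and values are hypothetical examples, not a standard schema:

```python
# A minimal sketch of a test case record covering the sections listed above.
# All field values are hypothetical examples.
test_case = {
    "functionality_name": "User Login",
    "test_case_id": "TC-001",
    "tester_name": "A. Tester",
    "test_scenario": "Verify login with valid credentials",
    "description": "A registered user can log in with a correct username/password",
    "preconditions": ["User account exists", "Login page is reachable"],
    "test_steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "test_data": {"username": "alice", "password": "correct-horse"},
    "priority": "High",
    "expected_outcome": "User is redirected to the dashboard",
    "test_settings": "Chrome, staging environment",
    "actual_outcome": None,   # filled in during execution
    "status": "Not Run",      # e.g. Pass / Fail / Blocked / Not Run
    "comments": "",
}
```

In practice these fields usually live in a test management tool or a spreadsheet; the point is only that each test case carries its inputs, steps, and expected result explicitly.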
What is a Test Suite?
A test suite is a container holding a set of test cases; it helps testers execute tests and report the test execution status. A test suite can be in any of three states: Active, In Progress, or Completed.
Functional vs. Non-Functional Testing
Functional Testing
Definition: Functional testing evaluates the software against functional requirements to ensure that
it performs its intended functions correctly. It focuses on what the system does.
Key Characteristics:
• Verifies specific actions or functions of the code
• Based on functional requirements and specifications
• Tests features visible to users
• Typically answers the question: "Can the user do this?"
Non-Functional Testing
Definition: Non-functional testing evaluates aspects of software that aren't related to specific
functions but rather to operational qualities like performance, usability, and reliability. It focuses on
how well the system performs.
Key Characteristics:
• Verifies quality attributes of the system
• Based on non-functional requirements
• Tests system qualities like performance, security, usability
• Typically answers questions like: "How fast? How secure? How user-friendly?"
Comparison
| Aspect           | Functional Testing                   | Non-Functional Testing                           |
|------------------|--------------------------------------|--------------------------------------------------|
| Focus            | What the system does                 | How well the system performs                     |
| Basis            | Functional requirements              | Non-functional requirements                      |
| Priority         | Usually performed first              | Usually performed after functional testing       |
| Verification     | Features and functions               | Quality attributes like performance, security    |
| Testing approach | Typically uses black-box techniques  | Uses specialized techniques for each quality attribute |
| Ease of testing  | Usually more straightforward         | Often more complex and requires specialized tools |
Levels of Testing
Unit Testing
Definition: Unit testing is the process of testing individual components or modules of software in
isolation to verify that each unit of code performs as expected.
Key Characteristics:
• Conducted by developers
• Tests smallest testable parts of software individually
• Often automated using frameworks like JUnit, NUnit, etc.
• Isolates the unit from dependencies using stubs, mocks, or drivers
• Aims to ensure each component works correctly before integration
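A minimal unit-test sketch of the isolation idea above, using `unittest.mock` to stub out a dependency (the function and its tax service are hypothetical):

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: a price calculator that depends on an
# external tax service. The service is mocked so the unit is tested in isolation.
def total_price(net, tax_service):
    """Return the gross price using a rate supplied by tax_service."""
    rate = tax_service.get_rate()
    return round(net * (1 + rate), 2)

class TotalPriceTest(unittest.TestCase):
    def test_applies_tax_rate(self):
        tax_service = Mock()
        tax_service.get_rate.return_value = 0.20  # stubbed dependency
        self.assertEqual(total_price(100.0, tax_service), 120.0)
        tax_service.get_rate.assert_called_once()  # interaction is verified too

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TotalPriceTest)
)
```

Because the tax service is mocked, a failure here can only come from `total_price` itself, which is exactly the point of unit testing.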
Integration Testing
Definition: Integration testing verifies that different modules or services work well together. It tests
the interfaces between components and detects issues in their interactions.
Key Characteristics:
• Tests combinations of units/modules working together
• Focuses on data and control flow between modules
• Identifies interface defects
• Can be performed incrementally or non-incrementally
• More complex than unit testing as it involves multiple components
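To illustrate the interface focus, here is a minimal integration sketch with two hypothetical modules exercised together, so the test checks data flow between them rather than either unit alone:

```python
# Hypothetical module 1: an in-memory data store.
class UserRepository:
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users[user_id]

# Hypothetical module 2: consumes the repository through its interface.
class ReportFormatter:
    def __init__(self, repo):
        self.repo = repo

    def greeting(self, user_id):
        return f"Hello, {self.repo.get(user_id)}!"

# Integration test: data added via the repository must flow correctly
# into the formatter's output.
repo = UserRepository()
repo.add(1, "alice")
report = ReportFormatter(repo)
assert report.greeting(1) == "Hello, alice!"
```

Unlike the unit test, no mocks are used here: a defect in either module, or in the contract between them, would surface in this test.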
System Testing
Definition: System testing evaluates the complete and integrated software system to verify that it
meets specified requirements. It tests the software as a whole.
Key Characteristics:
• Tests the entire application in an environment similar to production
• Verifies both functional and non-functional requirements
• Performed after integration testing
• Usually black-box testing (tester doesn't need to know internal code)
• Often performed by a dedicated testing team
User Acceptance Testing (UAT)
Definition: User Acceptance Testing is the final testing phase where actual users test the software to
verify it meets their business requirements and is ready for deployment.
Key Characteristics:
• Performed by end users or client representatives
• Validates the system against business requirements
• Last phase before software goes live
• Focuses on real-world usage scenarios
• Ensures the system is fit for business purpose
• Types include Alpha, Beta, and Contract Acceptance Testing
Types of Integration Testing
Incremental Testing
Definition: Incremental integration testing combines and tests units or modules one at a time until
all modules are integrated and tested as a complete system.
Types of Incremental Testing:
1. Top-Down Integration:
o Testing begins with high-level modules, with lower modules being integrated
progressively
o Uses stubs to simulate lower-level modules not yet integrated
o Advantages: Major design flaws found early, early prototype available
o Disadvantages: Basic functionality tested late, stubs creation needed
2. Bottom-Up Integration:
o Testing begins with atomic modules at lowest level, gradually moving upward
o Uses drivers to simulate higher-level modules
o Advantages: Basic functionality tested early, no stubs needed
o Disadvantages: Higher-level design issues found late, no early prototype
3. Sandwich/Hybrid Integration:
o Combines both top-down and bottom-up approaches
o Testing occurs simultaneously from top and bottom levels
o Middle layers are integrated last
o Advantages: Parallel testing possible, combines benefits of both approaches
o Disadvantages: Complex coordination, middle integration might be challenging
Non-Incremental Testing (Big Bang)
Definition: Non-incremental integration testing, often called "Big Bang" testing, combines all
modules at once and tests them as a whole.
Key Characteristics:
• All components are integrated simultaneously and then tested
• No incremental integration of modules
• Suitable for small systems
• Advantages: Simple approach, no stubs or drivers needed
• Disadvantages: Difficult to isolate faults, late integration issues, complex debugging
Types of System Testing
End-to-End Testing
Definition: End-to-End testing verifies the complete software system along with its integration with
external interfaces and systems to ensure the entire workflow functions as expected.
Key Characteristics:
• Tests the entire software workflow from start to finish
• Verifies system dependencies and integrations with external systems
• Simulates real user scenarios across the entire application
• Focuses on user experience and business processes
• Often uses real production-like data
Sanity Testing
Definition: Sanity testing is a narrow regression test that focuses on particular functionality after
changes to ensure that bugs have been fixed and no new bugs were introduced.
Key Characteristics:
• Brief, focused testing after small changes or bug fixes
• Not exhaustive - checks specific functionality
• Often unscripted and relatively informal
• Determines if further testing is warranted
• "Surface level" testing to ensure basic functionality works
Smoke Testing
Definition: Smoke testing is a preliminary test to reveal simple failures severe enough to reject a
prospective software release. It determines if the build is stable enough for further testing.
Key Characteristics:
• Performed after software build is created
• Covers core functionality in a quick manner
• Critical path testing to ensure key features work
• "Build verification testing"
• Decides whether to proceed with more intensive testing
• Usually automated and part of CI/CD pipeline
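A minimal smoke-test sketch: a few quick checks on critical paths of a hypothetical application decide whether the build is worth deeper testing (both check functions are stand-ins):

```python
# Stand-in for pinging the deployed build's health endpoint.
def health_check():
    return {"status": "ok"}

# Stand-in for a trivial fetch of the application's home page.
def can_load_homepage():
    return True

def smoke_test():
    """Return True only if every critical-path check passes."""
    checks = [health_check()["status"] == "ok", can_load_homepage()]
    return all(checks)  # one failure is enough to reject the build

build_is_stable = smoke_test()
```

In a CI/CD pipeline this result would gate the next stage: if `smoke_test()` fails, the build is rejected before any intensive test suites run.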
Black Box Testing
Definition: Black box testing examines the functionality of an application without knowledge of its
internal structure, code, or implementation details.
Key Characteristics:
• Tester has no knowledge of internal workings
• Tests are based on requirements and specifications
• Focus is on inputs and expected outputs
• Testing from an external or user perspective
• Identifies requirement and design issues
Monkey Testing
Definition: Monkey testing involves feeding random inputs to the system to check whether it crashes or exhibits major issues when faced with unexpected or random actions.
Key Characteristics:
• Random testing without specific test cases
• Inputs generated randomly or by specialized tools
• Can discover unexpected bugs not found in structured testing
• Less systematic than other testing approaches
• Used for robustness and stability validation
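A minimal monkey-testing sketch: bombard a hypothetical input parser with random strings and check only that it never crashes, whatever it is given:

```python
import random
import string

# Hypothetical function under test: must accept or reject input, never raise.
def parse_age(text):
    """Return an int age, or None for invalid input."""
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

random.seed(42)  # seeded so the random run is reproducible
for _ in range(1000):
    length = random.randint(0, 10)
    junk = "".join(random.choices(string.printable, k=length))
    result = parse_age(junk)  # must not raise, whatever the input
    assert result is None or isinstance(result, int)
```

Note there are no expected outputs per input; the only property checked is robustness, which is what distinguishes monkey testing from scripted test cases.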
Black Box Testing vs. White Box Testing
Black Box Testing
Definition: Testing approach where the tester doesn't know the internal code structure and tests the
software based only on requirements and specifications.
Key Characteristics:
• No knowledge of internal implementation
• Based on external specifications
• Tests inputs and outputs
• Focuses on functional behavior
• Techniques include: Equivalence partitioning, Boundary value analysis, Decision tables
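The boundary value analysis technique named above can be sketched briefly. Assume a hypothetical specification, "orders of 100 or more get 10% off"; the tests are derived from that specification alone, not from the code:

```python
# Hypothetical function under black-box test; the tester only knows the
# specification: "orders of 100 or more get 10% off".
def discounted(total):
    return total * 0.9 if total >= 100 else total

# Boundary values around the 100 threshold, plus one value from each
# equivalence partition (below the boundary / at-or-above it).
cases = {99: 99, 100: 90.0, 101: 90.9, 50: 50, 200: 180.0}
for amount, expected in cases.items():
    assert abs(discounted(amount) - expected) < 1e-9  # float-safe comparison
```

An off-by-one mistake in the implementation (`> 100` instead of `>= 100`) would be caught by the `100` boundary case, which is exactly what boundary value analysis targets.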
White Box Testing
Definition: Testing approach where the tester examines the internal structure, code, and
implementation details of the software to design test cases.
Key Characteristics:
• Full knowledge of internal implementation
• Based on code structure
• Tests internal operations
• Focuses on code coverage
• Techniques include: Path testing, Statement coverage, Branch coverage
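As a small illustration of branch coverage, consider a hypothetical function with two branches; white-box test design reads the code's structure and picks one input per branch:

```python
# Hypothetical function under white-box test: two branches to cover.
def classify(n):
    if n < 0:                 # branch A: taken when n is negative
        return "negative"
    return "non-negative"     # branch B: taken otherwise

# One input per branch gives 100% branch coverage of classify(),
# which here also implies full statement coverage.
assert classify(-1) == "negative"      # exercises branch A
assert classify(0) == "non-negative"   # exercises branch B (boundary input)
```

For larger functions, coverage tools measure which statements and branches the test suite actually executed, guiding where more white-box tests are needed.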
Comparison
| Aspect             | Black Box Testing                                   | White Box Testing                              |
|--------------------|-----------------------------------------------------|------------------------------------------------|
| Knowledge of code  | No knowledge required                               | Detailed knowledge required                    |
| Focus              | Functionality from user perspective                 | Internal structure and logic                   |
| Performed by       | Typically testers                                   | Typically developers                           |
| Test design basis  | Requirements and specifications                     | Source code and design                         |
| Testing techniques | Equivalence partitioning, boundary value analysis   | Statement coverage, branch coverage, path testing |
| Detects            | Missing functions, interface errors                 | Logic errors, implementation errors            |
| Timing             | Can start earlier, even with incomplete implementation | Requires code to be written                 |
| Skill requirement  | Understanding of system functionality               | Programming knowledge                          |
Performance Testing and Its Types
Definition: Performance testing evaluates how a system performs under a particular workload in
terms of responsiveness, stability, scalability, and resource usage.
Types of Performance Testing:
1. Load Testing: Definition: Evaluates system performance under expected user loads to
identify bottlenecks.
o Aims to determine system behavior under normal and peak load conditions
o Verifies if system meets performance requirements
o Helps determine maximum operating capacity
2. Stress Testing: Definition: Tests system behavior beyond normal operational capacity to
determine breaking points.
o Pushes system beyond specified limits
o Identifies how system fails and recovers
o Determines system stability under extreme conditions
3. Endurance Testing (Soak Testing): Definition: Verifies system stability over an extended
period of continuous operation.
o Runs system under expected load for extended periods
o Detects memory leaks, resource depletion issues
o Validates system behavior during sustained use
4. Spike Testing: Definition: Tests system response to sudden, significant increases and
decreases in load.
o Evaluates system handling of dramatic load changes
o Determines recovery time after sudden user spikes
o Identifies performance degradation during traffic surges
5. Volume Testing: Definition: Tests system handling of large volumes of data.
o Validates database performance with large datasets
o Verifies system response with substantial data volumes
o Tests data-related operations (reads, writes, updates)
6. Scalability Testing: Definition: Determines system's capability to scale up or down based on
user load changes.
o Tests gradual scaling of users, data, or transactions
o Identifies performance bottlenecks during scaling
o Helps in capacity planning
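The load-testing idea above can be sketched in miniature: call a hypothetical operation repeatedly, record response times, and summarize. Real load tests use dedicated tools with many concurrent virtual users; this only illustrates measuring responsiveness under a workload:

```python
import time

# Stand-in for real request-handling work in a hypothetical system.
def handle_request():
    return sum(range(1000))

# Simulated load: 200 sequential requests, each one timed.
durations = []
for _ in range(200):
    start = time.perf_counter()
    handle_request()
    durations.append(time.perf_counter() - start)

# Summary statistics a load test would compare against performance requirements.
avg_ms = 1000 * sum(durations) / len(durations)
worst_ms = 1000 * max(durations)
```

A real load test would then check these numbers against the system's stated performance requirements (e.g. "average response under 200 ms at peak load").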
Security Testing and Its Types
Definition: Security testing identifies vulnerabilities in software systems and ensures that data and
resources are protected from possible intruders.
Types of Security Testing:
1. Vulnerability Scanning: Definition: Automated testing to identify known security
vulnerabilities in systems and networks.
o Uses automated tools to scan for known vulnerabilities
o Identifies security holes, misconfigurations
o Regular scanning helps maintain security posture
2. Penetration Testing: Definition: Simulated attacks on a system to identify security
weaknesses that could be exploited.
o Ethical hacking to find exploitable vulnerabilities
o Tests defense mechanisms
o Provides real-world attack simulation
o Often performed by specialized security experts
3. Security Scanning: Definition: Comprehensive analysis of systems to identify security
weaknesses.
o Includes network and system scanning
o More in-depth than vulnerability scanning
o Covers both automated and manual techniques
4. Risk Assessment: Definition: Systematic process to identify security risks and their potential
impact.
o Identifies and prioritizes security risks
o Evaluates potential impact and likelihood
o Guides security resource allocation
5. Security Auditing: Definition: Systematic evaluation of security against established criteria or
requirements.
o Reviews security controls and policies
o Verifies compliance with security standards
o Internal or external auditing processes
6. Ethical Hacking: Definition: Authorized attempt to gain unauthorized access to systems to
identify security weaknesses.
o Comprehensive security assessment
o Uses hacker techniques for benign purposes
o More extensive than standard penetration testing
7. Posture Assessment: Definition: Overall evaluation of security standing, including policies,
network security, vulnerabilities, and potential threats.
o Comprehensive security status evaluation
o Combines multiple security testing techniques
o Provides holistic view of security posture
Other Types of Non-Functional Testing
Usability Testing
Definition: Usability testing evaluates how user-friendly and intuitive a software application is by
testing it with representative users.
Key Characteristics:
• Focuses on ease of use and user satisfaction
• Often involves observing real users performing tasks
• Evaluates learnability, efficiency, memorability, errors, and satisfaction
• Methods include think-aloud protocols, surveys, and task analysis
• Crucial for customer-facing applications
Compatibility Testing
Definition: Compatibility testing verifies that software works correctly across different environments,
hardware, operating systems, network environments, devices, and browsers.
Key Characteristics:
• Tests application across various platforms and environments
• Ensures consistent behavior in different configurations
• Types include:
o Hardware compatibility
o Operating system compatibility
o Browser compatibility
o Mobile device compatibility
o Network compatibility
• Especially important for web and mobile applications
Incremental Sandwich Testing
Definition: Incremental sandwich testing combines both top-down and bottom-up integration
approaches simultaneously, integrating and testing both high-level and low-level components first,
then working toward the middle layers.
Key Characteristics:
• Middle layers are tested last after top and bottom layers
• Requires both stubs (for top-down) and drivers (for bottom-up)
• Allows parallel testing of different system layers
• Suitable for complex systems with clearly defined architectural layers
• Combines advantages of both incremental approaches
• Allows early validation of both UI flows and core functionality