STE Unit 1


Software Testing UNIT 01

1.1 Software Testing

Software testing is a crucial process in the software development lifecycle, ensuring the quality, functionality, and performance of software before it's released to users. It's like a thorough examination to identify any bugs or errors that could hinder the software's intended use.

Imagine a car being manufactured. Software testing is like taking the car out for a rigorous test drive on various terrains and conditions to ensure it runs smoothly, handles corners well, and brakes effectively before it's delivered to the customer.

Objectives of Software Testing

1. Identify and Prevent Defects
2. Verify Requirements and Specifications
3. Increase Confidence in the Product
4. Inform and Optimize Development
5. Enhance Growth and Innovation

Principles of Software Testing

1. Defect Presence, Not Absence: Identify bugs, not guarantee their absence. Focus on critical areas and high-impact issues.
2. Early Testing Saves: Catch bugs early and fix them cheaply. Integrate testing throughout the development process.
3. Defect Clustering: Bugs often cluster. Analyze past data to prioritize testing efforts.
4. Pesticide Paradox: Avoid repetitive testing methods. Use diverse techniques and tools for optimal coverage.
5. Context-Dependent Testing: Tailor your testing strategy to the software's context, audience, and use.
6. Early Tester Involvement: Include testers early in development for valuable feedback and design influence.
7. Absence of Error (Fallacy): Passed tests don't guarantee perfection. Zero bugs ≠ zero problems; flawless ≠ finished; test deeper.

1.2 Testing Terminologies

1. Failure: System outcome deviates from expectations.
2. Error: Mistake in code or logic causing malfunction.
3. Fault: Underlying condition enabling an error to occur.
4. Defect: Deviation from desired behavior or specification.
5. Bug: Slang term for a defect causing unexpected behavior.
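To make the distinction between these terms concrete, here is a minimal Python sketch (a hypothetical snippet, not from the original notes) showing how a programmer's error leaves a fault in the code, which only surfaces as a failure when a particular input exercises it.

    # Hypothetical example: error -> fault -> failure
    def average(values):
        # Error: the programmer mistakenly hard-coded the divisor as 2.
        # Fault/defect: the wrong divisor now sits in the code.
        return sum(values) / 2

    print(average([4, 6]))      # 5.0 -- happens to look correct, no failure observed
    print(average([1, 2, 3]))   # 3.0 -- failure: expected 2.0, the outcome deviates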
1.3 Test Case

A test case is a set of actions that verify if a software application is working as per the client's requirements. It includes the steps to follow, the input values to use, and the expected outcomes (a minimal example follows the attribute list below).

A test case has the following attributes:

1. Test Case ID
2. Objective
3. Pre-condition
4. Data to be tested
5. Steps to follow
6. Expected Results
7. Actual Results
8. Evaluation (Pass or Fail)
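As a sketch of how these attributes map onto an automated test, the pytest-style example below checks a made-up login() helper; the function, user data, and messages are assumptions for illustration only, not part of the original notes.

    # Hypothetical system under test: a minimal login() stub so the example runs.
    def login(username, password):
        valid = {"alice": "s3cret"}              # Pre-condition: user "alice" exists
        ok = valid.get(username) == password
        message = f"Welcome, {username}" if ok else "Invalid credentials"
        return ok, message

    # Test Case ID: TC-001 | Objective: verify login with valid credentials
    def test_tc_001_valid_login():
        ok, message = login("alice", "s3cret")   # Steps: call login with the test data
        assert ok is True                        # Expected Result: login succeeds
        assert message == "Welcome, alice"       # Actual vs Expected -> Pass/Fail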
Entry Criteria:

• Documents and artifacts: All necessary documents, like requirements, specifications, test plans, and test cases, should be finalized and approved.
• Environment setup: The testing environment, including hardware, software, and network configurations, should be ready and stable.
• Resources available: Testers, tools, and any other required resources should be readily available.
• Dependencies met: All prior phases and their deliverables should be completed and validated.

Exit Criteria:

• Test cases executed: All planned test cases should be executed, documented, and reviewed.
• Defects identified and documented: All identified bugs, defects, and issues should be documented with clear descriptions, steps to reproduce, and severity levels.
• Test reports finalized: Comprehensive test reports summarizing the testing process, findings, and recommendations should be created.
• Pass/fail criteria met: The project or phase should meet the defined success criteria, often based on defect rates, coverage, and functionality.

1.4 V Model

Verification & Validation

Feature | Verification | Validation
Focus | Internal processes | External results
Stage | Throughout development | At the end
Questions Asked | Did we build it right? | Did we build the right thing?
Methods | Static analysis, reviews, walkthroughs | Testing, user feedback
Scope | Design, coding, documentation | Functionality, usability, user needs
Responsibility | Developers, QA teams | Users, stakeholders
Outcome | Internal defects, errors | Misalignment with requirements, user needs
Example | Verifying a login form works | Validating if users find it easy to use
Goal | Building it right | Building the right thing
Synergy | Crucial for high-quality software | Work together to meet user needs

QC & QA

Feature | QC (Quality Control) | QA (Quality Assurance)
Focus | Detecting defects in finished products | Preventing defects throughout the development process
Role | Reactive, verifies quality after production | Proactive, ensures quality throughout development
Methods | Inspection, testing, sampling | Reviews, requirements analysis, risk assessment, testing
Scope | Specific product or batch | Entire development lifecycle, processes, and tools
Responsibility | Independent inspectors or production personnel | Embedded in the development team or dedicated QA team
Goal | Minimizing defects in delivered products | Building quality into the process from the start
Outcome | Identifies and corrects existing defects | Prevents defects and improves overall quality
Example | Checking manufactured items for cosmetic flaws | Analyzing requirements to identify potential bugs in the design stage
Overall Approach | Reactive, catching problems at the end | Proactive, preventing problems before they occur
Synergy | Complements QA by ensuring quality of final product | Foundation for effective QC by reducing defects in the first place
1.5 Static Testing & Dynamic Testing

Feature | Static Testing | Dynamic Testing
Execution Method | Analyzes code, documents, and design without running the program | Runs the program with test data to observe behavior
Timing | Performed early in the development lifecycle | Performed throughout the development lifecycle
Focus | Code logic, syntax, structure, requirements | Functionality, performance, user experience
Tools | Static code analyzers, code review tools, requirements checkers | Unit testing frameworks, integration testing tools, performance testing tools
Strengths | Detects errors early and cheaply, improves code quality | Identifies runtime issues, verifies real-world behavior
Weaknesses | Can't find all bugs, may generate false positives | Requires test data and execution environment, can be resource-intensive
Examples | Checking for syntax errors, identifying dead code, verifying requirements completeness | Unit testing individual functions, UI testing user interactions, performance testing under load
Overall Approach | Proactive, prevents defects before coding | Reactive, identifies defects by observing program execution
Ideal Use | Early in the development process, for code reviews and continuous integration | Throughout development and after deployment for regression testing and performance monitoring
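As a small illustration of the contrast, the hypothetical Python snippet below contains an issue a static check (a review or analyzer) can flag without running anything, plus a dynamic unit test that only exposes a problem once the code is executed; the function and values are made up for illustration.

    # Static testing view: a reviewer or analyzer can flag the unreachable
    # (dead) code below just by reading it -- no execution needed.
    def apply_discount(price):
        return price * 0.9
        print("discount applied")   # dead code: never runs, flagged statically

    # Dynamic testing view: running the code with test data exposes behavior
    # that reading alone might miss (a negative price is not rejected).
    def test_apply_discount_rejects_negative_price():
        assert apply_discount(-100) >= 0   # fails at runtime: result is -90.0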
1.6 White Box Testing

• Catches logic errors, boundary issues, and unreachable code.
• Improves code quality and understanding.
• Enables comprehensive coverage of internal workings.
• Can be time-consuming and complex to implement.
• Requires knowledge of programming languages and testing frameworks.
• May not uncover user-facing issues entirely.
Inspections:

• Most formal and structured: Require specific training for participants, including a moderator, inspector, and recorder.
• Follow a defined checklist: Focus on specific defects like logic errors, security vulnerabilities, and non-compliance with coding standards.
• Highly thorough: Aim to find as many defects as possible in a single session.
• Best for critical modules or high-risk projects.

Walkthroughs:

• Less formal than inspections: Typically involve only the author, a reviewer or two, and perhaps a moderator.
• Informal discussion: Author narrates the code, focusing on logic flow and decision points, while reviewers ask questions and suggest improvements.
• Useful for early drafts or catching major design flaws.

Technical Reviews:

• Informal discussion or presentation: Similar to walkthroughs but broader in scope, often covering design documents, specifications, and test plans alongside code.
• Focus on overall quality and feasibility: Aim to identify technical flaws, inconsistencies, and potential risks.
• Good for final stages of development or evaluating new technologies.

Code Coverage

• Statement Coverage: Aims to execute every line of code at least once. A good starting point, but potentially insufficient.
• Branch Coverage: Ensures all branches of conditional statements (if-else, etc.) are tested. More thorough than statement coverage.
• Path Coverage: Executes all possible execution paths through the code. The most comprehensive but also the most complex and expensive to achieve.
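As a rough sketch of the first two coverage levels (the grade() function and its threshold are made up for illustration), the first test below executes every statement of the function, yet only the two tests together exercise both outcomes of the branch.

    # Hypothetical function used to contrast statement and branch coverage.
    def grade(score):
        result = "fail"
        if score >= 40:
            result = "pass"
        return result

    def test_statement_coverage_only():
        # Executes every statement (the if-branch is taken), but the case
        # where the condition is False is never exercised.
        assert grade(75) == "pass"

    def test_branch_coverage_added():
        # Covering the False outcome as well gives full branch coverage.
        assert grade(10) == "fail"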
1.7 Black Box Testing

• Tests the "what," not the "how": Focuses on user experience and outputs.
• Like a closed box: Only interacts from outside, not seeing the internal workings.
• User-centric testing: Ensures software feels natural and does what users expect.
• Diverse techniques: Partitioning, boundary values, error guessing, exploration.
• Independent testing: Unbiased evaluations from fresh eyes.

Equivalence Partitioning: Divide input data into groups of "good" and "bad" based on expected behaviour, ensuring all scenarios are covered.

Boundary Value Analysis: Test the edges of these partitions, like minimum and maximum allowed values, to catch potential flaws at extremes.
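To make both techniques concrete, the sketch below assumes a hypothetical age field that must accept values from 18 to 60: equivalence partitioning picks one representative value from each partition, while boundary value analysis probes the values at and just beyond the edges.

    # Hypothetical validator: accepts ages from 18 to 60 inclusive.
    def is_valid_age(age):
        return 18 <= age <= 60

    # Equivalence partitioning: one representative value per partition.
    def test_equivalence_partitions():
        assert is_valid_age(5) is False     # partition: below the valid range
        assert is_valid_age(35) is True     # partition: inside the valid range
        assert is_valid_age(80) is False    # partition: above the valid range

    # Boundary value analysis: test the edges of the valid partition.
    def test_boundary_values():
        assert is_valid_age(17) is False    # just below the minimum
        assert is_valid_age(18) is True     # minimum allowed value
        assert is_valid_age(60) is True     # maximum allowed value
        assert is_valid_age(61) is False    # just above the maximum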
