BBST: Exploring
Overview
I coined the phrase "exploratory testing" in 1983 to describe the practice of some of the best testers in Silicon Valley. Naturally, the concept has evolved (and diverged) over the years. ET has been a lightning rod for criticism, some of it justified. This lecture considers where I think we are now: the controversies, misunderstandings, and valid concerns surrounding ET. ET has become fashionable. To accommodate non-students who have asked for access to the ET videos, I cross-reference some points from prior videos. If you are taking the full BBST course, I trust/hope that these cross-references will provide a helpful review.
Black Box Software Testing Copyright Kaner 2006 2
Outline
An opening contrast: Scripted testing
The nature of testing
The other side of the contrast: Exploration
Exploratory testing: Learning
Exploratory testing: Design
Exploratory testing: Execution
Exploratory testing: Interpretation
Exploratory testing after 23 years
Scripted testing
A script specifies
> the test operations
> the expected results
> the comparisons the human or machine should make
These comparison points are useful, but fallible and incomplete, criteria for deciding whether the program passed or failed the test.
Scripts can control
> manual testing by humans
> automated test execution or comparison by machine
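To make the contrast concrete, here is a minimal sketch of a scripted test in Python. The `discount` function and its inputs are hypothetical, invented for illustration; the point is that the operations, the expected results, and the comparison are all fixed at design time.

```python
# A scripted test fixes three things in advance: the operations to perform,
# the expected results, and the comparison that decides pass/fail.
# The function under test here is hypothetical.

def discount(order_total):
    """Apply a 10% discount to orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# The script: (input, expected result) pairs chosen at design time.
script = [
    (99.0, 99.0),    # below the discount threshold
    (100.0, 90.0),   # at the threshold
    (200.0, 180.0),  # above the threshold
]

def run_script():
    results = []
    for order_total, expected in script:
        actual = discount(order_total)
        # The comparison point: useful, but fallible and incomplete --
        # it checks only the return value, nothing else about the behavior.
        results.append((order_total, actual == expected))
    return results

print(run_script())
```

Note how narrow the comparison is: the script passes or fails on the listed values only, exactly the fallibility and incompleteness described above.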
Scripts require a big investment. What do we get back?
The scripting process provides opportunities to achieve several key benefits:
> Careful thinking about the design of each test, optimizing it for its most important attributes (power, credibility, whatever)
> Review by other stakeholders
> Reusability
> Known comprehensiveness of the set of tests
If we consider the set sufficiently comprehensive, we can calculate as a metric the percentage completed of these tests.
[Diagram: the program under test receives intended inputs, configuration and system resources, and input from other cooperating processes, clients, or servers; we monitor its outputs, its impacts on connected devices / system resources, and its output to other cooperating processes, clients, or servers]
The scripted approach means the test stays the same, even though the risk profile is changing.
Who is in a better position to spot changes in risk or to notice new variables to look at?
Analogy to Manufacturing QC
Scripting makes a lot of sense because we have:
> A fixed design
> Well-understood risks
> The same set of errors appearing on a statistically understood basis
> Tests for the same things on each instance of the product
A suite of regression tests becomes a pool of tests that have one thing in common: the program has passed all of them. That's OK for manufacturing QC. But for software?
Analogy to Design QC
The difference between manufacturing defects and design defects is that:
> A manufacturing defect appears in an individual instance of the product
> A design defect appears in every instance of the product
The challenge is to find new design errors, not to look over and over and over again for the same design error.
Software testing is assessment of a design, not of the quality of manufacture of the copy.
Unless you are the outsource service provider, scripting is probably an industry worst practice for design QC.
Imagine
Imagine crime scene investigators (real investigators of real crime scenes) following a script. How effective do you think they would be?
As service providers, it is our task to learn (or figure out) what services our clients want or need this time, and under these circumstances.
> How to decide what other result variables to attend to in the event of intermittent failure
> How to troubleshoot and simplify a failure, so as to better
  - motivate a stakeholder who might advocate for a fix
  - enable a fixer to identify and stomp the bug more quickly
> How to expose, and who to expose to, undelivered benefits, unsatisfied implications, traps, and missed opportunities
Different objectives require different testing tools and strategies and will yield different tests, different test documentation and different test results.
We pick the technique that provides the best set of attributes, given the information objective and the context.
Test attributes
Power: If a problem exists, the test will reveal it
Valid: If the test reveals a problem, it is a genuine problem
Value: Reveals things your clients want to learn
Credible: The client will believe people will do what's done in this test
Motivating: The client will want to fix problems exposed by this test
Representative: Of events most likely to be encountered by the user (xref: Musa's Software Reliability Engineering)
Non-redundant: Represents a larger set that addresses the same risk
Performable: The test can be performed as designed
Maintainable: Easy to update to reflect product changes
Repeatable: Easy and inexpensive to reuse the test
Potential disconfirmation: Critical case for proving / disproving a key assumption or relationship (xref: Karl Popper, Conjectures & Refutations)
Coverage: Exercises the product in ways not handled by other tests
Easy to evaluate
Supports troubleshooting: Provides useful information for the debugging programmer
Appropriately complex: As programs get more stable, you can use more complex tests to better simulate use by experienced users
Accountable: You can explain, justify, and prove you ran it
Cost: Includes time and effort as well as direct costs
Opportunity cost: Developing and performing this test prevents you from doing other work
The fundamental difference between test techniques lies in how much they emphasize each attribute.
"Not focused" doesn't mean "never is." It means that this is a factor that we don't treat as critical in developing or evaluating this type of test.
Under this view:
> Quality is inherently subjective
> Different stakeholders will perceive the same product as having different levels of quality
Software error
An attribute of a software product that reduces its value to a favored stakeholder or increases its value to a disfavored stakeholder, without a sufficiently large countervailing benefit. An error:
> May or may not be a coding error
> May or may not be a functional error
Tasks beyond your personal skill set may still be within your scope.
Software testing
is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test
Unscripted doesn't mean unprepared. It's about enabling choice, not constraining it.
The exploratory tester is always responsible for managing the value of her own time.
At any point in time, this might include:
> Reusing old tests
> Creating and running new tests
> Creating test-support artifacts, such as failure mode lists
> Conducting background research that can then guide test design
The explorer can do any combination of learning, designing, executing and interpreting at any time.
Exploratory testing
Learning: Anything that can guide us in what to test, how to test, or how to recognize a problem.
Design: "to create, fashion, execute, or construct according to plan; to conceive and plan out in the mind" (Webster's). Designing is not scripting. The representation of a plan is not the plan. Explorers' designs can be reusable.
Execution: Doing the test and collecting the results. Execution can be automated or manual.
Interpretation: What do we learn from the program as it performs under our test, about the product and about how we are testing the product?
> Study competitive products (how they work, what they do, what expectations they create)
> Research the history of this / related products (design / failures / support)
> Inspect the product under test (and its data) (create function lists, data relationship charts, file structures, user tasks, product benefits, FMEA)
> Question: identify missing info, imagine potential sources and potentially revealing questions (interview users, developers, and other stakeholders; use reference materials to supplement answers)
> Review written sources: specifications, other authoritative documents, culturally authoritative sources, persuasive sources
> Try out potentially useful tools
A model of learning
[Grid: knowledge dimensions (facts, concepts, procedures, cognitive strategies, models, skills, attitudes, metacognition) crossed with cognitive processes (remember, understand, apply, analyze, evaluate, create)]
This is an adaptation of Anderson and Krathwohl's learning taxonomy. For a summary and links, see http://www.satisfice.com/kaner/?p=14
Focusing on models
All tests are based on models. But any cognitive or perceptual psychologist will tell you that all perceptions and all judgments are based on models
> Most of which are implicit
So the question is: is it useful to focus on discovering, evaluating, extending, and creating models? Or are we sometimes better off leaving the models in the background while we focus on the things we are modeling?
Design: "to create, fashion, execute, or construct according to plan; to conceive and plan out in the mind" (Webster's). Designing is not scripting. The representation of a plan is not the plan. Explorers' designs can be reusable.
Execution: Doing the test and collecting the results. Execution can be automated or manual.
Interpretation: What do we learn from the program as it performs under our test, about the product and about how we are testing the product?
When it is not clear how to work backwards to the relevant test, four tactics sometimes help:
> Ask someone for help
> Ask Google for help (look for discussions of the type of failure; look for discussions of different faults and see what types of failures they yield)
> Review your toolkit of techniques, searching for a test type with relevant characteristics
> Turn the failure into a story and gradually evolve the story into something you can test from
There are no guarantees in this, but you get better at it as you practice, and as you build a broader inventory of techniques.
For more on developing testing stories, see the lectures on scenario testing.
Execution: Doing the test and collecting the results. Execution can be automated or manual.
Interpretation: What do we learn from the program as it performs under our test, about the product and about how we are testing the product?
Scripted execution
[Grid: knowledge dimensions crossed with cognitive processes, as above, highlighting the cells exercised in scripted execution]
The individual contributor (tester rather than test planner or manager)
Exploratory execution
[Grid: knowledge dimensions crossed with cognitive processes, as above, highlighting the cells exercised in exploratory execution]
The individual contributor (tester rather than test planner or manager)
Interpretation: What do we learn from the program as it performs under our test, about the product and about how we are testing the product?
Interpretation activities
Part of interpreting the behavior exposed by a test is determining whether the program passed or failed the test.
A mechanism for determining whether a program passed or failed a test is called an oracle. We discuss oracles in detail, on video and in slides.
Oracles are heuristic: they are incomplete and they are fallible.
One of the key interpretation activities is determining which oracle is useful for a given test or test result.
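As an illustration (not from the lecture), one common heuristic oracle is consistency with a reference implementation. In this sketch, a hypothetical `fast_sort` under test is checked against Python's built-in `sorted`. The comments flag exactly the heuristic character described above: the oracle is incomplete (it says nothing about stability, speed, or memory) and fallible (the reference might share a bug with the product).

```python
def fast_sort(xs):
    # Hypothetical implementation under test; stands in for the real product.
    return sorted(xs)

def reference_oracle(xs, output):
    """Heuristic oracle: consistency with a trusted reference.

    Incomplete: it checks only ordering and content, not stability,
    speed, or memory use.
    Fallible: if the reference shares a bug with the product, the
    oracle reports a pass anyway.
    """
    return output == sorted(xs)

data = [3, 1, 2, 2, -5]
print(reference_oracle(data, fast_sort(data)))   # consistent with the reference
print(reference_oracle(data, [1, 2, 2, 3]))      # inconsistent: an element was lost
```

Part of interpretation, then, is choosing which oracle (reference, heuristic consistency with history, claims, user expectations, etc.) fits this test and this result.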
Jon Bach, Mike Kelly, and James Bach are working on a broad listing / tutorial of ET activities. See Exploratory Testing Dynamics at
http://www.quardev.com/whitepapers.html
We reviewed preliminary drafts at the Exploratory Testing Research Summit (spring 2006) and Consultants Camp 2006 (August), looking specifically at teaching issues. This short paper handout provides an outline for what should be a 3-4 day course. It's a stunningly rich set of skills. In this abbreviated form, the lists are particularly useful for audit and mentoring purposes, to highlight gaps in your test activities or those of someone whose work you are evaluating.
Areas of agreement*
> Definitions
> Everyone does ET to some degree
> ET is an approach, not a technique
> ET is the response (the antithesis) to scripting
> But a piece of work can be a blend, to some degree exploratory and to some degree scripted
* Agreement among the people who agree with me (many of whom are sources of my ideas). This is a subset of the population of ET-thinkers who I respect, and a smaller subset of the pool of testers who feel qualified to write about ET. (YMMV)
Areas of controversy
ET is not quicktesting
A quicktest (or an attack) is a cheap test that requires little preparation, knowledge, or time to perform.
A quicktest is a technique that starts from a theory of error (how the program could be broken) and generates tests optimized for errors of that type.
Example: boundary analysis (domain testing) is optimized for misclassification errors (IF A<5 miscoded as IF A<=5).
Quicktesting may be more like scripted testing or more like ET, depending on the mindset of the tester.
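The boundary example can be made concrete in a few lines (hypothetical code, for illustration). Interior values of either partition cannot distinguish the correct predicate from the miscoded one; only a test at the boundary value itself reveals the fault, which is why boundary analysis targets this theory of error.

```python
def intended(a):
    # Specification: values below 5 are "small".
    return "small" if a < 5 else "large"

def miscoded(a):
    # The classic misclassification error: < miscoded as <=.
    return "small" if a <= 5 else "large"

# Tests well inside either partition pass both versions...
assert intended(2) == miscoded(2)   # both say "small"
assert intended(9) == miscoded(9)   # both say "large"

# ...only the boundary value itself distinguishes them.
assert intended(5) == "large"
assert miscoded(5) == "small"
assert intended(5) != miscoded(5)
```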
Areas of controversy
ET is not quicktesting
ET is about learning and choice, not about constraints on scope. If our stakeholders need the information, and we can provide the information, it's in our scope.
Areas of controversy
ET can involve tools of any kind and can be as computer-assisted as anything else we would call automated.
Along with traditional test automation tools, there are
> emerging tool support for ET, such as Test Explorer and BBTest Assistant
> better thought-support tools, like Mind Manager and Inspiration
> qualitative analysis tools, like Atlas.ti
[State diagram of a telephone: states Idle, Ringing, and Connected, with transitions between them including "caller hung up" and "you hung up"]
This testing is automated glass box, but a classic example of exploratory testing.
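A sketch of what such automated exploration might look like (the state names come from the diagram; the transition table, event names, and harness are illustrative assumptions, not the actual test from the lecture): drive the system with a random walk over a state model of the phone, using the model itself as the oracle for every transition.

```python
import random

# Assumed state model of the telephone: (state, event) -> next state.
MODEL = {
    ("Idle", "incoming call"): "Ringing",
    ("Ringing", "answer"): "Connected",
    ("Ringing", "caller hung up"): "Idle",
    ("Connected", "caller hung up"): "Idle",
    ("Connected", "you hung up"): "Idle",
}

class Phone:
    """Stand-in for the system under test; a real harness would drive the product."""
    def __init__(self):
        self.state = "Idle"
    def fire(self, event):
        # The real system would change state on its own; here we simulate it.
        self.state = MODEL[(self.state, event)]

def random_walk(steps=1000, seed=0):
    rng = random.Random(seed)
    phone = Phone()
    for _ in range(steps):
        state = phone.state
        # Pick any event that is legal in the current model state.
        legal = [e for (s, e) in MODEL if s == state]
        event = rng.choice(legal)
        phone.fire(event)
        # Oracle: the product's new state must match the model's prediction.
        assert phone.state == MODEL[(state, event)], (state, event, phone.state)
    return phone.state

print(random_walk())
```

Each run explores a different path through the model, which is what makes this automated glass-box testing exploratory rather than scripted: the specific test sequence is generated, not fixed in advance.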
Areas of controversy
> ET is not quicktesting
> ET is not only functional testing
> ET can involve tools of any kind and can be as computer-assisted as anything else we would call automated
ET is a way of testing
We learn about the product in its market and technological space (and keep learning until the end of the project).
We take advantage of what we learn to design better tests and interpret results more sagely.
We run the tests, shifting our focus as we learn more, and learn from the results.
Areas of controversy
> ET is not quicktesting
> ET is not only functional testing
> ET can involve tools of any kind and can be as computer-assisted as anything else we would call automated
> ET is not focused primarily around test execution
Areas of controversy
> ET is not quicktesting
> ET is not only functional testing
> ET can involve tools of any kind and can be as computer-assisted as anything else we would call automated
> ET is not focused primarily around test execution
> ET can involve complex tests that require significant preparation
Areas of progress
We know a lot more about quicktests
Well-documented examples from Whittaker's How to Break series and from Hendrickson's and Bach's courses
Areas of progress
> We know a lot more about quicktests
> We have a better understanding of the oracle problem and oracle heuristics
> We have growing understanding of ET in terms of theories of learning and cognition
Areas of progress
For more on psychological issues in testing, see my presentation on Software Testing as a Social Science
www.kaner.com/pdfs/KanerSocialScienceDal.pdf
What level of skill, domain knowledge, intelligence, testing experience (overall strength in testing) does exploratory testing require?
We are still early in our wrestling with modeling and implicit models > How to teach the models > How to teach how to model
Construct validity (a key issue in measurement theory) is still an unknown concept in Computer Science.
Closing notes
If you want to attack any approach to testing as unskilled, attack scripted testing.
If you want to hammer any testing approach on coverage, look at the fools who think they have tested a spec or requirements document when they have one test case per spec item, or code with one test per statement / branch / basis path.
Testing is a skilled, fundamentally multidisciplinary area of work.
Exploratory testing brings to the fore the need to adapt to the changing project with the information available.