Black Box Testing
Timisoara, 2011
Table of Contents
Chapter 1. Introduction to Black Box Testing
Chapter 2. Testing Strategies/Techniques
Chapter 3. Performing Black Box Testing
  3.1 Test Case Format
  3.2 Clear Descriptions
Chapter 4. Black Box Test Case Automation
Chapter 5. Advantages and Disadvantages of Black Box Testing
  5.1 Advantages
  5.2 Disadvantages
Chapter 6. Conclusions
Chapter 7. References
Validation takes place after verifications are completed. Software validation is the process of assessing/evaluating the system or component at the end of the development process to determine whether it satisfies its requirements. In other words, it is making sure the end result satisfies the requirements.

Verification: Are we building the product right?
Validation: Are we building the right product?
Figure 1: Verification vs. Validation

The following terms with their associated definitions are helpful for understanding these concepts:
- Mistake: a human action that produces an incorrect result.
- Fault [or Defect]: an incorrect step, process, or data definition in a program.
- Failure: the inability of a system or component to perform its required function within the specified performance requirement.
- Error: the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
- Specification: a document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system or component, and often the procedures for determining whether these provisions have been satisfied.

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. Black box testing is sometimes also called "opaque testing", "functional/behavioral testing", or "closed box testing".

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. Various testing types that fall under the black box testing strategy are: functional testing, stress testing, recovery testing, volume testing, user acceptance testing (UAT), system testing, sanity or smoke testing, load testing, usability testing, exploratory testing, ad-hoc testing, alpha testing, beta testing, etc. These testing types are further divided into two groups: a) testing in which the user plays the role of the tester, and b) testing in which the user is not required.
Figure 2: Black Box Testing. A black-box test takes into account only the input and output of the software without regard to the internal code of the program.
First, you give each test case a unique identifier. When you are tracking large projects, you might need to itemize those test cases that have not yet passed. This identifier is recorded in the first column. For example, you might need to say something like, "All my test cases are running except playerMovement1. I'm working on that one today." Next, in the second column of the table, you specifically describe the set of steps and/or input for the particular condition you want to test (including what needs to be done to prepare for the test case to be run). The third column is the expected results: what is expected to come out of the black box, based upon the input described in the description. An oracle is any program, process, or body of data that specifies the expected outcome of a set of tests as applied to a tested object [BEI90]; an input/output oracle is an oracle that specifies the expected output for a specified input [BEI90]. In the last column, the actual results are recorded after the tests are run. If a test passes, the actual results will indicate "Pass". If a test fails, it is helpful to record "Fail" and a description of the failure (what came out) in the actual results column.
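As an illustration of how these four columns map onto an automated test, here is a minimal JUnit sketch. The Board class and the playerMovement1 scenario are hypothetical stand-ins invented for this example, not code from an actual Monopoly implementation:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal sketch of a system under test: a board of 40 squares
// where moving a player wraps around past Go. Invented for illustration.
class Board {
    private final int[] positions = new int[8];   // square per player id

    void placePlayer(int playerId, int square) { positions[playerId] = square; }

    void movePlayer(int playerId, int roll) {
        positions[playerId] = (positions[playerId] + roll) % 40;
    }

    int getSquareOf(int playerId) { return positions[playerId]; }
}

public class PlayerMovementTest {

    // Test ID: playerMovement1 (the identifier becomes the method name)
    // Description: Player 1 starts on Go (square 0) and rolls a 3.
    // Expected result: Player 1 is on square 3.
    @Test
    public void playerMovement1() {
        Board board = new Board();
        board.placePlayer(1, 0);     // precondition: start on Go
        board.movePlayer(1, 3);      // input: a roll of 3

        // The assertion plays the role of the "Actual Results" column:
        // the framework records Pass or Fail automatically.
        assertEquals(3, board.getSquareOf(1));
    }
}

Note how the identifier becomes the method name, the description and precondition become comments and setup code, and the assertion replaces the hand-filled actual-results column.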
Table 2: Poor Specification of a Test Case

The problem is that the description does not give exact values for how many spaces the players moved. This is an overly simplistic example, but maybe the program crashes for some reason when Player 1 and Player 2 land on the same spot. If you don't remember what was actually rolled (you let the rolls be determined randomly and don't record them), you might never be able to cause the problem to happen again, because you don't remember the circumstances leading up to it. Recreating the problem is essential in testing so that problems that are identified can be repeated and corrected. Instead, write specific descriptions, such as shown in Table 3.
Table 3: Preferred Specification of a Test Case

There are a few things to notice about the test cases in Table 3. First, notice the Precondition in the Description field. The precondition defines what has to happen before the test case can run properly. There may be an order of execution [COP04] whereby a test case may depend upon another test case running successfully and leaving the system in a state such that the second test case can successfully be executed. For example, maybe one test case (call it Test 11) tests whether a new user can create an ID in a system. Another test case (call it Test 22) may depend upon this new user logging in. Therefore, Test 11 must run before Test 22 can run. Additionally, if Test 11 fails, then Test 22 cannot be run yet. Alternately, perhaps Test 11 passes but Test 22 fails. Later, when the functionality is fixed, Test 11 must be re-run before the testers try to re-run Test 22. Or, maybe a database or the system needs to be re-initialized before a test case can run.

There's also something else important to notice in the Preconditions for test case 3 in Table 3. How can the test case ensure the player rolled a 3 when the value of the dice rolls needs to be random in the real game? Sometimes we have to add a bit of extra functionality to put a program in "test mode" so we can run our test cases in a repeatable manner and so we can easily force a condition to happen. For example, we may want to test what happens when a player lands on Go or on Go to Jail and want to force this situation to occur. The Monopoly programmers needed to create a test mode in which (1) the dice rolls could be input manually and (2) the amount of money each player starts with is input manually. It is also important to run some non-repeatable test cases in the regular game mode to check that random dice input does not appear to change expected behavior.

The expected results must also be written in a very specific way, as in Table 3. You need to record what the output of the program should be, given a particular input/set of steps. Otherwise, how will you know if the answer is correct (every time you run it) if you don't know what the answer is supposed to be? Perhaps your program performs mathematical calculations. You need to take out your calculator, perform some calculations by hand, and put the answer in the expected result field. You need to pre-determine what your program is supposed to do ahead of time, so you'll know right away whether your program responds properly or not.
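To make the test-mode idea concrete, here is a hedged sketch of how dice rolls might be injected so that a test case can force a particular roll. The Dice, ScriptedDice, and Game names are illustrative assumptions rather than the actual Monopoly code:

import java.util.Random;

// Sketch of a "test mode": the game asks a Dice interface for rolls,
// so tests can substitute a fixed sequence while the shipped game
// uses a random source. All names here are invented for illustration.
interface Dice {
    int roll();
}

class RandomDice implements Dice {
    private final Random random = new Random();
    public int roll() { return random.nextInt(6) + 1; }   // regular game mode
}

class ScriptedDice implements Dice {
    private final int[] rolls;
    private int next = 0;
    ScriptedDice(int... rolls) { this.rolls = rolls; }
    public int roll() { return rolls[next++]; }           // test mode: forced rolls
}

class Game {
    private final Dice dice;
    Game(Dice dice) { this.dice = dice; }                 // dice source is injected
    int takeTurn(int currentSquare) {
        return (currentSquare + dice.roll()) % 40;        // 40 squares, wraps at Go
    }
}

A test case whose precondition is "Player 1 rolled a 3" simply constructs new Game(new ScriptedDice(3)), making the run repeatable, while the shipped game constructs new Game(new RandomDice()), so production behavior is unchanged.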
Chapter 5. Advantages and Disadvantages of Black Box Testing

5.1 Advantages
- more effective on larger units of code than glass box testing
- tester needs no knowledge of implementation, including specific programming languages
- tester and programmer are independent of each other
- tests are done from a user's point of view
- will help to expose any ambiguities or inconsistencies in the specifications
- test cases can be designed as soon as the specifications are complete
5.2 Disadvantages
- only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- without clear and concise specifications, test cases are hard to design
- there may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
- may leave many program paths untested
- cannot be directed toward specific segments of code, which may be very complex (and therefore more error prone)
- most testing-related research has been directed toward glass box testing
Chapter 6. Conclusions
- You need to test for what the customer wants the program to do, not what the programmer programmed it to do. The programmer is biased (through no fault of his/her own) by knowing the intimate details of what the program does. Black box testing is best done by someone with a fresh, objective perspective on the customer requirements.
- Use the four-item test case template (ID, Description, Expected Results, Actual Results) when planning your test cases.
- In the test case, specify exactly what the tester has to do to create the desired input conditions and exactly how the program should respond (the output). Be explicit in this documentation so that multiple testers (other than yourself) would be able to run the exact same test case using the directions in the test case. These directions will be especially important if a failure needs to be recreated for the programmer to fix the underlying fault.
- Test early and often. Write the simplest test cases that could possibly reveal a mode of failure. (Test cases can also be error-prone.)
- Use equivalence class partitioning to manage the number of test cases run. Test cases in the same equivalence class will all reveal the same fault; a short sketch follows this list.
- Use boundary value analysis to find the very common bugs that lurk in corners and congregate at boundaries (also illustrated in the sketch below).
- Use decision tables to record complex business rules that the system must implement and that must be tested.
- Run the equivalence class test cases first. If the program doesn't work for the simplest case (smack in the middle of an equivalence class), it probably won't work for the boundaries either. If you run a boundary test first and it fails, you'll probably run the general case (equivalence class test) anyway before investigating the problem. So instead, just run the simple case first.
- Avoid having test cases depend upon each other (i.e., having preconditions of another test case passing). Consider that you have 17 test cases, each having a precondition of the prior test case passing, and you pass the first 16 test cases but fail the 17th. It takes you some time (until the next day) to debug your program. Now, in order to re-run the 17th test case to see if it passes, you first have to re-run the 16 you already know pass. This can be time consuming.
- Write each test case so that it can reveal one type of fault. Consider a test case that has three different forms of invalid input. If the test case fails, you might not know which of the three inputs made the test case fail, and you will have to run different, smaller test cases to see which of the inputs caused problems.
- Think diabolically! What are the worst things someone could try to do to your program? Write tests for these.
- Encourage a collaborative approach to acceptance testing with the customer.
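As a concrete illustration of equivalence class partitioning and boundary value analysis, consider the following sketch. The ageCategory function and its valid range are invented for this example; only the test-selection technique comes from the discussion above:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AgeCategoryTest {

    // Illustrative function under test (an assumption, not from the paper):
    // valid ages 0-17 map to "minor", 18-120 to "adult"; others are "invalid".
    static String ageCategory(int age) {
        if (age < 0 || age > 120) return "invalid";
        return age < 18 ? "minor" : "adult";
    }

    // Equivalence class partitioning: one representative from the middle
    // of each partition; other values in the same class should behave alike.
    @Test
    public void equivalenceClasses() {
        assertEquals("invalid", ageCategory(-5));   // class: below valid range
        assertEquals("minor",   ageCategory(10));   // class: 0-17
        assertEquals("adult",   ageCategory(40));   // class: 18-120
        assertEquals("invalid", ageCategory(200));  // class: above valid range
    }

    // Boundary value analysis: test on and around each boundary,
    // where off-by-one faults congregate.
    @Test
    public void boundaries() {
        assertEquals("invalid", ageCategory(-1));
        assertEquals("minor",   ageCategory(0));
        assertEquals("minor",   ageCategory(17));
        assertEquals("adult",   ageCategory(18));
        assertEquals("adult",   ageCategory(120));
        assertEquals("invalid", ageCategory(121));
    }
}

Per the advice above, the equivalenceClasses test would be run first; the boundaries test is only worth investigating once the general cases pass.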
When black box test cases surface failures, they only reveal the symptoms of faults. You need to use your detective skills to find the fault in the code that caused the failure to occur.
As Dijkstra reminds us, "Program testing can be used to show the presence of bugs, but never to show their absence!" [DIJ70]. Mostly, testing can be used to check how well defect prevention activities worked. As a beneficial side effect, testing can also be used to identify anomalies in code via dynamic execution of the code. Complete, exhaustive testing is impractical. However, there are good software engineering strategies, such as equivalence class partitioning and boundary value analysis, for writing test cases that will maximize your chance of uncovering as many defects as possible with a reasonable amount of testing. It is most prudent to plan your test cases as early in the development cycle as possible, as a beneficial extension of the requirements gathering process. Likewise, it is beneficial to integrate code as often as possible and to test the integrated code. In this manner, we can isolate defects in the new code and find and fix them as efficiently as possible. Lastly, we learned the benefits of partnering with a customer to write the acceptance test cases and to automate the execution of these (and other) test cases to form compilable and executable documentation of the system.
Chapter 7. References
[BEI90] Beizer B., Software Testing Techniques. Van Nostrand Reinhold, 1990.
[COP04] Copeland L., A Practitioner's Guide to Software Test Design. Boston: Artech House Publishers, 2004.
[DIJ70] Dijkstra E. W., "Notes on Structured Programming," Technological University Eindhoven, T.H. Report 70-WSK-03, Second edition, April 1970.
[KAN02] Kaner C., Bach J., Pettichord B., Lessons Learned in Software Testing. Wiley Computer Publishing, 2002.
[MAR03] Martin R. C., Agile Software Development: Principles, Patterns, and Practices. Upper Saddle River: Prentice Hall, 2003.
[PRE01] Pressman R., Software Engineering: A Practitioner's Approach. Boston: McGraw Hill, 2001.