Methods of black box testing
1) Graph based testing
2) Equivalence partitioning
3) Boundary value analysis
4) Orthogonal array testing
5) Model based testing
Graph based testing
i. The first step in black-box testing is to understand the objects that are modeled in software and the relationships that connect these objects. Once this has been accomplished, the next step is to define a series of tests that verify that "all objects have the expected relationship to one another".
ii. In other words, software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
iii. To accomplish these steps, you begin by creating a graph: a collection of nodes that represent objects, links that represent the relationships between objects, node weights that describe the properties of a node, and link weights that describe some characteristic of a link.
iv. Nodes are represented as circles connected by links that take a number of different forms. A directed link (represented by an arrow) indicates that a relationship moves in only one direction. A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. Parallel links are used when a number of different relationships are established between graph nodes.
v. As a simple example, consider a portion of a graph for a word-processing application, where Object 1 is newFile (menu selection), Object 2 is documentWindow, and Object 3 is documentText.
vi. Referring to the figure, a menu select on newFile generates a document window. The node weight of documentWindow provides a list of the window attributes that are to be expected when the window is generated. The link weight indicates that the window must be generated in less than 1.0 second. An undirected link establishes a symmetric relationship between the newFile menu selection and documentText, and parallel links indicate relationships between documentWindow and documentText. In reality, a far more detailed graph would have to be generated as a precursor to test-case design.
vii. You can then derive test cases by traversing the graph and covering each of the relationships shown. These test cases are designed in an attempt to find errors in any of the relationships.
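The idea of covering each relationship in the graph can be sketched in a few lines of Python. This is a minimal illustration, not a real tool: the object names and the 1.0-second link weight come from the word-processor example above, while the list-of-tuples representation and the helper function are invented for this sketch.

```python
# Each link is (source, target, relationship, link_weight); a None weight
# means the relationship has no timing/property constraint attached.
links = [
    ("newFile",        "documentWindow", "generates",         "< 1.0 second"),
    ("documentWindow", "documentText",   "contains",          None),
    ("documentText",   "documentWindow", "is displayed in",   None),
    ("newFile",        "documentText",   "allows editing of", None),  # symmetric link,
    ("documentText",   "newFile",        "allows editing of", None),  # both directions
]

def derive_test_cases(links):
    """One test case per link: exercise the relationship, check its weight."""
    cases = []
    for src, dst, rel, weight in links:
        desc = f"Verify that {src} {rel} {dst}"
        if weight:
            desc += f" ({weight})"
        cases.append(desc)
    return cases

for case in derive_test_cases(links):
    print(case)
```

Traversing the link list guarantees that every relationship in the graph is exercised by at least one test case, which is exactly the coverage goal described above.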
Equivalence partitioning
i. Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many test cases to be executed before the general error is observed.
ii. Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. If a set of objects can be linked by relationships that are symmetric, transitive, and reflexive, an equivalence class is present. An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean condition.
iii. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
iv. By applying these guidelines, test cases for each input-domain data item can be developed and executed. Test cases are selected so that the largest number of attributes of an equivalence class is exercised at once.
Boundary value analysis
i. A greater number of errors occur at the boundaries of the input domain than in its center. It is for this reason that boundary value analysis (BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that exercise bounding values.
ii. Boundary value analysis is a test-case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
iii. Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries (e.g., a table has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
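Guideline 1 above translates directly into code. The following is a minimal sketch; the range [1, 100] and the integer step of 1 are assumptions for illustration (for floating-point inputs the step would be the smallest representable increment that matters to the application).

```python
def boundary_values(a, b, step=1):
    """BVA guideline 1 for a range [a, b]: test a and b themselves,
    plus the values just below and just above each bound."""
    return [a - step, a, a + step, b - step, b, b + step]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Six inputs probe exactly the "edges" of the class, where errors cluster, instead of arbitrary interior values.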
Orthogonal array testing
i. Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing. The orthogonal array testing method is particularly useful in finding region faults, an error category associated with faulty logic within a software component.
ii. Consider a system that has three input items X, Y, and Z. Each of these input items has three discrete values associated with it, so there are 27 possible test cases. Consider a geometric view of the possible test cases associated with X, Y, and Z, as illustrated in the figure. Referring to the figure, one input item at a time may be varied in sequence along each input axis. This results in relatively limited coverage of the input domain (represented by the left-hand cube in the figure).
Figure: A geometric view of test cases
iii. To illustrate the use of the L9 orthogonal array, consider the send function for a fax application. Four parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete values. For example, P1 takes on the values:
P1 = 1: send it now
P1 = 2: send it one hour later
P1 = 3: send it after midnight
P2, P3, and P4 would also take on values of 1, 2, and 3, signifying other send functions.
iv. Given the relatively small number of input parameters and discrete values, exhaustive testing is possible: the number of tests required is 3^4 = 81, large but manageable. All faults associated with data-item permutation would be found, but the effort required is relatively high. The orthogonal array testing approach enables you to provide good test coverage with far fewer test cases than the exhaustive strategy. An L9 orthogonal array for the fax send function is illustrated in the figure below.
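The standard L9(3^4) orthogonal array can be written out and checked in a few lines. The nine rows below are the conventional L9 array (one row per test case, one column per parameter P1..P4); the check confirms the defining property that every pair of columns contains each of the nine (value, value) combinations exactly once.

```python
from itertools import combinations, product

# The standard L9(3^4) orthogonal array: 9 test cases for four
# 3-valued parameters, versus 3**4 = 81 exhaustive cases.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Orthogonality check: every pair of columns covers all 9 possible
# (value, value) combinations, each exactly once.
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product((1, 2, 3), repeat=2))

print(f"{len(L9)} test cases instead of {3 ** 4}")
```

Because every parameter pair is fully covered, any fault triggered by the interaction of two parameter values is guaranteed to be exercised by one of the nine rows.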
Model based testing
i. Model-based testing (MBT) is a black-box testing technique that uses information contained in the requirements model as the basis for the generation of test cases. In many cases, the model-based testing technique uses UML state diagrams, an element of the behavioural model, as the basis for the design of test cases.
ii. The MBT technique requires five steps:
1) Analyze an existing behavioural model for the software, or create one.
2) Traverse the behavioural model and specify the inputs that will force the software to make the transition from state to state.
3) Review the behavioural model and note the expected outputs as the software makes the transition from state to state.
4) Execute the test cases.
5) Compare actual and expected results and take corrective action as required.
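Steps 2 and 3 of the MBT technique can be sketched as a traversal of a state model. The tiny document-editing state machine below is invented purely for illustration; a real behavioural model would come from the requirements (e.g., a UML state diagram).

```python
# transitions[state][input] = (next_state, expected_output)
# A hypothetical three-state model of a document editor.
transitions = {
    "idle":    {"openDoc":  ("editing", "document window shown")},
    "editing": {"saveDoc":  ("saved",   "file written"),
                "closeDoc": ("idle",    "window closed")},
    "saved":   {"closeDoc": ("idle",    "window closed")},
}

def derive_transition_tests(transitions):
    """Step 2/3 of MBT: for every transition, record the input that
    forces it plus the expected next state and expected output."""
    cases = []
    for state, moves in transitions.items():
        for event, (next_state, output) in moves.items():
            cases.append((state, event, next_state, output))
    return cases

for case in derive_transition_tests(transitions):
    print(case)
```

Each derived tuple is one test case: drive the software into the start state, apply the input, and compare the actual state and output against the expected ones (steps 4 and 5).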
Alpha and Beta testing
1) It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted, strange combinations of data may be regularly used, and output that seemed clear to the tester may be unintelligible to a user in the field. 2) When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal test drive to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time. 3) If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
4) The alpha test is conducted at the developer's site by a representative group of end users. The software is used in a natural setting with the developer looking over the shoulder of the users and recording errors and usage problems. Alpha tests are conducted in a controlled environment. 5) The beta test is conducted at one or more end-user sites. Unlike alpha testing, the developer generally is not present. Therefore the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, you make modifications and then prepare for release of the software product to the entire customer base.
System testing
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions. In the sections that follow, I discuss the types of system tests that are worthwhile for software-based systems.
1) Recovery testing
i) Many computer-based systems must recover from faults and resume processing with little or no downtime. In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.
ii) Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
2) Security testing
i) Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
ii) During security testing the tester plays the role(s) of the individual who desires to penetrate the system. The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defences that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; or may browse through insecure data, hoping to find the key to system entry.
iii) Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make penetration cost more than the value of the information that will be obtained.
3) Stress testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
1) Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
2) Input data rates may be increased by an order of magnitude to determine how input functions will respond.
3) Test cases that require maximum memory or other resources are executed.
4) Test cases that may cause thrashing in a virtual operating system are designed.
5) Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program.
4) Performance testing
i) Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process; even at the unit level, the performance of an individual module may be assessed as tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
ii) Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
5) Deployment testing
i) In many cases software must execute on a variety of platforms and under more than one operating system environment. Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g., "installers") that will be used by customers, and all documentation that will be used to introduce the software to end users.
ii) As an example, consider the Internet-accessible version of the SafeHome software that would allow a customer to monitor the security system from remote locations. The SafeHome WebApp must be tested using all web browsers that are likely to be encountered. A more thorough deployment test might encompass combinations of web browsers with various operating systems (e.g., Linux, Mac OS, Windows). Because security is a major issue, a complete set of security tests would be integrated with the deployment test.
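The browser-by-operating-system deployment matrix described above is a simple cross product. The browser list here is an illustrative assumption (the text only says "all web browsers that are likely to be encountered"); the operating systems are the ones named in the example.

```python
from itertools import product

browsers = ["Chrome", "Firefox", "Safari"]           # illustrative set
operating_systems = ["Linux", "Mac OS", "Windows"]   # from the example above

# Each (browser, OS) pair is one deployment configuration to test.
configurations = list(product(browsers, operating_systems))
for browser, os_name in configurations:
    print(f"Run deployment test suite on {browser} / {os_name}")
print(f"{len(configurations)} configurations in total")  # 9
```

If the full cross product grows too large to test exhaustively, the orthogonal array approach from earlier in this section can be applied to the configuration matrix as well.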