
Assignment No: 1
Assignment Topic: Software Testing Tool
Submitted by: R. Ramya
Tool Name: EMMA
Subject: Software Testing and Quality Assurance
Department: MCA II Year
Date: 17/02/2012

Submitted to:
Mr. R. P. Seenivasan, Asst. Professor,
Department of Computer Science, Pondicherry University.

SOFTWARE TESTING:
ABSTRACT:

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although it is crucial to software quality and widely practised by programmers and testers, software testing remains an art, largely because our understanding of the principles of software is still limited. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation, and testing can also serve as a generic metric of software quality. Correctness testing and reliability testing are two major areas of testing. In practice, software testing is a trade-off between budget, time and quality.
INTRODUCTION:

Software testing is the process of executing a program or system with the intent of finding errors. More broadly, it is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
IMPORTANCE OF TESTING:

It is very important to produce software of decent quality, and considerable effort is required to keep that quality at a reasonable standard. Testing is one of the most important parts of quality assurance, especially during the development stages. As the development of a program nears its end, it becomes harder to fix errors; in fact, it becomes harder even to spot them. Testing should therefore be carried out throughout development; if it is not, it is very likely that many bugs and errors will remain. Some problems that are not visible during development, such as a function being called while the stack is empty, can cause the system to crash; testing can catch such defects before the project proceeds to the next stage. Humans are prone to mistakes, so purely manual checking is not always efficient: code paths can be missed, and the resulting oversights can cause errors in the system.
WHAT ARE TESTING TOOLS:

Testing tools are a form of automated testing. They are used to carry out different testing tasks on a program through some form of automated method. They are computerized and assist with every type of testing. Typical tasks include checking conditions and comparing predicted results with actual results, among many other kinds of checks. Many of these testing tools target commonly used programming languages such as Java and C, and the tools themselves are often written in these languages. Most tools offer various options to help with testing, which is very useful when producing the essential reports for a particular product.
WHY TESTING TOOLS ARE USED:

Manual testing takes too long and can waste a lot of time. Testing tools increase efficiency and help teams meet their deadlines. Testing tools are used for the following purposes:

- To improve quality.
- For verification and validation (V&V).

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors: functionality, engineering, and adaptability.

- Functionality (exterior quality): correctness, reliability, usability, integrity.
- Engineering (interior quality): efficiency, testability, documentation, structure.
- Adaptability (future quality): flexibility, reusability, maintainability.

TYPES OF TESTING TOOLS:

The different types of testing tools are:

- Information testing tools
- Application testing tools
- Web testing tools
- Other tools

Each kind of testing tool has different abilities and is suited to different tasks. Together they make up the testing-tools environment.


APPLICATION TESTING TOOL:

The application testing tool is one of the major categories of testing tools. It is further classified into the following subdivisions:

- Source test tools
- Functional test tools
- Performance test tools
- Java test tools
- Embedded test tools
- Database test tools

SOURCE TESTING TOOL:

The source test tools are again classified into many subcategories. One of these is the Diversity Analyzer, which is discussed below.

DIVERSITY ANALYZER:
INTRODUCTION:

We can obtain a competitive edge in software quality by utilizing this unique, patented software diversity technology. The Diversity Analyzer allows software quality-assurance professionals and software developers to automatically measure and improve the quality of their testing by measuring code coverage. At the same cost as coverage, it goes well beyond coverage and improves testing by analyzing the internal control-flow diversity of the software, analyzing its data-flow diversity, measuring dynamic code complexity, and improving bug isolation. Using this tool, we can quickly find out where to test, what to test, how well we have tested, how complex the code is, and how well our test cases isolate faults. Using this information, we can improve the quality of the testing by diversifying it to increase the probability of defect detection. We can also measure dynamic code complexity to identify portions of code with high run-time complexity, and identify the test cases that are best suited for debugging the application.
THE SUPPORTED TECHNOLOGIES FOR DIVERSITY ANALYZER:

- C
- C++
- C#
- Java
- Visual Basic
- Microsoft Windows
- .NET

BENEFITS OF DIVERSITY ANALYZER:

- Measure test diversity
- Measure code coverage
- Measure code complexity
- Measure defect isolation
- Unique technology
- Prioritized reports
MEASURE TEST DIVERSITY:

What is Test Diversity? Test diversity is a test-dispersion measure that tells us where the testing is concentrated, what we have tested, and how well we have tested the code. This measure could be used to evaluate any type of testing currently in existence, such as black-box testing, white-box testing, statistical testing, etc. Test diversity relates the quality of any type of testing to control diversity and data diversity at the source-code level.

It is further classified into:

- Conditional diversity
- Data diversity

Measure Conditional Diversity:

Conditional diversity is a measure of control dispersion/variation at the source-code level. The true/false evaluation frequencies of the conditional expressions in conditional statements are used to measure the control dispersion of a test suite. These evaluation frequencies may be unevenly distributed for a particular test suite; for example, the true branches in the code could be exercised more heavily than the false ones. Conditional diversity therefore points to the portions of source code with high and low test concentration. It is used to determine whether we are gaining false confidence in the testing by running the "same" or "similar" test over and over again, and to determine where in the code we need to apply balancing and skewing schemes to diversify the testing and increase the chances of defect detection.

Conditional diversity is expressed as a conditional diversity vector, where each value in the vector is the conditional diversity of a particular conditional expression in the code. Executing the program on multiple test cases gives a conditional diversity matrix, which consists of conditional diversity vectors, one vector per test case. The conditional diversity matrix is used to calculate the standard deviation vector, which is interpreted as a measure of dispersion of the conditional diversities relative to their mean value. The more dispersed the conditional diversities are among test cases, the higher the control (program-execution) variation produced by those test cases. A sketch of this calculation is given below.

Benefits of conditional diversity:
- Determine test distribution
- Determine control-flow variation
- Balance the test
- Improve test quality
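The exact formula behind these per-expression values is not given in this document, so the following C sketch is only an illustration under an assumed definition: the conditional diversity of one expression for one test case is taken to be the fraction of its evaluations that came out true. The matrix layout, constants and numbers are all invented for the example.

#include <math.h>
#include <stdio.h>

#define NUM_TESTS 3   /* rows: one conditional diversity vector per test case  */
#define NUM_CONDS 4   /* columns: one value per conditional expression in code */

/* Hypothetical conditional diversity matrix: cd[t][c] is the fraction of
   true evaluations of conditional expression c observed while running test t. */
static const double cd[NUM_TESTS][NUM_CONDS] = {
    {0.50, 1.00, 0.00, 0.75},
    {0.50, 0.90, 0.10, 0.25},
    {0.50, 0.20, 0.80, 0.50},
};

int main(void) {
    /* For each conditional expression, compute the mean and the standard
       deviation of its conditional diversity across all test cases.  A larger
       standard deviation means the test cases exercise that conditional in
       more varied ways (higher control variation). */
    for (int c = 0; c < NUM_CONDS; c++) {
        double mean = 0.0, var = 0.0;
        for (int t = 0; t < NUM_TESTS; t++)
            mean += cd[t][c];
        mean /= NUM_TESTS;
        for (int t = 0; t < NUM_TESTS; t++)
            var += (cd[t][c] - mean) * (cd[t][c] - mean);
        var /= NUM_TESTS;
        printf("conditional %d: mean = %.2f, std dev = %.2f\n",
               c, mean, sqrt(var));
    }
    return 0;
}

In this toy matrix the first conditional has zero standard deviation (every test exercises it identically), while the others show increasing dispersion, which is exactly the distinction the standard deviation vector is meant to expose.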

Measure Data Diversity:

Data diversity is a measure of data dispersion/variation at the source-code level. It is used to determine whether we are testing the code with the same, similar, or different internal data. Testing with different data at the GUI/interface level might still result in the same or similar internal program data, which would give us false confidence that the testing is data-diversified. It is therefore highly desirable to know the internal data distribution involved in testing, and to take steps to measure it, increase it, and continually diversify it. Higher internal program data diversity indicates high data variation among test cases, whereas lower data diversity indicates the code is covered with the same or similar data over and over again.

Control and data are tightly related with respect to test diversity. For example, branch selection in the code is governed by the values of program variables, and vice versa: the values of program variables are governed by branch selection. If two test suites result in different conditional diversities, then the internal data states involved in those test suites are different, in turn resulting in different data flowing through the program. In effect, conditional diversity is used to measure data diversity.

Data diversity is calculated as an average of the individual data diversities for each conditional statement. The individual data diversities are calculated as the percentage of test suites for which a conditional expression has distinct conditional diversities. If two test cases have different conditional diversities, then they execute different paths in the code, which is only possible if different data flows through the program. Therefore, higher conditional diversity means that the internal program data involved in testing is more diverse. The conditional diversity matrix is used to calculate the data diversity vector, where each value in the vector is the data diversity of a particular conditional expression in the code. The conditional diversity matrix contains the conditional diversity vectors associated with each test case; the data diversity vector is calculated by comparing the corresponding values in the conditional diversity vectors to determine the percentage of distinct conditional diversity values. A sketch of this calculation follows.

Benefits of data diversity:
- Determine data-flow variation
- Determine data-flow distribution
- Diversify your test
- Improve test quality
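Continuing the same assumed representation, the C sketch below computes a data diversity vector as the fraction of distinct per-test values in each column of the conditional diversity matrix, then averages them. It is an illustration of the calculation described above, not the tool's actual formula; the matrix values are invented.

#include <stdio.h>

#define NUM_TESTS 3
#define NUM_CONDS 4

/* Hypothetical conditional diversity matrix (same layout as before):
   cd[t][c] is the conditional diversity of expression c under test case t. */
static const double cd[NUM_TESTS][NUM_CONDS] = {
    {0.50, 1.00, 0.00, 0.75},
    {0.50, 0.90, 0.10, 0.25},
    {0.50, 0.20, 0.80, 0.50},
};

int main(void) {
    double total = 0.0;

    /* For each conditional expression, count how many of its per-test values
       are distinct; the data diversity of that expression is the fraction of
       distinct values among all test cases. */
    for (int c = 0; c < NUM_CONDS; c++) {
        int distinct = 0;
        for (int t = 0; t < NUM_TESTS; t++) {
            int seen_before = 0;
            for (int u = 0; u < t; u++)
                if (cd[u][c] == cd[t][c])
                    seen_before = 1;
            if (!seen_before)
                distinct++;
        }
        double dd = (double)distinct / NUM_TESTS;
        printf("conditional %d: data diversity = %.2f\n", c, dd);
        total += dd;
    }

    /* Overall data diversity: average over all conditional expressions. */
    printf("overall data diversity = %.2f\n", total / NUM_CONDS);
    return 0;
}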

Why test diversity? Without test diversity you may be gaining false confidence in your testing. Test cases that differ at the interface level might be the same or similar in terms of program control and internal data, giving you a false sense of diversified testing.
MEASURE CODE COVERAGE:

What is conditional coverage? Conditional coverage is a white-box testing method that measures the percentage of exercised branches in the code. It is a special case of test diversity.

Conditional coverage, also known as branch testing or decision coverage, measures the percentage of exercised conditional expressions in conditional statements, multi-branching statements, and/or loops. A conditional expression is covered when testing results in both a true and a false evaluation of the expression. For example, in the C statement

if (x == 1 || y == 1)

the conditional expression x == 1 || y == 1 is completely covered by the values x = 1, y = 1 and x = 0, y = 0. In a multi-branching statement, such as a switch statement in C++, conditional coverage measures the percentage of exercised branches in the multi-branching statement; the statement is conditionally covered when all of its branches are hit by testing. For example, in the C switch statement

switch (choice) {
case 1:  x = x + y; break;
case 2:  x = x * y; break;
default: x = 0;
}

choice = 1 covers the case 1 branch, choice = 2 covers the case 2 branch, and any other value of choice covers the default branch. A sketch of the coverage bookkeeping is given below.

Benefits of conditional coverage:
- Measure conditional coverage
- Discover untested code
- Improve test quality
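As a hypothetical illustration of the bookkeeping behind these examples (this is not the Diversity Analyzer's own instrumentation), the C sketch below counts true/false evaluations of the if-expression and branch hits of the switch statement shown above, then reports the resulting conditional coverage.

#include <stdio.h>

/* Bookkeeping for the two examples from the text. */
static long if_hits[2];      /* [0] = false evaluations, [1] = true evaluations */
static long switch_hits[3];  /* hits of case 1, case 2 and the default branch   */

static void run_test(int x, int y, int choice) {
    int c = (x == 1 || y == 1);          /* the if-conditional from the text */
    if_hits[c ? 1 : 0]++;

    switch (choice) {                    /* the switch statement from the text */
    case 1:  switch_hits[0]++; x = x + y; break;
    case 2:  switch_hits[1]++; x = x * y; break;
    default: switch_hits[2]++; x = 0;
    }
    (void)x;                             /* silence unused-result warnings */
}

int main(void) {
    run_test(1, 1, 1);    /* true evaluation, case 1 branch   */
    run_test(0, 0, 9);    /* false evaluation, default branch */

    /* The if-conditional is covered once both outcomes were seen; the switch
       is covered once every one of its branches has been hit. */
    int covered = 0, total = 2;
    if (if_hits[0] > 0 && if_hits[1] > 0)
        covered++;
    if (switch_hits[0] > 0 && switch_hits[1] > 0 && switch_hits[2] > 0)
        covered++;

    printf("conditional coverage: %.0f%% (case 2 was never exercised)\n",
           100.0 * covered / total);
    return 0;
}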

Conditional Coverage vs. Test Diversity:

Test criteria, such as coverage criteria, make no distinction between the potentially infinite number of test suites that satisfy a given criterion, nor between the test suites that do not satisfy it. Typically, some of the many suites that satisfy a criterion detect the defects while others that satisfy the same criterion do not; conversely, some suites that do not satisfy the criterion detect the defects while others do not. That is, even if you have obtained 100% branch coverage, or 100% of any other type of coverage, you could have done so in a restricted and concentrated manner with respect to internal control and data. Such concentrated coverage has no chance of detecting defects outside of that restricted area, and the smaller the restricted area is, the smaller the chance that all the defects in the code will be detected. Regardless of the test criterion or reliability model used, test cases that yield similar control flows and data flows exercise the program in a restricted, confined, dependent, and concentrated manner, whereas test cases with different control flows and data flows increase the chance of defect detection by exercising the program in fundamentally different, dispersed, and independent ways.

Why conditional coverage? An unexercised branch could contain a defect that will not be detected, and an exercised branch increases the confidence that the branch contains no defects.
MEASURE CODE COMPLEXITY:

What is the Pi Measure? The Pi Measure is a unique software-complexity measure. It is a purely dynamic complexity measure that does not rely on static code properties; it is based on the degree of control and data surprise that the software under measure shows in execution. Highly complex code changes internal control and data in unpredictable and diverse ways.

Software complexity measures are divided into static and dynamic measures. Static measures are concerned with program attributes such as program size and the complexity of its structure; dynamic measures are a product of static complexity and an operational profile. Unlike existing complexity measures, the Pi Measure is based only on run-time code properties. According to this complexity metric, software is more complex if it is executed in more diverse ways. More complex code, by this measure of diversity, contains a higher degree of control and data surprise than less complex code. That is, code with low complexity tends to group the same or similar executions, whereas highly complex code can change control flows and data flows in highly unpredictable and diversified ways. Test cases that cause the same or similar control flows and data flows through the program tend to be dependent, since they all tend to be grouped around either correct or faulty control flows and data flows. Good test-case design and highly complex software therefore intersect at test-case independence.

More control and data variation in the code indicates more complex code, since such code carries more control and data surprise in execution. Code with no control and data variation always performs an identical computation, and as such is of trivial complexity regardless of other known measures such as software science metrics, cyclomatic complexity, or a combination of the two. At the other end of the spectrum, code with infinite control and data diversity always performs a different computation, and has the highest complexity regardless of other known measures such as lines of code, relative program complexity, or the bandwidth metric. The Pi complexity of a program depends on the actual execution of the program; that is, complexity is defined in terms of the actual diversity that a particular set of test cases carries through the program. For instance, a program with 10 lines of code is of higher complexity, if its test set exercises it in independent and highly diverse ways, than a program with 1000 lines of code whose test set contains only one identical test case repeated many times. As this example hints, the Pi Measure can be used to gauge the strength of the correlation between existing complexity metrics and software faults: a higher value of the Pi Measure gives a stronger correlation between existing complexity measures and software faults.

Benefits of the Pi Measure:
- Measure execution complexity
- Purely dynamic measure
- Improve testing
- Find more defects
- Prioritize and evaluate testing

Why the Pi Measure?

All existing complexity measures give a potential or hypothetical complexity, because their static nature means they measure the program at rest. The Pi Measure measures the actual dynamic complexity of a program in execution, and shows the degree to which the potential, static complexity is realized in execution.
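The actual definition of the Pi Measure is patented and not given in this document. Purely to make the idea of a run-time, diversity-based complexity score concrete, the following C sketch counts how many distinct branch-outcome signatures a set of test runs produces and divides by the number of runs; the signature encoding and the score itself are invented stand-ins, not the real Pi Measure.

#include <stdio.h>
#include <string.h>

#define NUM_TESTS 4
#define NUM_CONDS 3

/* Hypothetical branch-outcome signatures: sig[t][c] is 1 if conditional c
   evaluated true at least once during test t, 0 otherwise. */
static const int sig[NUM_TESTS][NUM_CONDS] = {
    {1, 0, 1},
    {1, 0, 1},   /* same signature as the first test: no added diversity */
    {0, 1, 1},
    {1, 1, 0},
};

int main(void) {
    /* Count distinct signatures among the test runs. */
    int distinct = 0;
    for (int t = 0; t < NUM_TESTS; t++) {
        int seen = 0;
        for (int u = 0; u < t; u++)
            if (memcmp(sig[u], sig[t], sizeof sig[t]) == 0)
                seen = 1;
        if (!seen)
            distinct++;
    }

    /* Diversity-based score: 1.0 means every run exercised the program in a
       different way; values near 0 mean the runs were essentially identical. */
    printf("dynamic diversity score = %.2f\n", (double)distinct / NUM_TESTS);
    return 0;
}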

MEASURE DEFECT ISOLATION:

What is convergence debugging? Convergence debugging is an automated debugging method that isolates a set of test cases that converge on the internal root cause of a failure. It provides a means to measure the debugging effectiveness of a set of test cases, and it can also select a set of test cases that maximizes debugging effectiveness.

Convergence debugging is a fully automated debugging method which measures the debugging effectiveness of a subset of the executed test set with respect to its convergence on the internal control and data of the failure-causing test case. The highly concentrated testing involved in convergence debugging can be seen as the opposite of diversity testing: diversity testing is used to find dispersed faults in the program, while the test cases selected in convergence debugging are those closely related to the one that detected a fault. However, the converging test cases should also show diversity of control and data in the vicinity of the control and data involved in the failed test case, in order to expose related but distinct failures, as well as related but distinct successful executions. Without assuring diversity in the vicinity of a failure, the internal control and data of the program could be identical for all of the debug test cases, and such undiversified debug test cases would not provide any significant debugging information. The differences between the failure-causing test case and the diversified, converging test cases can then be used to analyze the root cause of failures.

Benefits of convergence debugging:
- Isolate effective debug test cases
- Measure debug effectiveness
- Select effective debug test cases

Why convergence debugging? Converging test cases are closer to the fault with respect to program control and data, and analyzing differences among converging test cases is more effective in locating faults than analyzing diverging test cases.
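The document does not say how converging test cases are actually selected, so the following C sketch is an assumed illustration only: it ranks passing test cases by the Hamming distance between their branch-outcome signatures and that of the failing run, on the idea that the closest runs converge on the same internal control and data. All names and data in the sketch are hypothetical.

#include <stdio.h>

#define NUM_TESTS 4
#define NUM_CONDS 3

/* Hypothetical branch-outcome signatures for a failing run and some passing
   runs: 1 if the conditional evaluated true during the run, 0 otherwise. */
static const int failing[NUM_CONDS]            = {1, 0, 1};
static const int passing[NUM_TESTS][NUM_CONDS] = {
    {1, 0, 0},
    {0, 1, 0},
    {1, 1, 1},
    {1, 0, 1},   /* identical signature: converges but adds no diversity */
};

/* Hamming distance between two signatures: the number of conditionals whose
   outcome differs. Smaller distance = closer convergence on the failure. */
static int distance(const int *a, const int *b) {
    int d = 0;
    for (int c = 0; c < NUM_CONDS; c++)
        if (a[c] != b[c])
            d++;
    return d;
}

int main(void) {
    for (int t = 0; t < NUM_TESTS; t++)
        printf("passing test %d: distance to failing run = %d\n",
               t, distance(passing[t], failing));
    /* A debug test set could keep the closest runs while making sure their
       signatures are not all identical (some diversity near the failure). */
    return 0;
}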

UNIQUE TECHNOLOGY:

The Diversity Analyzer implements a unique, patented technology governing how to instrument, what to instrument, and how and when to collect diversity data. This technology allows great flexibility in instrumenting source code and in analyzing diversity. With it you can analyze a variety of projects written in different programming languages and on various platforms, and analyze diversity at the time and code location of your choosing.

Common Instrumentation:

The tool's simple instrumentation allows uniform instrumentation of various language dialects, parts of source files or complete source files, multiple-language projects, and multiple related or unrelated projects. The instrumentation strategy and instrumentation code are almost identical for all major industrial languages. A minimal, to-the-point instrumentation is used, inserting a general coverage-distribution recording call at each conditional statement; a sketch of this idea is given below. The called distribution-recording function is located in a library and is common to all conditional statements and all programming languages, so projects that use instrumented source files need only link to this library. The instrumentation itself takes less than 5% of the time needed to build the original, un-instrumented code, and the time to build the instrumented code is virtually identical to the time to build the original code. The instrumented code can be distributed on multiple machines and tests run in a usual lab setup. The Diversity Analyzer does not need to be installed on each machine where the tests are run; it only has to be installed on the machine where the instrumentation and viewing of results are performed.

Benefits of common instrumentation:
- Simple
- General
- Uniform
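The C sketch below illustrates the kind of recording call described above. The function name div_record, the site numbering and the data layout are invented for the example; they are not the Diversity Analyzer's actual library API.

#include <stdio.h>

/* Hypothetical library-side recording function, shared by every instrumented
   conditional: 'site' identifies the conditional expression in the source,
   'outcome' is its true/false result for this evaluation. */
static long record[16][2];

static int div_record(int site, int outcome) {
    record[site][outcome ? 1 : 0]++;
    return outcome;          /* pass the value through unchanged */
}

/* Original code:      if (x == 1 || y == 1) { ... }
   Instrumented code:  the conditional is wrapped in the recording call. */
static void instrumented(int x, int y) {
    if (div_record(0, x == 1 || y == 1))
        printf("then-branch for x=%d, y=%d\n", x, y);
    else
        printf("else-branch for x=%d, y=%d\n", x, y);
}

int main(void) {
    instrumented(1, 1);
    instrumented(0, 0);
    printf("site 0: true=%ld false=%ld\n", record[0][1], record[0][0]);
    return 0;
}

In a real build the recording function would live in a shared library that every instrumented project links against, matching the common-instrumentation scheme described above.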

Target Flexibility:

Simple zoom statements can be placed in the code to delimit the scope of diversity analysis to the sections of code that are of particular interest. To make the diversity analysis more precise and flexible, comments with the keywords ZOOM_BEGIN and ZOOM_END can be inserted into the code by a tester to limit the analysis to the block of code delimited by these keywords (see the sketch below). This allows the testing to zoom in on various code segments and makes the reports simpler, with less data, resulting in quick, to-the-point analysis. The tool can flexibly instrument source code that belongs to different projects, is written in different programming languages and dialects, and is built into various targets such as exes, dlls, and ocxs; only the files of interest in the target directories selected for analysis are placed there. Use this feature when you need to analyze code that is related but resides in different projects and potentially runs on different machines. The Diversity Analyzer allows one or more source files, which are part of one or many projects, one or many executables, and/or one or many libraries, written in potentially different programming languages, to be selected for diversity analysis.

Benefits of target flexibility:
- Simple reports
- Instrument dlls and exes
- Multi-threaded code
- Multiple projects
- Multiple machines
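A minimal sketch of how the zoom comments might look in C source follows; the exact comment syntax the tool expects is assumed here, based only on the ZOOM_BEGIN and ZOOM_END keywords named in the text.

#include <stdio.h>

/* Code outside the zoomed region: assumed to be excluded from the report. */
static int helper(int n) {
    return n * 2;
}

/* ZOOM_BEGIN */
/* Only the diversity and coverage of conditionals inside this block would
   appear in the analysis, keeping the report small and to the point. */
static int classify(int n) {
    if (n > 10)
        return 1;
    return 0;
}
/* ZOOM_END */

int main(void) {
    printf("%d %d\n", helper(3), classify(42));
    return 0;
}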

Analysis Flexibility:

The tool offers flexibility in the timing of diversity and coverage analysis: diversity and coverage can be analyzed at breakpoints in debug mode, while the program is still running, or after it has crashed. The permanent true/false distribution record is updated immediately after execution of every conditional expression, keeping an up-to-date record of distribution data. This allows great flexibility in the analysis of the distribution data, even while the program under test is still running, after it has crashed, or after it has terminated successfully. It also allows fine granularity of analysis, where the distribution data can be examined per individual test, or even in the middle of a test, by using a debugger to interrupt the execution of the program. If the average run-time increase of 1.4 to 1.6 times is too great, the Diversity Analyzer gives you the option of a faster run time by writing data to the permanent record only when the application under test exits.

Benefits of analysis flexibility:
- Debug mode
- Release mode
- During execution
- Per individual tests

PRIORITIZED REPORTS:

The Diversity Analyzer creates summary reports, hot-spot reports, and detailed Excel reports. Use the summary reports to quickly find out diversity and coverage by source file. The hot-spot reports give ordered diversity and coverage reports by source-code line, and you are a click away from seeing source code annotated with diversity and coverage information. Use the Excel reports to sort and graph diversity and coverage data. The report types are:

- Conditional diversity report
- Data diversity report
- Conditional coverage report

CONCLUSION:

Testing is a big issue in software development; it is always a hard and time-consuming task to get done. Testing tools have brought a big advance in how quickly software can be built and finished. Some programs still require manual testing, which is very long and time-consuming, but with the aid of new testing tools the amount of manual testing that has to be done keeps shrinking. Automated testing tools reduce the time and manpower needed, so a lot of money can be saved, and they leave less room for errors to occur. As more testing tools are made, they can only reduce the number of errors that a program could face.

Having testing tools can also bring some problems. Although testing tools can reduce costs, that is not always the case. Another problem is that using the tools requires enough knowledge and resources: the programmers should be capable of using these tools, and consequently they must have a corresponding level of knowledge and understanding of software development.

REFERENCES:
1. Software testing
   http://en.wikipedia.org/wiki/Software_testing
2. Software Quality Assurance Testing and Test Tool Resources
   http://www.aptest.com/resources.html
3. Diversity Analyzer
   http://www.vidakquality.com/
4. Quality Assurance testing
   http://www.cs.nott.ac.uk/~cah/G53QAT/QAT09Report
