UNIT – 5
SOFTWARE TESTING
DEVELOPMENT TESTING
Development testing includes all testing activities that are carried out by the team developing the system.
The tester of the software is usually the programmer who developed that software, although this is not
always the case. Some development processes use programmer/tester pairs where each programmer has
an associated tester who develops tests and assists with the testing process. For critical systems, a more
formal process may be used, with a separate testing group within the development team. This group is
responsible for developing tests and maintaining detailed records of test results.
During development, testing may be carried out at three levels of granularity:
1. Unit testing, where individual program units or object classes are tested. Unit testing should focus
on testing the functionality of objects or methods.
2. Integration testing, where several individual units are integrated to create composite
components. Integration testing should focus on testing component interfaces.
3. System testing, where some or all of the components in a system are integrated and the system
is tested as a whole. System testing should focus on testing component interactions.
Development testing is primarily a defect testing process, where the aim of testing is to discover bugs in
the software. It is therefore usually interleaved with debugging, the process of locating problems in the
code and changing the program to fix them.
1. UNIT TESTING
Unit testing is the process of testing program components, such as methods or object classes. Individual
functions or methods are the simplest type of component. Your tests should be calls to these routines
with different input parameters. You can use general approaches to test case design to design the function
or method tests. When you are testing object classes, you should design your tests to provide coverage
of all of the features of the object. This means that you should:
1. Test all operations associated with the object;
2. Set and check the value of all attributes associated with the object;
3. Put the object into all possible states. This means that you should simulate all events that cause a
state change.
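As an illustration, a unit test for a simple object class might look like the sketch below, written in Python with the standard unittest framework. The Counter class is hypothetical, invented purely so that the three kinds of test (operations, attributes, states) can be shown.

import unittest

class Counter:
    """A hypothetical class under test: counts up to a fixed limit."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0

    def increment(self):
        if self.value >= self.limit:
            raise ValueError("limit reached")
        self.value += 1

    def reset(self):
        self.value = 0

class CounterTest(unittest.TestCase):
    def test_increment_updates_value(self):
        # Test an operation and check the attribute it affects.
        c = Counter(limit=3)
        c.increment()
        self.assertEqual(c.value, 1)

    def test_limit_state_rejects_increment(self):
        # Drive the object into its 'at limit' state and check behaviour.
        c = Counter(limit=1)
        c.increment()
        with self.assertRaises(ValueError):
            c.increment()

    def test_reset_restores_initial_state(self):
        # Simulate the event that returns the object to its initial state.
        c = Counter(limit=3)
        c.increment()
        c.reset()
        self.assertEqual(c.value, 0)

if __name__ == "__main__":
    unittest.main()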
2. INTEGRATION TESTING
The following diagram illustrates component interface testing.
Assume that components A, B, and C have been integrated to create a larger component or subsystem.
The test cases are not applied to the individual components but rather to the interface of the composite
component created by combining these components. Interface errors in the composite component may
not be detectable by testing the individual objects because these errors result from interactions between
the objects in the component.
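As a minimal sketch, assuming two small hypothetical components and a composite that connects them, an interface test in Python might look like this. The test exercises the composite through its interface rather than testing the parts in isolation, which is where interface errors can surface.

import unittest

def parse_record(line):
    """Component A (hypothetical): parses a 'name:score' line."""
    name, score = line.split(":")
    return name.strip(), int(score)

def grade(score):
    """Component B (hypothetical): converts a score to a grade."""
    return "pass" if score >= 50 else "fail"

def process_line(line):
    """Composite component: A's output crosses the interface to B."""
    name, score = parse_record(line)
    return name, grade(score)

class InterfaceTest(unittest.TestCase):
    def test_composite_interface(self):
        # The test cases are applied to the composite component,
        # not to parse_record or grade individually, so errors in
        # the interaction between them can be detected.
        self.assertEqual(process_line("Ada: 72"), ("Ada", "pass"))
        self.assertEqual(process_line("Bob: 30"), ("Bob", "fail"))

if __name__ == "__main__":
    unittest.main()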
3. SYSTEM TESTING
System testing in software engineering is a crucial phase in the software development life cycle (SDLC)
where the entire software system is tested as a whole. It aims to validate that the software meets its
specified requirements and functions correctly in the intended environment. System testing verifies the
system's compliance with both functional and non-functional requirements, ensuring its readiness for
deployment to end-users.
TEST-DRIVEN DEVELOPMENT (TDD)
Test-driven development (TDD) is an approach to program development in which you interleave testing
and code development. Essentially, you develop the code incrementally, along with a test for that
increment. You don’t move on to the next increment until the code that you have developed passes its
test. Test-driven development was introduced as part of agile methods such as Extreme Programming.
However, it can also be used in plan-driven development processes.
The fundamental TDD process is shown below.
The steps in the process are as follows:
1. You start by identifying the increment of functionality that is required. This should normally be
small and implementable in a few lines of code.
2. You write a test for this functionality and implement it as an automated test. This means that
the test can be executed and will report whether it has passed or failed.
3. You then run the test, along with all other tests that have been implemented. Initially, you have
not implemented the functionality so the new test will fail. This is deliberate as it shows that the
test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve refactoring existing
code to improve it and adding new code to what is already there.
5. Once all tests run successfully, you move on to implementing the next chunk of functionality.
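To make the cycle concrete, the following is a minimal sketch in Python using the standard unittest framework. The increment of functionality (reversing a string) and the reverse function are hypothetical, chosen only to keep the example small; the comments map the code to the steps above.

import unittest

# Step 2: the automated test for the increment is written first.
class ReverseTest(unittest.TestCase):
    def test_reverses_a_word(self):
        self.assertEqual(reverse("abc"), "cba")

    def test_empty_string_is_unchanged(self):
        self.assertEqual(reverse(""), "")

# Step 3: running the suite at this point fails, because reverse
# does not exist yet. The failure shows the test adds something.
# Step 4: implement just enough functionality to make the tests pass.
def reverse(s):
    return s[::-1]

# Step 5: once all tests pass, move on to the next increment.
if __name__ == "__main__":
    unittest.main()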
One of the most important benefits of test-driven development is that it reduces the costs of regression
testing. Regression testing involves re-running test sets that have previously executed successfully, after
changes have been made to a system. The regression tests check that these changes have not introduced new bugs into
the system and that the new code interacts as expected with the existing code. Regression testing is very
expensive and often impractical when a system is manually tested, as the costs in time and effort are very
high. In such situations, you have to try to choose the most relevant tests to re-run, and it is easy to miss
important tests.
However, automated testing, which is fundamental to test-first development, dramatically reduces the
costs of regression testing. Existing tests may be re-run quickly and cheaply. After making a change to a
system in test-first development, all existing tests must run successfully before any further functionality
is added. As a programmer, you can be confident that the new functionality that you have added has not
caused or revealed problems with existing code. Test-driven development is of most use in new software
development where the functionality is either implemented in new code or by using well-tested standard
libraries. Test-driven development has proved to be a successful approach for small and medium-sized
projects.
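As a sketch of this automation, Python's unittest test discovery can collect and re-run every test module in a project after each change; the tests/ directory layout assumed here is hypothetical.

import unittest

if __name__ == "__main__":
    # Discover every test_*.py module under the tests/ directory
    # (an assumed layout) and re-run the full suite after a change.
    suite = unittest.TestLoader().discover("tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A non-zero exit code signals that the change broke existing tests.
    raise SystemExit(0 if result.wasSuccessful() else 1)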
RELEASE TESTING
Release testing is the process of testing a particular release of a system that is intended for use outside of
the development team. Normally, the system release is for customers and users. In a complex project,
however, the release could be for other teams that are developing related systems. For software products,
the release could be for product management who then prepare it for sale.
There are two important distinctions between release testing and system testing during the development
process:
1. A separate team that has not been involved in the system development should be responsible for
release testing.
2. System testing by the development team should focus on discovering bugs in the system (defect
testing). The objective of release testing is to check that the system meets its requirements and
is good enough for external use (validation testing).
The primary goal of the release testing process is to convince the supplier of the system that it is good
enough for use. If so, it can be released as a product or delivered to the customer. Release testing,
therefore, has to show that the system delivers its specified functionality, performance, and
dependability, and that it does not fail during normal use. It should take into account all of the system
requirements, not just the requirements of the end-users of the system.
Release testing is usually a black-box testing process where tests are derived from the system
specification. The system is treated as a black box whose behavior can only be determined by studying its
inputs and the related outputs. Another name for this is ‘functional testing’, so-called because the tester
is only concerned with functionality and not the implementation of the software.
1. REQUIREMENTS-BASED TESTING
A general principle of good requirements engineering practice is that requirements should be testable;
that is, the requirement should be written so that a test can be designed for that requirement. A tester
can then check that the requirement has been satisfied. Requirements-based testing, therefore, is a
systematic approach to test case design where you consider each requirement and derive a set of tests
for it. Requirements-based testing is validation rather than defect testing—you are trying to demonstrate
that the system has properly implemented its requirements.
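For example, given the hypothetical testable requirement "the system shall reject passwords shorter than 8 characters", a set of tests can be derived that together check the requirement, including its boundary. The is_valid_password function is invented here purely for illustration.

import unittest

def is_valid_password(password):
    """Hypothetical implementation of the requirement under test."""
    return len(password) >= 8

class PasswordRequirementTest(unittest.TestCase):
    """Tests derived from one requirement: passwords shorter
    than 8 characters shall be rejected."""

    def test_rejects_short_password(self):
        self.assertFalse(is_valid_password("abc"))

    def test_rejects_boundary_below_minimum(self):
        self.assertFalse(is_valid_password("a" * 7))

    def test_accepts_minimum_length(self):
        self.assertTrue(is_valid_password("a" * 8))

if __name__ == "__main__":
    unittest.main()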
2. SCENARIO TESTING
Scenario testing is an approach to release testing where you devise typical scenarios of use and use these
to develop test cases for the system. A scenario is a story that describes one way in which the system
might be used. Scenarios should be realistic and real system users should be able to relate to them.
In a short paper on scenario testing, Kaner (2003) suggests that a scenario test should be a narrative story
that is credible and fairly complex. It should motivate stakeholders; that is, they should relate to the
scenario and believe that it is important that the system passes the test. He also suggests that it should
be easy to evaluate. If there are problems with the system, then the release testing team should recognize
them.
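A scenario can be turned directly into a test that walks through the story step by step. The sketch below assumes a hypothetical shopping-cart system; each step of the scenario is an action on the system followed by a check.

import unittest

class Cart:
    """Hypothetical system under test for the scenario."""
    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def remove(self, name):
        del self.items[name]

    def total(self):
        return sum(self.items.values())

class ShoppingScenarioTest(unittest.TestCase):
    def test_typical_shopping_trip(self):
        # Scenario: a user adds two items, changes their mind about
        # one of them, and checks out with the remaining item.
        cart = Cart()
        cart.add("book", 12.50)
        cart.add("pen", 2.00)
        self.assertEqual(cart.total(), 14.50)
        cart.remove("pen")
        self.assertEqual(cart.total(), 12.50)

if __name__ == "__main__":
    unittest.main()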
3. PERFORMANCE TESTING
Once a system has been completely integrated, it is possible to test for emergent properties, such as
performance and reliability. Performance tests have to be designed to ensure that the system can process
its intended load. This usually involves running a series of tests where you increase the load until the
system performance becomes unacceptable.
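As a sketch of such a load-stepping test, the loop below doubles the load until the measured time exceeds an agreed threshold. The handle_request function and the 0.5-second threshold are assumptions, standing in for the real system and its performance requirement.

import time

def handle_request(payload):
    """Stand-in for the operation under test (hypothetical)."""
    return sorted(payload)

def measure(load):
    """Process 'load' requests and return the elapsed time in seconds."""
    start = time.perf_counter()
    for _ in range(load):
        handle_request(list(range(1000)))
    return time.perf_counter() - start

if __name__ == "__main__":
    threshold = 0.5  # assumed acceptable time for one batch, in seconds
    load = 100
    # Step up the load until performance becomes unacceptable.
    while True:
        elapsed = measure(load)
        print(f"load={load} elapsed={elapsed:.3f}s")
        if elapsed > threshold:
            print(f"performance became unacceptable at load {load}")
            break
        load *= 2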
USER TESTING
User testing is a critical phase in software engineering where real end-users or representatives of the
target audience interact with the software to evaluate its usability, functionality, and user experience.
The primary goal of user testing is to ensure that the software meets user expectations, addresses user
needs, and is intuitive and easy to use.
User testing is essential, even when comprehensive system and release testing have been carried out. The
reason for this is that influences from the user’s working environment have a major effect on the
reliability, performance, usability, and robustness of a system. It is practically impossible for a system
developer to replicate the system’s working environment, as tests in the developer’s environment are
inevitably artificial. For example, a system that is intended for use in a hospital is used in a clinical
environment where other things are going on, such as patient emergencies, conversations with relatives,
etc. These all affect the use of a system, but developers cannot include them in their testing environment.
In practice, there are three different types of user testing:
1. Alpha testing, where users of the software work with the development team to test the software
at the developer’s site.
2. Beta testing, where a release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system developers.
3. Acceptance testing, where customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer environment.
In alpha testing, users and developers work together to test a system as it is being developed. This means
that the users can identify problems and issues that are not readily apparent to the development testing
team. Developers can only really work from the requirements, but these often do not reflect other factors
that affect the practical use of the software. Users can therefore provide information about practice that
helps with the design of more realistic tests.
Beta testing takes place when an early, sometimes unfinished, release of a software system is made
available to customers and users for evaluation. Beta testers may be a selected group of customers who
are early adopters of the system. Alternatively, the software may be made publicly available for use by
anyone who is interested in it. Beta testing is mostly used for software products that are used in many
different environments (as opposed to custom systems which are generally used in a defined
environment). It is impossible for product developers to know and replicate all the environments in which
the software will be used. Beta testing is therefore essential to discover interaction problems between
the software and features of the environment where it is used. Beta testing is also a form of marketing:
customers learn about the new system and what it can do for them.
Acceptance testing is an inherent part of custom systems development. It takes place after release testing.
It involves a customer formally testing a system to decide whether or not it should be accepted from the
system developer. Acceptance implies that payment should be made for the system. There are six stages
in the acceptance testing process, as shown in the following figure:
1. Define acceptance criteria: This stage should, ideally, take place early in the process before the
contract for the system is signed. The acceptance criteria should be part of the system contract
and be agreed between the customer and the developer. In practice, however, it can be difficult
to define criteria so early in the process. Detailed requirements may not be available and there
may be significant requirements change during the development process.
2. Plan acceptance testing: This involves deciding on the resources, time, and budget for acceptance
testing and establishing a testing schedule. The acceptance test plan should also discuss the
required coverage of the requirements and the order in which system features are tested. It
should define risks to the testing process, such as system crashes and inadequate performance,
and discuss how these risks can be mitigated.
3. Derive acceptance tests: Once acceptance criteria have been established, tests have to be
designed to check whether or not a system is acceptable. Acceptance tests should aim to test
both the functional and non-functional characteristics (e.g., performance) of the system. They
should, ideally, provide complete coverage of the system requirements. In practice, it is difficult
to establish completely objective acceptance criteria. There is often scope for argument about
whether or not a test shows that a criterion has definitely been met.
4. Run acceptance tests: The agreed acceptance tests are executed on the system. Ideally, this
should take place in the actual environment where the system will be used, but this may be
disruptive and impractical. Therefore, a user testing environment may have to be set up to run
these tests. It is difficult to automate this process as part of the acceptance tests may involve
testing the interactions between end-users and the system. Some training of end-users may be
required.
5. Negotiate test results: It is very unlikely that all of the defined acceptance tests will pass and that
there will be no problems with the system. If this is the case, then acceptance testing is complete
and the system can be handed over. More commonly, some problems will be discovered. In such
cases, the developer and the customer have to negotiate to decide if the system is good enough
to be put into use. They must also agree on the developer’s response to identified problems.
6. Reject/accept system: This stage involves a meeting between the developers and the customer
to decide on whether or not the system should be accepted. If the system is not good enough for
use, then further development is required to fix the identified problems. Once this is complete, the
acceptance testing phase is repeated.