Outline of Lecture 2
Refresh lecture 1 content
Test Levels
Test Types
Lecture 3
Objectives and Principles of Test Automation
Software Testing Process after Planning and Analysis
Testing objectives (reminder)
Detect defects in software
Get evidence that the system corresponds to the specification
Get confidence that the software does not do what it should not do
Get confidence that the system executes as needed
Get understanding of how far we can «load» the system before it collapses
Get understanding of the risks posed to users by a system release
Definitions
Positive Testing (Traditional Understanding)
Testing is any activity aimed at evaluating attributes or capabilities of a program or system and determining that they satisfy the expected results.
Negative Testing (Finding Defects out of the Scope of Requirements)
Testing is the process of running a program or system with the intent of finding defects.
Rapid Software Testing (introduced by James Bach)
Testing is the process by which we study and understand the status of the benefits and risks associated with the release of a software system.
Risk areas:
business, security and safety criticality;
commercial / public visibility;
experience in testing similar or related systems;
experience in testing previous versions of the same system;
user opinions;
views of analysts, designers and implementers
Main Objectives of Testing
Find defects (not ensure that there are none)
Inform stakeholders about found defects: not to improve software quality, but rather to provide information useful for decision making
Testers must provide valuable information
Quality is a value for a stakeholder
A defect is anything that threatens this value
“Rapid Software Testing” by James Bach, 1995-2016 [Link]
RST: Check or Test?
To check means to run the product to test a specific output:
Observe: interact with the product in a specific way to collect specific observations.
Evaluate: apply algorithmic decision-making rules to these observations.
Inform: report any violation as FAIL; otherwise call it successful (PASS).
“Rapid Software Testing” by James Bach, 1995-2016 [Link]
Check has three elements
1. Observation related to…
2. Decision-making rule such as…
3. Both observations and decision-making rules can be applied in an algorithmic way (a code sketch follows)
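
These three elements map directly onto code. Below is a minimal sketch in plain Java; the Cart class and its totalPrice method are illustrative stand-ins for a product under test, not anything from the lecture or a real tool:

// A minimal sketch of a "check": observe an output, evaluate it with
// an algorithmic rule, and inform by reporting PASS or FAIL.
public class PriceCheck {

    // Observation: collect a specific output from the product under test.
    static double observeTotalPrice() {
        return Cart.totalPrice(2, 9.99); // e.g., 2 items at 9.99 each
    }

    // Decision rule: an algorithmic, unambiguous comparison.
    static boolean evaluate(double observed) {
        return Math.abs(observed - 19.98) < 0.001;
    }

    public static void main(String[] args) {
        double observed = observeTotalPrice();
        // Inform: report any violation as FAIL, otherwise PASS.
        System.out.println(evaluate(observed) ? "PASS" : "FAIL: got " + observed);
    }

    // Stand-in for the product under test, so the sketch compiles.
    static class Cart {
        static double totalPrice(int qty, double unit) { return qty * unit; }
    }
}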
Check or Execute?
A check can be executed with the help of a machine that cannot think (but is rapid and accurate), or by a human who has been given instructions not to think (and who is slow and changeable).
“Rapid Software Testing” by James Bach, 1995-2016 [Link]
RST: Perspectives of Checks
People like and use checks. They help us discover whether a product is able to work.
Testing involves learning, but checks do not learn.
Demonstrating that a product is capable of working is far from demonstrating that it will work.
Checks can be automated, but designing, interpreting, and improving checks is testing (and requires “intelligence”).
“Rapid Software Testing” by James Bach, 1995-2016 [Link]
RST: Checking vs Testing
Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, and so on.
A test is an instance of testing.
Checking is the process of making evaluations by applying algorithmic decision-making rules to specific observations of a product.
A check is an instance of checking.
From: “Testing and Checking Refined” by James Bach, 2013 [Link]
RST: Testing
From: “Rapid Software Testing” by James Bach, 1995-2016 [Link]
Automated Software Testing
A software verification process in which the basic functions and steps of a test (such as start-up, initialization, execution, analysis and output of the result) are executed automatically by automated testing tools.
The Classic Argument of Marketing
Automated tests perform a sequence of actions without human intervention. This approach helps reduce human error and provides faster results. Because many products require tests to be run many times, automated testing generally leads to significant reductions in labor costs over time. Typically, a company will be able to feel this after two or three runs of automated tests.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #1.
Testing is a “sequence of actions”
Testing is rather a sequence of interactions interspersed with evaluation.
Some interactions are predictable and can be expressed in completely objective terms, but much of the interaction is complex, ambiguous, and volatile.
Automating all the necessary tests would lead to high costs and low efficiency.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #2.
Testing is a repetition of one and the same activities
Once a specific test has been run and no defect has been found, there is little chance that this test will ever find a defect until a new one is introduced.
Test variations and changes (as in manual testing) increase the likelihood of catching both new and old defects, as they focus on new features and specific areas.
Highly repeatable testing can even reduce the chance of detecting all important issues.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #3.
We can automate all testing activities
Not all human activities can be mimicked by a computer: the most difficult thing for a computer is to interpret test results.
Automated testing is greatly affected by:
Uncertainties and changes
Lack of a complete and accurate product specification
Finding the right test tools
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #4.
An automated test is faster because it does not require human intervention
All automated tests require human intervention, even if only to diagnose the results and identify failed tests.
It is quite difficult to make a complex set of tests run smoothly, usually due to changes affecting the software under test, memory problems, file system problems, sudden network crashes, or errors in the test tools themselves.
An automated test is faster only while it actually runs.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #5.
Automation reduces human errors
Yes, some errors and defects are reduced: usually those caused by routine mental and manual activities.
But other defects only increase: systematically invisible bugs will not be noticed.
It is easier to spot such problems in a manual test case, thanks to basic test management documentation, reviews, or practices.
Automated tests require regular verification and review.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #6.
We can determine the amount of costs and benefits between manual and automated testing
These processes are NOT comparable with each other:
Due to the dynamics of the process
Due to defect types
Due to the specifics of real projects
Test automation should be considered as PART OF an excellent testing strategy, rather than as the dominant activity.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #7.
Automation has to lead to “significant savings in labor costs”
Automated testing costs include:
Automation development costs
The cost of running automated tests
Automation maintenance costs when the product changes
The cost of any other new task required by automation
+ The costs of the manual testing that remains
In certain circumstances automation can even increase costs.
James Bach. «Test automation snake oil». [Link]
Reckless Assumption #8.
Automation will not harm the testing project
The set of tests grows and is not pruned, which leads to a situation where no one really knows what this test set actually tests, or what it means that the product has “successfully passed the test set”.
If the entire test system ever fails, there will be no manual process to fall back on. Manual testing is more dynamic, more flexible, and depends on a relatively smaller set of principles and documents.
The test strategy must be clear before automation!
James Bach. «Test automation snake oil». [Link]
Automated Testing Benefits
Repeatability
Fast execution
Lower maintenance costs when a test strategy is clear
Reports
Execution without human participation
Disadvantages of Automated Testing
Repeatability
Maintenance costs increase with the frequency of changes
High development costs
Automation tool price
Defect omission
Automated testing is beneficial
ONLY IF
the benefits outweigh the disadvantages
Principles of Test Automation System (TAS) Development
Main Principles
Maintain a distinction between the automation and the process that it automates
Automation is a part of testing
Select testing tools carefully
The test management system needs to be carefully thought out
Each test set execution must result in a status report showing which tests passed and which failed
Ensure that the product is mature enough that the maintenance costs of the automation do not outweigh the benefits
James Bach. «Test automation snake oil». [Link]
Think Before Automation
Is test automation useful in the context of the project?
A plan should be established for the development of automated tests, which should include:
What to automate?
How to automate?
What to use to automate?
What to Automate?
Hard-to-reach places in the system (back-end processes, file logging, database recording)
Frequently used functionality where the risk of error is high enough
Routines: typical application usage scenarios or individual actions
Basic operations for creating, reading, updating and deleting entities (CRUD operations); a minimal sketch follows this list
Validation reports
Long end-to-end scenarios
Data verification that requires accurate mathematical calculations
Checking the correctness of data search
Interfaces, work with files, and other things that are not easy to test manually
And other areas, depending on the requirements and the capabilities of the testing tools
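
As an illustration of automating CRUD checks, here is a minimal sketch using JUnit 5. UserStore is a hypothetical in-memory repository standing in for the real system under test; the names are invented for this example:

import static org.junit.jupiter.api.Assertions.*;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

// One test walks the entity through all four CRUD operations.
class CrudTest {

    static class UserStore {
        private final Map<Integer, String> users = new HashMap<>();
        void create(int id, String name) { users.put(id, name); }
        String read(int id)              { return users.get(id); }
        void update(int id, String name) { users.replace(id, name); }
        void delete(int id)              { users.remove(id); }
    }

    @Test
    void createReadUpdateDelete() {
        UserStore store = new UserStore();

        store.create(1, "Alice");
        assertEquals("Alice", store.read(1));   // Create + Read

        store.update(1, "Bob");
        assertEquals("Bob", store.read(1));     // Update

        store.delete(1);
        assertNull(store.read(1));              // Delete
    }
}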
How to Automate?
1. How well does the automation tool recognize the controls of the software program? (The more, the better.)
2. How much time is needed to maintain a script?
3. How handy is the tool for writing new scripts? How much time does it take? Consider:
Code structuring (OOP support)
Code readability
Code refactoring convenience
Why? What? How to?
What to Use to Automate Depends on
Test item and test scenario requirements
Tool capabilities
Tool security
Tool power
Tool performance
Tool compatibility
Tool non-intrusiveness (the tool must not disturb the system under test)
Test Automation Pyramid (Mike Cohn’s) [Link]
Martin Fowler, «TestPyramid», 2012. [Link]
Test Automation Establishing (1/6)
1. Convince the management
Test automation is expensive
Testing tools are expensive
An automation architect or engineer is also expensive
Tuning the automation system can take 2-3 months
Testers will be needed to focus on new / important functionality
[Link]
Test Automation Establishing (2/6)
2. Find automation tool experts
Automation architects
Automation engineers
[Link]
Test Automation Establishing (3/6)
3. Use the right tools for automation
They should fit your budget
They must support the technology used in your app
You need to have people who have the skills and abilities to work with these tools
The tools should have a good reporting mechanism
The tools must match the specifics of your project!
[Link]
Test Automation Establishing (4/6)
4. Understand which applications are most suitable for test automation
The application should not be in the early stages of development
The user interface must be stable
Manual test cases for the application must be documented in writing
[Link]
Test Automation Establishing (5/6)
5. Team training
6. Create a test automation framework (TAF)
A TAF is based on a set of rules and careful planning of script writing, so as to reduce maintenance costs. Scripts should not be significantly affected by changes to the app.
Types of TAFs (by scripting techniques); a data-driven sketch follows this list:
Linear (may include a capture/playback approach)
Modular (structured)
Data-driven (varies the test inputs)
Keyword-driven (one control script; data files + action words)
Process-driven (scenario-based definitions; workflow perspective)
Model-based (automated generation of test cases)
Hybrid (a combination of the above)
[Link]
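
A minimal sketch of the data-driven technique using JUnit 5: one script (the test method), many data sets. The discount rule below is invented purely for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountDataDrivenTest {

    // Hypothetical logic under test: 10% off orders of 100.00 or more.
    static double finalPrice(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }

    // Each CSV row is a separate test case run through the same script.
    @ParameterizedTest
    @CsvSource({
        " 50.00,  50.00",   // below threshold: no discount
        "100.00,  90.00",   // boundary value: discount applies
        "200.00, 180.00"    // above threshold
    })
    void appliesDiscountRule(double amount, double expected) {
        assertEquals(expected, finalPrice(amount), 0.001);
    }
}

Adding a test case is now a data change, not a script change, which is exactly the maintenance-cost argument for the data-driven TAF.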
Test Automation Establishing (6/6)
7. Development of an implementation plan
Selection of the environment (OS, browser and hardware configuration) in which the scripts will execute
You need to know this before developing the scripts
8. Script writing
Configuration management
9. Reporting
The tool usually provides several forms
10. Script maintenance!
A number of automation projects fail precisely because of poor script maintenance
[Link]
Other Test Automation Forms
Classical, Modified Classical, Ice Cream Cone, Ice Cream Cone in Troubles, Hourglass, Test Trophy
Testing Support Tools
Classification of Testing Tools (1)
Testing tools are classified according to the testing activities they support
Some tools support many activities, some just one
Tool manufacturers can provide an integrated toolkit for the testing lifecycle (Mercury, Compuware, Borland, etc.)
Any piece of purchased or locally developed software can be considered a tool
Tools can be very useful in improving the quality and efficiency of the testing process…
…but they can also be just a waste of time and effort
Classification of Testing Tools (2)
Some tools can be intrusive, i.e., they affect the test result: the so-called probe effect
Computer Aided Software Testing (CAST) tools:
some better support developer activities, such as static analyzers
some better support parts of the software development lifecycle (e.g., unit testing, system testing)
Terminology (1)
Test script - a document, program, or object that defines the test object, condition, initial state, input, expected result, and eligibility criteria for each test or subtest in a test suite.
Test passed (successful) - the actual result obtained after execution coincides with the predicted one, and there have been no incidents (symptoms).
Test failed (unsuccessful) - the actual result obtained after execution does not match the predicted one, and/or there have been incidents (symptoms).
Tested - an object is successfully tested if all scheduled tests have been performed without symptoms, which means that all tests have passed.
Bug-free - an object is defect-free if we consider the probability that it will show symptoms, or provoke symptoms in other objects, to be sufficiently low.
Terminology (2)
Coincidental correctness - successful completion of all tests does not mean that the object is defect-free. Although the actual result matches the predicted one, there may be a defect in the program, because the match is coincidental.
Blindness - any test method has a type of defect to which it is blind. The exception is a method that tests all possible inputs and initial states.
Falsifiable - a statement is falsifiable (LV: atspēkojams) if you can create an experiment that either confirms or denies its truth.
Requirements - what the object must meet and/or the properties that it should have. The selection of requirements is relatively arbitrary, but they must be consistent, reasonably complete, enforceable and, most importantly, falsifiable.
Feature - the desired behavior of the object; a calculation or evaluation performed by the object. A requirement is a set of features.
Test Oracle - a source for determining the expected results to compare with the actual result of the software under test. An oracle may be an existing system (used as a benchmark), a user manual, or an individual’s specialized knowledge, but it should not be the code.
Tool Support
Test management tools (any test activities over the entire SDLC): test management, incident management, requirements management, configuration management, continuous integration
Static testing tools (review process support): static analysis, modeling
Test specification tools: test design, test data preparation, test case generation
Test execution and event logging tools: test execution, test harnesses, test comparators, coverage measurement, security
Performance and monitoring tools: dynamic analysis, performance / load / stress, monitoring
Application specific tools: embedded systems, web testing, language specific
Other tools: word processor, spreadsheet program, operating system utilities, SQL, code debugging
Test Management Tools (1)
Management tools can be used throughout the software development lifecycle
During unit testing, acceptance testing, etc.
They can be used by the whole project team: managers, developers, testers, etc.
There are online solutions
Examples:
TestLodge - [Link]
Jira (can be adapted to support test management)
Test Management Tools (2): Test Management
Helps manage tests and activities
Interfaces with other tools:
Test automation
Defect tracking
Requirements management
Supports version control
Provides traceability: requirements - test cases - defects/incidents
Supports logging
Provides analysis
Test Management Tools (3): Requirements Management
The main purpose is to store requirements (in various formats)
It is often possible to set the priority of requirements
To support testing, the tool must be able to:
Validate and verify requirements
Allow test cases to be related to requirements
The test team should review the requirements as early as possible to verify their testability and consistency, which will allow test cases to be developed based on them.
Examples: MicroFocus Caliber (former Borland Caliber), IBM Rational ReqPro as a part of the Rational Suite tools, IBM Rational DOORS, JIRA (after proper configuration) and others
Test Management Tools (4): Incident Management
Defect tracking tools
Record defects / problems / incidents / anomalies in any aspect of the project
For example, operational problems, failed tests, documentation errors (though still rarely used to report errors to other employees)
Widely used in a team, especially by testers
Supports:
Setting incident priorities (high to low)
Assignment of incidents and appropriate actions (investigate, test, determine costs, etc.)
Recording the current status and history of the incident (new, canceled, resolved, pending verification, etc.)
Generating analysis reports (some tools) on trends in incident creation and resolution, the accumulated backlog, etc.
Adaptation to project needs (most tools)
Examples: MicroFocus StarTeam, LeanTesting, JIRA, IBM Rational ClearQuest
Test Management Tools (5): Configuration Management and Continuous Integration
Allow you to maintain and manage baselines
Not testing tools as such, but actively used by test teams to:
Get a specific version of the software (code, documentation…)
Maintain their own test materials
Traceability between the test software and the software under test can be ensured by using build / release features for the appropriate items
Essential when managing different versions of a system for different target platforms or different work releases
Examples: IBM Rational ClearCase, Microsoft Azure DevOps, Git
Static Testing Tools
Apply symbolic execution and similar static methods, because they do not test the software at runtime
Language specific
Tools include:
Flow analyzers
Path tests
Coverage analyzers
Interface analyzers
Examples: Veracode Static Analysis Tool, SonarQube, Codacy, SpotBugs, etc.
Test Specification Tools (1): Test Design Tools
Automatically generate one or all of the following:
Test input, which can be data from an external system or a user
Test cases
Expected results
They use requirements, analysis models, data models, state models, code
The model must be in a "readable" format (a formal language or the tool's format)
Able to search for input data boundary conditions, state transitions, paths in the system
The automatic generation of test cases is still not widely used in the industry
Test Specification Tools (2): Test Data Preparation
Allow selecting data from existing databases, files or recorded transmissions
Allow you to create, generate, manipulate and edit data for testing
The most advanced tools can work with various file and database formats
Often used in testing to "inject" data into databases, files, etc.
One of the benefits is the generation of anonymized data, which allows you to protect real data; a small sketch follows below
Examples: IBM DB2 Test Database Generator, Red-Gate SQL Data Generator, etc.
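
A minimal sketch of test data preparation in Java: producing anonymized records so tests never touch real customer data. The record layout and masking rules are invented for illustration and are much simpler than what the tools above offer:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TestDataGenerator {

    record Customer(int id, String name, String email) {}

    // Replace personal fields with synthetic values of the same shape,
    // keeping the data deterministic and reproducible between runs.
    static Customer anonymized(int id) {
        return new Customer(id, "user" + id, "user" + id + "@example.test");
    }

    static List<Customer> generate(int count) {
        return IntStream.rangeClosed(1, count)
                        .mapToObj(TestDataGenerator::anonymized)
                        .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        generate(3).forEach(System.out::println); // e.g., seed a test database
    }
}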
Automated Test Model
[Diagram: Data → Script → Interface → Item. The script gets test data from the data source and uses the item through an interface.]
Execution and Event Logging Tools (1): Test Execution
Provide facilities for recording, playback, and comparison
The tools simulate mouse movements, button presses, and keyboard input
Test scripts are usually written in a programmable scripting language (VB, C#, Java, etc.)
Additional code can be added to scripts to manipulate data, perform checks, iterate, and more
The test is executed automatically, following a recorded / programmed script and using defined input data
The tool compares results with predefined expected values and logs the event (e.g., using a test comparator); a sketch follows below
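
A minimal sketch of a programmed test execution script using Selenium WebDriver (one of the non-commercial tools listed at the end of this lecture). The URL, element locators, and expected text are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginScriptSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Drive the UI the way a capture/playback tool would.
            driver.get("https://example.test/login");
            driver.findElement(By.id("user")).sendKeys("tester");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Built-in comparison step: actual vs. expected value.
            String banner = driver.findElement(By.id("welcome")).getText();
            System.out.println(banner.contains("Welcome")
                ? "PASS" : "FAIL: unexpected banner '" + banner + "'");
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}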
Classification Criteria for Test Execution Tools
Test data getting.
Determines how the script obtains its data: whether the data is separated from the script or is part of the script.
Script definition type.
Determines how the script is developed, what its structure is, and how it is interpreted.
Item usage way.
Determines how the item is used: whether the actions are performed sequentially or in parallel.
Item interface type.
Determines the system level at which the script interacts (UIs, APIs, protocols, services).
Method of Getting Test Data
Data is an integral part of the script.
The script contains data as constant values and always executes with the same data set.
Externally defined data.
Data can be stored separately from the script, possibly in another form. The data set is still the same in this variant, but it can be changed without changing the script itself.
Data tables.
A table can store multiple sets of test data for a single script. The script can then be executed as many times as there are data sets in the table. In this way, several test cases that correspond to the same test procedure are covered.
Data generators.
Data generators are separate components that automatically generate test data according to certain rules. The script in this variant also works iteratively, executing a new test case each time; but instead of specifying the data values directly, the tester sets the rules that the data must comply with (see the sketch below).
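
A minimal sketch of a rule-based data generator in plain Java: the tester specifies a constraint and the generator produces fresh inputs each iteration. The "age between 18 and 120" rule and the validator are invented for illustration:

import java.util.Random;

public class RuleBasedGenerator {

    // Hypothetical validation logic under test.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 120;
    }

    public static void main(String[] args) {
        Random rng = new Random(7); // fixed seed: reproducible test runs

        for (int i = 0; i < 5; i++) {
            // Rule: generate ages inside the declared valid range.
            int age = 18 + rng.nextInt(103); // 18..120 inclusive
            System.out.printf("case %d: age=%d -> %s%n",
                i, age, isValidAge(age) ? "PASS" : "FAIL");
        }
    }
}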
Script Definition Type
Script in a programming language.
Either a universal (Java, C#, Python) or a specialized (TSL from WinRunner, SCL from OpenSTA) programming language is used to implement the script.
Script in a declarative language.
The script is also implemented as code in text form, but unlike the first class, a declarative language is used.
Visually defined script.
The script is developed using the tool's visual user interface.
Item Usage Way
Sequential command execution (single-client mode)
Parallel script execution (multi-client mode)
Item’s Interface Type
Software level
The system and the test software are linked together
API (Application Programming Interface) level
The test automation system invokes the functions/operations/methods provided by a (remote) API
Service level
Via web services, RESTful services, etc.
Protocol level
Via HTTP, TCP, etc.; a protocol-level sketch follows below
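
A minimal sketch of a protocol-level check: the script talks HTTP directly instead of driving a UI, using only the JDK's built-in HttpClient (Java 11+). The URL and expected status are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProtocolLevelCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.test/api/health"))
            .GET()
            .build();

        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // Algorithmic decision rule: HTTP 200 means the service is up.
        System.out.println(response.statusCode() == 200
            ? "PASS" : "FAIL: status " + response.statusCode());
    }
}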
Execution and Event Logging Tools (2): Test Harnesses and Unit Testing
Test harnesses (LV: testaizjūgi), drivers and stubs (LV: aizbāzni)
Used to:
Simulate components not yet available, so that testing can continue
Simulate other components to allow detailed and controlled testing of specific components
Simulate events that can be difficult (or dangerous) to trigger in the system
Commercial and custom-written tools, e.g., unit test frameworks (the xUnit series: JUnit, HTTPUnit, etc.); a stub sketch follows below
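
A minimal sketch of a hand-written stub with JUnit 5: the real payment gateway is not yet available, so a stub simulates it and testing of the order logic can continue. All names are illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    interface PaymentGateway {
        boolean charge(double amount);
    }

    // Component under test: depends on the gateway through an interface.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }

        String placeOrder(double amount) {
            return gateway.charge(amount) ? "CONFIRMED" : "REJECTED";
        }
    }

    @Test
    void confirmsOrderWhenPaymentSucceeds() {
        // Stub: always approves the charge; no network, no real gateway.
        PaymentGateway alwaysApproves = amount -> true;
        assertEquals("CONFIRMED",
            new OrderService(alwaysApproves).placeOrder(25.0));
    }

    @Test
    void rejectsOrderWhenPaymentFails() {
        PaymentGateway alwaysDeclines = amount -> false;
        assertEquals("REJECTED",
            new OrderService(alwaysDeclines).placeOrder(25.0));
    }
}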
Execution and Event Logging Tools (3): Test Comparators
Test comparators are used to compare actual and expected results during software testing
Integrated (most test comparators)
Non-integrated (typically handle a range of file and database formats), e.g., ExamDiff
Test execution tools often have built-in comparators that deal with screen characteristics, user interface objects, bitmap images.
Test comparators often have filtering or masking capabilities: they can ignore rows or columns of data or areas of the display (see the sketch below)
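
A minimal sketch of a comparator with masking: a volatile field (here, a timestamp column) is excluded before comparing actual and expected records. The record layout is invented for illustration:

import java.util.Set;

public class MaskingComparator {

    // Compare two CSV-like rows, ignoring the columns listed in 'masked'.
    static boolean matches(String[] expected, String[] actual, Set<Integer> masked) {
        if (expected.length != actual.length) return false;
        for (int col = 0; col < expected.length; col++) {
            if (masked.contains(col)) continue;            // skip masked column
            if (!expected[col].equals(actual[col])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        String[] expected = {"42", "Alice", "2020-01-01T00:00:00"};
        String[] actual   = {"42", "Alice", "2024-06-05T13:37:12"};

        // Column 2 is a timestamp that differs on every run: mask it out.
        System.out.println(matches(expected, actual, Set.of(2)) ? "PASS" : "FAIL");
    }
}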
Execution and Event Logging Tools (4): Coverage Measurement
A standard option in many commercial tools (statements and branches)
Some tools add additional code to the source code (instrumentation) to obtain detailed information, but this can cause a probe effect
After execution, the event log is analyzed and coverage statistics are generated; a sketch of the idea follows below
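
A minimal sketch of how statement-coverage instrumentation works: a probe is inserted before each statement of interest, and after the run the untouched probes reveal unexecuted code. Real tools do this automatically (often on bytecode); this hand-instrumented version just shows the idea:

public class CoverageSketch {

    static final boolean[] hits = new boolean[3]; // one flag per probe

    static String classify(int n) {
        hits[0] = true;                 // probe 0: method entered
        if (n < 0) {
            hits[1] = true;             // probe 1: negative branch
            return "negative";
        }
        hits[2] = true;                 // probe 2: non-negative branch
        return "non-negative";
    }

    public static void main(String[] args) {
        classify(5); // only one branch exercised

        // Post-run analysis: report which probes were never hit.
        for (int i = 0; i < hits.length; i++) {
            System.out.println("probe " + i + ": "
                + (hits[i] ? "covered" : "NOT covered"));
        }
    }
}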
Performance and Monitoring Support (1): Performance (load, stress) Testing Tools
Two main objectives:
Load generation
Measurement of test transactions
Load generators:
Are able to simulate multi-user mode or large amounts of input data
The user interface or test drivers are used for load generation
The following events may be logged:
Number of completed transactions
Response time for selected transactions
Provide reports based on test event logs and graphs of load against response times (a load-generation sketch follows below)
Examples: RadView WebLoad, Dotcom-Monitor LoadView Stress Testing, Micro Focus LoadRunner, IBM Rational Performance Tester, Apache JMeter, etc.
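
A minimal sketch of a load generator in plain Java: a pool of virtual users sends requests concurrently while completed transactions and response times are aggregated. Real tools (JMeter, LoadRunner, etc.) add ramp-up profiles, think times, and rich reporting; the target URL here is hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class LoadSketch {
    public static void main(String[] args) throws Exception {
        final int users = 10, requestsPerUser = 5;
        HttpClient client = HttpClient.newHttpClient();
        LongAdder completed = new LongAdder();   // completed transactions
        LongAdder totalMillis = new LongAdder(); // summed response times

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    try {
                        long start = System.nanoTime();
                        client.send(
                            HttpRequest.newBuilder(URI.create("https://example.test/"))
                                       .GET().build(),
                            HttpResponse.BodyHandlers.discarding());
                        totalMillis.add((System.nanoTime() - start) / 1_000_000);
                        completed.increment(); // any response counts as completed
                    } catch (Exception e) {
                        // failed transaction: not counted as completed
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);

        long done = completed.sum();
        System.out.printf("completed: %d, avg response: %d ms%n",
            done, done == 0 ? 0 : totalMillis.sum() / done);
    }
}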
Performance and Monitoring Support (2): Monitoring Tools
Not “testing” tools, but they allow identification of existing or potential failures (e.g., Prometheus, Splunk)
Monitor, and specialize in:
System memory
CPU usage
File space
Network traffic
Disk I/O
Executable processes
Database optimization
Can be configured to report potential failures
Application Specific Tools
Tools for specific tasks, environments, or applications, e.g.:
Harnesses and monitoring tools for embedded systems (Tessy, Parasoft DTP, KlocWork, etc.)
Testing and monitoring tools for robot platforms and components (e.g., AWS RoboMaker simulations, Amazon CloudWatch)
Testing tools for web application specifics:
Hyperlink testing tools (spiders; e.g., W3C Link Checker)
Performance tools specifically for web user interfaces
Page speed testing tools
Cross-browser testing tools (e.g., Comparium, Selenium Box)
Etc.
Tools for specific operating systems
Tools for specific languages
Examples of Non-Commercial Tools for Functional Testing Automation
Selenium
Robotium
Sahi
SoapUI
AutoIt
Cucumber
Httest
TestNG
Canoo WebTest
Watir
Eclipse TPTP (Test and Performance Tools Platform)
Marathon
Examples of Commercial Tools for Functional and Non-functional Testing
Micro Focus:
LoadRunner
Unified Functional Testing (UFT), former HP QuickTest Professional
ALM (Application Lifecycle Management) / Quality Center, former HP Quality Center
Segue SilkPerformer
IBM Rational:
Functional Tester
Performance Tester
TestStudio
Robot
Quality Manager, etc.
AutomatedQA TestComplete