MT Imp

This document provides an overview of manual testing. It discusses software development life cycle (SDLC) models such as waterfall, spiral, prototype, and V-model, then covers types of software testing (unit, integration, system, acceptance, and others) and testing terms such as defects, test cases, and test plans. It is intended to serve as a guide to software testing concepts, processes, and terminology.

[Manual testing]

Sneha Gejji

Topics

SDLC
Waterfall model
Spiral model
Prototype model
V-model
Software testing
WBT
BBT
Functionality testing
Integration testing
System testing
Acceptance testing
Smoke testing
Ad-hoc testing
Agile
Compatibility testing
Performance testing
Globalization testing
Usability testing
Yellow box testing
Comparison testing
Accessibility testing
Reliability testing
Recovery testing
Exploratory testing
Regression testing
Defect
Test case
Test plan
STLC

SDLC
(Software development life cycle)
Definition:
It is a procedure to develop the software.
Stages:

When do we follow SDLC?

Whenever a software company or an individual wants to develop new software, they have to follow the SDLC.

What will happen if we don't follow SDLC?
1) We will not know how much money should be invested
2) We will not know how many resources are needed
3) We will not get a detailed requirement document
4) There might be a delay in releasing the software
Models of SDLC:
1)Waterfall model
2)Spiral model
3)Prototype model
4)V-model
5)Agile model

Waterfall model
Definition:
It is a step-by-step procedure to develop the software. It is a traditional model.
Stages:

Why do we call it the waterfall model?

Here the progress is seen as flowing downwards like a waterfall; therefore this model is called the waterfall model.
Requirement collection:
Here the business analyst (BA) will go to the customer's place and collect the requirements. The requirement is in the form of a CRS (customer requirement specification).
The CRS is converted to an SRS, meaning "business language" is converted to "software language" by the BA/PA.
CRS: It is the requirement document in the form of the customer's business language.
SRS: It is the requirement document in the form of the software language.
BA: He will convert CRS to SRS in a service-based company.
PA: He will convert CRS to SRS in a product-based company.
Service-based company: They provide service and develop software for other companies, according to their requirements.
Ex: Infosys, Wipro, TCS……
Product-based company: They develop their own software and sell it to other companies which may need the S/W, & earn profit.
Ex: Microsoft, Oracle, Google…….

Who can become a business analyst?

Domain expert -> a person who has worked in the same domain for more than 10-15 years
Senior developer -> a developer who has worked in the same project domain for more than 6 to 7 years
Sr. test engineer -> a test engineer who has tested in the same project domain for more than 6 to 7 years
Feasibility study:
It is done by a team.
The team consists of the business analyst, project manager, architect, HR team and finance team.
Here we check for:

• Technical feasibility: Here we check whether the technology is available or not to develop the software (it is done by the architect)
• Financial feasibility: Here we check whether the budget is available or not to develop the software (it is done by the finance team)
• Resource feasibility: Here we check whether the resources are available or not to develop the software (it is done by the HR team)

Architect: He will do the technical feasibility study and tell which technology to use for developing the software. Any Sr. Dev can become an architect.
Design:
Here we do the high-level design and low-level design of the software.
1. High-level design: It is done by the architect.
It is the architecture of the software to be developed.

2. Low-level design: It is done by the Sr. Dev.
It describes how each and every feature in the software should work, and how each and every component should work.

Coding:
Here we start building the software, i.e. writing the code for the product. It is done by Sr. Devs, Jr. Devs and freshers.

Testing:
After coding, we start testing, wherein we identify defects in the software. It is done by test engineers.

Note: In the waterfall model, developers are involved in testing.

Why should developers not be involved in testing?
1. Developers spend most of their time developing the software rather than testing, so testing gets no time.
2. Developers see the product from a positive point of view and not from a negative point of view.
3. Developers will be overconfident about the software they have built.
4. Developers might find defects while testing but still end up not fixing them.
Installation:
After the software is developed & tested, it is installed at the customer's place for use. It is done by the installation engineer.

Maintenance:
After the software is installed and used, if the customer finds any defect, the software company will fix it according to the agreement; that is, if the defect is found within the maintenance period, then the software company will fix it free of cost.

Advantages:
1. Requirements and design don't change, so we get a stable product.
2. Quality of the software is good.

Drawbacks:
1. Backtracking is not possible; for example, we cannot change the requirement once the design stage is completed, meaning the requirements are frozen. Hence this model is not flexible.
2. The requirement is not tested and the design is not tested; if there is any bug in the requirement, it flows down till the end & leads to a lot of rework.
3. It is a traditional model, where developers are involved in testing.

Applications:
1. We go for the waterfall model to develop simple applications where the requirements are fixed.
2. We go for the waterfall model to develop short-term products where the requirements are fixed.
Ex: Alarm, Calculator.

Spiral model
It is a process of developing the software module-wise.

Here, once requirement collection for module 'A' is done, we go for the design of module A. Once the design is done & after coding, we go for the testing of 'A'. The same process continues module-wise for the upcoming requirements.

How can we handle the changes?

1) Major changes

2) Minor changes

Advantages:
1. Requirement changes are allowed after each cycle with minimum effort
2. Addition of new requirements is allowed

Drawbacks:
1. The same process is repeated for each & every module
2. It is a traditional model, where developers are involved in testing

Application:
We go for the spiral model whenever the customer gives the requirements in stages

Prototype model
Here we develop a prototype, or dummy model, before the actual development of the software.

Here, the customer looks at the prototype and gives feedback if any changes are needed.
After the prototype is confirmed by the customer, the actual design, development and testing of the software happen.

Advantages:
1) Good communication between the customer & development team
2) The customer can make changes if needed after looking at the prototype

Disadvantages:
1) Investment is more
2) Delay in starting the actual development

Applications:
1) When the customer is new to software
2) When the developers are new to the domain
3) When the customer is not clear about his own requirements

V-model
It is a step-by-step procedure to develop new software. It is also called the V&V model, which means verification and validation model.

Verification:
It is the process of reviewing the CRS, SRS, design, code, test cases and related documents.

Validation:
It is the actual testing done after the software is developed. Here we execute the test cases.
Here both the testing and development teams are involved in parallel.
Both the testing team and the development team first do the verification process and then the validation process.
Verification is done to prevent defects.
Once the software is ready, validation is done in order to identify and fix the defects.

Advantages:
1. Testing starts from the initial stages, i.e. the requirements and design are tested, so the downward flow of defects is reduced.
2. Software quality will be good.

Disadvantages:
1. Initial investment is more
2. Documentation is more

Applications:
1. Complex projects and huge projects
2. For long-term projects

Software testing
The process of finding defects in the software is known as software testing

(OR)
Verifying the functionality of an application against the requirement specification is called software testing

(OR)
Execution of a program with the intent of finding defects in the software is called software testing

Why do we do software testing?

1) Every software is developed to support a business. If there is a defect in the software, it affects the business. So, before we use the software for business, it should be tested, and all the problems must be recognized & solved.
2) To check whether the software is developed according to the requirement or not.
3) To improve the quality of the product.

Ways of testing:
a. Manual testing:
Testing the software manually, repeatedly, in order to find defects in the software according to the requirement specification is called manual testing.
b. Automation testing:
The test engineer will write code/a program/a script using tools like Selenium/QTP & run the program against the software. The tool or program will automatically test the software & give the result as pass or fail; this concept is called automation testing.

White box testing

Testing each and every line of code is called white box testing.

Types of white box testing:

1) Path testing
2) Condition testing
3) Loop testing
4) White box testing from the memory point of view
5) White box testing from the performance point of view

a) Path testing:
Here the developer will draw the flowchart & test all the independent paths.

Advantages of a flow graph:
1. Will not miss any path
2. Will not repeat any path

b) Condition testing:
Here developers will test all the logical conditions for both true & false values.

c) Loop testing:
Here developers will test the loops and ensure that the logic repeats for the defined number of cycles.
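The condition and loop checks above can be sketched as a small white-box test. The two functions below are hypothetical stand-ins for the code under test, not from any real application:

```python
# White-box sketch: condition testing (exercise true and false branches)
# and loop testing (the loop runs for the defined number of cycles).
# `classify` and `repeat_sum` are hypothetical functions for illustration.

def classify(n):
    # One logical condition, with a true branch and a false branch.
    return "even" if n % 2 == 0 else "odd"

def repeat_sum(values):
    # A loop whose body must run once per element.
    total = 0
    for v in values:
        total += v
    return total

# Condition testing: both outcomes of the condition.
assert classify(4) == "even"
assert classify(7) == "odd"

# Loop testing: zero iterations, one iteration, many iterations.
assert repeat_sum([]) == 0
assert repeat_sum([5]) == 5
assert repeat_sum([1, 2, 3]) == 6
```

The zero/one/many split is the usual way to make sure a loop is tested for all its boundary cycle counts.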

d) White box testing from the memory point of view:

Typical mistakes made by developers because of which the size of the code increases:

• Repetition of the same code instead of writing a function
• Not using built-in functions
• Unused variables & functions

e) White box testing from the performance point of view:

• Not using better logic
• Not using AND & OR properly

Black box testing

Verifying the functionality of an application according to the requirement specification is called black box testing.
It is done by test engineers.

Functionality testing

Testing each and every component thoroughly against the requirement specification is called functionality testing.

Component:
A component can be a link, text field, text area, drop-down, button, or widget.
Thoroughly:
Testing a component by entering all the possible inputs is what we mean by thoroughly.

Why should the requirements be numbered?

• It is very easy to understand the requirement
• There will be clarity in the requirement
• The requirement becomes traceable
• The requirement becomes measurable
• It becomes very easy to communicate between the developer, the testing team & the customer

We can do any type of testing in 3 different ways:
1. Over testing
2. Under testing
3. Optimized testing

A. Over testing:

Testing the application with the same scenarios in different ways
(OR)
Testing the application with those scenarios which don't make any sense is called over testing.
Disadvantages:
• By doing over testing we will waste a lot of time

B. Under testing:

Testing the application with an insufficient set of scenarios is called under testing.
Disadvantages:
• By doing under testing we will miss a lot of defects
C. Optimized testing:

Testing the application only with those scenarios which make sense is called optimized testing.
Advantages:
1. The TE will not miss any scenarios
2. The TE will not miss any defects
3. There will be no duplicates
4. Time will not be wasted
5. Quality will be good

Positive testing:
Testing each and every component of an application by entering valid or expected data, which is according to the requirement specification, is called positive testing.
Negative testing:
Testing each and every component of an application by entering invalid or unexpected data, which is not according to the requirement specification, is called negative testing.
Rules:
• Always start testing the application with valid data; only if the application works for valid data should you test with invalid data
• If the application is not working for one of the invalid values, you can continue testing some more invalid values
• A test engineer should not assume (or) propose the requirement; if you have any queries (or) questions, you should interact with the developer, customer (or) business analyst and get them clarified.
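The positive/negative split and the "valid data first" rule above can be sketched as follows, assuming a hypothetical amount field whose requirement accepts whole numbers from 1 to 25,000 (the validator and its range are illustrative assumptions):

```python
# Positive vs. negative testing sketch for a hypothetical amount field.
# Assumed requirement: the field accepts integers from 1 to 25000.

def is_valid_amount(value):
    # Assumed validation rule: digits only, within the allowed range.
    return value.isdigit() and 1 <= int(value) <= 25000

# Positive testing: valid, expected data per the requirement.
assert is_valid_amount("1")
assert is_valid_amount("25000")

# Negative testing: invalid or unexpected data (run these only after
# the positive cases pass, per the rules above).
assert not is_valid_amount("0")       # below range
assert not is_valid_amount("25001")   # above range
assert not is_valid_amount("12ab")    # not a number
assert not is_valid_amount("-5")      # unexpected sign
```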

Integration testing

Testing the data flow between two modules is called integration testing.

Integration scenario:
1) Log in as user 'A', click on amount transfer, enter FAN as user 'A', TAN as user 'B', enter the amount (Rs.1000) and click on transfer. A confirmation page should be displayed. Log out as user 'A'. Log in as user 'B', click on the amount balance and check whether the proper balance is displayed or not. Log out as user 'B'.

Q. How to do integration testing?
1) Understanding the application is very important
a) You should understand each & every module
b) You should also understand how all the modules are related
2) Identify all possible scenarios
3) Prioritize the identified scenarios
4) Document the scenarios according to priority
5) Execute the scenarios

6) If you find a defect, send it to the developer

Positive integration testing:
The amount transferred is less than or equal to the balance; this testing is known as positive integration testing.
Scenario:
Log in as user 'A', check the amount balance, click on amount transfer, enter FAN as user 'A', TAN as user 'B' and enter the amount (Rs.1000). Click on transfer; a confirmation message will be displayed. Log out as user 'A'. Log in as user 'B', click on the amount balance; the proper balance should be displayed (old balance + 1000). Log out as user 'B'.
Negative integration testing:
The amount transferred is more than the balance.
Scenario:
Log in as user 'A', check the amount balance, click on amount transfer, enter FAN as user 'A', TAN as user 'B' and enter the amount (Rs.15,000). Click on transfer. Log out as user 'A'. Log in as user 'B', click on the amount balance; the old balance should be displayed. Log out as user 'B'.
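The two scenarios above can be sketched against a minimal in-memory stand-in for the banking modules. The `Bank` class and its starting balances are illustrative assumptions, not the real application:

```python
# Integration-testing sketch of the amount-transfer scenarios above,
# using a minimal in-memory bank as a stand-in for the real modules.

class Bank:
    def __init__(self, balances):
        self.balances = balances  # account name -> balance

    def transfer(self, fan, tan, amount):
        # Transferring more than the balance is rejected.
        if amount > self.balances[fan]:
            return False
        self.balances[fan] -= amount
        self.balances[tan] += amount
        return True

bank = Bank({"A": 10000, "B": 2000})

# Positive integration scenario: Rs.1000 <= balance of A.
assert bank.transfer("A", "B", 1000) is True
assert bank.balances["B"] == 3000   # old balance + 1000

# Negative integration scenario: Rs.15000 > remaining balance of A.
assert bank.transfer("A", "B", 15000) is False
assert bank.balances["B"] == 3000   # old balance unchanged
```

The point of the sketch is that an integration test checks the dataflow between the transfer module and the balance module, not either module in isolation.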
Types of integration testing:
Incremental integration testing:
Incrementally adding the modules & testing the dataflow between the modules is called incremental integration testing.

Top-down incremental integration testing:
Incrementally adding the modules & testing the dataflow between the modules, making sure that the module which is added is the child of the previous module.

Bottom-up incremental integration testing:
Incrementally adding the modules & testing the dataflow between the modules, making sure that the module which is added is the parent of the previous module.

Non-incremental integration testing:
Combining all the modules at once and testing the data flow between them is called non-incremental integration testing.

Drawbacks of non-incremental integration testing:

1. Chances are there that we might miss some dataflow
2. Chances are there that we might end up testing the same thing again & again; because of this, the time taken will be more
3. It is difficult to identify the root cause of a defect

If one module is ready & another module is not ready, then how will you do integration testing?

Stub: it is a dummy module; it acts like the module which is not yet built; it generates data & receives data
Driver: it is the one which sets up the testing environment and does a lot of transactions, analyses the results & sends the output (it does the transactions between the real modules & the stubs)
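A minimal sketch of how a stub and a driver fit together, assuming the transfer module is the one not yet built. All class and method names here are hypothetical:

```python
# Stub/driver sketch: the balance module is ready but the transfer
# module is not, so a stub stands in for it while a driver sets up the
# environment, runs the transaction, and analyses the result.

class TransferStub:
    """Dummy module: pretends to be the unbuilt transfer module.
    It receives a request and generates canned data in response."""
    def transfer(self, fan, tan, amount):
        return {"status": "success", "credited": amount}

class BalanceModule:
    """The real, already-built module under test."""
    def __init__(self, balance):
        self.balance = balance
    def credit(self, amount):
        self.balance += amount

def driver():
    """Sets up the test environment, performs the transaction between
    the real module and the stub, and returns the resulting balance."""
    stub = TransferStub()
    balance = BalanceModule(2000)
    result = stub.transfer("A", "B", 1000)
    if result["status"] == "success":
        balance.credit(result["credited"])
    return balance.balance

assert driver() == 3000  # 2000 old balance + 1000 credited by the stub
```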

System testing
It is end-to-end testing wherein the test environment is just similar to the production environment.
End-to-end testing:
Navigating through all the features and checking whether each feature is working as expected or not.

OD Flow:

Types of environments:

Development environment:
It is the setup used for developing the software.
(It consists of hardware, software, servers, network)
Test environment:
It is the setup used for testing the software.
(It consists of hardware, software, servers, network)
Production environment:
It is the setup used to run the business.
(It consists of hardware, software, servers, network)

When do we do system testing?
• Whenever a test environment which is just similar to the production environment is available
• Whenever the modules are functionally stable (a smaller number of defects)
• Whenever a bunch of modules is available

Terminologies:
Build:
When developers write the code & compile it, we get a binary file; that file is what we call a build.

Test cycle:
It is the time or effort spent by the test engineers from the start to the end of testing.

Re-spin:
The process of getting a new build within one test cycle is called a re-spin.
If there are blocker defects, we will get a re-spin.
To install a re-spin, you should first uninstall the old build and then install the re-spin.

Patch:
A patch is a small piece of software which consists of modified programs, added programs and removed programs.

Release:
Starting from collecting the requirements, developing the software and testing the software for many cycles, up to releasing the software to the customer, is what we call one release.

Acceptance testing
It is end-to-end testing done by IT engineers sitting at the customer's place, wherein they take real-time business scenarios & check whether the software is capable of handling them or not.

Why do we do acceptance testing?

1. Chances are there that, under business pressure, the software company might push the software to the customer with critical bugs; to prevent that, they do acceptance testing.
2. If they use software with critical bugs for business, they will undergo severe losses; to avoid that, they do acceptance testing.
3. Chances are there that the development team would misunderstand the requirements & develop wrong features; to find such features, the customer will do acceptance testing.

Approach 2:

It is end-to-end testing done by end users, wherein they use the software for the business for a particular period of time and check whether the software is capable of handling all real-time business scenarios.

Approach 3:

It is end-to-end testing done by our own TEs sitting at the customer's place, wherein they refer to user scenarios given by the customer and check whether the software is capable of handling all real-time business scenarios.

Approach 4:

It is end-to-end testing done by our own TEs sitting at our own place, wherein they refer to user scenarios given by the customer and check whether the software is capable of handling all real-time business scenarios.

Smoke testing

Testing the basic or critical features of an application before going for thorough testing is called smoke testing.

Advantages of smoke testing:

• The TE can find all the blocker defects at an early stage itself.
• Developers will get sufficient time to fix the defects.
• The test cycle will not get postponed and the release will not be delayed.

Points to remember:

• In smoke testing we test only basic or critical features.
• We take every basic or critical feature and test 1 or 2 important scenarios for it.
• Here we do only positive testing.
• In the beginning we will not be able to identify basic or critical features; we will learn them only after getting very good product knowledge.

When do we do smoke testing?

• Whenever we get a new build from the development team, we should always start with smoke testing, because adding, modifying or removing features or fixing defects might affect basic or critical features; to find that at the beginning, we do smoke testing.
• Before the customer does acceptance testing, he should also do smoke testing:
  • To check whether he has received the complete product or not.
  • To check whether the product is properly installed and configured.
• The one who installs the product in the server should do smoke testing, to check whether the product is installed properly or not.
• Before they give the build to the testing team, developers should do smoke testing, so that if there are too many defects, they need not give the build to the testing team.

Why do we do smoke testing?

1. To check whether the product is testable or not. In the beginning, if you find too many defects, it means the product is not testable, so better stop testing and spend the time identifying some more scenarios.
2. Do smoke testing at the beginning itself; if you find any blocker defect, send it to the developer at the beginning itself so that the developers have sufficient time to fix the defect.
3. To check whether we have received a broken build from the development team.
4. We do this to check whether the product is installed properly or not.
5. It is like a health check of the product.
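A smoke run can be sketched as one positive scenario per critical feature, here against a hypothetical in-memory mail application (the `MailApp` class, its user, and its features are all assumptions for illustration):

```python
# Smoke-testing sketch: run one positive scenario per basic/critical
# feature of a hypothetical mail application before thorough testing.

class MailApp:
    def __init__(self):
        self.users = {"sneha": "pwd123"}
        self.outbox = []
    def login(self, user, pwd):
        return self.users.get(user) == pwd
    def compose(self, to, body):
        self.outbox.append((to, body))
        return True

def smoke_test(app):
    # Only basic/critical features, only positive scenarios.
    checks = {
        "login": app.login("sneha", "pwd123"),
        "compose": app.compose("a@b.com", "hello"),
    }
    # Any failure here is a blocker: stop and send the build back.
    return all(checks.values()), checks

ok, checks = smoke_test(MailApp())
assert ok
```

If `ok` is false, the build is treated as broken and returned to the developers instead of starting the full test cycle.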

Difference between smoke testing & sanity testing

• Smoke testing is shallow and wide testing (shallow means high-level testing; wide means it covers all basic or critical features). Sanity testing is deep and narrow testing (here we take one feature, go deep inside and test it).
• Smoke testing is positive testing. Sanity testing is both positive & negative testing.
• In smoke testing we document scenarios and test cases. In sanity testing we don't document scenarios and test cases.
• In smoke testing we go for automation. In sanity testing we don't go for automation.
• Smoke testing is done by both developers and TEs. Sanity testing is done only by TEs.
Ad-hoc Testing
Testing the application randomly, wherein we don't refer to any kind of formal document like test cases or scenarios, is called ad-hoc testing.

Why do we do ad-hoc testing?

1. Chances are there that when the product is launched, end users might use the application randomly and find defects; to avoid that, the test engineer should also test the application randomly.
2. If you only look at the requirements and test the software, the number of defects you are going to catch is limited; by testing randomly, you are going to catch more defects.
3. We do this to increase the defect count.
4. We do this to have better test coverage.
5. The intention of doing ad-hoc testing is to somehow break the product.

How to do ad-hoc testing?

Log in as a user, click on compose, fill in the details for all fields, click on send, and log out. Click on the browser back button and check whether the login page is displayed or whether the application asks to enter the UN and PWD.

When do we do ad-hoc testing?

1. When the product is functionally stable, then we should think about ad-hoc testing.

2. When we are doing smoke testing, we should not do negative testing/ad-hoc testing. If you do, you will not be able to test the basic or critical features.
3. Whenever the testing team/TE is free, they should spend time doing ad-hoc testing.
4. Whenever we are doing FT/IT/ST, if we get some ad-hoc scenarios, we should pause our regular testing & do ad-hoc testing; if we get too many scenarios, we should document them & execute them when we get time.
(Note: this is not formal like a test case)

Agile model

Agile is a model wherein we develop the software in an incremental and iterative process.

This model came up in order to overcome the drawbacks that were there in the traditional models.

Here we build large products in shorter cycles called sprints.

Scrum process:

It is the process used to build an application in the agile model.

Scrum team:

It is a group of engineers working towards completing the committed features or stories.

• Generally, a scrum team will have 7-12 members.
• It includes shared-team and core-team members.
• The core team includes the scrum master, development engineers and test engineers.
• The shared team includes the architect, product owner, database admin, network admin, UI and UX designers, and BA.
• The scrum master leads the entire scrum team and facilitates everyone to complete their tasks.

Product backlog:

It is a prioritized list of stories or requirements that must be developed in the complete project.

• Generally, the product owner, customer, business analyst, architect and scrum master will be involved in building it.
• Generally, stories in the product backlog need not be in detail.

Sprint backlog:

It is the list of stories and the associated tasks committed by the scrum team that must be delivered within one sprint.

SPRINT PLANNING MEETING:

· Here the entire scrum team sits together and pulls stories from the product backlog.
· The scrum master assigns each story to a development engineer and a test engineer.
· Now each engineer derives the tasks to be completed to build the stories.
· Each engineer estimates the time taken to complete each task, i.e. they derive the story points.

Following are the roles played by different people in Sprint planning meeting:

1. Scrum master:

a. This complete meeting is driven by the scrum master.

2. Product owner:

a. He clarifies if there are any questions related to stories or requirements.

3. Development engineer:

a. He derives the tasks for building every story.

b. He prioritizes which story to build first and which story to build later in the sprint.

c. He prioritizes the tasks.

d. He derives the story points.

4. Test engineer:

He derives the tasks to be completed to test each feature or story.

Ex: Create a/c -> Identify scenarios, Write test cases, Review test cases, Execute test cases, Defect tracking

DAILY STAND-UP MEETING / ROLL CALL MEETING / DAILY SCRUM MEETING

· Here the entire scrum team meets.
· This meeting is completely driven by the scrum master.
· Here every engineer should explain:
a) What they did yesterday
b) What impediments/hurdles they faced yesterday
c) What activities they are planning to do today
d) What impediments they are expecting in completing today's tasks
· The scrum master tries to solve certain impediments right there in the meeting. If it takes too much time, then the scrum master notes it down in the 'impediment backlog' and solves it later.
· Generally, this meeting should be completed within 10-15 mins.
· This meeting should be conducted at the beginning of the day.

· Here everybody should stand up in the meeting so that people only talk to the point.

SPRINT REVIEW MEETING:

· The sprint review meeting is done at the end of the sprint, where the engineers give a demo to the product owner.
· They also discuss how to plan for the next sprint.

RETROSPECTIVE MEETING

· Here the entire scrum team meets and discusses all achievements (good processes followed) and mistakes (wrong activities performed), and it is documented. This document is called a retrospect document.
· When the next sprint starts, while doing the sprint planning meeting we refer to this document & plan in such a way that old mistakes are not repeated and good activities are adopted once again.

BURNDOWN CHART:

It is a graphical representation of work left vs time.
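The chart's data can be sketched as a simple running subtraction; the sprint length and story-point numbers below are illustrative, not from any real project:

```python
# Burndown sketch: work remaining vs. time for an assumed sprint of
# 40 committed story points, with illustrative daily completion counts.

committed = 40
completed_per_day = [4, 5, 3, 6, 4]  # points finished on days 1..5

remaining = [committed]
for done in completed_per_day:
    remaining.append(remaining[-1] - done)

# Work left at the end of each day; plotting this against the day
# number gives the burndown chart.
assert remaining == [40, 36, 31, 28, 22, 18]
```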

STORYBOARD/WHITE BOARD:

It is a board which contains a list of pending tasks, tasks in progress and completed
tasks.

HOT FIX / INCIDENT MANAGEMENT:

If the customer faces any blocker or critical defects in production, it will be communicated to the company; the developers will immediately fix the defect, the TE will retest, and a patch will be created & installed on the production server.

This is called a hot fix, or incident management.

This may take anywhere from 3 hours to 3 days.

ROOT CAUSE ANALYSIS

· Here the entire team sits together, finds the root cause of the defect, shares it in a common folder where everyone can access it, & presents it to the entire team.
· This technique is called the fishbone technique, the Ishikawa method, or an RCA (root cause analysis) meeting.

Compatibility testing
Testing the functionality of the application in different hardware and software environments is called compatibility testing.

Why do we do compatibility testing?

1) Chances are there that the developer might develop the software on one platform and the TE would test the software on the same platform, but when it is released to production, end users might use the application on different platforms. Software which works on one platform might not work on another platform because of some defects; due to this, end-user usage will go down & the customer will undergo a huge loss. To avoid all this, we do compatibility testing.

2) To check whether the application works consistently on all platforms, we do compatibility testing.

3) The DE might write common code & claim that the application works on all platforms, or else the DE might write platform-specific code & say that it works on the respective platforms.

We have to test it on every platform & confirm whether it really works or not.

When do we do compatibility testing?

Only when the product is functionally stable on the base platform do we think about testing the application on different platforms.

How do we do compatibility testing?

It depends on the type of application.

There are 3 types of applications:

a) Stand-alone application: It is a kind of application where we take one setup file and install it on a computer or a mobile. Only one user can access the software at a time; no internet or server is required, and no database is required. This kind of application is called a stand-alone application.

Ex: Calculator, Alarm, MS Paint

b) Client-server application: It is a kind of application where there are two types of software, i.e. client software & server software, wherein we use the client software to interact with the server software. This kind of application requires both internet and a server.

Ex: WhatsApp, Instagram, Snapchat


c) Web application: It is a kind of client-server application wherein the browser behaves like a client.

Ex: WhatsApp, Facebook, Instagram.

How to do compatibility testing?

1) Buy the real device
2) Rent the real device
3) BrowserStack
4) Virtualization
   Web application: VMware
   Android application: emulator
   iOS: simulator
5) Crowd beta testing


Performance testing
Testing the stability and response time of an application by applying a load on it is called performance testing.

Response time:

The time taken to send the request + the time taken to execute the program + the time taken to receive the response:

T = T1 + T2 + T3

Load:

The designed number of users.

Stability:

The ability to withstand the load.
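The formula can be checked with illustrative numbers; the millisecond values below are assumptions, not measurements:

```python
# Response-time sketch: T = T1 + T2 + T3, where T1 is the time to send
# the request, T2 the time to execute the program, and T3 the time to
# receive the response (illustrative millisecond values).

t1 = 120  # ms to send the request
t2 = 350  # ms to execute the program on the server
t3 = 130  # ms to receive the response

response_time = t1 + t2 + t3
assert response_time == 600  # total response time in ms
```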

Performance testing tools:

a) JMeter

b) NeoLoad

c) LoadRunner

d) Rational Performance Tester

e) Silk Performer



How will you do performance testing?

Types of performance testing:

1) Load testing

2) Stress testing

3) Volume testing

4) Soak testing

A) Load testing: Testing the stability & response time of an application by applying a load which is less than or equal to the designed number of users.

B) Stress testing: Testing the stability & response time of an application by applying a load which is more than the designed number of users.

C) Volume testing: Testing the stability & response time of an application by transferring a huge volume of data.
47

D) Soak testing: Testing the Stability & response time of an application by applying
load continuously for a particular period of time
48

Globalization testing
Developing software for multiple languages is called as Globalization.

Testing software which is developed for multiple languages is called Globalization testing.

Types of globalization testing:

1) Internationalization testing (I18N)

2) Localization testing (L10N)

I18N testing:

Testing the application which is developed for multiple languages

1) Here we check whether the content is displayed in the right language or not

2) Whether the right content is displayed in the right place or not

3) Whether features are broken when the language is changed or not

How to do I18N testing for (Chinese language)?

a) Go to (Chinese language) property file

b) Add prefix and suffix to the content

c) Open the application & select the language Chinese; the corresponding page comes up

d) Check the prefix; if the prefix is correct, the content is in the right language

e) Check the suffix; if the suffix is correct, the content is in the right place
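The prefix/suffix trick above can be sketched in a few lines of Python. The marker strings `[[` and `]]` are hypothetical; in practice you wrap every string in the Chinese property file with whatever markers your team chooses:

```python
# Minimal sketch of I18N prefix/suffix checking.
# Assumption: each translated string in the (Chinese) property file has
# been wrapped with hypothetical markers "[[" (prefix) and "]]" (suffix).

PREFIX, SUFFIX = "[[", "]]"

def check_i18n(displayed_text):
    """Return (right_language, right_place) for one displayed string."""
    right_language = displayed_text.startswith(PREFIX)  # prefix intact -> right language
    right_place = displayed_text.endswith(SUFFIX)       # suffix intact -> right place
    return right_language, right_place

print(check_i18n("[[登录]]"))  # (True, True)  -> translated and placed correctly
print(check_i18n("Login"))    # (False, False) -> untranslated string leaked in
```

A string that lost its prefix or suffix points to content that was either not translated or rendered in the wrong place.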

Localization testing:

Testing the application to check whether it is developed according to the country's
standards or culture is called localization testing (L10N).

Usability testing
Testing the user friendliness of an application is called as usability testing.

How to do usability testing?

1) I will check the look and feel of the application

2) I will check whether the application is easy to understand and whether it takes
less time to perform a specific action

3) Important or frequently used features must be reachable by the user within 3 clicks

4) Important or frequently used features should be present either at the left or the
top of the screen

For what kind of applications do we do usability testing?

1) Any application which is used by multiple users or a variety of users

2) Any application which generates a lot of revenue

3) Any application where end users won't be provided any kind of training

When do we do usability testing?

1) When the product is functionally stable, we can do usability testing

2) In certain projects, we do usability testing at the beginning of the SDLC itself
(Prototype model)

Yellow Box testing

Testing the warning messages of an application is called yellow box testing.

Ex: Battery low

Storage full

Comparison testing
Testing a newly built application against similar applications already released in
the market: we compare the applications, check the advantages and disadvantages,
and check whether all the features are present in our newly built application. This
is called comparison testing.

Accessibility testing
Testing the user friendliness of an application from physically challenged people's
point of view is called accessibility testing.

Reliability testing
Testing the functionality of an application continuously for a particular period of
time is called reliability testing.

Recovery testing
Testing the functionality of an application to check how well the application
recovers data from crashes or disasters.

Exploratory Testing
Understand the application, identify the scenarios, document the scenarios, and test
the application by referring to the document. This is called Exploratory testing.

Or

Explore the application, understand how each and every feature works, and test the
application based on your understanding. This is called Exploratory testing.

When do we go for exploratory testing?

When the requirement is missing, we go for exploratory testing.

-In long term projects, when it’s a very big/huge/complex application, the requirement
for some modules might be missing.

-In product-based companies, since we don’t have customer, we won’t have proper
requirement document.

-In start-ups, if the company is very new, they might not maintain requirement document
properly.

-Sometimes even if the requirement (SRS) is present, we don’t have sufficient time to
read and understand the requirement.

How to do Exploratory testing?

1. Understand the application

a) Understand how each and every component works.

b) Understand how each and every module/feature works.

2. Identify the scenarios

3. Document the scenarios

4. Execute the scenarios referring the document.

5. When you find defects, communicate them to the developers.

Drawbacks of Exploratory testing:

1) Time Consuming

2) We might miss testing some features in turn we might miss the defects.

3) We might misunderstand defect as feature.

4) We might misunderstand feature as defect.

How to overcome the drawbacks of exploratory testing?

1. Interact with Sr. Dev, Sr. T.E, B.A or Customer.

2. Based on Product knowledge

3. Based on domain knowledge

4. By comparing the similar application.



5. Based on common sense.

Regression testing

Testing the unchanged features to make sure they are not affected or broken because
of changes is called Regression testing (here changes can be addition, modification,
or removal of features, or fixing a defect).

OR

Re-execution of the same test cases in different test cycles (or) sprints (or)
releases to make sure that the changes are not introducing any defects in the
unchanged features (changes can be addition, modification, or removal of features)
is called Regression testing.

Type of Regression testing:


1)Unit regression testing

2)Regional regression testing

3)Full regression testing

a) Unit regression testing:

Testing only the changes (or) only the bugs which are fixed is called Unit regression
testing.

b) Regional regression testing:

Testing the changes & only the impacted regions is called regional regression testing.

How will you identify impacted Region?

1) Based on product knowledge

(As a TE, I will know in depth how each & every module works and how all the
modules are related; based on that knowledge, I will be able to identify the
impacted areas)

2)By preparing Impact matrix

(Here we list the changes & also all the features in the application, then mark the
impacted areas)
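An impact matrix like the one described above can be sketched as a simple mapping. The change and module names below are hypothetical; in practice the team fills them in during the impact analysis meeting:

```python
# Toy impact matrix for regional regression testing:
# each change in the build maps to the features it can impact.

impact_matrix = {
    "Amount Transfer fix": ["Amount Transfer", "Mini Statement", "Balance Enquiry"],
    "Login UI change":     ["Login"],
}

def regions_to_retest(changes):
    """Union of impacted features for the changes in this build."""
    impacted = set()
    for change in changes:
        impacted.update(impact_matrix.get(change, []))
    return sorted(impacted)

print(regions_to_retest(["Amount Transfer fix"]))
# ['Amount Transfer', 'Balance Enquiry', 'Mini Statement']
```

Everything returned by `regions_to_retest` goes into the regional regression cycle; everything else is skipped, which is where the time saving comes from.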

3)By conducting Impact analysis meeting

(As soon as the new build comes entire testing team meets & discuss about list of
bugs fixed & the impacted areas)

4) By interacting with the customer, business analyst, development team, and testing
team, we gather the impacted areas & create an impact list; based on that we do
regression testing.

Advantages of Regional Regression Testing:

1) By not testing certain features, we save testing time, which in turn reduces the
testing cost

2) The test cycle duration reduces; because of that, the turnaround time taken to
deliver the product to the customer reduces

Disadvantages of Regional Regression Testing:

1) Chances are we might miss identifying an impacted area, because of which we might
miss the bugs.

c) Full Regression testing:


Testing the changes and all the remaining features is called as Full Regression testing

Why/when do we do full regression testing?

1) Whenever too many changes are done in the product, it is better to do full
regression testing

2) If the changes are done in core features

3) Every few (4-5) cycles we should do full regression testing, and in the last few
cycles we should do full regression testing because we are about to launch the
product to production and should not take any risk

Interview Question
Difference between Regression Testing and Re-testing

Regression Testing:
- Fixing bugs or making changes might have an impact on other features, so testing
the unchanged features to make sure they are not broken because of the changes is
called Regression Testing.
- Regression testing is done for passed test cases.
- Here we can go for automation.

Retesting:
- Whenever the developer gives a build, checking or verifying whether the defect is
fixed or not is called Retesting.
- Retesting is done for failed test cases.
- Here we cannot go for automation.

Progression testing:

Testing newly added features is called as progression testing.

Q. When do we do regression testing?

1) 1st release: from the 2nd build only, we do regression testing

2) 2nd release: from the 1st build itself, we do regression testing

What are the drawbacks of manual repeated regression testing?

1) Manual testing is repetitive in nature; over a period of time it becomes
monotonous, because of which the test engineer may not be effective in testing

2) As the size of the application increases, the test cycle duration also increases;
because of that, the turnaround time taken to deliver the product to the customer
increases

3) Manpower is expensive

What is the role of Manual test engineer?

1. Write manual Test cases

2. Test new feature manually.



3. Test modified feature, fixed bugs manually

4. Find the defects & communicate to the developers.

What is the role of Automation Test engineer?

1. Understand the application

2. Understand the test cases.

3. Convert the manual test cases into automation test scripts (manual test cases of the stable
features)

4. Execute the automation scripts when the new build comes.

5. Maintain the automation scripts:

a) Whenever the requirement changes, the test cases also should be changed.

b) When the test cases change, we should change the automation scripts.

c) If there are problems in old scripts, they have to fix them.

Why we go for Test Automation?

We go for test automation:

1. To reduce manual repeated testing efforts.

2. To reduce test cycle duration.

3. To reduce the turnaround time taken to deliver the product to the customer.

4. To reduce the no. of engineers

5. To reduce the cost of testing.

6. To improve the test efficiency.

7. To have the consistency in quality of test execution.



DEFECT
What is defect?
Any feature which is not working according to the requirement specification is called as
Defect.

OR
Deviation from requirement specification is called as Defect.

Why do we get Defect?


1. Wrong Implementation

2. Missing Implementation

3. Extra Implementation

What is the difference between Error, Defect, Bug and Failure?



Error: An error is a mistake made in the program because of which we will not be
able to compile or run the code.
There are 2 types of error: i) Compile-time error
ii) Run-time error
Defect: Any feature which is not working according to the requirement is called a
Defect.
OR
An error found in the application or s/w is called a defect.
Bug: An informal name given to a defect.
Failure: Many defects in the s/w lead to failure. It is the term used by the
customer or end user.

DEFECT TRACKING PROCESS:

TEST ENGINEER:
1. Test engineer finds the defects
2. Prepares the defect report
3. Puts the status as new/open
4. Sends the report to the development lead
DEVELOPMENT LEAD:
1. DL reads the report & understands the problem
2. Identifies the developer who made the mistake
3. Changes the status to assigned
4. Sends it to the development engineer

DEVELOPMENT ENGINEER:
1. DE reads the report & understands the problem
2. Goes to the source code & fixes the bug
3. Changes the status to fixed
4. Sends the report to the test engineer & CC to the development lead
TEST ENGINEER:
1. TE reads the report & understands the problem fixed
2. Retests the fixed bug; if the bug is fixed, changes the status to closed
3. Otherwise changes the status to reopen
4. Sends the report to the development engineer & also CC to the development lead
Why should the test engineer not wait for the test lead's permission before sending
the report to the development team?
 There will be a delay in communicating the report to the developer
 As a test engineer will have in-depth knowledge about his feature, it is better to
take a decision and send the report directly to the development lead without the
test lead's permission
Why should we keep the test lead in CC on every report?
 The test lead is the one who attends management, developer, and customer meetings,
so he should be aware of all the issues present in the product
 To get visibility that the test engineer is working
As soon as you find a defect, you should immediately communicate it to the developer.
Why?
 Developers will get sufficient time to fix the defect
 Someone else might report your defect first
 Chances are the TE might forget the defect

Severity:
It is the impact of the defect on customer business
1. Blocker defect
2. Critical defect
3. Major defect
4. Minor defect
Blocker:
Assume that there is a defect in the software and I am 100% sure that this defect is
going to affect the customer's business workflow and is also blocking the test
engineer from testing the feature; this kind of defect is called a blocker defect.

Critical defect:
Assume that there is a defect in the application and I am 100% sure that this defect
will affect the customer's business workflow, but it is not blocking the test
engineer from testing the feature; we can still continue testing. This type of
defect is called a critical defect.

Major Defect:
Assume that there is a defect in the application and I am not sure how this defect
is going to affect the customer's business workflow; this type of defect is called a
Major defect.

Minor Defect:
Assume that there is a defect in the application and I am 100% sure that this defect
will never affect the customer's business workflow; this type of defect is called a
Minor defect.
Example: Spelling mistakes, colour mistakes, overlapping issues, alignment issues, etc.

PRIORITY:
The importance given to fixing the defect is called priority
OR
How soon the defect must be fixed by the developer.


There are 3 levels of Priority. They are:
1. High or P1
2. Medium or P2
3. Low or P3
High or P1:
If the defect has priority P1, then the developer should fix the defect immediately.
Medium or P2:
If the defect has priority P2, then he can fix the defect within some test cycles or
builds, or within a release.
Low or P3:
If the defect has priority P3, then the developer can fix the defect in an upcoming
release or within some 2-3 releases.
There are 4 combinations:
1. High severity, high priority
2. Low severity, low priority
3. High severity, low priority
4. Low severity, high priority

DEFECT LIFE CYCLE:


Defect life cycle consists of Below mentioned status:
1. New/open
2. Assigned
3. Fixed
4. Closed
5. Reopen
6. Reject
7. Defect cannot be fixed
8. Postponed
9. Duplicate

10. Issue not reproducible

11. Request for enhancement (RFE)

What is Reject status?

The TE finds a defect and sends it to the developer; now the developer says that it
is not a defect, it is a feature. In this case the developer will change the status
to Reject.
Why do we get Reject status?
1. Because of misunderstanding the requirement.

2. Because of referring to an old requirement.

3. Whenever the build or software is wrongly installed or wrongly configured.
If the test engineer installed the build wrongly, finds a defect in the software,
and communicates it to the developers, the developer will say it is not a defect
because the code is perfect, but the TE has not properly installed the software.

What is Duplicate status?

A test engineer finds a defect and communicates it to the developers; if the same
defect has already been tracked by another TE, the developer says that this new
defect is a duplicate of the old defect.
Why do we get Duplicate status?
i. Because of testing a common feature.

ii. Assume that an old TE has found a lot of defects and communicated them to the
developer; some are fixed and some are pending. If a new TE joins the same project
and communicates the old defects, the developers say these new defects are
duplicates of the old defects.

What is Defect cannot be fixed status?

Here the developers accept that it is a defect, but they are not in a position to
fix it; in this case the developer says defect cannot be fixed.
Why do we get Defect cannot be fixed?
i. If the TE finds a defect in the root of the product, and if it is a minor defect
not affecting the customer's business workflow, then the developer says Defect
cannot be fixed.
(If it is a blocker or critical defect, then the developer should fix the defect.)

ii. If the cost of fixing the defect is more than the cost of the defect, then the
developer says Defect cannot be fixed.
Here the cost of the defect means the loss in business because of having the defect
in the software.

iii. When the technology itself doesn't support the fix.



What is Postpone status?

Here the developers accept that it is a defect, but they want to fix it a little
later. In this case the developers give the status as Postponed.
Why do we get Postpone status?
i. If the TE finds a minor defect at the end of the release and the developer does
not have sufficient time to fix it, the developer will give the Postpone status.

ii. If the TE finds a defect in a feature which is not required by the customer in
the current release, the developer will give the Postpone status.

iii. If the TE finds a defect in a feature where the customer wants to make a lot of
changes to the requirement (same feature), and the customer might even remove the
feature, the developer will give the Postpone status.

iv. If the TE finds a defect in a feature which is exposed only to internal users,
and it is a major or minor defect, the developer will give the Postpone status.

What is Issue not reproducible status?

The TE is able to see the defect but the developer is not able to see the same
defect; in this case the developer says the issue is not reproducible.
Why do we get Issue not reproducible status?
i. Because of an improper defect report
a. Because of using an incorrect platform/platform mismatch: the TE might be using
one OS or browser but the developers might be using some other OS or browser,
because of which they might not get the defect.
b. Because of incorrect data: the TE might be using some data to get the bug but the
developers might be using some other data to get the same bug.

ii. Because of an inconsistent defect

What is an Inconsistent defect?

Sometimes the defect appears, sometimes the defect disappears.
Or
Sometimes the feature works, sometimes the same feature does not work.

What is Request for Enhancement (RFE)?

While testing the software, if the TE finds any defect that is not part of the
requirement, it is called an RFE or Change request.

Defect Report

What is defect tracking tool? Name some defect tracking tools.


Defect tracking tool is a software which is mainly used to track or store the
defects in centralized place and communicate defect to developer in an
organized way.

Defects tracking tools are:


1. Bugzilla
2. QC/ALM
3. JIRA
4. Mantis
5. Croc Plus
6. Rational ClearQuest
7. BugNet
8. Bugzini
9. BugPro

Test case
Test case is a document that covers all possible scenarios for a specific requirement.
It contains different sections like step no., input, action or description, expected result,
actual result, status, and comments.

What will happen if you look into requirements and test the software?
Or
What are the drawbacks of not writing test cases?
 There will be no consistency in testing if you look into requirements and test the
software.
 The test engineer will miss a lot of scenarios and defects.
 The quality of testing varies from person to person if you look into requirements
and test the s/w.
 Testing depends on the memory power of the test engineer.
 Chances are we might end up testing the same things again and again if we look
only into the requirement.
 Test coverage will not be good.
 Testing depends on the mood of the T.E.
When do we write test case?
1. When developers are busy in building the product, the testing team will be busy
writing the test cases.

2. When the customer is adding the requirement developers will add the features
parallelly test engineers will add new test cases.

3. When the customer is modifying or changing the requirement, developers will
modify or change the feature; parallelly the T.E. will modify or change the test
cases.

4. When the customer is removing a requirement, developers will remove the feature;
parallelly the test engineer will remove the test cases and make sure the features
are actually removed from the s/w.

Why do we write Test cases?


 We write test cases to have better test coverage.
When the requirement comes in developers are busy building the product same
time test engineers are free, so they identify all possible scenarios and document
it. When the build comes, we can spend time executing the scenarios, because of
this no. of scenarios that you are covering will be more.
 To have consistency in test execution.
It means if you have documented the scenarios, you can make sure that you are
executing all the scenarios in all the test cycles, sprints, or releases.
 To depend on the process rather than on a person.

 To avoid training every new engineer on the product or on the requirement.


 Test cases are the only documents that act as proof for the customer, the
development team, and the manager that we have covered all possible scenarios.
 Test cases act as a base document for writing automation scripts; if you refer to
the test cases while writing automation scripts, you can ensure the same kind of
coverage in automation.
 If you documented the test case, no need to remember the scenarios.

Test case design techniques


It is a technique used while writing test cases in order to improve test coverage.
Drawbacks of not applying test case design techniques:
1) TE will miss a lot of scenarios
2) TE will miss a lot of defects
3) Test coverage will not be good
Types of Test case design techniques:
1. Error guessing
2. Equivalence class partition
3. Boundary value analysis (BVA)

1. Error Guessing:
Here we guess all possible errors and we derive the scenarios.
 We guess errors based on the following:
A. Requirement
B. Experience
C. Intuition

Ex: Amount
100
5001
99
4999
100.50
100%
$100
0
100 Rs only

2. Equivalence class partition:


Pressman Rules:
Rule 1: If the input is a range of values, then design test case for one valid and two
invalid inputs.
Ex: Amount 100-5000

Valid->500
Invalid-> 90
6000
Ex: Insurance
Age 5-55

Valid-> 30
Invalid->4
60
Rule 2: If the input is in a set of values, then design test case for one valid and two
invalid inputs.

Ex:
Printer- 10
Scanner- 20      (set of values)
Webcam- 30

Valid-> 30
Invalid-> 25
40
Ex:

Rule 3: If the input is in Boolean, then design the test case for both true and false values.
Ex: Whenever we are testing for checkbox or radio buttons you should test the
application for both true and false values.

Practice Method:
If the input is in range of values, then divide the range into equivalent parts, try for all
the values and also test for at least two invalid values.
Ex: Amount 100-5000

1000
2000
3000
4000
5000

Note:
1. If there is a deviation between the range of values, then we go for the practice
method.
2. If there is no deviation between the range of values, then we go for the Pressman
rules.
3. By looking into the requirements, we will get to know whether there is a
deviation or not.
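Pressman rule 1 above can be sketched as a small Python check for the Amount 100-5000 example. The validator itself is hypothetical stand-in code for the feature under test:

```python
# Equivalence class partitioning, Pressman rule 1, sketched in Python.
# For the range 100-5000 we design one valid and two invalid inputs.

LOW, HIGH = 100, 5000

def is_valid_amount(amount):
    """Stand-in for the feature under test: accept amounts in [100, 5000]."""
    return LOW <= amount <= HIGH

# One valid input (inside the range) and two invalid inputs
# (one below and one above the range):
test_inputs = {500: True, 90: False, 6000: False}

for amount, expected in test_inputs.items():
    assert is_valid_amount(amount) == expected
print("ECP checks passed")
```

The same shape works for rule 2 (set of values) by replacing the range check with membership in the set.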

Boundary Value Analysis:

If the input is a range of values between A and B, then design test cases for
A, A+1, A-1 and B, B+1, B-1.
Amount 100-5000

100     5000
101     5001
99      4999
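The BVA rule can be sketched as a small generator that produces the six boundary values for any inclusive range [A, B]:

```python
# Boundary value analysis for a range A..B:
# test A-1, A, A+1 and B-1, B, B+1.

def bva_values(a, b):
    """Boundary test values for the inclusive range [a, b]."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(bva_values(100, 5000))
# [99, 100, 101, 4999, 5000, 5001]
```

A-1 and B+1 are the invalid values just outside the range; the other four must be accepted.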

Test case optimization:


The process of removing duplicates from the test cases is called test case
optimization.
Difference between test cases & test scenarios:

Test scenarios:
 It is a high-level document of all the customer business workflows according to
the customer's requirement.
 We write test scenarios by looking into the requirements.
 By looking into test scenarios, we can't test any application unless we have good
product knowledge.
 Here we mention what to test.

Test cases:
 It is a detailed document of the scenarios that helps us to test the application.
 We write test cases by looking into both the requirement and the test scenarios.
 We can test any application by looking at the test cases, no matter whether we
have product knowledge or not.
 Here we mention how to test.

Test case review Process:

On what basis do they assign test case for review?


They will assign to the person:
1. Who is working on a similar or related module in the project.
2. Who has worked on same module in the previous project.
3. Who has been working in the project since the beginning knows every corner of
the product.
4. Who is responsible, and who will understand the requirement very fast and
identify more mistakes.
How do you ensure that reviewer does his job?
1. Assign primary and secondary reviewer.
2. Test lead should also randomly review and find the mistakes.
3. Test lead will intentionally introduce some mistakes and check whether it is found
by the reviewer.
Review ethics:

1. Always review the content, and not the author.


2. Reviewer should spend more time in finding the mistakes rather than giving the
solution.
3. Even after review if there are still any mistakes, both author and reviewer are
responsible.
Test case review template:

 Every TE should write the review comments in the test case review template only.
 Test case review template will be prepared either in the Test management tool or
MS Word/MS Excel.
 Test case review template is not standard, it may vary from Company to company
and project to project.

Traceability Matrix:

It is a document that we prepare to make sure that every requirement has got at least
one test case.

Advantages:
1. It ensures that every requirement has at least one test case, which indirectly
assures that you have tested every feature at least once.
2. It gives us traceability from the high-level requirement till the automation
script.
Drawback:
It will not ensure that you have got 100% coverage.
Types of traceability Matrix:
There are 3 types of traceability Matrix. They are:
1. Forward Traceability Matrix:
Mapping from the root document to derived document is called forward
traceability matrix.
Ex: Mapping from Req to test case and test case to test script
2. Backward Traceability Matrix:
Mapping from derived document to root document is called as Backward
Traceability Matrix.
Ex: Mapping from test scripts to test cases and test cases to requirement.

3. Bi-Directional traceability Matrix:


Doing both forward and backward traceability matrix is called as Bi-directional
traceability Matrix.
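The forward mapping can be sketched as a small check that flags requirements with no test case. The requirement IDs and test case names below are hypothetical:

```python
# Minimal forward traceability check: every requirement should map
# to at least one test case. IDs and names are hypothetical.

traceability = {
    "30.1.1 FAN text field":    ["TC_01", "TC_02"],
    "30.1.2 TAN text field":    ["TC_03"],
    "30.1.3 Amount text field": [],          # no coverage yet
}

def uncovered_requirements(matrix):
    """Requirements that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(traceability))
# ['30.1.3 Amount text field']
```

An empty result means every requirement is covered at least once; it still says nothing about whether the coverage is 100%, which is exactly the drawback noted above.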

Difference between Traceability Matrix and Test case review:


Traceability Matrix:
 Here we check that every requirement has at least one test case.
 Here we don't check whether the test case covers all possible scenarios for a
specific requirement.

Test case review:
 Here we check that the test case covers all possible scenarios for a specific
requirement.
 Here we don't check whether every requirement has at least one test case.

Lessons to remember while writing test cases:


#1: Before we actually write test cases, we should come up with options and select the
best option out of it.
#2: Start writing the test case with navigational steps.
#3: Never write hard-coded test cases; always write generic test cases.
#4: Whatever we have covered in FT, don't cover the same in the integration test
cases. If something is covered in the integration test cases, don't cover it in the
system test cases.
#5: Elaborate only those steps in which you have to focus. Don’t elaborate all steps
unnecessarily.
#6: Whenever we are writing test cases we should imagine/visualize the application.
#7: Always use should be/must be in expected results.
Don’t use could be/would be/will be/can be/might be.
#8: If you organize the steps properly, the total no. of steps can be reduced.
Approach to write Functionality Test cases:
1. Go to the body of the test case.
2. Start with navigational steps.
3. Take the first field.
 Start with valid inputs.
 Write the error-guessing scenarios
 Write equivalence class partition scenarios.
 Write the BVA scenarios.
4. Take the second field.
 Start with valid inputs.
 Write the error-guessing scenarios
 Write equivalence class partition scenarios.
 Write the BVA scenarios.

Approach to write Integration test cases:


1. Take one feature, and identify all possible scenarios.

2. Prioritize the identified scenarios & document it.


3. Go to the body of the test case.
4. Start with navigational steps.
5. Cover the scenarios.

How to fill the header?


1. Test case name:
Format: Projectname_ModuleName_scenario
Ex: CB_AmountTransfer_integration
CB_AmountTransfer_AmountTextField

2. Requirement Number:
When the BA converts the CRS to the SRS, he writes a requirement number for each
requirement in the SRS.
Ex: 30.1 Amount Transfer
30.1.1 FAN text field
30.1.2 TAN text field
30.1.3 Amount text field

3. Test data:
It is the data needed by the TE and should be ready before test execution.
Ex: TE should have UN, PWD, URL, a/c number

4. Pre-condition:

It is a set of actions or settings which should be ready/done by the TE before
executing the 1st test case.

Ex: User should have balance in his account.

5. Test case type:


Here the TE mentions what type of test case he is writing.
Ex: Functionality test case, Integration test case, system test case.

6. Severity:
The TE will give a severity for every individual test case, based on how important
and complex the feature is from the customer's POV.
The TE will execute test cases based on severity.
There are 3 types of severity for Test cases: Critical, major, minor

7. Brief Description:
It describes the complete test case and the behavior of the test case.
Ex: In the amount transfer module, it should accept only +ve integers.

How to fill the footer?


1. Author: Anyone who writes the test case will be the author.
Ex: Dinga
2. Reviewer: The person who reviews the test cases.
Ex: Dingi
3. Approved by: A person who approved the test cases.
Ex: Test lead
4. Approval date:
Ex: 01-01-2023

Procedure to write Test cases:



System study:
Read the requirement, understand the requirement, and if you have any queries,
interact with the customer or B.A.
Identify all possible scenarios:
i. Identify
ii. Brainstorming sessions:
Write the test cases:
 Group all related scenarios.
 Prioritize the scenarios within each group.
 Apply test case design technique.
 Use the test case format given to you.
 Document it.

Store in test case repository:


It is a centralized place where in we store all the test cases in an organized way.

Test Plan
It is a document which drives all the future testing activities. Generally, it is
prepared by the test lead or test manager. It has several sections:

1. Objective
2. Effort estimation
3. Scope
4. Approach
5. Assumption
6. Risk
7. Mitigation plan/Backup plan
8. Test methodology
9. Test schedule
10. Test environment
11. Defect tracking
12. Test automation
13. Deliverables
14. Entry & exit criteria
15. Test stop criteria
16. Roles & responsibilities
17. Templates

1) Objective:

This section covers the aim of preparing the test plan.

2) Effort estimation:

This section covers the estimation of how long it will take to complete the project,
how many engineers are needed, and the cost of testing.

3) Scope:

This section covers what are the features to be tested and what are the features not to be
tested.

4) Approach:

This section covers how we are going to test the product in future.

5) Assumption:

This section covers assumptions that we have made while planning.

6) Risk:

This section covers: if any assumption fails, that becomes a risk.

7) Mitigation plan/Backup Plan:

This section covers how to overcome (or) how to face the risk.

8) Test methodology:

This section covers what are the types of testing that we are planning to conduct.

9) Test Schedule:

This section covers when exactly we should start and end an activity.

10) Test Environment:

This section covers how we go about setting up the test environment in future.

Ex: 10.1 Procedure to install the build

------------

------------

10.2 Hardware

10.2.1 Server side

HP startcat 1500

10.2.2 Client side

6 computers with the following configuration:

1 GHz Intel processor

1 GB RAM

10.3 Software

10.3.1 Server side

OS: Linux, Version:

Web server: Tomcat, Version:

App server: WebSphere

DB server: Oracle, Version:

10.3.2 Client side

OS: Win 10, Win 8

Browsers: Mozilla Firefox, Chrome

11) Defect tracking:

This section covers how each defect we find in future should be tracked, including what
should be the:

a) Procedure

b) Status

c) Severity

d) Priority
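The tracking procedure can be pictured as a small state machine. This Python sketch uses an assumed status workflow (NEW → ASSIGNED → FIXED → CLOSED/REOPENED); real defect-tracking tools define their own statuses and transitions:

```python
# Assumed defect status workflow -- field names and transitions are
# illustrative, not any particular tool's schema.
ALLOWED_TRANSITIONS = {
    "NEW": {"ASSIGNED"},
    "ASSIGNED": {"FIXED"},
    "FIXED": {"CLOSED", "REOPENED"},
    "REOPENED": {"ASSIGNED"},
    "CLOSED": set(),
}

def move(defect, new_status):
    """Advance a defect's status, enforcing the tracking procedure."""
    if new_status not in ALLOWED_TRANSITIONS[defect["status"]]:
        raise ValueError(f"Cannot go from {defect['status']} to {new_status}")
    defect["status"] = new_status
    return defect

bug = {"id": "D-101", "severity": "Critical", "priority": "High", "status": "NEW"}
move(bug, "ASSIGNED")
move(bug, "FIXED")
print(bug["status"])  # FIXED
```

Severity and priority travel with the defect record so every report can be filtered by them.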

12) Test automation:

This section covers which features are to be automated, which features are not to be
automated, and the complete automation strategy.

Ex: 12.1 Features to be automated

---------

---------

12.2 Features not to be automated

-------

------

12.3 Automation framework to be used

---------

---------

12.4 Automation tool to be used

Selenium

-----------

13) Deliverables:

This section covers which documents have to be provided by the testing team at the end
of the test cycle.

Ex: Test cases, traceability matrix, test execution report, defect report, release note,
graphs & metrics.

Release note: Along with the product, we release a note to the customer called the
release note.

The release note consists of:

· List of open defects still present in the product.

· List of bugs that were found in the previous release and fixed in the current
release.

· List of bugs pending from the previous release and fixed in the current release.

· List of features added, modified, or removed in the current release.

· Procedure to install the software.

· Version of the product.

Graphs & metrics:

Graphs:

· Defect density (or defect distribution) graph

· Build-wise defect distribution graph

Metrics:

· Defect distribution metrics

· Test engineer efficiency metrics
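A defect distribution metric is simply each module's share of the total defects found; a minimal Python sketch with made-up module names and counts:

```python
# Defects found per module -- illustrative numbers only.
defects_per_module = {"Login": 12, "Transfer": 30, "Reports": 8}

total = sum(defects_per_module.values())  # 50 defects in all
# Each module's share of the total, as a percentage.
distribution = {m: round(100 * n / total, 1) for m, n in defects_per_module.items()}

print(distribution)
# {'Login': 24.0, 'Transfer': 60.0, 'Reports': 16.0}
```

Plotting these shares per build gives the build-wise defect distribution graph mentioned above.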

14) Entry & exit criteria:

Entry criteria:

This section covers list of criteria that should be met to start the activity.

Exit criteria:

This section covers list of criteria that should be met to say that activity is
over.

Ex: System study:

Entry criteria for system study:

· Should have received the approved requirement from the customer.

· Should have assigned engineers to do the system study.

Exit criteria for system study:

· Should have completed reading the requirement.

· Should have got answers for all the queries.

Prepare Test Plan:

Entry criteria for test plan:

· Test plan template should be ready.

· Should have assigned someone to prepare test plan & review the test
plan.

· Should have met the exit criteria of system study.

Exit criteria for test plan:

· Should have got approval for test plan.

Write Test case:

Entry criteria for writing test case:

· Test case template should be ready.

· Should have met exit criteria of test plan.

· Should have assigned the module to engineer.

Exit criteria for writing test case:

· Test case should be reviewed, approved & stored in repository.

15) Test stop criteria:

This section covers when exactly we should stop testing.

When will you stop testing?

We stop testing when the product quality is very good or very bad.

Product quality is very good means:

✓ All the end-to-end business scenarios are working fine.

✓ There are no blocker or critical defects.

✓ The few remaining bugs are all minor or major and are below the acceptable
limit set by the customer.

✓ All the features requested by the customer are ready.

Product quality is bad means:

✗ There are too many blocker and critical bugs.

✗ It is crossing the budget.

✗ It is crossing the schedule/deadline.
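The stop criteria above can be expressed as a simple decision function; the threshold and counts in the example calls are hypothetical values a customer or test lead would set:

```python
# Sketch of a stop-testing decision based on the quality criteria;
# all inputs are assumptions supplied by the team.
def can_stop_testing(blockers, criticals, minor_major, acceptable_limit,
                     e2e_scenarios_pass, all_features_ready):
    """True when the product quality is good enough to stop testing."""
    return (blockers == 0 and criticals == 0
            and minor_major <= acceptable_limit
            and e2e_scenarios_pass and all_features_ready)

print(can_stop_testing(0, 0, 3, 5, True, True))   # True: quality is good
print(can_stop_testing(2, 1, 3, 5, True, True))   # False: blockers remain
```

Budget and schedule overruns are the other stop triggers, but those are business decisions rather than quality checks.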

16) Roles & responsibilities:

This section covers what each engineer should do in different stages of test life
cycle.

Roles and responsibilities of the test manager:

· Write and review the test plan.

· Interact with the testing team and the development team, and if needed with the customer.

· Handle all issues and escalations.

· Approve the release note.

Roles and responsibilities of the test lead:

· Write and review the test plan.

· Allocate work to each engineer and make sure they complete their tasks within
the schedule.

· Consolidate the reports sent by every test engineer and communicate with the
testing team, development team, project manager, and customer.

· Conduct impact analysis meetings.

Roles and responsibilities of the test engineer:

· Write test cases.

· Review test cases of another test engineer.

· Execute test cases for the allocated features.

17) Templates:

This section covers formats for all the documents that we are planning to prepare
in the entire test life cycle.

i. Test case template

ii. Traceability matrix template

iii. Defect report template

iv. Test case review template

v. Test execution report template

STLC

1. System study:
Read and understand the requirement; if you have any queries, interact with the BA,
developers, or customer.

2. Prepare Test plan:


Once after reading and understanding the requirement, we go for preparing the test
plan.

Test plan is a document which drives all the future testing activities.

-Here we decide how many engineers we require to complete the testing.

-What is the total time for completing the testing/project.

-What each engineer should do in different stages of testing.

-What are types of testing we will conduct in future.

-What are the features that are to be tested and not to be tested.

-What is the testing approach

-When each activity should start & end.

3. Write test cases:


Test case is a document which contains all possible scenarios.

This activity has got several stages like:

System study, identify all possible scenarios, write test case, review test case, fix
the review comments, verify the fix, test case approval, store in repository.

4. Prepare traceability matrix:


Once we have written the test cases, the biggest question is: what is the proof that each
and every requirement has a test case?

We prepare a traceability matrix to ensure that each and every requirement has at least
one test case.
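A traceability matrix can be modelled as a mapping from requirement IDs to the test cases that cover them; the IDs in this Python sketch are illustrative:

```python
# Requirement -> covering test cases; IDs are made up for illustration.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],          # no test case yet -- a coverage gap
}

# The matrix's whole purpose: flag requirements without a test case.
uncovered = [req for req, tcs in traceability.items() if not tcs]
print("Requirements without a test case:", uncovered)
# Requirements without a test case: ['REQ-3']
```

In real projects the same matrix is usually a spreadsheet, but the coverage check is identical.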

5. Test Execution:
This is the stage where we execute all the test cases.

This is where we conduct all types of testing and find the bug.

This is the stage where Test engineers become productive to the organization.

This is the stage where the T.E spends a lot of time.



6. Defect tracking:
Once after test execution, obviously we are going to find the defects.

Each defect that we find should be tracked in an organized way. This is called
defect tracking.

7. Test execution report:


At the end of every test cycle we prepare test execution report.

It is a document which we prepare and provide to the customer at the end of every
test cycle.

This report covers:

*Total no. of test cases

*Total no. of test cases executed

*Total no. of test cases not executed

*No. of test cases passed

*No. of test cases failed

*Pass percentage

*Fail percentage
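The pass and fail percentages are usually computed over the executed test cases; a sketch with made-up counts:

```python
# Test execution report figures -- all counts are hypothetical.
total_tcs = 200
executed = 180
passed = 162
failed = executed - passed            # 18
not_executed = total_tcs - executed   # 20

# Percentages are taken over the executed test cases.
pass_pct = 100 * passed / executed
fail_pct = 100 * failed / executed

print(f"Pass: {pass_pct:.1f}%, Fail: {fail_pct:.1f}%")
# Pass: 90.0%, Fail: 10.0%
```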

We prepare this document and send it to the customer. From the customer's POV this is
the last stage, but from the company's POV we have one more activity, called the
retrospective meeting.

8. Retrospective meeting/Project closure meeting/Post-mortem meeting:
Here the entire team sits together and discusses the achievements and mistakes. They
document all of this; that document is called the retrospect document.

In the next release/sprint they open this document in the planning stage and plan in
such a way that all the achievements are adopted and all the mistakes are avoided.

SUBJECT: MANUAL TESTING FLOW


CHAPTER: SDLC

Topics:

1) SDLC: SOFTWARE DEVELOPMENT LIFE CYCLE

 FULL FORM OF SDLC?


 EXAMPLE TO EXPLAIN WHY WE NEED SDLC?(PEPSI)
 DEFINITION?
 STAGES OF SDLC?(DIAGRAM)
 WHEN TO GO FOR SDLC?
 WHAT WILL HAPPEN IF WE DON’T FOLLOW SDLC?
 DIFFERENT MODELS OF SDLC:

A. WATERFALL MODEL

 REQUIREMENT COLLECTION
 WHO IS INVOLVED?
 WHO CAN BECOME BA?
 HOW TO CONVERT CRS TO SRS?
 FEASIBILITY STUDY
 WHO ARE ALL INVOLVED?
 DESIGN
 HLD (house construction story, 3-tier architecture)
 LLD (house construction story, Gmail example)
 CODING
 TESTING
 WHAT HAPPENS IF DEVELOPERS ARE INVOLVED IN TESTING?
 INSTALLATION, TV STORY, WHO IS INVOLVED?
 MAINTENANCE
 ADVANTAGES, DISADVANTAGES & APPLICATIONS

B. SPIRAL MODEL
 EXAMPLE FOR DEPENDENCY(MS EXCEL)
 WHEN TO GO FOR SPIRAL MODEL?
 WHAT IS SPIRAL MODEL(DEFINITION AND WORKING)
 HOW TO HANDLE REQ CHANGES
MAJOR CHANGES
MINOR CHANGES
 ADVANTAGES,DISADVANTAGES
 APPLICATION?

C. V-MODEL (VERIFICATION AND VALIDATION)

 1)CRS AND SRS ( IN REAL TIME )


 DEFINITION OF V MODEL
 EXPLANATION OF V MODEL ( WITH DIAGRAM )
 VERIFICATION AND VALIDATION
 WHY IT IS CALLED AS V MODEL
 ADVANTAGES , DISADVANTAGES AND APPLICATIONS

D. PROTOTYPE MODEL

 EXAMPLE OF THE WIPRO AND FEDEX


 HOW TO OVERCOME THE PROBLEM FACED IN THE EXAMPLE (SAME
WIPRO FEDEX EXAMPLE)
 WHAT IS PROTOTYPE
 DEFINITION
 STAGES OF THE PROTOTYPE
 ADVANTAGES , DISADVANTAGES AND APPLICATION

CHAPTER 2

SOFTWARE TESTING

 EXAMPLE OF THE ICICI AND WIPRO


 DEFINITION 1
 DEFINITION 2( EXAMPLE OF SALES PAGE )
 WHY DO WE DO SOFTWARE TESTING
 TYPES OF SOFTWARE TESTING (WHITE BOX, GREY BOX,
BLACK BOX TESTING WITH THEIR ALTERNATIVE NAMES)
 TWO WAYS OF SOFTWARE TESTING ( MANUAL TESTING ,
AUTOMATION TESTING )

WHITE BOX TESTING

 Definition of WBT
 Types of WBT
 PATH TESTING
 Definition
 Flow graph

 CONDITION TESTING
 Definition
 Program outlook example for condition testing
 LOOP TESTING
 Definition
 Program outlook example for loop testing
 WHITE BOX TESTING FROM MEMORY POINT OF VIEW
 Typical mistakes done by developers because of which size of code
increases.

 WHITE BOX TESTING FROM PERFORMANCE POINT OF VIEW


 Typical mistakes done by developers because of which it takes more
time to run the code.

BLACK BOX TESTING


 DEFINITION
 EXAMPLE ( LOGIN PAGE )
 TYPES OF BLACK BOX TESTING
 DIFFERENCE BETWEEN BLACK BOX TESTING AND WHITE BOX
TESTING.

A) FUNCTIONALITY TESTING
 ALTERNATIVES NAMES
 DEFINITION
 EXAMPLE OF ADD USER PAGE
 ASSIGNMENT OF MORE COMPONENTS AND THEIR INPUTS
 DIFFERENT FORMATS OF REQUIREMENT
 WHY WE SHOULD NUMBER THE REQUIREMENT
 RULES OR LESSON
 WAYS OF THE FUNCTIONALITY TESTING
 TWO TYPES OF THE FUNCTIONALITY TESTING
 SCENARIOS
 REAL TIME EXAMPLE

B) INTEGRATION TESTING
 DEFINITION OF INTEGRATION TESTING

 EXAMPLE ( A AND B MODULES)


 REALTIME EXAMPLE( AMOUNT TRANSFER PAGE ALONG WITH
SCENARIO)
 HOW TO DO INTEGRATION TESTING
 POSITIVE AND NEGATIVE TESTING ON THE AMOUNT TRANSFER
PAGE
 GMAIL APPLICATION EXAMPLE
 DIFFERENT FORMATS OF WRITING THE SCENARIO
 TYPES OF INTEGRATION TESTING (INCREMENTAL AND NON-
INCREMENTAL INTEGRATION TESTING)

C) SYSTEM TESTING
 DEFINITION OF SYSTEM TESTING
 EXAMPLE OF (A TO Z MODULE)
 STORY FOR THE OD REQUIREMENT
 OD FLOW EXAMPLE ( WITH DIAGRAM )
 SCENARIOS ON OD FLOW
 TYPES OF ENVIRONMENT
 WHY TESTING ENVIRONMENT SHOULD BE SIMILAR TO THE
PRODUCTION ENVIRONMENT
 TERMINOLOGIES
 WHO WILL BE INVOLVED IN THE INSTALLATION
 ROLES OF RELEASE ENGINEER
 VCT , MAVENS , JENKINS TOOLS
 SYSTEM TESTING IN DIFFERENT TYPES OF APPLICATIONS

D) ACCEPTANCE TESTING
 EXAMPLE OF THE PEN
 DEFINITION (WITH DIAGRAM EXPLANATION FIRST )
 WHY WE DO ACCEPTANCE TESTING
 ALL 4 APPROACH (WITH DIAGRAM , EXPLANATION FIRST)

E) SMOKE TESTING
 ALTERNATIVE NAMES
 EXAMPLES OF BUILD
 DEFINITION
 HOW TO DO SMOKE TESTING

 (EXAMPLE OF GMAIL )
 ASSIGNMENT FOR STUDENTS
 NOTE ABOUT SMOKE TESTING
 WHEN WE WILL DO SMOKE TESTING
 DIFFERENCE BETWEEN SMOKE TESTING AND SANITY TESTING
 EXPLANATION FOR ALTERNATIVE NAMES

ADHOC TESTING
 EXAMPLE OF THE MOBILE AND THE APPLICATION
 DEFINITION OF THE ADHOC TESTING
 WHY WE DO ADHOC TESTING
 HOW TO DO ADHOC TESTING (EXAMPLE AND SCENARIO )
 WHEN WE WILL DO ADHOC TESTING
 SCENARIO ON 5 APPLICATION
 GIVE ASSIGNMENT FOR 5 APPLICATION

 AGILE MODEL

 RELEASE 1 (example of modules A, B, C ... Z)

 RELEASE 2 (example of modules AA, BB, CC ... ZZ; PATCH EXAMPLE)
 DRAWBACKS OF TRADITIONAL MODEL
 SPRINT 1
 SPRINT 2 (COMPARE SPRINT AND RELEASE)
 AGILE MODEL (DEFINITION , TO OVERCOME DRAWBACKS)
 FLAVORS OF AGILE MODULE
 SCRUM PROCESS (DEFINITION )
 SCRUM TEAM (DRAW DIAGRAM )
 SHARE TEAM , CORE TEAM , ROLES OF EACH MEMBER
 DRAW PRODUCT BACKLOG DIAGRAM
 EXPLAIN WHO WILL BE INVOLVED IN CREATING IT
 EXPLAIN WHAT IS STORIES
 EXPLAIN SPRINT PLANNING MEETING (SPRINT
BACKLOG, ASSIGN THE STORY TO DE/TE, DERIVE TASKS)
 DRAW THE SPRINT EXPLAIN THE PROCESS
 SPRINT REVIEW MEETING ( EXPLAIN WHAT HAPPENS HERE)
 RETROSPECTIVE MEETING

 EXPLAIN DAILY STAND UP MEETING


 TERMINOLOGIES - BURN-DOWN CHART, STORY BOARD,
CHICKEN & PIG
 MULTIPLE SCRUM TEAM
 DRAW DIAGRAM , TAKE THE SAME PRODUCT BACKLOG AND
EXPLAIN
 HOTFIX (EXAMPLE PROJECT MANAGER CALLS DE/TE)
 RCA .

 USABILITY TESTING :

 ALTERNATIVE NAMES
 EXAMPLE OF GMAIL AND YAHOO
 DEFINITION OF USABILITY TESTING
 WHAT KIND OF APPLICATION WE CAN DO USABILITY TESTING
 HOW TO DO USABILITY TESTING
 (EX: UI (LOOK) AND UX (FEEL))
 PRACTICAL EXAMPLE OF ANY TWO APPLICATION
 USABILITY DEFECTS

 COMPATIBILITY TESTING

 EXAMPLE OF LAPTOP AND MOBILE


 EXAMPLE OF DIFFERENT OS PLATFORM FOR DEVELOPING ,
TESTING, END USERS
 DEFINITION
 WHY WE HAVE TO DO COMPATIBILITY TESTING
 WHEN WE WILL DO COMPATIBILITY TESTING
 HOW WE WILL DO COMPATIBILITY TESTING
 IN DIFFERENT APPLICATION
 (STANDALONE , WEB APPLICATION,CLIENT SERVER
APPLICATION)
 PRACTICAL EXAMPLE .
 COMPATIBILITY DEFECT

GLOBALIZATION TESTING

 EX: DIFFERENT LANGUAGES PEOPLE SPEAK AROUND THE WORLD


 DEFINITION OF GLOBALIZATION TESTING

 HOW THEY DEVELOP THE APPLICATION FOR DIFFERENT


LANGUAGE (PROPERTY FILE AND EXAMPLE OF CHINESE
LANGUAGE)
 TYPES OF GLOBALIZATION TESTING
o I) I18N TESTING
o II) L10N TESTING
 OPEN THE APPLICATION SHOW THE DIFFERENT LANGUAGE IN
FLIPKART

PERFORMANCE TESTING

 DEFINITION
 (DEFINITION OF STABILITY , LOAD , RESPONSE TIME WITH
DIAGRAM )
 NOTE
 TOOLS
 HOW TO DO PERFORMANCE TESTING
 (EX: JMETER ) EXPLANATION
 TYPES OF PERFORMANCE TESTING

EXPLORATORY TESTING
 EXAMPLE: REAL-TIME EXAMPLE, WHEN YOU ARE AT A NEW PLACE
OR LOST IN A FOREST
 DEFINITION 1
 DEFINITION 2
 WHEN WE WILL DO EXPLORATORY TESTING
 DRAWBACKS OF EXPLORATORY TESTING
 HOW TO OVERCOME THE DRAWBACKS OF EXPLORATORY
TESTING .

REGRESSION TESTING

 EXAMPLE: ADDING A MODULE, REMOVING A MODULE,
MODIFYING A FEATURE
 DEFINITION : 1 AND 2
 WHEN WE WILL DO REGRESSION TESTING
 TYPES OF REGRESSION TESTING
 UNIT REGRESSION TESTING
 REGIONAL REGRESSION TESTING
 FULL REGRESSION TESTING

 DIFFERENCE BETWEEN RETESTING AND REGRESSION TESTING
 AUTOMATION IN REGRESSION TESTING
 AUTOMATION IN AGILE
 ADVANTAGES OF MANUAL TESTING
 DISADVANTAGE OF MANUAL TESTING
 ADVANTAGE OF AUTOMATION TESTING
 DISADVANTAGE OF AUTOMATION TESTING
 ROLES OF MANUAL TESTING
 ROLES OF AUTOMATION TESTING
