
STQA MIDSEM ANSWER KEY

MIDSEM- I

PART-A

1. Define Software Testing

Software testing is defined in the sources primarily as a dual-purpose process:

• Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.

• Testing can be described as a process used for revealing defects in software, and for
establishing that the software has attained a specified degree of quality with
respect to selected attributes.

• Specifically relating to execution-based testing, testing is defined as the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.

These definitions cover both validation (execution-based testing) and verification (activities such as reviews), encompassing technical reviews, test planning, test tracking, test case design, unit test, integration test, system test, acceptance test, and usability test.

2. List down the components of software testing process

As testing is considered a process embedded within the software development process, its
components align with the general definition of a software process.

A software process, in the engineering domain, is defined as the set of methods, practices,
standards, documents, activities, policies, and procedures that software engineers use to
develop and maintain a software system and its associated artifacts.

Specifically, the testing domain covers a broad scope of activities including:

• Technical reviews.

• Test planning.

• Test tracking.

• Test case design.

• Specific levels of testing: unit test, integration test, system test, acceptance test, and
usability test.

3. Differentiate Error, Fault and Defect

The source material differentiates these terms based on their point of origin and
manifestation:
Error: A mistake, misconception, or misunderstanding on the part of a software developer (which includes software engineers, programmers, analysts, and testers).

Fault (Defect): An anomaly in the software that is introduced as the result of an error. It may cause the software to behave incorrectly, and not according to its specification. The term "defect" is also associated with software artifacts such as requirements and design documents.

Failure: The inability of a software system or component to perform its required functions within specified performance requirements. A failure is observed during execution when the software does not produce the expected results. A fault does not always immediately produce a failure; it only manifests as a failure when the proper conditions occur.

4. Mention the Quality attributes in software testing

Software testing is used to evaluate the degree to which a system meets specified
requirements and customer needs. To do this, testers evaluate specific quality attributes,
including:

• Correctness: The degree to which the system performs its intended function.

• Reliability: The degree to which the software is expected to perform its required
functions under stated conditions for a stated period of time.

• Usability: Relates to the degree of effort needed to learn, operate, prepare input,
and interpret output of the software.

• Maintainability: The effort needed to make changes in the software.

• Portability: Relates to the ability of the software to be transferred from one environment to another.

• Interoperability: The effort needed to link or couple one system to another.

• Integrity (Security): Relates to the system's ability to withstand both intentional and
accidental attacks.

• Testability: An attribute of interest to testers, defined either as the amount of effort needed to test the software to requirements or the ability of the software to reveal defects under testing conditions.

5. Define Fault Model


A fault (defect) model can be described as a link between the error made (e.g., a missing
requirement, a misunderstood design element, or a typographical error) and the
fault/defect in the software.

While digital system engineers use similar models linking physical defects to electrical
effects, software engineers often use the fault model concept, accumulated informally from
experience, to design tests and for diagnosis during debugging activities (fault localization).
For example, a fault model might link "an incorrect operator precedence order" fault
(defect) to a lack of education on the part of the programmer (error).

6. Mention any two differences between Black box testing and White box testing

The two testing strategies differ fundamentally in the tester's knowledge and viewpoint:

1. Knowledge Source/Viewpoint:

o Black box testing considers the software-under-test to be an opaque box. The tester has no knowledge of its inner structure (i.e., how it works), relying only on documentation such as requirements, specifications, or IPO (Input/Process/Output) diagrams.

o White box testing focuses on the inner structure of the software. To design
test cases using this strategy, the tester must have knowledge of the code or a
suitable pseudo code representation.

2. Primary Focus and Defect Detection:

o Black box testing is often called functional or specification-based testing. It is especially useful for revealing requirements and specification defects.

o White box testing is useful for revealing design and code-based control, logic
and sequence defects, initialization defects, and data flow defects.

7. Give one example for Equivalence class partitioning and its testing

Equivalence Class Partitioning (ECP) is a black box technique used to partition the input
domain into a finite number of classes, assuming that all members within a class are
processed equivalently by the software.

Example: Consider the specification for a module input where a widget identifier should
consist of 3–15 alphanumeric characters.

Applying ECP to the length requirement (range of values 3–15):

1. Valid Equivalence Class (EC3): All values from 3 to 15 (e.g., input length 9 characters).

2. Invalid Equivalence Class (EC4): Values just below the lower boundary, i.e., less than
3 (e.g., input length 2 characters).
3. Invalid Equivalence Class (EC5): Values just above the upper boundary, i.e., greater
than 15 (e.g., input length 16 characters).

Testing: To test using ECP, the tester selects one test input value to represent each identified
class. For the example above, the test cases would include inputs with 9 characters (valid), 2
characters (invalid), and 16 characters (invalid).
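The following is a minimal sketch of this example, assuming a hypothetical validate_widget_id function that accepts identifiers of 3-15 alphanumeric characters; one representative input is chosen per equivalence class.

import re

def validate_widget_id(widget_id: str) -> bool:
    """Hypothetical validator: accept 3-15 alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,15}", widget_id))

# One representative test input per equivalence class of the length requirement.
ecp_tests = [
    ("ABC123XYZ", True),   # EC3: valid length (9 characters)
    ("AB", False),         # EC4: invalid, length below 3 (2 characters)
    ("A" * 16, False),     # EC5: invalid, length above 15 (16 characters)
]

for value, expected in ecp_tests:
    actual = validate_widget_id(value)
    print(f"input={value!r:20} expected={expected} actual={actual} "
          f"{'PASS' if actual == expected else 'FAIL'}")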

8. Define Random Testing

Random testing occurs when a tester randomly, or unsystematically, selects inputs from
the input domain of the software module or system.

For instance, if the valid input domain is all positive integers between 1 and 100, the tester
might randomly choose values like 55, 24, or 3. This approach does not systematically
consider whether these inputs are adequate or whether boundary or invalid values should
be prioritized.
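A minimal sketch of random testing over the 1-100 domain mentioned above; the process function is a hypothetical stand-in for the unit under test.

import random

def process(value: int) -> int:
    """Hypothetical unit under test; any function over 1..100 would do."""
    return value * 2

# Random testing: inputs are drawn unsystematically from the input domain,
# with no attention to boundaries or invalid values.
random.seed(42)                 # fixed seed only so the run is repeatable
for _ in range(5):
    x = random.randint(1, 100)  # e.g., 55, 24, 3, ...
    print(f"input={x:3}  output={process(x)}")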

9. Provide an example for State Transition Testing

State transition testing is a black box method useful for procedural and object-oriented
development, viewing the software in terms of its states, transitions between states, and
the inputs and events that trigger state changes.

Example (Stack Class): Consider testing a Stack Class (an object) which can hold a small
number of items and has methods like create, push, pop, full, and empty. The state of the
stack changes based on the sequence of method calls.

A test sequence is designed to transition the stack through different states (e.g., empty,
partially full, full):

1. create(s,3): Initializes a stack s to hold three items.

2. empty(s): Checks the initial state (should be True).

3. push(s,item-1), push(s,item-2), push(s,item-3): Transitions the stack from empty to full.

4. full(s): Checks the final state (should be True).

Further operations like pop(s,item) would cause the stack to transition back from full to
partially full, and eventually to empty. The tester must also design sequences to test illegal
transitions, such as attempting an extra push on a full stack.
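Below is a minimal sketch, assuming a hypothetical Python Stack class with the interface described above, that drives the object through the legal state sequence and then attempts one illegal transition.

class Stack:
    """Hypothetical bounded stack matching the interface in the example."""
    def __init__(self, capacity: int):          # create(s, capacity)
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if self.full():
            raise OverflowError("push on a full stack")   # illegal transition
        self.items.append(item)

    def pop(self):
        if self.empty():
            raise IndexError("pop on an empty stack")      # illegal transition
        return self.items.pop()

    def empty(self) -> bool:
        return len(self.items) == 0

    def full(self) -> bool:
        return len(self.items) == self.capacity

# Test sequence driving the stack through empty -> partially full -> full.
s = Stack(3)                       # 1. create(s, 3)
assert s.empty()                   # 2. initial state should be empty
for item in ("item-1", "item-2", "item-3"):
    s.push(item)                   # 3. transitions toward the full state
assert s.full()                    # 4. final state should be full

# Illegal transition: an extra push on a full stack should be rejected.
try:
    s.push("item-4")
except OverflowError:
    print("illegal transition correctly rejected")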

10. State the use of Statement coverage testing

Statement coverage is a basic program-based coverage criterion often applied in white box
testing.
The primary use of statement coverage testing is to establish a minimal testing goal and
measure the adequacy of a test set.

• Goal Setting: A tester sets the objective to satisfy the statement adequacy/coverage
criterion, requiring that a set of test cases be developed so that all (100%) of the
statements in the software unit are executed at least once.

• Verification: In terms of a control flow graph, achieving statement coverage requires exercising all the nodes in the graph at least once. Tools can support gathering this data to ensure the required degree of coverage is met.

• Limitation: Statement coverage is considered a weak and minimal coverage goal; satisfying it alone is "not considered to be very useful for revealing defects," since defects such as missing statements may remain undetected (a small illustrative sketch follows).
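A small illustrative sketch (the classify function is hypothetical): a single test input executes every statement, yet the false branches are never taken, which is exactly why statement coverage alone is a weak criterion.

def classify(a: int, b: int) -> str:
    """Hypothetical unit under test."""
    result = "none"          # statement 1
    if a > 0:                # statement 2 (decision)
        result = "positive"  # statement 3
    if b > 0:                # statement 4 (decision)
        result += "/b-set"   # statement 5
    return result            # statement 6

# One test case with a=1, b=1 executes all six statements (100% statement
# coverage) but never exercises the false branches of either decision.
assert classify(1, 1) == "positive/b-set"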

PART-B

Part 1: Elaboration of the Testing Maturity Model (TMM)

The Testing Maturity Model ($\text{TMM}^\text{SM}$) serves as a guiding framework for organizations to assess and improve their testing processes, defining testing as a process that progresses through evolutionary stages to continuous improvement.

i) TMM Levels

The TMM adopts a staged architecture similar to the CMM, consisting of five levels that
prescribe a maturity hierarchy and an evolutionary path for process improvement.

Level 1 (Initial): Testing is a chaotic, ill-defined process, often not distinguished from debugging. Tests are developed ad hoc after coding, aiming only to show the software works.

Level 2 (Phase Definition): Testing is separated from debugging and defined as a planned phase following coding. Basic testing techniques (black box and white box) are institutionalized, and the primary goal is to show the software meets its specifications. Test planning often occurs late in the life cycle.

Level 3 (Integration): Testing is integrated into the entire software life cycle, beginning at the requirements phase. A software test organization and a technical training program are established. Testing is controlled and monitored.

Level 4 (Management and Measurement): Testing becomes a process that is formally measured and quantified. An organizationwide review program and a test measurement program are established. Software is tested for quality attributes like reliability and usability.

Level 5 (Optimization/Defect Prevention/Quality Control): Mechanisms are in place for the testing process to be continuously improved. Defect prevention and quality control are practiced, often relying on statistical sampling and reliability measurements.

(Diagram Reference: The 5-Level Structure) This structure is depicted in Figure 1.5 (and
Figure 16.3 in the source material) which shows the five levels stacked hierarchically, with
Level 1 at the bottom and Level 5 at the top.

ii) TMM Internal Structure

The internal structure of each TMM maturity level provides a comprehensive framework for
evaluation and improvement.

1. Levels: Define the progression of testing capability.

2. Testing Capability: Describes the characteristics of the testing process at that level.

3. Maturity Goals (MG): Identify key process areas that must be addressed to achieve
maturity at that level.

4. Maturity Subgoals (MSG): Specify less abstract objectives that define the scope and
accomplishments needed for a particular level.

5. Activities, Tasks, and Responsibilities (ATRs): Address implementation and organizational adaptation issues necessary to achieve the subgoals. These ATRs are assigned based on the Critical Views.

(Diagram Reference: The Internal Structure) This structure is visualized in Figure 1.4 (and
Figure 16.4), showing that Levels indicate Testing Capability, which contains Maturity Goals,
which are supported by Maturity Subgoals, which are achieved by
Activities/Tasks/Responsibilities, which are organized by the Critical Views (Manager,
Developer/tester, User/client).

iii) People Involved (The Three Critical Views)

The TMM defines three critical views (CVs) representing the key participants in the testing
process, ensuring that responsibilities (ATRs) are assigned to appropriate groups at each
level.
1. Manager View: Encompasses the commitment and ability to perform activities
related to improving testing capability. This view typically includes project
managers, test group managers, and upper-level managers.

2. Developer/Tester View: Focuses on the technical activities and tasks that constitute
quality testing practices. This group includes staff involved in specifying, designing,
coding, and testing.

3. User/Client View: Defined as a cooperating or supporting view. The focus is on soliciting user/client support, consensus, and participation in quality-related activities such as requirements analysis, usability testing, and acceptance test planning.

iv) TMM Goals

The Maturity Goals (MGs) define the required process improvement steps for advancing
between levels.

Level 2: 1. Develop testing and debugging goals. 2. Initiate a test planning process. 3. Institutionalize basic testing techniques and methods.

Level 3: 1. Establish a software test organization. 2. Establish a technical training program. 3. Integrate testing into the software life cycle. 4. Control and monitor the testing process.

Level 4: 1. Establish an organizationwide review program. 2. Establish a test measurement program. 3. Software quality evaluation.

Level 5: 1. Defect prevention. 2. Quality control. 3. Test process optimization.

v) V-Model (Integration Support)

The V-model is a standard framework that illustrates how testing activities should be
integrated into the entire software life cycle. This integration is a key goal at TMM Level 3
("Integrate Testing into the Software Life Cycle").

• V-Model Structure: The model shows that activities like designing tests for
Acceptance, System, and Integration levels should occur in parallel with the
Requirements and Design development phases.

• Early Test Planning: The V-model philosophy requires test planning to begin as early
as possible in the life cycle, starting at the requirements phase, rather than waiting
until coding is complete.
• Deliverables: The model ensures that test deliverables (e.g., initial versions of test
plans and acceptance tests) are produced in early phases.

• Modified V-Model: The Extended/Modified V-model (Figure 1.6/Figure 10.6) further supports maturity by integrating review/audit activities (static testing) horizontally across the development and testing sides of the V.

(Diagram Reference: The Extended/Modified V-model) Figure 1.6 (and Figure 10.6)
graphically illustrates this integration, showing that requirements are supported by
requirements reviews and system/acceptance tests, while design is supported by design
reviews and integration tests.

Part 2: Distinguishing Concepts (Q12)

i) Verification and Validation

The differences between Verification and Validation relate to the timing of evaluation and
the focus of the requirements:

Validation: The process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements. Focus: typically associated with traditional execution-based testing.

Verification: The process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Focus: typically associated with static activities such as inspections and reviews of software deliverables.

ii) Testing and Debugging

Testing and debugging are distinct processes with different goals, methods, and
responsibilities:

Testing: A process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes. It involves activities such as technical reviews, test planning, and executing selected test cases, and should ideally be performed by an independent testing group.

Debugging (fault localization): Begins after testing has revealed a failure. It involves (1) locating the fault or defect, (2) repairing the code, and (3) retesting the code. Debugging is generally difficult to manage due to the unpredictability of defect occurrences; software developers, who have a detailed understanding of the code, are the best qualified staff to perform it.

Part 3: Software Testing Principles (Q13)

Software testing principles are fundamental rules that guide test specialists in developing
knowledge, acquiring skills, and defining testing activities. The principles establish a
conceptual foundation for effective test practices.

Principle 1 (Dual Purpose): Testing is the process of exercising a component to achieve two goals: (i) revealing defects, and (ii) evaluating quality. This separates testing (detection) from debugging (location and repair). Quality evaluation involves assessing attributes like reliability, usability, and correctness against specified goals.

Principle 2 (Good Test Case): A good test case is one that has a high probability of revealing a yet-undetected defect(s). This guides testers in intelligently selecting a finite subset of inputs from a large domain to maximize defect yield within resource constraints.

Principle 4 (Expected Output): A test case must contain the expected output or result. This is critical because without a correct statement of the output, the tester cannot determine whether a defect has been revealed or establish the pass/fail status of the test.

Principle 5 (Valid and Invalid Inputs): Test cases should be developed for both valid and invalid input conditions. Testing invalid inputs helps evaluate the software's robustness (its ability to recover when unexpected events occur) and exercises the code in ways that reveal defects.

Principle 9 (Planning): Testing should be planned, and objectives should be stated as quantitatively as possible. Plans ensure adequate time and resources are allocated for testing tasks, enabling the process to be monitored and managed.

Principle 10 (Early Integration): Testing activities should be integrated early in the life cycle. This philosophy, supported by models like the V-model, ensures defects are detected closer to their point of origin, improving efficiency and quality.

Part 4: Design and Coding Defects (Q14 & Q15)

Q14: Types of Design Defects and Occurrences

Design defects occur when system components, their interactions, or interfaces are
incorrectly designed, typically assuming the design description is at the pseudo code level.

Algorithmic and Processing Defects: Processing steps in the algorithm are incorrect (e.g., wrong calculation specified, steps in incorrect order, omission of error condition checks such as division by zero). Coin problem example: lack of error checks for incorrect and/or invalid inputs, or lack of a path for recovery from input errors. Detection: white box testing (condition/branch testing) and black box functional testing.

Control, Logic, and Sequence Defects: Logic flow in the pseudo code is incorrect, such as branching too soon/late, improper nesting, or an incorrect branching condition. Coin problem example: an incorrect "while" loop condition (i < 6 instead of i <= 6) in the pseudo code design. Detection: white box tests (condition/branch testing, loop testing).

Data Defects: Incorrect design of data structures (e.g., incorrect type assigned to a variable, array improperly sized, missing fields). Coin problem example: incorrect value for an element in the coin_values integer array (e.g., initializing 25 cents twice instead of including 50 cents). Detection: software reviews and use of a data dictionary.

Module Interface Description Defects: Defects related to incorrect or inconsistent parameter types, improper number of parameters, or incorrect ordering of parameters between modules. General occurrence: defects in the description of input/output parameters expected by a module.

Q15: Types of Coding Defects with Examples

Coding defects arise from errors in implementing the code, such as failure to understand
programming language constructs or transcription errors.

Algorithmic and Processing Defects: Unchecked overflow/underflow conditions, incorrect ordering of arithmetic operators (precedence errors), or precision loss. Coin problem example: the division operator may cause problems if negative values are divided, requiring an input check (though this is potentially mitigated if input is checked elsewhere).

Control, Logic, and Sequence Defects: Incorrect iteration of loops (loop boundary problems), incorrect expression of case statements, or missing paths. Coin problem example: the loop variable increment step (i = i + 1) is placed outside the scope of the loop in the C-like code.

Initialization Defects: Occur when initialization statements are omitted or incorrect. Coin problem example: the variable total_coin_value is not initialized (used before defined).

Data-Flow Defects: Variables used before definition, defined twice before an intermediate use, or disregarded before use. Coin problem example: the variable total_coin_value is used before it is defined, which is both an initialization and a data flow defect.

Module Interface Defects: Use of incorrect/inconsistent parameter types, improper ordering of parameters, or calls to nonexistent modules. Coin problem example: the call to the external function "scanf" is incorrect; the address of the variable was not provided (&number_of_coins needed).

Code Documentation Defects: The code documentation is incomplete, ambiguous, or does not reflect what the program actually does. Coin problem example: the documentation accompanying the code is incomplete and ambiguous, reflecting deficiencies in the external interface description.

Part 5: Black Box Testing Application (Q17)

Q17 (Option 1): Shopkeeper Scenario (ECP and BVA)


The software processes a customer payment (total currency sum, $S$) against a required
amount of 500 rupees.

Specification:

• If $S = 500$: Print "Success message."

• If $S < 500$: Print "Insufficient."

• If $S > 500$: Print amount to be returned ($S - 500$).

1. Equivalence Class Partitioning (ECP)

We partition the input domain ($S$) based on the specified functional conditions.

EC1: $S = 500$ (exact match). Valid class on the functional boundary; exercises the "Success message" path.

EC2: $S < 500$ (not enough money). Valid class below the boundary; exercises the "Insufficient" path.

EC3: $S > 500$ (too much money). Valid class above the boundary; exercises the change-return path.

(All three classes are valid in ECP terms, since the specification defines an output for each; they simply exercise different functional behaviors around the 500-rupee boundary.)

2. Boundary Value Analysis (BVA)

BVA focuses on values at and just outside the boundary conditions (500).

LB (Lower Boundary): 499, just below the target (EC2).

ON (On Boundary): 500, the exact target value (EC1).

UB (Upper Boundary): 501, just above the target (EC3).

3. Test Cases (ECP and BVA Coverage)

T1: Input S = 499; covers EC2, BVA (LB); expected output "Insufficient"; tests immediately below the boundary.

T2: Input S = 500; covers EC1, BVA (ON); expected output "Success message"; tests on the required boundary.

T3: Input S = 501; covers EC3, BVA (UB); expected output "Return 1"; tests immediately above the boundary.

T4: Input S = 100; covers EC2; expected output "Insufficient"; a representative value far below 500.

T5: Input S = 750; covers EC3; expected output "Return 250"; a representative value far above 500.
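A minimal sketch of the scenario, assuming a hypothetical process_payment function and the exact message strings used above; it simply runs test cases T1-T5 from the list above.

def process_payment(total_sum: int) -> str:
    """Hypothetical implementation of the shopkeeper rule (required amount: 500)."""
    if total_sum == 500:
        return "Success message"
    if total_sum < 500:
        return "Insufficient"
    return f"Return {total_sum - 500}"

# ECP/BVA test cases T1-T5.
test_cases = [
    ("T1", 499, "Insufficient"),     # EC2, BVA lower bound
    ("T2", 500, "Success message"),  # EC1, on the boundary
    ("T3", 501, "Return 1"),         # EC3, BVA upper bound
    ("T4", 100, "Insufficient"),     # EC2, representative value
    ("T5", 750, "Return 250"),       # EC3, representative value
]

for tc_id, s, expected in test_cases:
    actual = process_payment(s)
    print(f"{tc_id}: S={s:3}  expected={expected!r:18} actual={actual!r:18} "
          f"{'PASS' if actual == expected else 'FAIL'}")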

Q17 (Option 2): Cause-and-Effect Graphing Scenario

This technique converts the specification into a Boolean graph to derive combinations of
inputs (causes) that result in specific outputs (effects).

Causes (Input Conditions):

• C1: Person A is a sportsman.

• C2: Person B is physically challenged.

Effects (Output Conditions):

• E1: Both are given a chance to attend the interview.

• E2: Person A needs to write the exam.

• E3: Person B is considered a sportsman.

Rules (Relationships derived from the Scenario):

1. If C1 AND C2, then E1.

2. If $\neg$ C1 (A is NOT sportsman), then E2.

3. If $\neg$ C2 (B is NOT physically challenged), then E3.

1. Cause-and-Effect Graph

The graph nodes are the causes (C1, C2) and effects (E1, E2, E3). Logical relationships (AND,
NOT) connect them.

Note: The graph shows the following connections: C1 and C2 link via an AND operation to E1. C1 also feeds an inverter/NOT operation ($\neg$ C1) leading to E2, and C2 feeds an inverter/NOT operation ($\neg$ C2) leading to E3.

(Diagram Reference: Cause-and-Effect Graph Structure)

1. A node representing C1 connects to an AND gate.

2. A node representing C2 connects to the same AND gate.

3. The output of the AND gate leads to E1.

4. A node representing C1 leads to an arc containing the NOT notation (a small circle),
and this inverted output leads to E2.
5. A node representing C2 leads to an arc containing the NOT notation, and this
inverted output leads to E3.

2. Decision Table and Test Cases

We analyze all possible combinations of C1 and C2 (4 combinations) to generate the final test cases (T = True, F = False):

T1: C1 = T, C2 = T; $\neg$C1 = F, $\neg$C2 = F; E1 (Both Interview) = T, E2 (A Writes Exam) = F, E3 (B is Sportsman) = F.

T2: C1 = T, C2 = F; $\neg$C1 = F, $\neg$C2 = T; E1 = F, E2 = F, E3 = T.

T3: C1 = F, C2 = T; $\neg$C1 = T, $\neg$C2 = F; E1 = F, E2 = T, E3 = F.

T4: C1 = F, C2 = F; $\neg$C1 = T, $\neg$C2 = T; E1 = F, E2 = T, E3 = T.

Relevant Test Cases (Inputs/Scenarios):

• T1: A is sportsman, B is physically challenged. Result: Both attend interview.

• T2: A is sportsman, B is NOT physically challenged. Result: B is considered a sportsman.

• T3: A is NOT sportsman, B is physically challenged. Result: A needs to write exam.

• T4: A is NOT sportsman, B is NOT physically challenged. Result: A needs to write exam, B is considered a sportsman.
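A minimal sketch encoding the three rules as Boolean expressions and evaluating the four decision-table columns; the function and effect names are assumptions for illustration.

def interview_rules(a_is_sportsman: bool, b_is_challenged: bool) -> dict:
    """Hypothetical encoding of the cause-effect rules C1, C2 -> E1, E2, E3."""
    return {
        "E1_both_interview": a_is_sportsman and b_is_challenged,  # C1 AND C2
        "E2_a_writes_exam":  not a_is_sportsman,                  # NOT C1
        "E3_b_is_sportsman": not b_is_challenged,                 # NOT C2
    }

# The four decision-table columns T1-T4 (all combinations of C1 and C2).
for tc_id, c1, c2 in [("T1", True, True), ("T2", True, False),
                      ("T3", False, True), ("T4", False, False)]:
    print(tc_id, interview_rules(c1, c2))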

Part 6: Pseudocode Analysis (Q18)

The following pseudocode fragment calculates the number of positive integers read in a loop
running n times:

1 count = 0

2 read(n);

3 for i=1 to n

4 read(a);

5 if a>0

6 count = count + 1;
7 print count;

(Assumption: Lines 4-6 constitute the loop body, and Line 7 executes once after the loop
completes, based on typical structured code flow.)
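A runnable Python equivalent of the fragment (an assumption for illustration; the reads are taken from a list instead of interactive input) that can be used to exercise the test cases derived below.

def count_positives(inputs):
    """Runnable equivalent of the pseudocode: count positive integers among n reads."""
    count = 0                  # line 1
    it = iter(inputs)
    n = next(it)               # line 2: read(n)
    for _ in range(n):         # line 3: for i = 1 to n
        a = next(it)           # line 4: read(a)
        if a > 0:              # line 5
            count = count + 1  # line 6
    print(count)               # line 7
    return count

assert count_positives([0]) == 0         # loop skipped (n = 0)
assert count_positives([2, 5, -3]) == 1  # one positive, one non-positive value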

i) Draw Control Flow Graph (CFG)

The CFG represents the sequence of execution and decision points.

• Nodes (N): N1 (Line 1), N2 (Line 2), N3 (Line 3, Loop Decision), N4 (Line 4), N5 (Line 5,
IF Decision), N6 (Line 6), N7 (Merge node after IF statement), N8 (Line 7, Print/Exit).

• Edges (E):

o N1 $\rightarrow$ N2

o N2 $\rightarrow$ N3

o N3 (T) $\rightarrow$ N4 (Loop execution)

o N3 (F) $\rightarrow$ N8 (Loop exit)

o N4 $\rightarrow$ N5

o N5 (T) $\rightarrow$ N6

o N5 (F) $\rightarrow$ N7 (Skipping N6)

o N6 $\rightarrow$ N7

o N7 $\rightarrow$ N3 (Loop return/iteration)

o N8 (Exit)

(Diagram Reference: Control Flow Graph) The graph would show a sequence leading into
Node 3. Node 3 has two outbound edges, one (F) leading to the terminal Node 8, and one
(T) leading to the loop body (N4 $\rightarrow$ N5). Node 5 (the IF decision) has two
outbound edges: one (F) to the merge Node 7, and one (T) to Node 6, which then merges to
Node 7. Node 7 loops back to Node 3.

ii) Develop Decision Coverage Test Cases

Decision coverage requires executing all outcomes (True and False branches) of all decision
nodes at least once.

1. N3 (Loop condition, i=1 to n): Must be True (loop runs) and False (loop skips/exits).

2. N5 (IF condition, a>0): Must be True (a>0) and False (a<=0).

T1: n = 0; a values: N/A; N3 outcome: F; N5 outcome: N/A. Covers N3 (F) (the loop is skipped).

T2: n = 2; a values: $a_1=5$, $a_2=-3$; N3 outcome: T (twice); N5 outcome: T, F. Covers N3 (T) and N5 (T and F).

T2 requires one positive value (a>0 is T) and one non-positive value (a<=0 is F) within a loop
run, satisfying 100% decision coverage.

iii) Perform Path Testing

Path testing focuses on finding a basis set of independent paths, the number of which is
equal to the Cyclomatic Complexity V(G).

1. Calculate V(G): $V(G) = E - N + 2$.

o $E = 9$ (Edges)

o $N = 8$ (Nodes)

o $V(G) = 9 - 8 + 2 = 3$.

o We need 3 independent paths.

2. Define Independent Paths:

o Path 1 (Loop Skip/Zero Iteration): N1 $\rightarrow$ N2 $\rightarrow$ N3 (F) $\rightarrow$ N8.

o Path 2 (Loop Once, IF True): N1 $\rightarrow$ N2 $\rightarrow$ N3 (T) $\rightarrow$ N4 $\rightarrow$ N5 (T) $\rightarrow$ N6 $\rightarrow$ N7 $\rightarrow$ N3 (F) $\rightarrow$ N8.

o Path 3 (Loop Once, IF False): N1 $\rightarrow$ N2 $\rightarrow$ N3 (T) $\rightarrow$ N4 $\rightarrow$ N5 (F) $\rightarrow$ N7 $\rightarrow$ N3 (F) $\rightarrow$ N8.

Test case T1 (n=0), plus two variations with n=1 (one with a>0, one with a<=0), would be required to explicitly execute these three paths.

iv) Data Flow Testing (Def-Use Analysis)

Data flow testing identifies definition (def) and use (use: computation-use, c-use; or
predicate-use, p-use) occurrences for variables, with the goal of exercising all def-use paths.

Variables of Interest: count, n, i, a

count: def at line 1, c-use at line 6 (pair C1); def at line 1, c-use at line 7 (pair C2); def at line 6, c-use at line 6 (pair C3); def at line 6, c-use at line 7 (pair C4).

n: def at line 2, p-use at line 3 (pair N1).

i: def at line 3, p-use at line 3 (pair I1, initial def/use); implicit increment (re-def) at line 3, p-use at line 3 (pair I2, iterative def/use).

a: def at line 4, p-use at line 5 (pair A1).

(Note: Line 3 (for i=1 to n) contains both an initial definition of i and a predicate use of n and
i. Line 6 (count = count + 1) contains definition and computation-use of count. We simplify to
cover the essential data paths.)

Test Cases for Def-Use Coverage:

We need tests that cover:

1. N1: Definition of $n$ (L2) used in the loop predicate (L3).

2. C2: Definition of $count$ (L1) used for printing (L7) (Requires Loop Skip).

3. C1, C3, C4: Definition of $count$ (L1/L6) used in L6 and L7. (Requires loop run and L6
execution).

4. A1: Definition of $a$ (L4) used in predicate (L5). (Requires loop run).

T1: n = 0; a values: N/A; path covered: N1-N2-N3(F)-N8; pairs covered: N1, C2.

T3: n = 2; a values: $a_1=5$, $a_2=5$; path covered: N1-N2-N3(T)-...-N7-N3(F)-N8; pairs covered: all i and a pairs, plus C1, C3, C4.

Test set {T1, T3} covers all identified def-use pairs for all variables.
MIDSEM-II

PART A – (10 X 2 Marks = 20 Marks)

1. Mention the major phases/types of testing and define them.

Execution-based software testing for large systems is typically carried out at different levels,
usually comprising 3–4 major levels or phases of testing. These major phases include:

1. Unit Test: This phase tests a single component. A principal goal is to detect
functional and structural defects within that individual unit.

2. Integration Test: At this level, several components are tested as a group. Testers
investigate component interactions.

3. System Test: The system as a whole is tested. A principal goal is to evaluate non-
functional attributes such as usability, reliability, and performance.

4. Acceptance Test: This is a crucial testing stage where the development organization
must demonstrate that the software meets all of the client’s requirements.

2. State Test Harness.

A test harness is defined as the auxiliary code that must be developed to exercise each unit
and connect it to the outside world. Since the tester is focusing on a stand-alone function,
procedure, or class rather than a complete system, the test harness is needed to both call
the target unit and represent modules that are called by the target unit. This auxiliary code
is also known as scaffolding code.

3. Specify two major goals of Integration testing.

Integration test for procedural code has two major goals:

1. To detect defects that occur on the interfaces of units.

2. To assemble the individual units into working subsystems and finally a complete
system that is ready for system test.

4. List down the types of System Testing.

There are several types of system tests, including:

• Functional testing

• Performance testing

• Stress testing

• Configuration testing

• Security testing
• Recovery testing

The TMM also recommends that Reliability and Usability testing be formally integrated into
the testing process by organizations reaching higher levels of testing maturity.

5. "Regression testing is not a level of testing" - Justify the statement.

The statement is correct: Regression testing is not a level of testing.

Justification: Regression testing is the process of retesting software that has been modified
to ensure two things: that the new version of the software has retained the capabilities of
the old version, and that no new defects have been introduced due to the changes.
Because its function is verification after modification, it can occur at any level of test, such
as when unit tests are rerun after a defect repair.

6. Define Quality and its two criteria to meet.

Quality is defined by the IEEE Standard Glossary of Software Engineering Terminology based
on two criteria:

1. Quality relates to the degree to which a system, system component, or process meets specified requirements.

2. Quality relates to the degree to which a system, system component, or process meets customer or user needs, or expectations.

7. When is the Test Incident Report necessary to be created? Give the steps to create it.

A test incident report is necessary to be created when a tester observes any event that
occurs during the execution of the tests that is unexpected, unexplainable, and that
requires a follow-up investigation. It should be prepared if a unit fails a test.

The IEEE Standard for Software Test Documentation recommends the following sections to
be included in the report:

1. Test Incident Report identifier: To uniquely identify this report.

2. Summary: To identify the test items involved, the test procedures, test cases, and
test log associated with this report.

3. Incident description: This describes the time and date, testers, observers, environment, inputs, expected outputs, actual outputs, anomalies, the procedure step, and attempts to repeat the incident.

4. Impact: Describes the impact of the incident on the testing effort, test plans,
procedures, and test cases; a severity rating should be inserted here.

8. Enlist the sections to be included in Test Summary Report according to the IEEE test
documentation standard.
The IEEE test documentation standard describes the following sections for the Test
Summary Report:

1. Test Summary Report identifier: To uniquely identify this report.

2. Variances: Descriptions of any deviations from the test plan, test procedures, and
test designs, as well as variances of the test items from their original design.

3. Comprehensiveness assessment: Discussion of the comprehensiveness of the test effort compared to objectives and test completeness criteria defined in the test plan.

4. Summary of results: Summary of the testing results, including all resolved and
unresolved incidents.

5. Evaluation: Evaluation of each test item based on test results, including its pass/fail
status and the severity level of any failure.

6. Summary of activities: Summary of all testing activities and events, recording resource consumption, actual task durations, and hardware and software tool usage.

7. Approvals: Listing of the names of all persons needed to approve the document, with
space for signatures and dates.

9. Distinguish between Quality Control and Quality Assurance.

Quality Control (QC) and Quality Assurance (QA) are distinct, though related, concepts:

• Quality Control (QC): QC traditionally consists of the procedures and practices employed to ensure that a work product or deliverable conforms to standards or requirements. It is the set of activities designed to evaluate the quality of developed or manufactured products. In a broader view, QC encompasses a feedback loop to the process that created the product.

• Quality Assurance (QA): The software quality assurance (SQA) group is a team
dedicated to ensuring that all necessary actions are taken during the development
process so that the resulting software conforms to established technical
requirements. A key distinction is that QA is often used to describe activities that
evaluate the process by which products are developed and/or maintained, as well as
the product itself.

10. Provide the supporting activities in the ISO-9000-3 for Software Process Quality.

While the query refers to ISO-9000-3, the source material explicitly maps ISO-9001 areas to
TMM maturity levels, indicating these areas support software process quality as it evolves
through the TMM:
Level 2: Test (4.10)

Level 3: Quality systems (4.2); Training (4.18)

Level 4: Inspections (4.10); Inspection, test status (4.12); Quality records (4.16); Statistical techniques (4.20)

Level 5: Defect prevention (4.14)

PART B - (4 X 10 Marks = 40 Marks)

Q11: Unit Test Planning and Design

a) Describe various phases in Unit Test Planning process.

Unit test planning can be described across three phases, supporting the steady evolution of
the unit test plan:

1. Phase 1: Describe Unit Test Approach and Risks In this initial phase, the planner
outlines the general approach to unit testing. Key tasks include:

o Identifying test risks.

o Describing the techniques (e.g., black box, white box methods) that will be
used for designing test cases.

o Describing requirements for test harnesses and other interfacing software.

o Identifying completeness requirements (what will be covered and to what degree, such as states, control, and data flow patterns).

o Defining termination conditions for unit tests, including special cases that
may result in abnormal termination.

o Estimating necessary resources (hardware, software, staff) and developing a tentative schedule.

2. Phase 2: Identify Unit Features to be Tested This phase relies on information from
the unit specification and detailed design description. The planner specifies which
features of each unit will be tested, such as:

o Functions.

o Performance requirements.

o States and state transitions.


o Control structures and data flow patterns.

o The planner must also identify input/output characteristics (e.g., variables with allowed ranges) and assess the risks if some features must be omitted from testing.

3. Phase 3: Add Levels of Detail to the Plan The final phase refines the plan based on
the preceding steps. The planner adds details concerning the approach, resource,
and scheduling portions. Tasks include:

o Identifying existing test cases that can be reused.

o Including unit availability and integration scheduling information.

o Describing how test results will be recorded (e.g., test logs, test incident
reports) and providing references to standards for these documents.

o Describing any special tools required for the tests.

b) Discuss the concepts to be considered when designing the Unit Tests.

Part of the preparation for unit testing involves unit test design, which focuses on structural
integrity due to the small size of the component.

Key concepts and considerations include:

• Specification of Deliverables: It is important to specify both the test cases (including input data and expected outputs) and the test procedures (the procedural steps required to run the tests).

• Data Organization and Reuse: Test case data should be tabularized for ease of use
and reuse. The concept of Test Suites is used to define groups of related tests.

• Strategy Focus: Test case design can be based on both black box and white box
strategies. Considering the size of a unit, it makes sense to focus heavily on white
box test design to exercise internal elements like logic structures, data flow
sequences, or using mutation analysis, aiming to evaluate the structural integrity of
the unit.

• Criticality Testing: For units that perform mission/safety/business critical functions, it is often useful and prudent to design stress, security, and performance tests at the unit level, if possible, to prevent larger failures later.

• COTS Components: If testing a smaller Commercial-Off-the-Shelf (COTS) component as a unit, a black box test design approach may be the only available option.

Q12: Drivers, Stubs, and Test Results


a) Summarize the designing considerations of Drivers and Stubs with proper examples and
diagrams.

Drivers and stubs are forms of auxiliary code needed to create the test harness or
scaffolding code, which exercises a stand-alone unit and connects it to the outside world.
Since this code is a test work product, it should be carefully designed, implemented, and
tested for reuse.

Driver: Calls the target unit under test. A driver can be designed to: (i) call the target unit; (ii) pass input parameters from a table; (iii) display the parameters; and (iv) display the results (output parameters).

Stub: Represents modules that are called by the target unit. A stub can be designed to: (i) display a message that it has been called; (ii) display any input parameters passed from the unit; (iii) pass back a result from a table; and (iv) display the result from the table.

Example and Diagram: In traditional imperative-language systems, drivers and stubs are
developed as procedures or functions. In object-oriented systems, they may involve the
design and implementation of special classes or even a hierarchy of classes.

A simplified diagram illustrating the test harness concept shows the unit under test
surrounded by its supporting auxiliary code:

• Driver (Code that calls the unit) $\rightarrow$ Unit Under Test $\rightarrow$ Stub
(Code that simulates called modules).
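A minimal sketch of a driver and a stub, assuming a hypothetical unit compute_charge that normally calls an external rate-lookup module; the stub stands in for that module (injected here through a default parameter for simplicity), and the driver feeds inputs from a table and displays results.

# Stub: stands in for a module called by the unit under test.
def lookup_rate_stub(customer_type: str) -> float:
    print(f"stub called with customer_type={customer_type!r}")  # announce call, show input
    rate_table = {"regular": 1.0, "premium": 0.8}                # result taken from a table
    result = rate_table[customer_type]
    print(f"stub returning {result}")                            # display the result
    return result

# Unit under test (hypothetical): normally it would call the real lookup_rate module.
def compute_charge(units: int, customer_type: str, lookup_rate=lookup_rate_stub) -> float:
    return units * lookup_rate(customer_type)

# Driver: calls the target unit, feeds parameters from a table, and displays results.
def driver():
    input_table = [(10, "regular", 10.0), (10, "premium", 8.0)]
    for units, ctype, expected in input_table:
        actual = compute_charge(units, ctype)
        print(f"inputs=({units}, {ctype!r}) expected={expected} actual={actual} "
              f"{'PASS' if actual == expected else 'FAIL'}")

driver()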

b) Describe the Summary Worksheet for unit test results.

A simple format, such as the Unit Test Worksheet (Table 6.1, implicitly referenced), is used
to record the status and summary of test efforts for a unit.

The worksheet includes identifying information such as:

• Unit Name.

• Unit Identifier.

• Tester.

• Date.

For the results themselves, the table tracks individual test runs:

• Test case ID.


• Status (whether the test was run or not run).

• Summary of results.

• Overall Pass/fail determination for that test case.

This format is valuable for inclusion in the test summary report and for monitoring test
progress during weekly status meetings.

Q13: Strategies of Integration Test

Integration test goals include detecting defects that occur on unit interfaces and assembling
units into working subsystems ready for system test. The integration strategies vary based on
the programming paradigm.

i) Procedures and Functions

For procedural code, integration relies on a defined calling hierarchy, usually represented by
a structure chart (e.g., Figure 6.6, implicitly referenced).

1. Bottom-up Integration:

o Strategy: Begins with the lowest-level modules (those that do not call other
modules).

o Implementation: Requires Drivers to call the initial low-level modules. Once a module is tested, its driver is replaced by the actual module next in the upward hierarchy, continuing until the highest-level module is integrated.

o Benefit: Complex or safety-critical modules found deep in the hierarchy can be tested and assembled early.

2. Top-down Integration:

o Strategy: Starts with the highest-level module.

o Implementation: Requires Stubs to represent the subordinate modules it calls. The stubs are replaced one-by-one with the actual subordinate modules.

o Progression: Integration can traverse the structure chart in a depth-first (M1 $\rightarrow$ M2 $\rightarrow$ M6, M7, M8...) or breadth-first (M1 $\rightarrow$ M2, M3, M4, M5...) manner.

o Benefit: This approach helps form subsystems gradually, which can sometimes be assembled and tested in parallel.

ii) Classes
For object-oriented systems, traditional hierarchical calling relationships (like structure
charts) are not applicable due to the nature of classes and messages.

1. Strategy: Object Clusters:

o Integration proceeds by making use of object clusters, which are groups of related classes that work together (analogous to small subsystems).

o A cluster may consist of classes that, for example, produce a report or monitor a device.

2. Implementation and Testing:

o Testing focuses on the interactions between classes via messages.

o A key test focus is the method-message path, defined as a sequence of method executions linked by messages.

o Test cases are derived from scenarios of operation associated with the cluster
found in the design document.

(Diagram Reference): A diagram illustrating a generic class cluster (like Figure 6.8) typically
shows multiple classes interconnected by labeled messages (method calls), which together
form a functional grouping.
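A minimal sketch of a two-class cluster (the Sensor and Monitor names are assumptions) and an integration test, derived from a simple usage scenario, that exercises the Monitor.check -> Sensor.read method-message path.

class Sensor:
    """Hypothetical class in the cluster: supplies readings."""
    def __init__(self, values):
        self.values = list(values)

    def read(self) -> float:
        return self.values.pop(0)

class Monitor:
    """Hypothetical class in the cluster: sends a read message to the Sensor and reports."""
    def __init__(self, sensor: Sensor, limit: float):
        self.sensor = sensor
        self.limit = limit

    def check(self) -> str:
        # Method-message path: Monitor.check -> Sensor.read -> back to Monitor.check
        value = self.sensor.read()
        return "ALARM" if value > self.limit else "OK"

# Cluster integration test from a usage scenario: one normal reading, one alarm reading.
monitor = Monitor(Sensor([5.0, 12.5]), limit=10.0)
assert monitor.check() == "OK"
assert monitor.check() == "ALARM"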

Q14: Two Types of System Testing

System testing is performed when the software system has been assembled and operates as
a whole. System tests evaluate the non-functional attributes of the system in addition to
finding defects.

i) Functional Testing

• Concept: Functional testing is black box in nature and focuses on verifying that the
system performs what the user requirements specify. It determines if the software
meets its functional requirements.

• Inputs and Boundaries: The testing focuses on inputs and proper outputs for each
function. It is mandatory that testers observe system behavior under improper and
illegal inputs to evaluate the system's robustness.

• Techniques: Since functional tests are derived from specifications, techniques such as
Equivalence Class Partitioning and Boundary-Value Analysis (ECP/BVA) are useful
for test case design.

ii) Performance Testing


• Concept: Performance testing verifies quality requirements, which are non-
functional in nature. These requirements describe the expected quality levels for the
software.

• Objectives: Performance objectives usually relate to:

o Response Time: Measuring the time required for transaction processing.

o Throughput/Capacity: Determining the maximum capacity at which the system can successfully operate.

o Resource Consumption: Analyzing memory or CPU usage.

o Delays: Measuring delays encountered within the system.

• Evaluation: Performance tests confirm that the software system operates at the
specific levels defined by the user requirements.
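A minimal sketch of a performance measurement, assuming a hypothetical handle_transaction function and an illustrative response-time objective; a real performance test would drive the assembled system rather than a stand-in function.

import time

def handle_transaction(payload: str) -> str:
    """Hypothetical transaction processor standing in for the system under test."""
    return payload.upper()

REQUESTS = 1_000
start = time.perf_counter()
for i in range(REQUESTS):
    handle_transaction(f"request-{i}")
elapsed = time.perf_counter() - start

avg_response_ms = (elapsed / REQUESTS) * 1000  # response time per transaction
throughput = REQUESTS / elapsed                # transactions per second

# Assumed performance objective, for illustration only: average response under 5 ms.
print(f"avg response = {avg_response_ms:.3f} ms, throughput = {throughput:.0f} tx/s")
print("PASS" if avg_response_ms < 5.0 else "FAIL")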

Q15: Comparison of Alpha testing, Beta testing and Acceptance testing

These three types of tests involve user participation and occur late in the development cycle,
but they differ in purpose and environment.

Target Software: Alpha testing and Beta testing target mass-market (shrink-wrapped) software; Acceptance testing targets custom-made software for a specific client.

Location: Alpha testing is conducted at the developer's site; Beta testing is conducted by users under real-world conditions; Acceptance testing is conducted under real-world conditions on operational hardware and software.

Goal/Focus: In Alpha testing, developers monitor potential users to note problems; in Beta testing, users use the software and report defects and feedback to the developer; in Acceptance testing, the goal is demonstration to the client that the software meets all specified requirements.

Significance: Alpha testing allows early user evaluation in a controlled environment; Beta testing is essential for evaluating software under typical, uncontrolled operational conditions; Acceptance testing is the final approval stage, often determining final payment and successful completion of the project.

Test Case Source: Alpha test cases are designed by developers/testers, often based on usage scenarios; Beta test data is typically generated by users through normal use; Acceptance test cases are carefully planned with input from the client/users, based directly on requirements and the user manual.

Q16: Elaborate the components of Test plan with necessary diagrams.

A test plan is a document that provides a framework for achieving a set of testing goals. It is
a complex document, often structured hierarchically (Master Test Plan, Unit Plan, System
Plan, etc.).

The components of a test plan, as outlined by the IEEE standard, include (Figure 7.2,
implicitly referenced):

1. Test Plan Identifier: A unique identifier for the document.

2. Introduction: Provides an overview of the project, the system being developed, high-
level testing goals, and references to related policies and documents (e.g., project
plan, quality assurance plan).

3. Items to be Tested: A list of the specific software entities (modules, classes, subsystems, systems) to be tested, including their identifiers and version/revision numbers. This supports traceability to requirements/design documents.

4. Features to be Tested: Describes the entities in terms of the functionality they encompass.

5. Approach: A broad section covering the overall strategy. It specifies the testing
activities to be performed, the degree of coverage expected for white box tests (e.g.,
statement, branch coverage), how the testing process will be monitored, and the
specific criteria to be used for making stop-test decisions.

6. Pass/Fail Criteria: Defines the standards for deciding whether a test item has passed
or failed upon execution. Failure occurs when the actual output differs from the
expected output.

7. Suspension and Resumption Criteria: Conditions under which testing must be temporarily halted (e.g., due to a severity level 1 or 2 failure) and the conditions required to resume testing (e.g., passing a regression test on the repaired code).

8. Test Deliverables: Lists all mandatory resulting documents, which include the test
plan itself, associated test design specifications, test logs, test incident reports, and
the Test Summary Report.

9. Testing Tasks: Identifies all testing-related tasks and their dependencies, often
structured using a Work Breakdown Structure (WBS).

10. Test Environment: Details the hardware, software tools, and laboratory space
required to conduct the tests.
11. Responsibilities: Identifies the staff (testers, developers, SQA, users) responsible for
key activities such as test execution, tracking, result checking, and documentation.

12. Scheduling: Establishes task durations, sets test milestones, and specifies schedules
for staff and resource use.

13. Risks and Contingencies: Identifies, evaluates, and prioritizes risks (e.g., complex
modules, delivery delays) and outlines contingency plans if these risks materialize.

14. Testing Costs: Estimates the resources and budget required for the testing effort,
using methods like COCOMO models or historical data.

15. Approvals: Lists all designated parties (e.g., Test Manager, Project Manager, Client)
required to review and sign off on the plan.

(Diagram Reference): The list of test plan components is typically represented in a list or
block diagram (like Figure 7.2).

Q17: Role of Quality Assurance at each phase of SDLC

Software Quality Assurance (SQA) is a planned and systematic set of actions taken
throughout the development process to ensure that the resulting software conforms to
established technical requirements and standards. SQA activities align with the verification
and validation aspects of testing throughout the Software Development Life Cycle (SDLC),
often visualized using the V-model (Figure 8.5, implicitly referenced).

Requirements Phase (Verification and Review): SQA ensures requirements are clearly articulated, unambiguous, complete, and testable. Supporting activities: SQA participates in Requirements Reviews to detect defects early and ensures policies regarding user input for acceptance test planning are followed.

Design Phase, High-level and Detailed (Verification and Adherence): SQA ensures the design conforms to architectural and detailed design standards and is concerned with traceability from design elements back to requirements. Supporting activities: SQA participates in Design Reviews and may audit documentation for adherence to standards such as module coupling and cohesion.

Coding Phase (Compliance and Audits): SQA ensures the implementation complies with organizational coding standards and guidelines. Supporting activities: SQA participates in Code Reviews and monitors the use of the configuration management system to control code baselines.

Testing Phases, Unit, Integration, System, and Acceptance (Monitoring and Control): SQA ensures testing activities are performed according to the approved test plan and policies. Supporting activities: SQA assists in establishing defect classification schemes and severity levels, receives Test Incident Reports, ensures developers repair the code and complete Fix Reports, and tracks problems until resolution is verified.

Q18: Models Developed for Software Product Quality Assessment

Software product quality assessment involves defining quality goals, selecting measurable
attributes, and using testing to determine if those goals are met. Several models and
frameworks support this quantitative approach:

1. Software Quality Metrics Methodology (IEEE Std 1061)

This framework guides organizations in formally defining and measuring product quality
attributes.

• Concept: Quality is decomposed hierarchically. Abstract Quality Factors (e.g., reliability, usability) are broken down into measurable Quality Subfactors (e.g., availability, communicativeness), which are finally quantified by Metrics (data collected during development/testing).

• Assessment Process: This methodology involves five steps:

1. Identify relevant software quality metrics.

2. Implement the metrics (establish collection procedures).

3. Analyze the results.

4. Validate the software quality metrics.

5. Evaluate the software quality.

2. Usage/Operational Profile Models (for Reliability)

These models focus on evaluating the statistical reliability of a product based on how users
interact with the system.

• Operational Profile (Musa Methodology): An operational profile is a model of the intended usage pattern of the software, defining the probability of different operations being executed. The methodology includes steps like establishing a user profile (customer types and proportions) and developing a functional profile (functions and frequency) to create the final operational profile.

o Assessment Use: This profile guides statistical testing by ensuring test cases
represent realistic usage frequencies. The resulting test data feeds reliability
growth models to predict failure rates and determine when to stop testing.

• Usage Model (Walton Model): This approach often uses finite-state machines to
model the software's behavior, defining states and transitions triggered by stimuli.
Each transition (arc) is assigned a probability of selection, allowing testers to traverse
the model randomly to generate sequences of stimuli that constitute test cases.
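A minimal sketch of operational-profile-driven test selection; the operations and their probabilities are assumptions for illustration.

import random

# Hypothetical operational profile: operations and their usage probabilities.
operational_profile = {
    "withdraw_cash":  0.50,
    "check_balance":  0.30,
    "transfer_funds": 0.15,
    "change_pin":     0.05,
}

random.seed(7)  # fixed seed only so the sampled suite is reproducible
operations = list(operational_profile)
weights = [operational_profile[op] for op in operations]

# Statistical testing: draw test cases so their frequencies mirror expected usage.
sampled_suite = random.choices(operations, weights=weights, k=20)
for op in sampled_suite:
    print("execute test for:", op)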

3. Usability Testing Models (Rubin)

Usability is a critical quality factor defined as the ease of learning, operating, and
interpreting software. Specialized testing models exist to evaluate this factor formally.

• Concept: Usability testing requires using a representative sample of end users and an
environment representing the actual work environment.

• Assessment Types: Rubin suggests different types of usability tests, often involving
increasing levels of fidelity and quantitative data collection:

o Exploratory Usability Testing: Used early, perhaps with prototypes, to identify major design issues.

o Assessment Usability Testing: Conducted later with functioning prototypes to measure quantitatively how well users perform tasks (e.g., time to complete a task, error frequency).

o Validation Usability Testing: Used near the end of development to measure compliance against specific usability requirements.
