Stqa Midsem Answer Key
MIDSEM-I
PART-A
• Testing can be described as a process used for revealing defects in software, and for
establishing that the software has attained a specified degree of quality with
respect to selected attributes.
These definitions cover both validation (execution-based testing) and verification (activities
like reviews) activities, encompassing technical reviews, test planning, test tracking, test case
design, unit test, integration test, system test, acceptance test, and usability test.
As testing is considered a process embedded within the software development process, its
components align with the general definition of a software process.
A software process, in the engineering domain, is defined as the set of methods, practices,
standards, documents, activities, policies, and procedures that software engineers use to
develop and maintain a software system and its associated artifacts.
• Technical reviews.
• Test planning.
• Test tracking.
• Specific levels of testing: unit test, integration test, system test, acceptance test, and
usability test.
The source material differentiates these terms based on their point of origin and manifestation:
Term | Definition/Description
Error | A mistake, misconception, or misunderstanding on the part of the software developer; it originates in the mind of the developer.
Fault (Defect) | Introduced into the software as the result of an error; an anomaly in the code or documentation that may cause the software to behave incorrectly, not according to its specification.
Failure | The inability of a software system or component to perform its required functions; it manifests when the faulty code is executed.
Software testing is used to evaluate the degree to which a system meets specified
requirements and customer needs. To do this, testers evaluate specific quality attributes,
including:
• Correctness: The degree to which the system performs its intended function.
• Reliability: The degree to which the software is expected to perform its required
functions under stated conditions for a stated period of time.
• Usability: Relates to the degree of effort needed to learn, operate, prepare input,
and interpret output of the software.
• Integrity (Security): Relates to the system's ability to withstand both intentional and
accidental attacks.
While digital system engineers use similar models linking physical defects to electrical
effects, software engineers often use the fault model concept, accumulated informally from
experience, to design tests and for diagnosis during debugging activities (fault localization).
For example, a fault model might link "an incorrect operator precedence order" fault
(defect) to a lack of education on the part of the programmer (error).
6. Mention any two differences between Black box testing and White box testing
The two testing strategies differ fundamentally in the tester's knowledge and viewpoint:
1. Knowledge Source/Viewpoint:
o Black box testing views the software as a closed box: test cases are designed from the specification, using knowledge only of inputs and expected outputs, with no knowledge of the internal structure.
o White box testing focuses on the inner structure of the software. To design test cases using this strategy, the tester must have knowledge of the code or a suitable pseudo code representation.
2. Defects Revealed:
o Black box testing is useful for revealing requirements- and specification-based functional defects.
o White box testing is useful for revealing design and code-based control, logic and sequence defects, initialization defects, and data flow defects.
7. Give one example for Equivalence class partitioning and its testing
Equivalence Class Partitioning (ECP) is a black box technique used to partition the input
domain into a finite number of classes, assuming that all members within a class are
processed equivalently by the software.
Example: Consider the specification for a module input where a widget identifier should
consist of 3–15 alphanumeric characters.
1. Valid Equivalence Class (EC3): Identifier lengths from 3 to 15 characters (e.g., an input of length 9).
2. Invalid Equivalence Class (EC4): Lengths just below the lower boundary, i.e., less than 3 characters (e.g., an input of length 2).
3. Invalid Equivalence Class (EC5): Lengths just above the upper boundary, i.e., greater than 15 characters (e.g., an input of length 16).
Testing: To test using ECP, the tester selects one test input value to represent each identified
class. For the example above, the test cases would include inputs with 9 characters (valid), 2
characters (invalid), and 16 characters (invalid).
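A minimal pytest sketch of these three test cases follows; validate_widget_id is a hypothetical reference implementation added only so the example runs, since the actual module under test is not given in the source.

```python
# Minimal pytest sketch of the equivalence-class test cases above.
import pytest

def validate_widget_id(identifier: str) -> bool:
    # Accept 3-15 alphanumeric characters, per the specification.
    return identifier.isalnum() and 3 <= len(identifier) <= 15

@pytest.mark.parametrize("identifier,expected", [
    ("ABC123XYZ", True),   # EC3: valid class, representative length 9
    ("AB", False),         # EC4: invalid class, length 2 (below lower boundary)
    ("A" * 16, False),     # EC5: invalid class, length 16 (above upper boundary)
])
def test_widget_id_equivalence_classes(identifier, expected):
    assert validate_widget_id(identifier) == expected
```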
Random testing occurs when a tester randomly, or unsystematically, selects inputs from
the input domain of the software module or system.
For instance, if the valid input domain is all positive integers between 1 and 100, the tester
might randomly choose values like 55, 24, or 3. This approach does not systematically
consider whether these inputs are adequate or whether boundary or invalid values should
be prioritized.
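A minimal sketch of this unsystematic selection over the 1-100 domain; function_under_test is a hypothetical placeholder, since the module under test is not specified in the source.

```python
# Minimal sketch of random (unsystematic) input selection over the valid domain 1..100.
import random

def function_under_test(x: int) -> int:
    return x * 2  # placeholder so the sketch runs

random.seed(42)  # fixed seed only so the run is repeatable
inputs = [random.randint(1, 100) for _ in range(5)]
for x in inputs:
    # Inputs are chosen without regard for boundaries or invalid values.
    print(x, function_under_test(x))
```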
State transition testing is a black box method useful for procedural and object-oriented
development, viewing the software in terms of its states, transitions between states, and
the inputs and events that trigger state changes.
Example (Stack Class): Consider testing a Stack Class (an object) which can hold a small
number of items and has methods like create, push, pop, full, and empty. The state of the
stack changes based on the sequence of method calls.
A test sequence is designed to transition the stack through different states (e.g., empty,
partially full, full):
Further operations like pop(s,item) would cause the stack to transition back from full to
partially full, and eventually to empty. The tester must also design sequences to test illegal
transitions, such as attempting an extra push on a full stack.
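A minimal sketch of such a test sequence follows; the class body is illustrative, since the source only names the operations create, push, pop, full, and empty.

```python
# Minimal sketch of a state-transition test sequence for a small Stack class.
class Stack:
    def __init__(self, capacity: int = 3):   # "create": the stack starts empty
        self.capacity, self.items = capacity, []
    def push(self, item):
        if self.full():
            raise OverflowError("push on a full stack")   # illegal transition
        self.items.append(item)
    def pop(self):
        if self.empty():
            raise IndexError("pop on an empty stack")     # illegal transition
        return self.items.pop()
    def full(self) -> bool:
        return len(self.items) == self.capacity
    def empty(self) -> bool:
        return not self.items

s = Stack(capacity=3)
assert s.empty()                      # state: empty
s.push(1); s.push(2)                  # state: partially full
s.push(3); assert s.full()            # state: full
try:
    s.push(4)                         # illegal transition: extra push on a full stack
except OverflowError:
    pass                              # expected: the illegal transition is rejected
s.pop(); s.pop(); s.pop()
assert s.empty()                      # transitions back through partially full to empty
```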
Statement coverage is a basic program-based coverage criterion often applied in white box
testing.
The primary use of statement coverage testing is to establish a minimal testing goal and
measure the adequacy of a test set.
• Goal Setting: A tester sets the objective to satisfy the statement adequacy/coverage
criterion, requiring that a set of test cases be developed so that all (100%) of the
statements in the software unit are executed at least once.
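A minimal sketch of the statement adequacy/coverage goal for a hypothetical two-branch unit; together, the two calls execute every statement at least once.

```python
# Two test inputs that together execute every statement of abs_value at least once.
def abs_value(x: int) -> int:
    if x < 0:
        x = -x        # executed only when a negative input is supplied
    return x

assert abs_value(-3) == 3   # exercises the statement inside the if branch
assert abs_value(5) == 5    # exercises the remaining straight-line statements
# A tool such as coverage.py can then report the percentage of statements executed.
```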
PART-B
i) TMM Levels
The TMM adopts a staged architecture similar to the CMM, consisting of five levels that
prescribe a maturity hierarchy and an evolutionary path for process improvement.
(Diagram Reference: The 5-Level Structure) This structure is depicted in Figure 1.5 (and
Figure 16.3 in the source material) which shows the five levels stacked hierarchically, with
Level 1 at the bottom and Level 5 at the top.
The internal structure of each TMM maturity level provides a comprehensive framework for
evaluation and improvement.
1. Levels: Indicate the degree of testing maturity, from Level 1 through Level 5.
2. Testing Capability: Describes the characteristics of the testing process at that level.
3. Maturity Goals (MG): Identify key process areas that must be addressed to achieve
maturity at that level.
4. Maturity Subgoals (MSG): Specify less abstract objectives that define the scope and
accomplishments needed for a particular level.
5. Activities, Tasks, and Responsibilities (ATRs): Address the implementation of the
maturity subgoals; they are organized by the three critical views (manager,
developer/tester, user/client).
(Diagram Reference: The Internal Structure) This structure is visualized in Figure 1.4 (and
Figure 16.4), showing that Levels indicate Testing Capability, which contains Maturity Goals,
which are supported by Maturity Subgoals, which are achieved by
Activities/Tasks/Responsibilities, which are organized by the Critical Views (Manager,
Developer/tester, User/client).
The TMM defines three critical views (CVs) representing the key participants in the testing
process, ensuring that responsibilities (ATRs) are assigned to appropriate groups at each
level.
1. Manager View: Encompasses the commitment and ability to perform activities
related to improving testing capability. This view typically includes project
managers, test group managers, and upper-level managers.
2. Developer/Tester View: Focuses on the technical activities and tasks that constitute
quality testing practices. This group includes staff involved in specifying, designing,
coding, and testing.
3. User/Client View: Focuses on soliciting user/client support, consensus, and
participation in activities such as requirements and usability analysis and acceptance
test planning.
The Maturity Goals (MGs) define the required process improvement steps for advancing
between levels.
TMM Level | Maturity Goals (MG)
Level 1: Initial | (No maturity goals are defined at this level.)
Level 2: Phase Definition | Develop testing and debugging goals; initiate a test planning process; institutionalize basic testing techniques and methods.
Level 3: Integration | Establish a software test organization; establish a technical training program; integrate testing into the software life cycle; control and monitor the testing process.
Level 4: Management and Measurement | Establish an organization-wide review program; establish a test measurement program; software quality evaluation.
Level 5: Optimization, Defect Prevention, and Quality Control | Defect prevention; quality control; test process optimization.
The V-model is a standard framework that illustrates how testing activities should be
integrated into the entire software life cycle. This integration is a key goal at TMM Level 3
("Integrate Testing into the Software Life Cycle").
• V-Model Structure: The model shows that activities like designing tests for
Acceptance, System, and Integration levels should occur in parallel with the
Requirements and Design development phases.
• Early Test Planning: The V-model philosophy requires test planning to begin as early
as possible in the life cycle, starting at the requirements phase, rather than waiting
until coding is complete.
• Deliverables: The model ensures that test deliverables (e.g., initial versions of test
plans and acceptance tests) are produced in early phases.
(Diagram Reference: The Extended/Modified V-model) Figure 1.6 (and Figure 10.6)
graphically illustrates this integration, showing that requirements are supported by
requirements reviews and system/acceptance tests, while design is supported by design
reviews and integration tests.
The differences between Verification and Validation relate to the timing of evaluation and
the focus of the requirements:
• Verification: The process of evaluating a software system or component to determine
whether the products of a given development phase satisfy the conditions imposed at
the start of that phase; it is usually associated with non-execution-based activities such
as reviews and inspections.
• Validation: The process of evaluating a software system or component during, or at the
end of, development to determine whether it satisfies specified requirements and
customer needs; it is usually associated with execution-based testing.
Testing and debugging are distinct processes with different goals, methods, and
responsibilities:
Term | Definition/Description | Notes
Testing | A process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes. | Testing involves activities like technical reviews, test planning, and executing selected test cases. This activity should ideally be performed by an independent testing group.
Debugging | Also known as fault localization, this process begins after testing has revealed a failure. It involves (1) locating the fault or defect, (2) repairing the code, and (3) retesting the code. | Debugging is generally difficult to manage due to the unpredictability of defect occurrences. Software developers have a detailed understanding of the code and are the best qualified staff to perform debugging.
Software testing principles are fundamental rules that guide test specialists in developing
knowledge, acquiring skills, and defining testing activities. The principles establish a
conceptual foundation for effective test practices.
Design defects occur when system components, their interactions, or interfaces are
incorrectly designed, typically assuming the design description is at the pseudo code level.
Defect Class | Description/Occurrence | Example (Coin Problem Reference) | Detection Method
Control, Logic, and Sequence Defects | Logic flow in the pseudo code is incorrect, such as branching too soon/late, improper nesting, or an incorrect branching condition. | An incorrect "while" loop condition (i < 6 instead of i <= 6) in the pseudo code design. | White box tests (condition/branch testing, loop testing).
Coding defects arise from errors in implementing the code, such as failure to understand
programming language constructs or transcription errors.
Specification:
We partition the input domain ($S$) based on the specified functional conditions.
BVA focuses on values at and just outside the boundary conditions (500).
This technique converts the specification into a Boolean graph to derive combinations of
inputs (causes) that result in specific outputs (effects).
1. Cause-and-Effect Graph
The graph nodes are the causes (C1, C2) and effects (E1, E2, E3). Logical relationships (AND,
NOT) connect them.
The connections are as follows:
1. C1 and C2 feed an AND node whose output leads to E1.
2. C1 also feeds a NOT operation ($\neg$C1), shown as a small circle on the arc, and this
inverted output leads to E2.
3. C2 also feeds a NOT operation ($\neg$C2), and this inverted output leads to E3.
Test Case ID | C1 (A is Sportsman) | C2 (B is Disabled) | $\neg$C1 | $\neg$C2 | E1 (Both Interview) | E2 (A Writes Exam) | E3 (B is Sportsman)
T1 | T | T | F | F | T | F | F
T2 | T | F | F | T | F | F | T
T3 | F | T | T | F | F | T | F
T4 | F | F | T | T | F | T | T
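The table can be checked mechanically. A minimal sketch evaluates the relations taken from the graph description (E1 = C1 AND C2, E2 = NOT C1, E3 = NOT C2) for each test case:

```python
# Evaluate the cause-effect relations for each decision-table test case above.
cases = {"T1": (True, True), "T2": (True, False),
         "T3": (False, True), "T4": (False, False)}  # test case id: (C1, C2)

for tc, (c1, c2) in cases.items():
    e1, e2, e3 = c1 and c2, (not c1), (not c2)
    print(tc, e1, e2, e3)   # matches the E1/E2/E3 columns of the table
```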
The following pseudocode fragment calculates the number of positive integers read in a loop
running n times:
1 count = 0
2 read(n);
3 for i=1 to n
4 read(a);
5 if a>0
6 count = count + 1;
7 print count;
(Assumption: Lines 4-6 constitute the loop body, and Line 7 executes once after the loop
completes, based on typical structured code flow.)
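For reference, a minimal runnable Python rendering of the fragment follows, with the original line numbers kept as comments; the function name and list-based input are assumptions, since the fragment reads its values interactively.

```python
# Runnable rendering of the pseudocode fragment (original line numbers in comments).
def count_positives(values):
    count = 0                     # line 1
    n = len(values)               # line 2: read(n)
    for i in range(1, n + 1):     # line 3
        a = values[i - 1]         # line 4: read(a)
        if a > 0:                 # line 5
            count = count + 1     # line 6
    print(count)                  # line 7
    return count

assert count_positives([]) == 0        # T1: n = 0, the loop body is skipped
assert count_positives([5, -2]) == 1   # T2: one positive and one non-positive value
```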
• Nodes (N): N1 (Line 1), N2 (Line 2), N3 (Line 3, Loop Decision), N4 (Line 4), N5 (Line 5,
IF Decision), N6 (Line 6), N7 (Merge node after IF statement), N8 (Line 7, Print/Exit).
• Edges (E):
o N1 $\rightarrow$ N2
o N2 $\rightarrow$ N3
o N3 (T) $\rightarrow$ N4 (loop body entered)
o N3 (F) $\rightarrow$ N8 (loop exit)
o N4 $\rightarrow$ N5
o N5 (T) $\rightarrow$ N6
o N5 (F) $\rightarrow$ N7
o N6 $\rightarrow$ N7
o N7 $\rightarrow$ N3 (back edge to the loop decision)
(Diagram Reference: Control Flow Graph) The graph would show a sequence leading into
Node 3. Node 3 has two outbound edges, one (F) leading to the terminal Node 8, and one
(T) leading to the loop body (N4 $\rightarrow$ N5). Node 5 (the IF decision) has two
outbound edges: one (F) to the merge Node 7, and one (T) to Node 6, which then merges to
Node 7. Node 7 loops back to Node 3.
Decision coverage requires executing all outcomes (True and False branches) of all decision
nodes at least once.
1. N3 (Loop condition, i = 1 to n): Must be True (the loop body runs) and False (the loop is
skipped or exits).
2. N5 (IF condition, a > 0): Must be True (a positive value is read) and False (a non-positive
value is read).
A single test case T2 with n >= 2 that reads one positive value (a > 0 is T) and one
non-positive value (a <= 0 is F) within a loop run exercises all four outcomes, satisfying
100% decision coverage.
Path testing focuses on finding a basis set of independent paths, the number of which is
equal to the Cyclomatic Complexity V(G).
o $E = 9$ (Edges)
o $N = 8$ (Nodes)
o $V(G) = 9 - 8 + 2 = 3$.
Test case T1 (n = 0) plus two variations with n = 1 (one where a > 0, one where a <= 0) would
be required to explicitly execute these three paths.
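A small sketch that recomputes V(G) = E - N + 2 from the edge list of the control flow graph above (node names follow the graph description):

```python
# Recompute cyclomatic complexity from the CFG edge list.
edges = [("N1", "N2"), ("N2", "N3"), ("N3", "N4"), ("N3", "N8"),
         ("N4", "N5"), ("N5", "N6"), ("N5", "N7"), ("N6", "N7"), ("N7", "N3")]
nodes = {n for edge in edges for n in edge}
v_g = len(edges) - len(nodes) + 2
print(len(edges), len(nodes), v_g)   # 9 edges, 8 nodes, V(G) = 3 independent paths
```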
Data flow testing identifies definition (def) and use (use: computation-use, c-use; or
predicate-use, p-use) occurrences for variables, with the goal of exercising all def-use paths.
Variable | Def Location (Line) | Use Location (Line) | Use Type | Pair ID
count | 1 | 6 | c-use | C1
count | 1 | 7 | c-use | C2
count | 6 | 6 | c-use | C3
count | 6 | 7 | c-use | C4
n | 2 | 3 | p-use | N1
a | 4 | 5 | p-use | A1
(Note: Line 3 (for i=1 to n) contains both an initial definition of i and a predicate use of n and
i. Line 6 (count = count + 1) contains definition and computation-use of count. We simplify to
cover the essential data paths.)
1. N1: Definition of $n$ (L2) used in the loop predicate (L3) (covered by every run).
2. C2: Definition of $count$ (L1) used for printing (L7) (requires the loop to be skipped).
3. C1, C3, C4: Definitions of $count$ (L1/L6) used in L6 and L7 (require a loop run in which L6
executes).
4. A1: Definition of $a$ (L4) used in the predicate (L5) (requires a loop run).
Test set {T1, T3} covers all identified def-use pairs for all variables.
MIDSEM-II
Execution-based software testing for large systems is typically carried out at different levels,
usually comprising 3–4 major levels or phases of testing. These major phases include:
1. Unit Test: This phase tests a single component. A principal goal is to detect
functional and structural defects within that individual unit.
2. Integration Test: At this level, several components are tested as a group. Testers
investigate component interactions.
3. System Test: The system as a whole is tested. A principal goal is to evaluate non-
functional attributes such as usability, reliability, and performance.
4. Acceptance Test: This is a crucial testing stage where the development organization
must demonstrate that the software meets all of the client’s requirements.
A test harness is defined as the auxiliary code that must be developed to exercise each unit
and connect it to the outside world. Since the tester is focusing on a stand-alone function,
procedure, or class rather than a complete system, the test harness is needed to both call
the target unit and represent modules that are called by the target unit. This auxiliary code
is also known as scaffolding code.
The major goals of integration test are:
1. To detect defects that occur on the interfaces of units.
2. To assemble the individual units into working subsystems and finally a complete
system that is ready for system test.
• Functional testing
• Performance testing
• Stress testing
• Configuration testing
• Security testing
• Recovery testing
The TMM also recommends that Reliability and Usability testing be formally integrated into
the testing process by organizations reaching higher levels of testing maturity.
Justification: Regression testing is the process of retesting software that has been modified
to ensure two things: that the new version of the software has retained the capabilities of
the old version, and that no new defects have been introduced due to the changes.
Because its function is verification after modification, it can occur at any level of test, such
as when unit tests are rerun after a defect repair.
Quality is defined by the IEEE Standard Glossary of Software Engineering Terminology based
on two criteria:
1. The degree to which a system, component, or process meets specified requirements.
2. The degree to which a system, component, or process meets customer or user needs
or expectations.
7. When is a Test Incident Report necessary to be created? Give the sections needed to create it.
A Test Incident Report needs to be created when a tester observes any event during test
execution that is unexpected, unexplainable, and requires a follow-up investigation. It should
be prepared whenever a unit fails a test.
The IEEE Standard for Software Test Documentation recommends the following sections to
be included in the report:
1. Test incident report identifier: To uniquely identify this report.
2. Summary: To identify the test items involved, the test procedures, test cases, and
test log associated with this report.
3. Incident description: This describes the time and date, testers, observers,
environment, inputs, expected outputs, actual outputs, anomalies, procedure step,
environment, and attempts to repeat the incident.
4. Impact: Describes the impact of the incident on the testing effort, test plans,
procedures, and test cases; a severity rating should be inserted here.
8. Enlist the sections to be included in Test Summary Report according to the IEEE test
documentation standard.
The IEEE test documentation standard describes the following sections for the Test
Summary Report:
1. Test summary report identifier: To uniquely identify the report.
2. Variances: Descriptions of any deviations from the test plan, test procedures, and
test designs, as well as variances of the test items from their original design.
3. Comprehensive assessment: An evaluation of the comprehensiveness of the testing
effort against the criteria specified in the test plan.
4. Summary of results: Summary of the testing results, including all resolved and
unresolved incidents.
5. Evaluation: Evaluation of each test item based on test results, including its pass/fail
status and the severity level of any failure.
6. Summary of activities: A summary of the testing activities and events, including
resource consumption and actual versus planned durations.
7. Approvals: Listing of the names of all persons needed to approve the document, with
space for signatures and dates.
Quality Control (QC) and Quality Assurance (QA) are distinct, though related, concepts:
• Quality Control (QC): The set of procedures and practices, such as testing and reviews,
employed to evaluate a completed work product and ensure that it conforms to
standards and specified requirements; QC focuses on evaluating the product itself.
• Quality Assurance (QA): The software quality assurance (SQA) group is a team
dedicated to ensuring that all necessary actions are taken during the development
process so that the resulting software conforms to established technical
requirements. A key distinction is that QA is often used to describe activities that
evaluate the process by which products are developed and/or maintained, as well as
the product itself.
10. Provide the supporting activities in the ISO-9000-3 for Software Process Quality.
While the query refers to ISO-9000-3, the source material explicitly maps ISO-9001 areas to
TMM maturity levels, indicating these areas support software process quality as it evolves
through the TMM:
TMM Level | ISO-9001 Common Process Area
Level 4 | Inspections (4.10); Inspection, test status (4.12); Quality records (4.16); Statistical techniques (4.20)
Unit test planning can be described across three phases, supporting the steady evolution of
the unit test plan:
1. Phase 1: Describe Unit Test Approach and Risks In this initial phase, the planner
outlines the general approach to unit testing. Key tasks include:
o Describing the techniques (e.g., black box, white box methods) that will be
used for designing test cases.
o Defining termination conditions for unit tests, including special cases that
may result in abnormal termination.
2. Phase 2: Identify Unit Features to be Tested This phase relies on information from
the unit specification and detailed design description. The planner specifies which
features of each unit will be tested, such as:
o Functions.
o Performance requirements.
3. Phase 3: Add Levels of Detail to the Plan The final phase refines the plan based on
the preceding steps. The planner adds details concerning the approach, resource,
and scheduling portions. Tasks include:
o Describing how test results will be recorded (e.g., test logs, test incident
reports) and providing references to standards for these documents.
Part of the preparation for unit testing involves unit test design, which focuses on structural
integrity due to the small size of the component.
• Data Organization and Reuse: Test case data should be tabularized for ease of use
and reuse, and the concept of Test Suites is used to define groups of related tests
(see the sketch after this list).
• Strategy Focus: Test case design can be based on both black box and white box
strategies. Considering the size of a unit, it makes sense to focus heavily on white
box test design to exercise internal elements like logic structures, data flow
sequences, or using mutation analysis, aiming to evaluate the structural integrity of
the unit.
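As referenced above, a minimal sketch of tabularized, reusable test data grouped into a suite; compute_discount and the test values are illustrative assumptions, and unittest is one common way to organize such a suite.

```python
# Table-driven unit test data organized as a small test suite.
import unittest

def compute_discount(amount: float) -> float:
    return amount * 0.1 if amount > 100 else 0.0   # placeholder unit under test

TEST_TABLE = [            # (test id, input, expected output)
    ("TC1", 50.0, 0.0),
    ("TC2", 150.0, 15.0),
]

class DiscountSuite(unittest.TestCase):
    def test_table_driven(self):
        for tc_id, amount, expected in TEST_TABLE:
            with self.subTest(tc_id):
                self.assertAlmostEqual(compute_discount(amount), expected)

if __name__ == "__main__":
    unittest.main()
```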
Drivers and stubs are forms of auxiliary code needed to create the test harness or
scaffolding code, which exercises a stand-alone unit and connects it to the outside world.
Since this code is a test work product, it should be carefully designed, implemented, and
tested for reuse.
Example and Diagram: In traditional imperative-language systems, drivers and stubs are
developed as procedures or functions. In object-oriented systems, they may involve the
design and implementation of special classes or even a hierarchy of classes.
A simplified diagram illustrating the test harness concept shows the unit under test
surrounded by its supporting auxiliary code:
• Driver (Code that calls the unit) $\rightarrow$ Unit Under Test $\rightarrow$ Stub
(Code that simulates called modules).
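A minimal sketch of that harness structure: a stub stands in for a module the unit normally calls, and a driver supplies inputs, calls the unit, and checks results. All names are illustrative assumptions.

```python
# Driver/stub sketch of a simple unit test harness.
def lookup_price_stub(item_id: str) -> float:
    return {"A1": 10.0, "B2": 20.0}.get(item_id, 0.0)   # stub: canned responses

def compute_total(item_ids, price_lookup=lookup_price_stub):
    # Unit under test: in the real system it would call an actual price-lookup module.
    return sum(price_lookup(i) for i in item_ids)

def driver():
    # Driver: exercises the unit and verifies expected outputs.
    assert compute_total(["A1", "B2"]) == 30.0
    assert compute_total([]) == 0.0
    print("unit tests passed")

driver()
```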
A simple format, such as the Unit Test Worksheet (Table 6.1, implicitly referenced), is used
to record the status and summary of test efforts for a unit.
• Unit Name.
• Unit Identifier.
• Tester.
• Date.
For the results themselves, the table tracks individual test runs:
• Summary of results.
This format is valuable for inclusion in the test summary report and for monitoring test
progress during weekly status meetings.
Integration test goals include detecting defects that occur on unit interfaces and assembling
units into working subsystems ready for system test. The integration strategies vary based on
the programming paradigm.
For procedural code, integration relies on a defined calling hierarchy, usually represented by
a structure chart (e.g., Figure 6.6, implicitly referenced).
1. Bottom-up Integration:
o Strategy: Begins with the lowest-level modules (those that do not call other
modules); drivers are written to invoke the modules under test, and integration
proceeds progressively upward through the calling hierarchy.
2. Top-down Integration:
o Strategy: Begins with the top-level (control) module; the modules it calls are
initially replaced by stubs, which are then replaced one by one with actual
modules as integration proceeds downward.
ii) Classes
For object-oriented systems, traditional hierarchical calling relationships (like structure
charts) are not applicable due to the nature of classes and messages.
o Strategy: Classes are typically integrated as clusters, where a cluster is a group of
cooperating classes (related by inheritance, aggregation, or message passing) that
work together to support a required functionality.
o Test cases are derived from scenarios of operation associated with the cluster
found in the design document.
(Diagram Reference): A diagram illustrating a generic class cluster (like Figure 6.8) typically
shows multiple classes interconnected by labeled messages (method calls), which together
form a functional grouping.
System testing is performed when the software system has been assembled and operates as
a whole. System tests evaluate the non-functional attributes of the system in addition to
finding defects.
i) Functional Testing
• Concept: Functional testing is black box in nature and focuses on verifying that the
system performs what the user requirements specify. It determines if the software
meets its functional requirements.
• Inputs and Boundaries: The testing focuses on inputs and proper outputs for each
function. It is mandatory that testers observe system behavior under improper and
illegal inputs to evaluate the system's robustness.
• Techniques: Since functional tests are derived from specifications, techniques such as
Equivalence Class Partitioning and Boundary-Value Analysis (ECP/BVA) are useful
for test case design.
ii) Performance Testing
• Concept: Performance testing determines whether the system meets its performance
requirements, such as response time and throughput, typically with the support of
tools such as load generators and probes/monitors.
• Evaluation: Performance tests confirm that the software system operates at the
specific levels defined by the user requirements.
These three types of tests involve user participation and occur late in the development cycle,
but they differ in purpose and environment.
Aspect | Alpha Test | Beta Test | Acceptance Test
Target Software | Mass market (shrink-wrapped) software. | Mass market (shrink-wrapped) software. | Custom-made software for a specific client.
A test plan is a document that provides a framework for achieving a set of testing goals. It is
a complex document, often structured hierarchically (Master Test Plan, Unit Plan, System
Plan, etc.).
The components of a test plan, as outlined by the IEEE standard, include (Figure 7.2,
implicitly referenced):
1. Test Plan Identifier: A unique identifier so the plan can be referenced and placed
under configuration control.
2. Introduction: Provides an overview of the project, the system being developed, high-
level testing goals, and references to related policies and documents (e.g., project
plan, quality assurance plan).
3. Items to be Tested: Lists the items (programs, modules, or procedures) to be tested,
along with their version/revision levels and references to the relevant documentation.
4. Features to be Tested (and Not to be Tested): Identifies the functional and quality
requirements that will, and will not, be covered by the tests, with reasons for any
exclusions.
5. Approach: A broad section covering the overall strategy. It specifies the testing
activities to be performed, the degree of coverage expected for white box tests (e.g.,
statement, branch coverage), how the testing process will be monitored, and the
specific criteria to be used for making stop-test decisions.
6. Pass/Fail Criteria: Defines the standards for deciding whether a test item has passed
or failed upon execution. Failure occurs when the actual output differs from the
expected output.
7. Suspension and Resumption Criteria: Specifies the conditions under which testing is
suspended (for example, a critical defect that blocks further progress) and the
requirements for resuming and, if necessary, repeating tests.
8. Test Deliverables: Lists all mandatory resulting documents, which include the test
plan itself, associated test design specifications, test logs, test incident reports, and
the Test Summary Report.
9. Testing Tasks: Identifies all testing-related tasks and their dependencies, often
structured using a Work Breakdown Structure (WBS).
10. Test Environment: Details the hardware, software tools, and laboratory space
required to conduct the tests.
11. Responsibilities: Identifies the staff (testers, developers, SQA, users) responsible for
key activities such as test execution, tracking, result checking, and documentation.
12. Scheduling: Establishes task durations, sets test milestones, and specifies schedules
for staff and resource use.
13. Risks and Contingencies: Identifies, evaluates, and prioritizes risks (e.g., complex
modules, delivery delays) and outlines contingency plans if these risks materialize.
14. Testing Costs: Estimates the resources and budget required for the testing effort,
using methods such as COCOMO models or historical data (an illustrative calculation
appears below).
15. Approvals: Lists all designated parties (e.g., Test Manager, Project Manager, Client)
required to review and sign off on the plan.
(Diagram Reference): The list of test plan components is typically represented in a list or
block diagram (like Figure 7.2).
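As noted in the Testing Costs item, model-based estimates can seed the testing budget. The following illustrative calculation uses the classic basic-COCOMO organic-mode coefficients; the coefficients and size are assumptions for illustration, not figures from the source.

```python
# Illustrative basic-COCOMO arithmetic: effort = 2.4 * KLOC**1.05 person-months.
kloc = 20                          # assumed system size in thousands of lines of code
effort_pm = 2.4 * kloc ** 1.05     # total development effort in person-months
print(round(effort_pm, 1))         # about 55.8 person-months; a planned fraction of
                                   # this, or historical data, seeds the testing budget
```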
Software Quality Assurance (SQA) is a planned and systematic set of actions taken
throughout the development process to ensure that the resulting software conforms to
established technical requirements and standards. SQA activities align with the verification
and validation aspects of testing throughout the Software Development Life Cycle (SDLC),
often visualized using the V-model (Figure 8.5, implicitly referenced).
Software product quality assessment involves defining quality goals, selecting measurable
attributes, and using testing to determine if those goals are met. Several models and
frameworks support this quantitative approach:
• Quality Metrics Framework: Guides organizations in formally defining, selecting, and
measuring product quality attributes against stated quality goals.
These models focus on evaluating the statistical reliability of a product based on how users
interact with the system.
• Operational Profile: A quantitative characterization of how the software will be used
in the field, assigning probabilities of occurrence to classes of inputs or usage
scenarios.
o Assessment Use: This profile guides statistical testing by ensuring test cases
represent realistic usage frequencies. The resulting test data feeds reliability
growth models to predict failure rates and determine when to stop testing.
• Usage Model (Walton Model): This approach often uses finite-state machines to
model the software's behavior, defining states and transitions triggered by stimuli.
Each transition (arc) is assigned a probability of selection, allowing testers to traverse
the model randomly to generate sequences of stimuli that constitute test cases.
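A minimal sketch of such a usage model: states, stimuli, and transition probabilities form a finite-state machine, and a random walk over it generates a stimulus sequence that serves as a test case. The states, stimuli, and probabilities here are hypothetical; the Walton model itself is not reproduced in the source.

```python
# Random traversal of a probabilistic usage model to generate a test case.
import random

MODEL = {  # state -> list of (stimulus, next state, selection probability)
    "Start":   [("login", "Menu", 1.0)],
    "Menu":    [("browse", "Catalog", 0.7), ("logout", "End", 0.3)],
    "Catalog": [("add_to_cart", "Menu", 0.6), ("back", "Menu", 0.4)],
}

def generate_test_case(max_steps: int = 10):
    state, stimuli = "Start", []
    while state != "End" and len(stimuli) < max_steps:
        r, acc = random.random(), 0.0
        for stimulus, next_state, p in MODEL[state]:   # pick an arc by its probability
            acc += p
            if r <= acc:
                stimuli.append(stimulus)
                state = next_state
                break
    return stimuli

random.seed(1)
print(generate_test_case())   # e.g. a stimulus sequence starting ['login', ...]
```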
Usability is a critical quality factor defined as the ease of learning, operating, and
interpreting software. Specialized testing models exist to evaluate this factor formally.
• Concept: Usability testing requires using a representative sample of end users and an
environment representing the actual work environment.
• Assessment Types: Rubin suggests different types of usability tests, often involving
increasing levels of fidelity and quantitative data collection, such as exploratory,
assessment, validation (verification), and comparison tests.