Software Engineering 010942
Syllabus
Unit I INTRODUCTION
Introduction: The Problem Domain – Software Engineering Challenges – Software Engineering Approach – Software Processes: Software Process – Characteristics of a Software Process – Software Development Process Models – Other Software Processes.
Text Books
1. An Integrated Approach to Software Engineering – Pankaj Jalote, Narosa Publishing
House, Delhi, 3rd Edition.
2. Fundamentals of Software Engineering – Rajib Mall, PHI Publication, 3rd Edition.
Reference Books
1. Software Engineering – K.K. Aggarwal and Yogesh Singh, New Age International
Publishers, 3rd Edition.
2. Software Engineering – A Practitioner's Approach, R. S. Pressman, McGraw Hill.
3. Fundamentals of Software Engineering – Carlo Ghezzi, M. Jazayeri, D. Mandrioli, PHI Publication.
UNIT-I
1.1 INTRODUCTION
Problem domain (or problem space) is an engineering term referring to all information
that defines the problem and constrains the solution (the constraints being part of the
problem). It includes the goals that the problem owner wishes to achieve, the context within
which the problem exists, and all rules that define essential functions or other aspects of any
solution product. It represents the environment in which a solution will have to operate, as
well as the problem itself.
The problem domain is also called the scope of analysis. When collecting user stories or user requirements, how do you decide which ones are relevant? You keep in mind some higher-level statement of what the objectives are and what people think the problems are (if people did not think there were problems, they would not be paying someone to come up with a solution). Each candidate requirement is then either in scope, out of scope, or borderline. If it is borderline, you probably want someone to agree that it is out (even if you want it to be in). In the meantime, and probably afterwards, you keep it as something to think about.
Note that the customer for a software solution (the “problem owner”) doesn’t
necessarily recognize the existence of a problem so much as an opportunity. An engineer sees
a “problem domain” as being the set of circumstances for which s/he has to provide a
solution; it’s the engineer’s problem, not necessarily the customer’s.
1.3 SOFTWARE ENGINEERING CHALLENGES
1.4 SOFTWARE ENGINEERING APPROACH
cases/functions; rather, it should be free from unnatural restrictions and should be able to provide the services that customers actually need, or their general needs, in an extensive manner.
The principle of consistency is important both in coding style and in designing a GUI (Graphical User Interface): a consistent coding style makes code easier to read, and consistency in the GUI makes it easier for users to learn the interface and use the software. Do not waste time rebuilding something that already exists; take the help of open source and adapt it to your own requirements.
Performing continuous validation helps in checking that the software system meets its requirement specifications and fulfills its intended purpose, which leads to better software quality control. To stay relevant in current technology market trends, using modern programming practices is important so that users' requirements are met in the latest and most advanced way. Scalability in software engineering should be maintained so that the software can grow and manage increased demand for the application.
The term software refers to the set of computer programs, procedures, and associated documents (flowcharts, manuals, etc.) that describe the programs and how they are to be used.
1.7 CHARACTERISTICS OF A SOFTWARE PROCESS
The process that deals with the technical and management issues of software
development is called a software process. A software development project must have at least
development activities and project management activities. The fundamental objectives of a
process are the same as that of software engineering (after all, the process is the main vehicle
of satisfying the software engineering objectives), viz. optimality and scalability.
Optimality means that the process should be able to produce high-quality software at
low cost, and scalability means that it should also be applicable for large software projects.
To achieve these objectives, a process should have some properties. Predictability of a process determines how accurately the outcome of following that process in a project can be predicted before the project is completed. Predictability can be considered a fundamental property of any process; in fact, if a process is not predictable, it is of limited use.
One of the important objectives of the development project should be to produce
software that is easy to maintain. And the process should be such that it ensures this
maintainability. Testing consumes the most resources during development. Underestimating
the testing effort often causes the planners to allocate insufficient resources for testing,
which, in turn, results in unreliable software or schedule slippage. The goal of the process
should not be to reduce the effort of design and coding, but to reduce the cost of maintenance.
Both testing and maintenance depend heavily on the design and coding of software, and these
costs can be considerably reduced if the software is designed and coded to make testing and
maintenance easier. Hence, during the early phases of the development process the prime
issues should be “can it be easily tested” and “can it be easily modified”. Errors can occur at
any stage during development.
However, error detection and correction should be a continuous process that is done throughout software development. Detecting errors soon after they have been introduced is
clearly an objective that should be supported by the process. A process is also not a static
entity.
As the productivity (and hence the cost of a project) and quality are determined largely by the process, the software process must itself be improved to satisfy the engineering objectives of quality improvement and cost reduction. Having process improvement as a basic goal of the software process implies that the process used is such that it supports its own improvement.
1.8 SOFTWARE DEVELOPMENT PROCESS MODELS
• Big-Bang model
• Code-and-fix model
• Waterfall model
• V model
• Incremental model
• RAD model
• Agile model
• Iterative model
• Spiral model
• Prototype model
BIG-BANG MODEL
Big-Bang is the SDLC (Software Development Life Cycle) model in which no particular process is followed.
Generally this model is used for small projects in which the development teams are small. It is especially useful in academic projects. This model needs very little planning and does not follow formal development. The development in this model begins with the required money and effort as input, and the output is the developed software, which may or may not be according to the requirements of the customer.
DISADVANTAGES OF BIG-BANG MODEL
• It is a very high risk model.
• This model is not suitable for object oriented and complex projects.
• Big-Bang is poor model for lengthy and in-progress projects.
CODE-AND-FIX MODEL
The code-and-fix model is one step ahead of the Big-Bang model, in that it identifies a product that must be tested before release. The testing team finds the bugs and sends the software back for fixing. The developers then complete some coding to deliver the fixes and send the software again for testing. This process is repeated until the bugs found in the product are at an acceptable level.
WATERFALL MODEL
In this model, each phase is executed completely before the beginning of the next phase; hence the phases do not overlap in the waterfall model. This model is used for small projects. In this model, feedback is taken after each phase to ensure that the project is on the right path. The testing part starts only after the development is completed.
COMMUNICATION
The software development starts with the communication between customer and
developer.
PLANNING
It consists of complete estimation and scheduling for project development.
MODELING
Modeling consists of complete requirement analysis and the design of the project, i.e., the algorithm, flowchart, etc. The algorithm is the step-by-step solution of the problem, and the flow chart shows the complete flow diagram of the program.
CONSTRUCTION
Construction consists of code generation and the testing part. The coding part implements the design details using an appropriate programming language. Testing checks whether the flow of coding is correct or not, and also checks that the program provides the desired output.
DEPLOYMENT
The deployment step consists of delivering the product to the customer and taking feedback from them. If the customer wants some corrections or demands additional capabilities, then changes are made to improve the quality of the software.
ADVANTAGES OF WATERFALL MODEL
• The waterfall model is simple and easy to understand, implement, and use.
• All the requirements are known at the beginning of the project, hence it is easy to
manage.
• It avoids overlapping of phases because each phase is completed at once.
• This model works for small projects where the requirements are easily understood.
• This model is preferred for those projects where the quality is more important as
compared to the cost of the project.
DISADVANTAGES OF THE WATERFALL MODEL
• This model is not good for complex and object oriented projects.
• In this model, changes are not permitted, so it is not fit for projects with a moderate to high risk of changing requirements.
• It is a poor model for long duration projects.
• The problems with this model are not uncovered until software testing.
• The amount of risk is high.
V MODEL
The V model is known as the Verification and Validation model. This model is an extension of the waterfall model. In the life cycle of the V-shaped model, processes are executed sequentially. Every phase completes its execution before the execution of the next phase begins.
REQUIREMENTS
The requirements of the product are understood from the customer's point of view to know their exact requirements and expectations. The acceptance test design planning is completed at the requirement stage because the business requirements are used as an input for acceptance testing.
SYSTEM DESIGN
In system design, the high-level design of the software is constructed. In this phase, we study how the requirements can be implemented and what technology is to be used.
ARCHITECTURE DESIGN
In architecture design, the software architecture is created on the basis of the high-level design. The module relationships and dependencies between modules, architectural diagrams, database tables, and technology details are finalized in this phase.
MODULE DESIGN
In the module design phase, we separately design every module or software component and finalize all the methods, classes, interfaces, data types, etc. Unit tests are designed in the module design phase based on the internal module designs. Unit tests are a vital part of any development process; they help to remove the maximum number of faults and errors at an early stage.
CODING PHASE
The actual coding of the modules designed in the design phase is taken up in the coding phase. On the basis of the system and architecture requirements, we decide on the most suitable programming language. The coding is carried out on the basis of coding guidelines and standards.
ADVANTAGES OF V-MODEL
• The V-model is easy and simple to use.
• Many testing activities, i.e., test planning and test design, are executed at the start, which saves time.
• Defects are found at an early stage of the project; hence there is less chance of errors occurring in the final phase of testing.
• This model is suitable for small projects where the requirements are easily understood.
DISADVANTAGES OF V-MODEL
• The V-model is not suitable for large and complex projects.
• If the requirements are not stable, this model is not suitable.
INCREMENTAL MODEL
The incremental model combines elements of the waterfall model applied in an iterative fashion. The first increment in this model is generally a core product. Each increment builds on the product and submits it to the customer for suggesting any modifications. The next increment implements the customer's suggestions and adds additional requirements to the previous increment. This process is repeated until the product is completed.
COMMUNICATION
The software development starts with the communication between customer and
developer.
PLANNING
It consists of complete estimation and scheduling for project development.
MODELING
Modeling consists of complete requirement analysis and the design of the project, like the algorithm, flowchart, etc. The algorithm is a step-by-step solution of the problem, and the flow chart shows a complete flow diagram of the program.
CONSTRUCTION
Construction consists of code generation and the testing part. The coding part implements the design details using an appropriate programming language. Testing checks whether the flow of coding is correct or not, and also checks that the program provides the desired output.
DEPLOYMENT
The deployment step consists of delivering the product to the customer and taking feedback from them. If the customer wants some corrections or demands additional capabilities, then changes are made to improve the quality of the software.
ADVANTAGES OF INCREMENTAL MODEL
• This model is flexible because the cost of development is low and initial product
delivery is faster.
• It is easier to test and debug in the smaller iteration.
• The working software is generated quickly in the software life cycle.
• The customers can respond to its functionalities after every increment.
DISADVANTAGES OF THE INCREMENTAL MODEL
• The cost of the final product may exceed the cost initially estimated.
• This model requires very clear and complete planning.
• The planning of the design is required before the whole system is broken down into smaller increments.
• The customer's demands for additional functionality after every increment can cause problems in the system architecture.
RAD MODEL
RAD stands for the Rapid Application Development model. Using the RAD model, a software product is developed in a short period of time. The initial activity starts with communication between the customer and the developer. Planning depends upon the initial requirements, and then the requirements are divided into groups. Planning is important so that separate teams can work together on different modules.
The RAD model consist of following phases:
1) Business Modeling
2) Data modeling
3) Process modeling
4) Application generation
5) Testing and turnover
ADVANTAGES OF RAD MODEL
• The process of application development and delivery is fast.
• This model is flexible if any changes are required.
• Reviews are taken from the clients at the start of development, hence there is less chance of missing requirements.
DISADVANTAGES OF RAD MODEL
• The feedback from the user is required at every development phase.
• This model is not a good choice for long term and large projects.
AGILE MODEL
The Agile model is a combination of the incremental and iterative process models. This model focuses on user satisfaction, which is achieved through quick delivery of working software. The Agile model breaks the product into individual iterations. Every iteration includes cross-functional teams working on different areas such as planning, requirements analysis, design, coding, unit testing and acceptance testing. At the end of an iteration, a working product is shown to the users.
ADVANTAGES OF AGILE MODEL
• Customers are satisfied because of quick and continuous delivery of useful software.
• Working software is delivered regularly.
• There is face-to-face interaction between the customers, developers and testers, which is the best form of communication.
• Even late changes in the requirements can be incorporated in the software.
DISADVANTAGES OF AGILE MODEL
• It totally depends on customer interaction. If the customer is not clear about their requirements, the development team can go in the wrong direction.
• Documentation is minimal, so the transfer of knowledge to new team members is challenging.
ITERATIVE MODEL
In the iterative model, a large software development effort is divided into smaller chunks, and smaller parts of the software are implemented which can then be reviewed to recognize further requirements. This process is repeated to generate a new version of the software in each cycle of the model. With every iteration, a development module goes through the phases of requirements, design, implementation and testing. These phases are repeated in sequence in the iterative model.
REQUIREMENT PHASE
In this phase, the requirements for the software are gathered and analyzed. This phase generates a complete and final specification of the requirements.
DESIGN PHASE
In this phase, a software solution is designed to meet the requirements; it can be a new design or an extension of an earlier design.
IMPLEMENTATION AND TEST PHASE
In this phase, the software is coded and the code is tested.
EVALUATION
In this phase, the software is evaluated, the current requirements are reviewed, and changes and additions to the requirements are suggested.
ADVANTAGES OF AN ITERATIVE MODEL
• Produces working software rapidly and early in the software life cycle.
• This model is easy to test and debug in a smaller iteration.
• It is less costly to change scope and requirements.
DISADVANTAGES OF AN ITERATIVE MODEL
• The system architecture is costly.
• This model is not suitable for smaller projects.
SPIRAL MODEL
The spiral model is a combination of the prototype and sequential (waterfall) models. This model was developed by Boehm. It is used for large software projects and is a risk-driven process model. Every phase in the spiral model starts with a design goal and ends with a review by the client. The development team in this model begins with a small set of requirements and goes through each development phase for that set of requirements. The team adds functionality in every spiral until the application is ready.
ADVANTAGES OF SPIRAL MODEL
• It reduces high amount of risk.
• It is good for large and critical projects.
• It gives strong approval and documentation control.
• In spiral model, the software is produced early in the life cycle process.
DISADVANTAGES OF SPIRAL MODEL
• It can be costly to develop a software model.
• It is not used for small projects.
PROTOTYPE MODEL
A prototype is defined as a first or preliminary form from which other forms are copied or derived. The prototype model starts with a set of general objectives for the software; it does not identify detailed requirements such as inputs and outputs. A prototype is a working model of the software with limited functionality. In this model, working programs are produced quickly.
The different phases of Prototyping model are:
1) Communication
2) Quick design
3) Modeling and quick design
4) Construction of prototype
5) Deployment, delivery, feedback
ADVANTAGES OF PROTOTYPING MODEL
• Users are actively involved in the development process of this model.
• The development process gives the user the best platform to understand the system.
• Errors are detected earlier in this model.
• It gives quick user feedback, leading to better solutions.
• It identifies missing functionality easily. It also identifies confusing or difficult functions.
DISADVANTAGES OF PROTOTYPING MODEL
• Client involvement is high, and it is not always considered by the developer.
• It can be a slow process because it takes more time for development.
• Too many changes can disturb the rhythm of the development team.
• The prototype may be thrown away if the users are confused by it.
THE WATERFALL MODEL
It is a sequential design process in which progress is seen as flowing steadily
downwards. Phases in waterfall model:
(i) Requirements Specification
(ii) Software Design
(iii) Implementation
(iv) Testing
DATAFLOW MODEL
It is a diagrammatic representation of the flow and exchange of information within a system.
EVOLUTIONARY DEVELOPMENT MODEL
Following activities are considered in this method:
(i) Specification
(ii) Development
(iii) Validation
ROLE / ACTION MODEL
This model describes the roles of the people involved in the software process and the activities for which they are responsible.
NEED FOR PROCESS
The software development team must decide the process model that is to be used for
software product development and then the entire team must adhere to it. This is necessary
because the software product development can then be done systematically. Each team
member will understand what the next activity is and how to do it. Thus the process model brings definiteness and discipline to the overall development process. Every process model
consists of definite entry and exit criteria for each phase. Hence the transition of the product
through various phases is definite.
If a process model is not followed for software development, then any team member can perform any software development activity; this will ultimately cause chaos and the software project will definitely fail. Without using a process model, it is also difficult to monitor the progress of the software product. Thus the process model plays an important role in software engineering.
ADVANTAGES OR DISADVANTAGES:
There are several advantages and disadvantages to different software development
methodologies, such as:
UNIT-II
REQUIREMENTS VALIDATION
This step involves checking that the requirements are complete, consistent, and
accurate. It also involves checking that the requirements are testable and that they meet the
needs and expectations of stakeholders.
REQUIREMENTS MANAGEMENT
This step involves managing the requirements throughout the software development
life cycle, including tracking and controlling changes, and ensuring that the requirements are
still valid and relevant.
Tools involved in requirement engineering:
• Observation report
• Questionnaire (survey, poll)
• Use cases
• User stories
• Requirement workshop
• Mind mapping
• Role playing
• Prototyping
The requirements gathered are broadly classified into the following types:
• Functional requirements
• Non-functional requirements
• Domain requirements
FUNCTIONAL REQUIREMENTS
These are the requirements that the end user specifically demands as basic facilities
that the system should offer. It can be a calculation, data manipulation, business process, user
interaction, or any other specific functionality which defines what function a system is likely
to perform. All these functionalities need to be necessarily incorporated into the system as a
part of the contract. These are represented or stated in the form of input to be given to the
system, the operation performed and the output expected. They are basically the requirements
stated by the user which one can see directly in the final product, unlike the non-functional
requirements. For example, in a hospital management system, a doctor should be able to
retrieve the information of his patients.
Each high-level functional requirement may involve several interactions or dialogues
between the system and the outside world. In order to accurately describe the functional
requirements, all scenarios must be enumerated. There are many ways of expressing
functional requirements e.g., natural language, a structured or formatted language with no
rigorous syntax and formal specification language with proper syntax. Functional
Requirements in Software Engineering are also called Functional Specification.
NON-FUNCTIONAL REQUIREMENTS
These are basically the quality constraints that the system must satisfy according to the project contract. Non-functional requirements are not related to the system's functionality; rather, they define how the system should perform. The priority or extent to which these factors are implemented varies from one project to another. They are also called non-behavioral requirements. They basically deal with issues like:
• Portability
• Security
• Maintainability
• Reliability
• Scalability
• Performance
• Reusability
• Flexibility
DOMAIN REQUIREMENTS
Domain requirements are the requirements which are characteristic of a particular
category or domain of projects. Domain requirements can be functional or nonfunctional.
Domain requirements engineering is a continuous process of proactively defining the
requirements for all foreseeable applications to be developed in the software product line.
The basic functions that a system of a specific domain must necessarily exhibit come under
this category. For instance, in an academic software that maintains records of a school or
college, the functionality of being able to access the list of faculty and list of students of each
grade is a domain requirement. These requirements are therefore identified from that domain
model and are not user specific.
2.4 FEASIBILITY STUDIES
LEGAL FEASIBILITY
In a legal feasibility study, the project is analyzed from a legality point of view. This includes analyzing barriers to the legal implementation of the project, data protection acts or social media laws, project certificates, licenses, copyright, etc. Overall, it can be said that a legal feasibility study is a study to know whether the proposed project conforms to legal and ethical requirements.
SCHEDULE FEASIBILITY
In a schedule feasibility study, mainly the timelines/deadlines for the proposed project are analyzed, including how much time the teams will take to complete the final project. This has a great impact on the organization, as the purpose of the project may fail if it cannot be completed on time.
Requirements elicitation is the process of gathering and defining the requirements for
a software system. The goal of requirements elicitation is to ensure that the software
development process is based on a clear and comprehensive understanding of the customer’s
needs and requirements. Requirements elicitation involves the identification, collection,
analysis, and refinement of the requirements for a software system. It is a critical part of the
software development life cycle and is typically performed at the beginning of the project.
Requirements elicitation involves stakeholders from different areas of the organization,
including business owners, end-users, and technical experts. The output of the requirements
elicitation process is a set of clear, concise, and well-defined requirements that serve as the
basis for the design and development of the software system.
Requirements elicitation is perhaps the most difficult, most error-prone, and most communication-intensive part of software development. It can be successful only through an effective customer-developer partnership. It is needed to know what the users really need.
REQUIREMENTS ELICITATION ACTIVITIES
Requirements elicitation includes the following activities; a few of them are listed below:
• Knowledge of the overall area where the system is applied.
• The details of the precise customer problem where the system is going to be applied must be understood.
• Interaction of the system with external requirements.
• Detailed investigation of user needs.
• Define the constraints for system development.
(i) Draw the context diagram: The context diagram is a simple model that defines
the boundaries and interfaces of the proposed systems with the external world. It identifies
the entities outside the proposed system that interact with the system.
(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably acts as part of
the system they say they want.
We can use their feedback to continuously modify the prototype until the customer is satisfied. Hence, the prototype helps the client to visualize the proposed system and
increase the understanding of the requirements. When developers and users are not sure about
some of the elements, a prototype may help both the parties to take a final decision.
(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships between
them. The graphical view may help to find incorrect, inconsistent, missing, and superfluous
requirements. Such models include the Data Flow diagram, Entity-Relationship diagram,
Data Dictionaries, State-transition diagrams, etc.
(iv) Finalise the requirements: After modeling the requirements, we will have a
better understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analyzed.
Elicitation and analysis activities have provided better insight into the system. Now we
finalize the analyzed requirements, and the next step is to document these requirements in a
prescribed format.
2.7 REQUIREMENTS DOCUMENTATION
2.8 REQUIREMENTS VALIDATION
difficult to implement and should be reconsidered. Developing tests from the user
requirements before any code is written is an integral part of test-driven development.
• Improve likelihood of delivering the right product, within budget and schedule with
the required quality.
The importance of requirements management is intensified, however, when building
complex or highly regulated products. This is because more time and budget are invested in
development. The cost of getting it wrong — be it money, time, or reputation — is too great
to risk. Hence, developers in regulated industries, or those who develop products with a
lengthy list of needs and requirements, tend to rely on requirements management tools, like
Jama Connect® to keep their projects in order.
(2) Definition of the responses of the software to all realizable classes of input data in all available categories of situations.
(3) Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflicts in the SRS.
4. Unambiguousness: SRS is unambiguous when every fixed requirement has only
one interpretation. This suggests that each element is uniquely interpreted. In case a term used has multiple meanings, the requirements document should clarify its intended meaning so that the SRS is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and
stability if each requirement in it has an identifier to indicate either the significance or
stability of that particular requirement.
Typically, all requirements are not equally important. Some prerequisites may be
essential, especially for life-critical applications, while others may be desirable. Each element
should be identified to make these differences clear and explicit. Another way to rank
requirements is to distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly incorporating changes to the system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-effective process to determine whether the final software meets those requirements. The requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear
and if it facilitates the referencing of each condition in future development or enhancement
documentation.
PROPERTIES OF A GOOD SRS DOCUMENT
The essential properties of a good SRS document are the following:
Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Verbose and irrelevant descriptions decrease readability and also
increase error possibilities.
Structured: It should be well-structured. A well-structured document is simple to
understand and modify. In practice, the SRS document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time.
Therefore, to make the modifications to the SRS document easy, it is vital to make the report
well-structured.
Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the external
behavior of the system and not discuss the implementation issues. The SRS report should
view the system to be developed as a black box and should define the externally visible
behavior of the system. For this reason, the SRS report is also known as the black-box
specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to unwanted events. These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document,
should be correct. This means that it should be possible to decide whether or not requirements
have been met in an implementation.
SEMANTIC DOMAINS
Formal techniques can have considerably different semantic domains. Abstract data type specification languages are used to specify algebras, theories, and programs. Programming languages are used to specify functions from input to output values.
SATISFACTION RELATION
Given the model of a system, it is essential to determine whether an element of the
semantic domain satisfies the specifications. This satisfaction is determined by using a
homomorphism known as the semantic abstraction function. The semantic abstraction
function maps the elements of the semantic domain into equivalent classes.
2.12 AXIOMATIC SPECIFICATION
In the axiomatic specification of a system, first-order logic is used to write the pre- and post-conditions that specify the operations of the system in the form of axioms. The pre-
conditions capture the conditions that must be satisfied before an operation can successfully
be invoked.
In essence, the pre-conditions capture the requirements on the input parameters of a function. The post-conditions are the conditions that must be satisfied when a function completes; post-conditions are essentially constraints on the results produced for the function execution to be considered successful.
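To make the idea concrete, the following is a minimal sketch (in Python, with a hypothetical search function that is not part of any standard) of how pre- and post-conditions can be attached to an operation as assertions:

# Hypothetical axiomatic-style specification of a search operation.
# Pre-condition : the input list must be sorted in ascending order.
# Post-condition: the returned index locates the key, or is -1 if the key is absent.
def search(sorted_list, key):
    # Pre-condition on the input parameters.
    assert all(sorted_list[i] <= sorted_list[i + 1]
               for i in range(len(sorted_list) - 1)), "pre: list must be sorted"
    index = -1
    for position, value in enumerate(sorted_list):
        if value == key:
            index = position
            break
    # Post-condition: a constraint on the result for the execution to be considered successful.
    assert index == -1 or sorted_list[index] == key, "post: index must locate the key"
    return index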
Computer-aided software engineering (CASE) is the implementation of computer-
facilitated tools and methods in software development. CASE is used to ensure high-quality
and defect-free software. CASE ensures a check-pointed and disciplined approach and helps
designers, developers, testers, managers, and others to see the project milestones during
development.
CASE can also help as a warehouse for documents related to projects, like business
plans, requirements, and design specifications. One of the major advantages of using CASE is
the delivery of the final product, which is more likely to meet real-world requirements as it
ensures that customers remain part of the process. CASE covers a wide set of labor-saving tools that are used in software development. It generates a framework for organizing projects and helps in enhancing productivity. There was more interest in the concept of CASE
tools years ago, but less so today, as the tools have morphed into different functions, often in
reaction to software developer needs. The concept of CASE also received a heavy dose of
criticism after its release.
CASE TOOLS
The essential idea of CASE tools is that in-built programs can help to analyze
developing systems in order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became part of the software lexicon, and big companies like IBM were using these kinds of tools to help create software.
ADVANTAGES OF THE CASE APPROACH
• As the special emphasis is placed on the redesign as well as testing, the servicing cost
of a product over its expected lifetime is considerably reduced.
• The overall quality of the product is improved as an organized approach is undertaken
during the process of development.
• Chances to meet real-world requirements are more likely and easier with a computer-
aided software engineering approach.
• CASE indirectly provides an organization with a competitive advantage by helping
ensure the development of high-quality products.
• It provides better documentation.
• It improves accuracy.
• It provides intangible benefits.
• It reduces lifetime maintenance.
• It provides an opportunity to non-programmers.
• It impacts the style of working of the company.
• It reduces the drudgery in software engineer’s work.
• It increases the speed of processing.
• It is easy to program software.
DISADVANTAGES OF THE CASE APPROACH
Cost: Using a CASE tool is very costly. Most firms engaged in software development
on a small scale do not invest in CASE tools because they think that the benefit of CASE is
justifiable only in the development of large systems.
Learning Curve: In most cases, programmers’ productivity may fall in the initial
phase of implementation, because users need time to learn the technology. Many consultants offer training and on-site services that can be important to accelerate the learning curve and to support the development and use of the CASE tools.
Tool Mix: It is important to build an appropriate tool mix to gain a cost advantage. CASE integration and data integration across all platforms are extremely important.
The main objective of this project is to provide results to the students in a simple way. The students can get results through the college/institution website using their roll numbers. By analyzing the result status and applying the standard calculation followed by the University, the results are displayed with individual scores and the equivalent percentage. The system is intended for the student. The student can log in through their login id and password to check their respective results. This can be achieved with web development technologies like HTML, CSS, PHP, JavaScript and the MySQL database. The faculty can view the overall performance of the students in the semester examinations subject-wise. The visualization of the overall results according to the subject (the percentage of pass and fail in a particular subject) can be done using Fusion Charts.
Student Result Management System is a web-based application that mainly focuses on providing the results to the student and the faculty. The students check their respective results using their University-registered recognition ids, along with their grades and percentage for that particular semester. Students accessing their results through the college site is more convenient, and the faculty can easily analyze the passes and failures for a particular subject. The system is divided into three modules: Student, Faculty and Administrator. The student, using his roll number, can view his results, and the faculty, using the joining year and the subject name, can view the analysis of pass and failure counts in the selected subject. The administrator uploads the results file to the database by converting the file from PDF format (.pdf) to SQL format (.sql). The admin is provided with privileges to modify the student results by updating them when there are changes due to supplementary or revaluation examinations. The update of any current score is to be done by the administrator.
The system maintains the results of each semester of a student and a visualization of results that conveys the overall students' performance in a particular subject. The main objective of this system is to provide the student a convenient and simpler way to check their results and to evaluate the total aggregate and the percentage for the available semester results. It assists the faculty and students in analyzing their own and the whole class's performance in a subject. The scope of this project is to solve the issues of long waits and manual calculation of grades and percentages in different semesters. Providing the results on an institutional website provides easier access to the results for the student. The graphs for overall performance in every subject make the analysis task simpler.
Software Quality Management ensures that the required level of quality is achieved by suggesting improvements to the product development process. SQA aims to develop a culture within the team, and quality is seen as everyone's responsibility.
Software quality management should be independent of project management, so that adherence to cost and schedule does not compromise quality. It directly affects the process quality and indirectly affects the product quality.
SOFTWARE QUALITY MANAGEMENT (SQM) TECHNIQUES
QUALITY ASSURANCE
The actual construction of the software program is part of the SQM quality assurance process. Product performance will be verified along the way to confirm that all standards are met with excellent SQM. Throughout the process, testers will audit and gather data.
QUALITY PLANNING
Quality planning is required before the start of software development. During quality planning, testers establish goals and objectives for the program and a strategic plan to assist in achieving those goals. The most crucial part of SQM is quality planning, since it creates a solid blueprint for the remainder of the process to follow.
QUALITY CONTROL
Quality control is the last phase in the SQM process and is where testing takes place.
At this time, testers look for flaws, assess functionality, and so forth. Depending on the
findings, you may need to return to development to smooth out kinks and make some minor
final tweaks. A software quality management strategy helps ensure that you're adhering to all
industry standards and that your end-user receives a well-designed, high-quality product.
ADVANTAGES OF QUALITY MANAGEMENT
• The development team's productivity has increased.
• Test data and defect monitoring are more exact and up to date, resulting in improved
product quality.
• Reduced rework costs because flaws are discovered sooner in each level's software
project development life cycle.
• Confidence in current product management and future product development is
increased.
• Client trust is encouraged.
• Positive user experience.
• Profits are increased.
• Customer Satisfaction is boosted.
ISO 9001
This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
ISO 9002
This standard applies to those organizations which do not design products but are only
involved in the production. Examples of these category industries contain steel and car
manufacturing industries that buy the product and plants designs from external sources and
are engaged in only manufacturing those products. Therefore, ISO 9002 does not apply to
software development organizations.
ISO 9003
This standard applies to organizations that are involved only in the installation and
testing of the products. For example, Gas companies.
The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's software development process. The model defines a five-level evolutionary path of increasingly organized and consistently more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DOD).
Capability Maturity Model is used as a benchmark to measure the maturity of an
organization's software process.
METHODS OF SEICMM
CAPABILITY EVALUATION
Capability evaluation provides a way to assess the software process capability of an
organization. The results of capability evaluation indicate the likely contractor performance if
the contractor is awarded a work. Therefore, the results of the software process capability
assessment can be used to select a contractor.
SOFTWARE PROCESS ASSESSMENT
Software process assessment is used by an organization to improve its process
capability. Thus, this type of evaluation is for purely internal use. SEI CMM categorized
software development industries into the following five maturity levels. The various levels of
SEI CMM have been designed so that it is easy for an organization to build its quality system
starting from scratch slowly.
LEVEL 1: INITIAL
Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since the software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.
LEVEL 2: REPEATABLE
At this level, the fundamental project management practices like tracking cost and
schedule are established. Size and cost estimation methods, like function point analysis,
COCOMO, etc. are used.
LEVEL 3: DEFINED
At this level, the methods for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Although the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.
LEVEL 4: MANAGED
At this level, the focus is on software metrics. Two kinds of metrics are collected. Product metrics measure the characteristics of the product being developed, such as its size, reliability, time complexity, understandability, etc. Process metrics reflect the effectiveness of the process being used, such as the average defect correction time, productivity, the average number of defects found per hour of inspection, the average number of failures detected during testing per LOC, etc.
LEVEL 5: OPTIMIZING
At this phase, process and product metrics are collected. Process and product
measurement data are evaluated for continuous process improvement.
UNIT-III
PROPOSAL WRITING
Project proposals are documents that state the objectives of the project to be developed, cost and schedule estimates, justification of why the job should be awarded, etc. Project managers with enough experience can estimate the cost and schedule timeline for software development. However, there are no set guidelines for writing project proposals; it is a skill that is developed with the experience of handling various projects.
PROJECT PLANNING AND SCHEDULING
Planning and scheduling a project involves identifying the activities, milestones and deliverable versions of the final project that can be reviewed. The plan must include a detailed timeline for the activities, milestones and deliverables. Planning and scheduling are also steps of software development.
ESTIMATING PROJECT COST
Project cost involves a summation of costs of resources required that may include
software, hardware, personnel, accommodation and other concerns involved in the
development and support of the software product in question.
MONITORING AND REVIEWING
Project monitoring is a continuous activity throughout the development of the product, where the project manager compares actual progress and cost reports with planned progress and cost reports in a timely manner. Most organizations have some formal mechanism to monitor progress, but skilled project managers can also get a clear idea of the project's progress by discussing it with team members.
BUILDING TEAMS AND EVALUATION
Project managers must use their experience to interview new personnel for the development team, or have to use progress reports of existing members to include them in the development team for a particular project. The target is to find and include the most skilled personnel in the fields required, but that also depends on various factors like the amount of investment in the project, availability, etc.
REPORTING
Project managers are responsible for reporting project progress to the client and to their own organization. In addition, they might also have to write concise, coherent documents that abstract critical information from detailed reports.
to accommodate the customer's necessities. The most significant factor is that the underlying technology changes and advances so frequently and rapidly that the experience gained on one product may not be applicable to another one. All such business and environmental constraints bring risk into software development; hence, it is essential to manage software projects efficiently.
SOFTWARE PROJECT MANAGER
The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard. They monitor progress to check that development is on time and within budget. Project planning must incorporate the major issues like size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand:
• Scope of the work to be completed
• Risk analysis
• The resources required
• The project to be accomplished
• The record of being followed
Software project planning starts before technical work starts. The various steps of the planning activities are:
The size is the crucial parameter for the estimation of other activities. Resource requirements are estimated based on cost and development time. The project schedule may prove to be very useful for controlling and monitoring the progress of the project, and it is dependent on the resources and development time.
metric has been slowly gaining popularity. One of the important advantages of using the
function point metric is that it can be used to easily estimate the size of a software product
directly from the problem specification. This is in contrast to the LOC metric, where the size
can be accurately determined only after the product has fully been developed. The conceptual
idea behind the function point metric is that the size of a software product is directly
dependent on the number of different functions or features it supports.
A software product supporting many features would certainly be of larger size than a product supporting fewer features. Each function, when invoked, reads some input data and transforms it into the corresponding output data. For example, the issue-book feature (as shown in the figure) of a Library Automation Software takes the name of the book as input and displays its location and the number of copies available. Thus, a computation of the number of input and output data values of a system gives some indication of the number of functions supported by the system.
THREE-POINT ESTIMATION
This technique involves estimating the project size using three values: optimistic,
pessimistic, and most likely. These values are then used to calculate the expected project size
using a formula such as the PERT formula.
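For instance, with purely hypothetical size estimates of 8 KLOC (optimistic), 10 KLOC (most likely) and 18 KLOC (pessimistic), the PERT formula gives:

Expected size E = (O + 4M + P) / 6 = (8 + 4 * 10 + 18) / 6 = 66 / 6 = 11 KLOC
Standard deviation SD = (P - O) / 6 = (18 - 8) / 6 ≈ 1.67 KLOC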
FUNCTION POINTS
This technique involves estimating the project size based on the functionality
provided by the software. Function points consider factors such as inputs, outputs, inquiries,
and files to arrive at the project size estimate.
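A minimal sketch of an unadjusted function point count is shown below; the component counts are hypothetical, and the weights used are the commonly quoted average-complexity weights (the full method also applies a value adjustment factor, which is omitted here):

# Unadjusted function point (UFP) count from hypothetical component counts.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 3,
    "external_interface_files": 2,
}

ufp = sum(counts[kind] * weight for kind, weight in AVERAGE_WEIGHTS.items())
print("Unadjusted function points:", ufp)  # 12*4 + 8*5 + 5*4 + 3*10 + 2*7 = 152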
oversight, lack of familiarity with a particular aspect of a project, personal bias, and the desire to win a contract through overly optimistic estimates.
DELPHI COST ESTIMATION
ROLE OF MEMBERS
The coordinator provides a copy of the Software Requirements Specification (SRS) document and a form for recording the cost estimate to each estimator.
ESTIMATOR
Estimators complete their individual estimates anonymously and submit them to the coordinator, mentioning any unusual characteristics of the product which have influenced the estimation. This process is iterated for several rounds.
No discussion is allowed among the estimators during the entire estimation process, because many estimators may get easily influenced by the rationale of an estimator who is more experienced or senior. After the completion of several iterations of estimation, the coordinator takes the responsibility of compiling the results and preparing the final estimate.
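The following toy sketch (with purely hypothetical figures, in person-months) illustrates how the anonymously submitted estimates that the coordinator compiles typically converge over the Delphi rounds:

# Hypothetical estimates (person-months) submitted anonymously in each round.
rounds = [
    [40, 55, 70, 90],   # round 1: wide spread
    [50, 58, 65, 72],   # round 2: estimators revise after seeing the summary
    [55, 58, 60, 63],   # round 3: estimates converge
]
for number, estimates in enumerate(rounds, start=1):
    mean = sum(estimates) / len(estimates)
    spread = max(estimates) - min(estimates)
    print(f"Round {number}: mean = {mean:.1f} PM, spread = {spread} PM")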
• Organic
• Semidetached
• Embedded
ORGANIC
A development project can be considered to be of the organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.
SEMIDETACHED
A development project can be considered to be of the semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems but may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include developing a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.
EMBEDDED
A development project is considered to be of the embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist. Examples: ATM, air traffic control.
For the three product categories, Boehm provides a different set of expressions to predict the effort (in units of person-months) and the development time from the size estimate in KLOC (kilo lines of code). The effort estimation takes into account the productivity loss due to holidays, weekly offs, coffee breaks, etc.
According to Boehm, software cost estimation should be done through three stages:
• Basic Model
• Intermediate Model
• Detailed Model
BASIC COCOMO MODEL
The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Where
KLOC is the estimated size of the software product expressed in kilo lines of code; a1, a2, b1, b2 are constants for each category of software product; Tdev is the estimated time to develop the software, expressed in months; and Effort is the total effort required to develop the software product, expressed in person-months (PM).
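A minimal sketch of the basic COCOMO computation is given below; the constants are the commonly published values for the three Boehm project categories, and the 32 KLOC size is purely illustrative:

# Basic COCOMO: Effort = a1 * (KLOC)^a2, Tdev = b1 * (Effort)^b2.
COCOMO_CONSTANTS = {
    # category: (a1, a2, b1, b2)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category="organic"):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * (kloc ** a2)   # person-months
    tdev = b1 * (effort ** b2)   # months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")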
INTERMEDIATE MODEL
The basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to the different software systems, which is rarely realistic. The intermediate COCOMO model recognizes this fact and refines the initial estimates obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.
Classification of Cost Drivers and their attributes:
PRODUCT ATTRIBUTES
• Required software reliability extent
• Size of the application database
• The complexity of the product
HARDWARE ATTRIBUTES
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time
PERSONNEL ATTRIBUTES
• Analyst capability
• Software engineering capability
• Applications experience
• Virtual machine experience
PROJECT ATTRIBUTES
• Use of software tools
• Application of software engineering methods
• Required development schedule
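As a small illustration of how the cost drivers listed above refine the basic estimate, the nominal effort is multiplied by an effort adjustment factor (EAF) obtained as the product of the selected drivers' multipliers. The multiplier values below are illustrative only, not the official COCOMO table values:

# Hypothetical effort adjustment factor from a few of the 15 cost drivers.
cost_driver_multipliers = {
    "required_reliability":  1.15,  # product attribute
    "memory_constraints":    1.06,  # hardware attribute
    "analyst_capability":    0.86,  # personnel attribute
    "use_of_software_tools": 0.91,  # project attribute
}

eaf = 1.0
for multiplier in cost_driver_multipliers.values():
    eaf *= multiplier

nominal_effort = 91.3                   # e.g. from the basic model sketch above (PM)
adjusted_effort = nominal_effort * eaf  # intermediate COCOMO estimate (PM)
print(f"EAF = {eaf:.2f}, adjusted effort = {adjusted_effort:.1f} PM")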
DETAILED COCOMO MODEL
Detailed COCOMO incorporates all the qualities of the standard version with an assessment of the cost drivers' effect on each step of the software engineering process.
The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules, and then we apply COCOMO to the various modules to estimate the effort and then sum the efforts.
The Six phases of detailed COCOMO are:
• Planning and requirements
• System structure
• Complete structure
• Module code and test
• Integration and test
• Cost Constructive model
STATEMENT
A computer program is an implementation of an algorithm considered to be a
collection of tokens which can be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols. These symbols are called tokens. The basic measures are
n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrence of operands.
Size of the program can be expressed as N = N1 + N2.
PROGRAM VOLUME (V)
The unit of measurement of volume is the common unit for size, "bits". It is the actual size of a program if a uniform binary encoding for the vocabulary is used.
V = N * log2(n)
PROGRAM LEVEL (L)
The value of L ranges between zero and one, with L=1 representing a program written
at the highest possible level (i.e., with minimum size).
L = V* / V
PROGRAM DIFFICULTY
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)
PROGRAMMING EFFORT (E)
The unit of measurement of E is elementary mental discriminations.
E = V / L = D * V
ESTIMATED PROGRAM LENGTH
According to Halstead, the first hypothesis of software science is that the length of a well-structured program is a function only of the number of unique operators and operands.
N = N1 + N2
The estimated program length is denoted by N^ and is given by:
N^ = n1 * log2(n1) + n2 * log2(n2)
POTENTIAL MINIMUM VOLUME
The potential minimum volume V* is defined as the volume of the shortest possible
program in which a problem can be coded:
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters.
SIZE OF VOCABULARY (n)
The size of the vocabulary of a program, which consists of the number of unique
tokens used to build the program, is defined as n = n1 + n2,
where n = vocabulary of a program,
n1 = number of unique operators,
n2 = number of unique operands.
LANGUAGE LEVEL
The language level indicates the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is written in a low-level
programming language. For example, it is easier to program in Pascal than in Assembler.
lambda = L * V* = L^2 * V (equivalently V / D^2, since L = 1 / D)
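The short sketch below computes the measures defined above from operator/operand counts. The counts used here are made-up inputs; a real tool would obtain them by tokenizing the source code.

import math

def halstead(n1, n2, N1, N2, n2_star=None):
    n = n1 + n2                                        # vocabulary
    N = N1 + N2                                        # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
    V = N * math.log2(n)                               # program volume
    D = (n1 / 2) * (N2 / n2)                           # difficulty
    L = 1 / D                                          # program level
    E = D * V                                          # effort (= V / L)
    result = {"n": n, "N": N, "N^": N_hat, "V": V, "D": D, "L": L, "E": E}
    if n2_star is not None:                            # potential minimum volume, if n2* is known
        result["V*"] = (2 + n2_star) * math.log2(2 + n2_star)
    return result

print(halstead(n1=10, n2=7, N1=25, N2=18, n2_star=3))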
ADVANTAGES
• Predicts error rate.
• Predicts maintenance effort
• Simple to calculate
• Measures overall quality
• Used for any language
DISADVANTAGES
• Depends on complete code
• Complexity increases as program level decreases
• Difficult to compute
Once the effort required to develop a software product has been determined, it is necessary to
determine the staffing requirement for the project. Putnam first studied the problem of
what should be a proper staffing pattern for software projects. He extended the work of
Norden who had earlier investigated the staffing pattern of research and development (R&D)
type of projects. In order to appreciate the staffing pattern of software projects, Norden’s
and Putnam’s results must be understood.
NORDEN’S WORK
Norden studied the staffing patterns of several R & D projects. He found that the
staffing pattern can be approximated by the Rayleigh distribution curve. Norden represented
the Rayleigh curve by the following equation:
E = (K / td^2) * t * e^(-t^2 / (2 * td^2))
Where E is the effort required at time t. E is an indication of the number of
engineers (or the staffing level) at any particular time during the duration of the project, K is
the area under the curve, and td is the time at which the curve attains its maximum value. It
must be remembered that the results of Norden are applicable to general R & D projects and
were not meant to model the staffing pattern of software development projects.
PUTNAM'S WORK
Putnam studied the problem of staffing of software projects and found that software
development has characteristics very similar to other R & D projects studied by Norden, and
that the Rayleigh-Norden curve can be used to relate the number of delivered lines of code to
the effort and the time required to develop the project. By analyzing a large number of army
projects, Putnam derived the following expression:
L = Ck * K^(1/3) * td^(4/3)
The various terms of this expression are as follows: K is the total effort expended (in PM) in
the product development and L is the product size in KLOC. td corresponds to the time of
system and integration testing; therefore, td can be approximately considered as the time
required to develop the software. Ck is the state of technology constant and reflects
constraints that impede the progress of the programmer. Typical values are Ck = 2 for a poor
development environment (no methodology, poor documentation and review, etc.), Ck = 8 for a
good software development environment (software engineering principles are adhered to), and
Ck = 11 for an excellent environment (in addition to following software engineering
principles, automated tools and techniques are used). The exact value of Ck for a specific
project can be computed from the historical data of the organization developing it.
Putnam suggested that optimal staff build-up on a project should follow the Rayleigh
curve. Only a small number of engineers are needed at the beginning of a project to carry out
planning and specification tasks. As the project progresses and more detailed work is
required, the number of engineers reaches a peak. After implementation and unit testing, the
number of project staff falls.
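The following sketch evaluates the Rayleigh-Norden staffing curve E(t) = (K / td^2) * t * e^(-t^2 / (2 * td^2)), showing the staffing level rising to a peak at t = td and then tapering off. The values of K and td are assumed example inputs.

import math

def rayleigh_staffing(t, K=60.0, td=12.0):
    """Staffing level (effort rate) at month t, for total effort K and peak time td."""
    return (K / td**2) * t * math.exp(-t**2 / (2 * td**2))

for month in range(0, 25, 3):
    print(f"month {month:2d}: ~{rayleigh_staffing(month):.2f} engineers")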
3.10 SCHEDULING
• Determine the critical path.
A critical path is the chain of activities that determines the duration of the project.
The first step in scheduling a software project involves identifying all the tasks
necessary to complete the project. A good knowledge of the intricacies of the project
and the development process helps the managers to effectively identify the important
tasks of the project. Next, the large tasks are broken down into a logical set of small
activities which would be assigned to different engineers.
The work breakdown structure formalism helps the manager to break down the tasks
systematically. After the project manager has broken down the tasks and created the
work breakdown structure, he has to find the dependency among the activities.
Dependency among the different activities determines the order in which the different
activities would be carried out. If an activity A requires the results of another activity
B, then activity A must be scheduled after activity B. In general, the task
dependencies define a partial ordering among tasks, i.e. each task may precede a
subset of other tasks, but some tasks might not have any precedence ordering defined
between them (such tasks are called concurrent tasks).
The dependency among the activities is represented in the form of an activity
network. Once the activity network representation has been worked out, resources are
allocated to each activity. Resource allocation is typically done using a Gantt chart.
After resource allocation is done, a PERT chart representation is developed. The PERT
chart representation is suitable for program monitoring and control. For task scheduling, the
project manager needs to decompose the project tasks into a set of activities. The time frame
when each activity is to be performed is to be determined. The end of each activity is called a
milestone. The project manager tracks the progress of a project by monitoring the timely
completion of the milestones. If he observes that the milestones start getting delayed, then
he has to carefully control the activities, so that the overall deadline can still be met.
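The sketch below illustrates the scheduling idea just described: activities with dependencies form an activity network, and the chain of activities that determines the project duration is the critical path. The activities, durations, and dependencies are hypothetical examples.

activities = {          # name: (duration in days, list of predecessor activities)
    "spec":     (4, []),
    "design":   (6, ["spec"]),
    "code":     (8, ["design"]),
    "testplan": (3, ["spec"]),
    "test":     (5, ["code", "testplan"]),
}

earliest_finish = {}
def finish(task):
    """Earliest finish of a task = its duration + latest finish among its predecessors."""
    if task not in earliest_finish:
        dur, preds = activities[task]
        earliest_finish[task] = dur + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_end = max(finish(t) for t in activities)

# Walk back from the last-finishing task through the predecessor that determines
# each start time; this chain is the critical path.
path, current = [], max(activities, key=lambda t: earliest_finish[t])
while current is not None:
    path.append(current)
    _, preds = activities[current]
    current = max(preds, key=lambda p: earliest_finish[p]) if preds else None

print("Project duration:", project_end, "days")
print("Critical path:", " -> ".join(reversed(path)))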
• Democratic team organization
HIERARCHICAL TEAM ORGANIZATION
In this organization, people are arranged at different levels following a tree structure. People
at the bottom level generally possess the most detailed knowledge about the system. People at
higher levels have a broader appreciation of the whole project.
MATRIX TEAM ORGANIZATION
In a matrix team organization, people are divided into specialist groups. Each group has a
manager.
EGOLESS TEAM ORGANIZATION
Egoless programming is a state of mind in which programmers are expected to
separate themselves from their product. In this team organization, goals are set and decisions
are made by group consensus. Here, group leadership rotates based on the tasks to be
performed and the differing abilities of members. In this organization, work products are
discussed openly and are freely examined by all team members. There is a major risk with such
an organization if the team is composed of inexperienced or incompetent members.
DEMOCRATIC TEAM ORGANIZATION
It is quite similar to the egoless team organization, but one member is the team leader
with some responsibilities:
• Coordination
• Final decisions, when consensus cannot be reached.
ADVANTAGES
• Each member can contribute to decisions.
• Members can learn from each other.
• Improved job satisfaction.
DISADVANTAGES
• Communication overhead increased.
• Need for compatibility of members.
• Less individual responsibility and authority.
3.12 STAFFING
Staffing plans are important part of project resource management. Every project will
require resources for executing project activities. There will be a need for both manpower
resources and physical resources. The resource requirement for each activity will be
estimated. The resources will be acquired during project execution as per the schedule.
Planning for resources, acquiring resources, developing team and managing team are
the important activities to be carried out as part of project resource management. A resource
management plan will contain all the necessary guidelines for project resource management.
A staffing management plan will also be part of the overall resource management plan.
The staffing management plan, which is part of the overall resource management plan, will
specifically focus on the manpower aspects of the project. Staff are the most important part
of a project. It is important to select and acquire the right staff with the right skills at the right time.
CONCLUSION
Staffing is the most important part of project management. It is the staff who will
actually complete the project work. Staff will also consume the majority of project cost.
Hence it is extremely important to be very precise in planning and acquiring the right staff at
the right time for the right duration. It is also important to keep the staff members motivated
and ensure their safety and well-being. The staffing management plan helps capture all these
aspects precisely for effective staff management for the project.
A software project can be exposed to a large variety of risks. In order to be able
to systematically identify the significant risks which might affect a software project, it is
essential to classify risks into different classes. The project manager can then check which
risks from each class are relevant to the project.
There are three main classifications of risks which can affect a software project:
• Project risks
• Technical risks
• Business risks
PROJECT RISKS
Project risks concern different forms of budgetary, schedule, personnel, resource, and
customer-related problems. A vital project risk is schedule slippage. Since software is
intangible, it is very difficult to monitor and control a software project. It is very difficult to
control something which cannot be seen. For any manufacturing project, such as the
manufacturing of cars, the project manager can see the product taking shape.
TECHNICAL RISKS
Technical risks concern potential design, implementation, interfacing, testing, and
maintenance issues. They also include ambiguous specifications, incomplete specifications,
changing specifications, technical uncertainty, and technical obsolescence. Most technical
risks appear due to the development team's insufficient knowledge about the project.
BUSINESS RISKS
This type of risk includes the risk of building an excellent product that no one wants,
losing budgetary or personnel commitments, etc.
OTHER RISK CATEGORIES
KNOWN RISKS
Those risks that can be uncovered after careful assessment of the project plan, the
business and technical environment in which the project is being developed, and other reliable
information sources (e.g., an unrealistic delivery date).
PREDICTABLE RISKS
Those risks that are extrapolated from past project experience (e.g., staff turnover).
UNPREDICTABLE RISKS
Those risks that can and do occur, but are extremely tough to identify in advance.
When we develop software, the product (software) undergoes many changes in its
maintenance phase; we need to handle these changes effectively. Several individuals
(programmers) work together to achieve these common goals. These individuals produce several
work products (SCIs), e.g., intermediate versions of modules, test data used during
debugging, or parts of the final product. The elements that comprise all information produced as
a part of the software process are collectively called a software configuration.
As software development progresses, the number of Software Configuration Items
(SCIs) grows rapidly. A configuration of the product refers not only to the product's
constituent parts but also to a particular version of each component.
Configuration Management (CM) is a technique of identifying, organizing, and controlling
modifications to software being built by a programming team.
IMPORTANCE OF SCM
It is useful in controlling and managing access to various SCIs, e.g., by
preventing two members of a team from checking out the same component for modification
at the same time.
It provides the tools to ensure that changes are being properly implemented. It has the
capability of describing and storing the various constituents of the software. SCM is used to
keep a system in a consistent state by automatically producing derived versions upon
modification of a component.
SCM PROCESS
It uses tools which ensure that the necessary changes have been implemented
adequately in the appropriate components. The SCM process defines a number of tasks:
• Identification of objects in the software configuration
• Version Control
• Change Control
• Configuration Audit
• Status Reporting
SCM PROCESS MODELS
IDENTIFICATION
Basic Object: Unit of Text created by a software engineer during analysis, design,
code, or test.
Aggregate Object: A collection of basic objects and other aggregate objects. A
design specification is an aggregate object. Each object has a set of distinct characteristics
that identify it uniquely: a name, a description, a list of resources, and a "realization."
The interrelationships between configuration objects can be described with a Module
Interconnection Language (MIL).
VERSION CONTROL
Version control combines procedures and tools to handle the different versions of
configuration objects that are generated during the software process.
Clemm defines version control in the context of SCM: Configuration management allows
a user to specify the alternative configuration of the software system through the selection of
appropriate versions. This is supported by associating attributes with each software version,
and then allowing a configuration to be specified [and constructed] by describing the set of
desired attributes.
CHANGE CONTROL
James Bach describes change control in the context of SCM as follows: Change control is
vital, but the forces that make it essential also make it annoying. We worry about change
because a small confusion in the code can create a big failure in the product. But it can also
fix a significant failure or enable incredible new capabilities. We worry about change because
a single rogue developer could sink the project, yet brilliant ideas originate in the minds of
those rogues, and
a burdensome change control process could effectively discourage them from doing
creative work. A change request is submitted and evaluated to assess technical merit,
potential side effects, the overall impact on other configuration objects and system functions,
and the projected cost of the change.
The results of the evaluations are presented as a change report, which is used by a
change control authority (CCA) - a person or a group who makes a final decision on the
status and priority of the change. The "check-in" and "check-out" process implements two
necessary elements of change control-access control and synchronization control.
Access Control governs which software engineers have the authority to access and modify a
particular configuration object. Synchronization Control helps to ensure that parallel changes,
performed by two different people, don't overwrite one another.
CONFIGURATION AUDIT
SCM audits verify that the software product satisfies the baseline requirements
and ensure that what is built is what is delivered. SCM audits also ensure that traceability is
maintained between all CIs and that all work requests are associated with one or more CI
modifications. SCM audits are the "watchdogs" that ensure that the integrity of the project's
scope is preserved.
STATUS REPORTING
Configuration status reporting (sometimes also called status accounting) provides
accurate status and current configuration data to developers, testers, end users, customers and
stakeholders through admin guides, user guides, FAQs, release notes, installation guides,
configuration guides, etc.
• Process Tailoring
• Quality Assurance Plan
• Configuration Management Plan
• Validation and Verification
• System Testing Plan
• Delivery, Installation, and Maintenance Plan
Software development is a relatively new stream in world business, and there is very
little experience in building software products. Most software products are tailored to
fit the customer's requirements. Most significantly, the underlying technology changes
and advances so frequently and rapidly that experience gained on one product may not be
applicable to the next. All such business and environmental constraints bring risk to
software development; hence, it is essential to manage software projects efficiently.
UNIT-IV
4.2 OUTCOME OF A DESIGN PROCESS
1) Architectural Design
2) Preliminary (High-Level) Design
3) Detailed Design
ARCHITECTURAL DESIGN
The architectural design is the highest-level, most abstract version of the system. It
identifies the software or application as a system with many components interacting with
each other. At this level, the designers get an idea of the proposed solution domain.
PRELIMINARY (HIGH-LEVEL) DESIGN
High-level design means identification of different modules and the control
relationships among them and the definition of the interfaces among these modules. The
outcome of the high-level design is called software architecture or the program structure.
Many different types of notations have been used to represent a high-level design. A popular
way is to use a tree-like diagram called the Structure Chart to represent the control hierarchy
in the high-level design.
DETAILED DESIGN
During detailed design, the data structure and the algorithms of the different modules
are designed. The outcome of the detailed design stage is usually known as the module-
specification document.
For good quality software to be produced, the software design must also be of good
quality. Now, the matter of concern is how the quality of good software design is
measured. This is done by observing certain factors in software design. These factors are:
• Correctness
• Understandability
• Efficiency
• Maintainability
CORRECTNESS
First of all, the design of any software is evaluated for its correctness. The evaluators
check the software for every kind of input and action and observe the results that the software
will produce according to the proposed design. If the results are correct for every input, the
design is accepted and is considered that the software produced according to this design will
function correctly.
UNDERSTANDABILITY
The software design should be understandable so that the developers do not find any
difficulty to understand it. Good software design should be self- explanatory. This is because
there are hundreds and thousands of developers that develop different modules of the
software, and it would be very time consuming to explain each design to each developer. So,
if the design is easy and self- explanatory, it would be easy for the developers to implement it
and build the same software that is represented in the design.
EFFICIENCY
The software design must be efficient. The efficiency of the software can be estimated
from the design phase itself, because if the design is describing software that is not efficient
and useful, then the developed software would also stand on the same level of efficiency.
Hence, for efficient and good quality software to be developed, care must be taken in the
designing phase itself.
MAINTAINABILITY
The software design must be in such a way that modifications can be easily made in
it. This is because every software needs time to time modifications and maintenance. So, the
design of the software must also be able to bear such changes. It should not be the case that
after making some modifications the other features of the software start misbehaving. Any
change made in the software design must not affect the other available features, and if the
features are getting affected, then they must be handled properly.
Cohesion and coupling in software engineering are two important concepts that
describe the relationship between the modules or components in a software system.
COHESION
Cohesion refers to the degree to which the responsibilities of a single module are
related and focused. It measures how strongly the internal elements of a module are connected
& work together to achieve a common purpose. High cohesion promotes reusability,
maintainability, and understandability of the code. The higher the degree of cohesion, better
the quality of the software.
TYPES
1. FUNCTIONAL COHESION
Functional cohesion occurs when the elements within a module perform a single,
well-defined task or function. All the elements within the module contribute to achieving
the same objective. This type of cohesion is considered the strongest and most desirable.
EXAMPLES
Reading transaction records, cosine angle computation, seat assignment to an airline
passenger, etc
2. SEQUENTIAL COHESION
Sequential cohesion occurs when the elements within a module are arranged in a specific
sequence, with the output of one element serving as the input for the next element. The
elements are executed in a step-by-step manner to achieve a particular functionality.
EXAMPLES
Cross-validated records and formatting of a module, raw records usage, formatting of
raw records, cross-validation of fields in raw records, etc.
3. COMMUNICATIONAL COHESION
Communicational cohesion occurs when the elements within a module operate on the
same input data or share data through parameters. The elements within the module work
together by passing data to each other. It is weaker than sequential cohesion.
EXAMPLE
Usage of a customer account number, finding the name of the customer, the loan
balance of the customer, etc.
4. PROCEDURAL COHESION
Procedural cohesion occurs when the elements within a module are grouped based on
a specific sequence of actions or steps. This type of cohesion is weaker than the
communicational cohesion.
EXAMPLES
Read, write, edit of the module, zero padding to the numeric fields, returning records,
etc.
5. TEMPORAL COHESION
Temporal cohesion occurs when the elements within a module are executed at the
same time or within the same timeframe. It is considered weaker than the procedural
cohesion.
EXAMPLES
Setting the counter to zero, opening the student file, clearing the variables of error messages,
initializing the array, etc.
6. LOGICAL COHESION
Logical cohesion occurs when the elements within a module are logically related but
do not fit into any of the other cohesion types. It is not as strong as the other cohesion types.
EXAMPLES
When a component reads inputs from tape, disk, and network, etc.
7. COINCIDENTAL COHESION
Coincidental cohesion occurs when the elements are not related to each other.
EXAMPLES
Module for miscellaneous functions, customer record usage, displaying of customer
records, calculation of total sales, and reading the transaction record, etc.
COUPLING
Coupling in software engineering refers to the degree of interdependence &
connection between modules or components within a software system. Two modules are said
to have high coupling if they are closely connected. In simple words, coupling is not just
about modules, but the connection between modules and the degree of interaction or
interdependence between the two modules. If two modules exchange a large amount of data,
then they are highly interdependent. If the connection between components is strong, we
speak of strongly coupled modules; when the connection is weak, we speak of
loosely coupled modules.
TYPES OF COUPLING
1. DATA COUPLING
Data coupling occurs when modules share data through parameters or arguments.
Each module maintains its own data and does not directly access or modify the data of other
modules. This type of coupling promotes encapsulation and module independence.
2. STAMP COUPLING
Stamp coupling is a weaker form of coupling where modules share a composite data
structure, but not all the elements are used by each module.As the data and elements are pre-
organized and well-placed beforehand, no junk or unused data is shared or passed between
the two coupling modules which improves the efficiency of the modules.
3. CONTROL COUPLING
Control coupling occurs when one module controls the behavior of another module.
This type of coupling implies that one module has knowledge of the internal workings and
decisions of another module, which makes the code more difficult to maintain.
4. EXTERNAL COUPLING
External coupling measures the degree to which the system relies on external entities
to fulfill its functionality or interact with the external environment. Low external coupling -
Changes in the external entities have little impact on internal implementation of the system.
5. COMMON COUPLING
Common coupling occurs when two or more modules in the system share global data.
The modules can access & manipulate the same global variables & the data structures.
6. CONTENT COUPLING
Content coupling occurs when one module directly accesses or modifies the content
of another module. This type of coupling is strong and undesirable since it tightly couples the
modules, making them highly dependent on each other's implementation.
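A small illustrative contrast between two of the coupling types described above follows. The functions and variables are hypothetical examples, not a prescribed design.

# Data coupling: modules communicate only through parameters and return values.
def compute_interest(balance: float, rate: float) -> float:
    return balance * rate

# Common coupling: modules read and write a shared global, so every module that
# touches ACCOUNT depends on every other module that does.
ACCOUNT = {"balance": 1000.0, "rate": 0.05}

def compute_interest_global() -> None:
    ACCOUNT["balance"] += ACCOUNT["balance"] * ACCOUNT["rate"]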
A good system design strategy is to organize the program modules in such a way
that they are easy to develop and also easy to change later. Structured design methods help
developers to deal with the size and complexity of programs. Analysts generate instructions for the
developers about how code should be composed and how pieces of code should fit together to
form a program.
To design a system, there are two possible approaches:
• Top-down Approach
• Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lowest-level details and moves
up the hierarchy. This approach is suitable in the case of an existing system.
DESIGN NOTATIONS
Design Notations are primarily meant to be used during the process of design and are
used to represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:
For the smallest units of data elements, the data dictionary lists their name and their
type. A data dictionary plays a significant role in any software development process because
of the following reasons:
A Data dictionary provides a standard language for all relevant information for use by
engineers working in a project. A consistent vocabulary for data items is essential since, in
large projects, different engineers of the project tend to use different terms to refer to the
same data, which unnecessarily causes confusion. The data dictionary provides the analyst
with a means to determine the definition of various data structures in terms of their
component elements.
STRUCTURED CHARTS
It partitions a system into black boxes. A black box is a system whose functionality is
known to the user without knowledge of its internal design.
PSEUDO-CODE
Pseudo-code notations can be used in both the preliminary and detailed design phases. Using
pseudo-code, the designer describes system characteristics using short, concise, English-
language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
• Objects: All entities involved in the solution design are known as objects. For
example, person, banks, company, and users are considered as objects. Every entity
has some attributes associated with it and has some methods to perform on the
attributes.
• Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
• Messages: Objects communicate by message passing. Messages consist of the
identity of the target object, the name of the requested operation, and any other information
needed to perform the function. Messages are often implemented as procedure or
function calls.
• Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is
the removal of the irrelevant and the amplification of the essentials.
• Encapsulation: Encapsulation is also called an information hiding concept. The data
and operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods
from the outside world.
• Inheritance: OOD allows similar classes to stack up in a hierarchical manner where
the lower or sub-classes can import, implement, and re-use allowed variables and
functions from their immediate superclasses.This property of OOD is called an
inheritance. This makes it easier to define a specific class and to create generalized
classes from specific ones.
• Polymorphism: OOD languages provide a mechanism where methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, which allows a single interface to perform functions for different
types. Depending upon how the service is invoked, the respective portion of the code
gets executed. A short sketch illustrating these concepts follows.
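The sketch below ties together the object-oriented design concepts listed above. The Shape, Circle, and Square classes are hypothetical examples, not part of any prescribed design.

import math

class Shape:                                 # a class: generalized description of an object
    def __init__(self, name):
        self._name = name                    # encapsulation: state is kept inside the object
    def area(self):                          # a message every shape understands
        raise NotImplementedError

class Circle(Shape):                         # inheritance: Circle reuses Shape
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius
    def area(self):                          # polymorphism: same message, circle-specific behaviour
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side
    def area(self):
        return self._side ** 2

for shape in (Circle(2), Square(3)):         # objects: instances of classes
    print(shape.area())                      # the same call executes different code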
Detailed design deals with the implementation part of what is seen as a system and its
sub-systems in the previous two designs. It is more detailed towards modules and their
implementations. It defines logical structure of each module and their interfaces to
communicate with other modules.
The detailed design may include:
• Decomposition of major system components into program units.
• Allocation of functional responsibilities to units.
• User interfaces
• Unit states and state changes
• Data and control interaction between units
• Data packaging and implementation, including issues of scope and visibility of
program elements
• Algorithms and data structures
design helps meet functional and non-functional requirements.
• Document the review process.
model of the software system. The SDD is used as the primary medium for
communicating software design information
CONSIDERATIONS FOR PRODUCING AN SDD
This clause provides information to be considered before producing an SDD. How the
SDD fits into the software life cycle, where it fits, and why it is used are discussed.
Software life cycle
The life cycle of a software system is normally defined as the period of time that starts
when a software product is conceived and ends when the product is no longer available for
use. The life cycle approach is an effective engineering management tool and provides a
model for a context within which to discuss the preparation and use of the SDD. While it is
beyond the scope of this document to prescribe a particular standard life cycle, a typical cycle
will be used to define such a context for the SDD.
This cycle is based on IEEE Std 610.12-1990 and consists of a concept phase,
requirements phase, design phase, implementation phase, test phase, installation and checkout
phase, operation and maintenance phase, and retirement phase.
SDD within the life cycle
For both new software systems and existing systems under maintenance, it is
important to ensure that the design and implementation used for a software system satisfy the
requirements driving that system. The minimum documentation required to do this is defined
in IEEE Std 730-1998. The SDD is one of these required products. It records the result of the
design processes that are carried out during the design phase.
Purpose of an SDD
The SDD shows how the software system will be structured to satisfy the
requirements identified in the software requirements specification IEEE Std 830-1998. It is a
translation of requirements into a description of the software structure, software components,
interfaces, and data necessary for the implementation phase. In essence, the SDD becomes a
detailed blueprint for the implementation activity. In a complete SDD, each requirement must
be traceable to one or more design entities.
UNIT-V
Software testing is the process of evaluating and verifying that a software product or
application does what it is supposed to do. The benefits of testing include preventing bugs,
reducing development costs and improving performance.
Software testing is a type of investigation to find out if there is any defect or error
present in the software, so that the errors can be reduced or removed to increase the quality of
the software, and to check whether it fulfills the specified requirements or not.
The process of investigating and checking a program to find whether there is an error
or not, and whether it fulfills the requirements or not, is called testing. When the number of errors
found during testing is high, it indicates that the testing was good and is a sign of a good
test case. Finding an unknown error that wasn't discovered yet is a sign of a successful and
good test case.
It is the process of finding the yet undiscovered bugs/errors in the program. The user
tests the program and finds the errors and bugs which could hamper the normal functioning of the
program. Sometimes, changes made to the software, such as new functionality or changes
to existing functionalities, introduce the possibility of error. They also increase
the risk that the software will not work as per requirements or will not fulfill its intended
purpose.
Testing reduces this risk. It is through the process of testing and rectification, that we can
identify the issues and resolve them.
It is a type of software testing which is used to verify the functionality of the software
application, i.e., whether each function is working according to the requirement specification. In
functional testing, each function is tested by giving it input values, determining the output, and
verifying the actual output against the expected value. Functional testing is performed as black-
box testing, which is intended to confirm that the functionality of an application or system
behaves as we expect. It is done to verify the functionality of the application.
Functional testing is also called black-box testing because it focuses on the application
specification rather than the actual code. The tester has to test only the program rather than the
system.
GOAL OF FUNCTIONAL TESTING
The purpose of functional testing is to check the primary entry functions, the
essential usable functions, and the flow of the screen GUI. Functional testing also checks the
error messages so that the user can easily navigate throughout the application.
PROCESS OF FUNCTIONAL TESTING
Testers follow the following steps in the functional testing:
• The tester verifies the requirement specification of the software application.
• After analyzing the requirement specification, the tester makes a test plan.
• After planning the tests, the tester designs the test cases.
• After designing the test cases, the tester prepares a traceability matrix document.
• The tester executes the test case design.
• Coverage analysis is performed to examine the tested area of the application.
• Defect management is carried out to manage defect resolution.
TYPES OF FUNCTIONAL TESTING
Unit Testing: Unit testing is a type of software testing where an individual unit or
component of the software is tested. Unit testing examines the different parts of the application;
functional testing is also achieved through unit testing, because unit testing ensures each module is
working correctly.
The developer does unit testing. Unit testing is done in the development phase of the
application.
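A minimal unit-test sketch using Python's built-in unittest module is shown below. The discount() function is a hypothetical unit under test, not part of any particular system.

import unittest

def discount(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(discount(200, 25), 150)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(200, 150)

if __name__ == "__main__":
    unittest.main()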
Smoke Testing: Smoke testing is a form of functional testing that covers only the
basic (feature-level) functionality of the system. Smoke testing is also known as "Build Verification
Testing." Smoke testing aims to ensure that the most important functions work.
For example, smoke testing verifies that the application launches successfully and
checks that the GUI is responsive.
Sanity Testing: Sanity testing checks that the entire high-level business scenario is
working correctly. Sanity testing is done to check the functionality/bugs fixed. Sanity testing
is a little more advanced than smoke testing.
Regression Testing: This type of testing concentrates on making sure that code
changes do not have side effects on the existing functionality of the system. When a bug
arises in the system and is fixed, regression testing checks whether all parts of the system still
work, i.e., whether the fix has had any impact on the rest of the system.
Integration Testing: Integration testing combines individual units and tests them as a
group. The purpose of this testing is to expose faults in the interaction between the
integrated units. Developers and testers perform integration testing.
White box testing: White box testing is also known as clear box testing, code-based
testing, structural testing, extensive testing, glass box testing, and transparent box testing. It is
a software testing method in which the internal structure/design/implementation being tested is
known to the tester.
Black box testing: It is also known as behavioral testing. In this testing, the internal
structure/design/implementation is not known to the tester. This type of testing is functional
testing. It is called black-box testing because the tester cannot see the internal code.
User acceptance testing: It is a type of testing performed by the client to certify the
system according to the requirements. User acceptance testing is the final phase of testing before
releasing the software to the market or production environment. UAT is a kind of black-box
testing in which two or more end-users are involved.
Structural testing is a type of software testing which uses the internal design of the
software for testing or in other words the software testing which is performed by the team
which knows the development phase of the software, is known as structural testing.
Structural testing is basically related to the internal design and implementation of the
software i.e. it involves the development team members in the testing team. It basically tests
different aspects of the software according to its types. Structural testing is just the opposite
of behavioral testing.
CONTROL FLOW TESTING
Control flow testing is a type of structural testing that uses the program's control
flow as a model. The entire code, design and structure of the software have to be known for
this type of testing. Often this type of testing is used by the developers to test their own code
and implementation. This method is used to test the logic of the code so that required result
can be obtained.
DATA FLOW TESTING
It uses the control flow graph to explore the unreasonable things that can happen to
data. The detection of data flow anomalies is based on the associations between values and
variables, for example variables that are used without being initialized, or variables that are
initialized but never used.
SLICE BASED TESTING
It was originally proposed by Weiser and Gallagher for software maintenance. It is
useful for software debugging, software maintenance, program understanding and
quantification of functional cohesion. It divides the program into different slices and tests that
slice which can majorly affect the entire software.
MUTATION TESTING
Mutation Testing is a type of Software Testing that is performed to design new
software tests and also to evaluate the quality of already existing software tests. Mutation testing
involves modifying a program in small ways. It focuses on helping the tester develop
effective tests or locate weaknesses in the test data used for the program.
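The hand-worked sketch below illustrates the idea behind mutation testing: make a small change (a mutant) and check whether the existing tests notice it. The function and test cases are hypothetical examples.

def is_adult(age):
    return age >= 18          # original program

def is_adult_mutant(age):
    return age > 18           # mutant: ">=" changed to ">"

def run_tests(fn):
    # A weak suite without a boundary case passes for both versions,
    # so it would NOT kill the mutant.
    weak_ok = fn(30) is True and fn(10) is False
    # Adding the boundary case age == 18 kills the mutant.
    strong_ok = weak_ok and fn(18) is True
    return weak_ok, strong_ok

print("original:", run_tests(is_adult))         # (True, True)
print("mutant:  ", run_tests(is_adult_mutant))  # (True, False) -> mutant killed by the stronger suite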
ADVANTAGES OF STRUCTURAL TESTING
• It provides thorough testing of the software.
• It helps in finding out defects at an early stage.
• It helps in elimination of dead code.
• It is not time consuming as it is mostly automated.
DISADVANTAGES OF STRUCTURAL TESTING
• It requires knowledge of the code to perform test.
• It requires training in the tool used for testing.
• Sometimes it is expensive.
ACCEPTANCE TESTING
It is a kind of testing conducted to ensure that the requirements of the users are
fulfilled prior to delivery and that the software works correctly in the user's working
environment. While performing the software testing, following Testing principles must be
applied by every software engineer.
All tests should be traceable to customer requirements. Test planning, i.e., how tests
will be conducted, should be done long before the beginning of testing. The Pareto principle
can be applied to software testing: 80% of all errors identified during testing will likely be
traceable to 20% of all program modules. Testing should begin "in the small" and progress
toward testing "in the large". Exhaustive testing, which simply means testing all possible
combinations of data, is not possible. To be most effective, testing should be conducted by an
independent third party.
have developed the product right." And it also checks that the software meets the business
needs of the client.
BENEFITS OF VALIDATION TESTING
• We check whether the developed product is right.
• Validation is also known as dynamic testing.
• Validation includes testing like functional testing, system testing, integration, and
User acceptance testing.
• It is a process of checking the software during or at the end of the development cycle
to decide whether the software follows the specified business requirements.
• Quality control comes under validation testing.
• In validation testing, the execution of code happens.
• In the validation testing, we can find those bugs, which are not caught in the
verification process.
• Validation testing is executed by the testing team to test the application.
• After verification testing, validation testing takes place.
• In this type of testing, we can validate whether the user accepts the product or not.
ADVANTAGES OF VALIDATION TESTING
Any bugs missed during verification will be detected while running validation tests. If
specifications were incorrect and inadequate, validation tests would reveal their inefficacy.
Teams will have to spend time and effort fixing them, but it will prevent a bad product from
hitting the market. Validation tests ensure that the product matches and adheres to customer
demands, preferences, and expectations under different conditions (slow connectivity, low
battery, etc.) These tests are also required to ensure the software functions flawlessly across
different browser-device-OS combinations. In other words, it authenticates software for cross
browser compatibility.
Regression Testing is a type of testing in the software development cycle that runs
after every change to ensure that the change introduces no unintended breaks. Regression
testing addresses a common issue that developers face — the emergence of old bugs with the
introduction of new changes.
IMPORTANCE OF REGRESSION TESTING
Typically, it involves writing a test for a known bug and re-running this test after
every change to the code base. This aims to immediately identify any change that
reintroduces a bug. With a push for agility in software development, there is an emphasis on
adopting an iterative process – push new code often and break things if necessary. Regression
testing ensures that with frequent pushes, developers do not break things that already work.
The regression testing example below emphasizes its importance.
Example
Consider an example where a software development company is working on releasing
a new product for video editing. The primary requirement is to release their first build with
only the core features. Before the product release, a regression test is conducted with 1000 test
cases to cover the basic or freemium editing functionalities. Your initial build is ready to hit
the market if it passes the tests successfully.
However, with the success of your first product making waves in the video editing
landscape, your business team comes back with a requirement to add a few new premium
features. Your product team develops those and adds them to the existing app, but with the
addition of new codes, a regression test is required again. Hence, you write 100 new test
cases to verify the functionality of those new features. However, you will have to run those
1000 old test cases already conducted to ensure essential functions haven’t been broken.
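A minimal sketch of the practice described above follows: a test is written for a previously fixed bug and re-run after every change so that the bug cannot silently return. The trim_title() function and the bug scenario are hypothetical.

def trim_title(title, max_len=20):
    """Fixed version: earlier builds failed on empty titles."""
    if not title:
        return ""
    return title[:max_len]

def test_regression_empty_title_does_not_crash():
    # Regression test for the old bug report "empty clip title breaks export".
    assert trim_title("") == ""

def test_long_title_is_truncated():
    assert trim_title("a" * 50) == "a" * 20

test_regression_empty_title_does_not_crash()
test_long_title_is_truncated()
print("regression suite passed")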
5.9 ART OF DEBUGGING
• Fix & Validate: The last stage is the fix and validation that emphasizes fixing the
bugs followed by running all the test scripts to check whether they pass.
DEBUGGING STRATEGIES
For a better understanding of a system, it is necessary to study the system in depth. This
makes it easier for the debugger to construct different representations of the system that
needs to be debugged. Backward analysis traces the program backwards from the
location where the failure message occurred in order to determine the region of faulty code. It is
necessary to study the region of the defect to understand its cause. In forward
analysis, the program is tracked in the forward direction using breakpoints
or print statements placed at different points in the program. It emphasizes those regions
where the wrong outputs are obtained.
To check and fix similar kinds of problems, it is recommended to utilize past
experience. The success rate of this approach is directly proportional to the experience of the debugger.
Software testing tools are required for the betterment of the application or software.
That's why we have so many tools available in the market, some of which are open-source and
some paid. The significant difference between open-source and paid tools is that open-
source tools have limited features, whereas paid or commercial tools have no limitation on
features. The selection of a tool depends on the user's requirements, whether it is paid
or free.
The software testing tools can be categorized, depending on the licensing (paid or
commercial, open-source), technology usage, type of testing, and so on.
The software testing tools can be divided into the following:
TEST MANAGEMENT TOOL
Test management tools are used to keep track of all the testing activity, fast data
analysis, manage manual and automation test cases, various environments, and plan and
maintain manual testing as well.
BUG TRACKING TOOL
The defect tracking tool is used to keep track of the bug fixes and ensure the delivery
of a quality product. This tool can help us to find the bugs in the testing stage so that we can
get the defect-free data in the production server. With the help of these tools, the end-users
can report bugs and issues directly from their applications.
AUTOMATION TESTING TOOL
This type of tool is used to enhance the productivity of the product and improve the
accuracy. We can reduce the time and cost of the application by writing some test scripts in
any programming language.
PERFORMANCE TESTING TOOL
Performance or Load testing tools are used to check the load, stability, and scalability
of the application. When a large number of users use the application at the same time, the
application may crash because of the immense load; to get through this type of issue,
we need load testing tools.
CROSS-BROWSER TESTING TOOL
This type of tool is used when we need to compare a web application in the various
web browser platforms. It is an important part when we are developing a project. With the
help of these tools, we will ensure the consistent behavior of the application in multiple
devices, browsers, and platforms.
INTEGRATION TESTING TOOL
This type of tool is used to test the interfaces between modules and to find the critical
bugs that happen because of the interaction between different modules, ensuring that all the
modules are working as per the client's requirements.
UNIT TESTING TOOL
This testing tool is used to help the programmers to improve their code quality, and
with the help of these tools, they can reduce the time of code and the overall cost of the
software.
MOBILE/ANDROID TESTING TOOL
We can use this type of tool when we are testing any mobile application. Some of the
tools are open-source, and some of the tools are licensed. Each tool has its functionality and
features.
GUI TESTING TOOL
GUI testing tool is used to test the User interface of the application because a proper
GUI (graphical user interface) is always useful to grab the user's attention. These types of tools
help to find the loopholes in the application's design and make it better.
SECURITY TESTING TOOL
The security testing tool is used to ensure the security of the software and to check for
security leaks. If any security loophole is found, it can be fixed at an early stage of
the product. We need this type of tool when the software contains encoded security data
which should not be accessible to unauthorized users.
5.11 METRICS
5.12 RELIABILITY ESTIMATION
In estimating the reliability of software, two main paths can be taken. One is to analyze
the code by means of static analyzers (see e.g. [Khoshgoftaar and Munson 1990]), model
checkers, theorem provers, compilers, etc. Another is to analyze the software from an
external point of view; here, the main sources of input are testing, expert judgement, and
operational data. Software testing, including unit, integration and system testing, provides much
data about the program's reliability if test coverage is good, tests are systematically conducted,
and at least part of the testing is done under realistic operating conditions.
Expert judgement preferably comes from experts that have not participated in the development of
the software (to avoid myopia), but are experienced in software development (including
architecture and code analysis issues), have access to the software's source code and
documentation, may carry out static analysis, and have a good understanding of the
requirements and operating conditions of the software. Operational data (failure reports, etc.),
if available in high quality from a sufficiently long time period, provides a
solid ground for stochastic analysis of reliability.
Reliability estimates of software can be used in a number of ways, among them
• To estimate the reliability of a total system of which the software is a part
• To allocate resources during development and maintenance
• To estimate maintenance costs.
We consider the problem of arriving at a reliability estimate for a piece of software.
The assumed end use of the reliability estimates is risk and reliability analysis of software
based systems, e.g. nuclear power plants. We assume that informed expert opinion exists
about a software's reliability, and try to update this reliability estimate with information
contained in operational data records. Probability estimates can be arrived at in three,
mutually non-exclusive ways:
Subjective degrees of belief tell how much trust we (or the experts, or whoever made
the estimate) put in the event actually occurring. This is often the best way of arriving at
probability estimates when data is scarce or nonexistent, or when the situation involves very
complex elements.
Propensities tell how probable the event is based on some formal model(s). The model
is usually mathematical or physical. For example, after making a model for a die, we may
arrive at the conclusion that the probability of the die arriving on each of its faces is equal.
This is often the best way to arrive at probability estimates when data is scarce but detailed
modeling is feasible.
Frequencies tell how many times (relatively speaking) the event actually occurred in a
dataset. This is often the best way to arrive at probability estimates when data is abundant. In
Bayesian statistics, there are two kinds of probabilities of an event. Prior probability
expresses the probability estimate before a new event (measurement, arrival of data or such)
takes place. Posterior probability expresses the probability estimate after the event.
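A small sketch of the prior-to-posterior update just described is shown below, using a Beta prior for a demand-based failure probability updated with binomial operational data. The prior parameters and the observed counts are assumed example values.

def beta_update(alpha_prior, beta_prior, failures, demands):
    """Return the posterior Beta parameters and the posterior mean failure probability."""
    alpha_post = alpha_prior + failures
    beta_post = beta_prior + (demands - failures)
    mean = alpha_post / (alpha_post + beta_post)
    return alpha_post, beta_post, mean

# Prior belief from expert judgement: roughly 1 failure in 100 demands.
# Operational data: 2 failures observed in 1000 demands.
a, b, p = beta_update(alpha_prior=1, beta_prior=99, failures=2, demands=1000)
print(f"posterior Beta({a}, {b}), mean failure probability ~ {p:.4f}")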
Software maintenance is a part of the Software Development Life Cycle. Its primary
goal is to modify and update software application after delivery to correct errors and to
improve performance. Software is a model of the real world. When the real world changes,
the software requires alteration wherever possible. Software maintenance is an inclusive
activity that includes error corrections, enhancement of capabilities, deletion of obsolete
capabilities, and optimization.
NEED FOR MAINTENANCE
• Correct errors
• Change in user requirement with time
• Changing hardware/software requirements
• To improve system efficiency
• To optimize the code to run faster
• To modify the components
• To reduce any unwanted side effects.
Thus the maintenance is required to ensure that the system continues to satisfy user
requirements.
TYPES OF SOFTWARE MAINTENANCE
1. CORRECTIVE MAINTENANCE
Corrective maintenance aims to correct any remaining errors, regardless of where they
occur: in the specifications, design, coding, testing, or documentation.
2. ADAPTIVE MAINTENANCE
It contains modifying the software to match changes in the ever-changing
environment.
3. PREVENTIVE MAINTENANCE
It is the process by which we prevent our system from being obsolete. It involves the
concept of reengineering & reverse engineering in which an old system with old technology
is re-engineered using new technology. This maintenance prevents the system from dying
out.
4. PERFECTIVE MAINTENANCE
It involves improving processing efficiency or performance, or restructuring the software
to enhance changeability. This may include enhancement of existing system functionality,
improvement in computational efficiency, etc.
5.14 SOFTWARE MAINTENANCE PROCESS
PROGRAM UNDERSTANDING
The first step consists of analyzing the program in order to understand it.
GENERATING A PARTICULAR MAINTENANCE PROPOSAL
The second phase consists of creating a particular maintenance proposal to
accomplish the implementation of the maintenance goals.
RIPPLE EFFECT
The third step consists of accounting for all of the ripple effects as a consequence of
program modifications.
From an implementation, one can draw a number of conclusions regarding the requirement.
Partial information can also be provided via an imperfect model.
ADVANTAGES OF REVERSE ENGINEERING:
EXPLORING EXISTING DESIGNS AND MANEUVERS
We are able to observe what already exists thanks to reverse engineering. This
includes any components, systems, or procedures that might serve communities in other ways.
Through analysis of existing products, innovation and discovery are made possible.
DISCOVERING ANY PRODUCT VULNERABILITIES
Reverse engineering helps in identifying product flaws, just as in the prior step. This
is done to protect the safety and wellbeing of the product's users. Ideally, an issue should
come up in the research stage rather than after distribution.
INSPIRING CREATIVE MINDS WITH OLD IDEAS
Last but not least, reverse engineering provides a way for innovative design. An
engineer may come upon a system during the process that could be valuable for a totally
unrelated project. This demonstrates how engineering links tasks to prior knowledge.
CREATING A RELIABLE CAD MODEL FOR FUTURE REFERENCE
The majority of reverse engineering procedures include creating a fully functional
CAD file for future use. In order to inspect the part digitally in the event that problems
develop later, a CAD file is made. This type of technology has improved product
expressiveness and engineering productivity.
RECONSTRUCTING A PRODUCT THAT IS OUTDATED
Understanding the product itself is a crucial component of redesigning an existing
product. Working out the quirks of an antiquated system with the help of reverse engineering
gives us a clear picture of it. The most crucial factor in this procedure is quality.
There are several steps involved in the software engineering process, which can vary
depending on the specific methodology being used. However, some common steps include:
PLANNING
This involves gathering and documenting requirements, establishing goals and
objectives, and creating a project plan.
ANALYSIS
This involves understanding the needs of the users and the environment in which the
software will be used, and defining the problems that the software must solve.
DESIGN
This involves creating a blueprint for the software, including the overall architecture,
user interface, and specific features and functions.
IMPLEMENTATION
This involves writing the actual code for the software and testing it to ensure that it
meets the specified requirements.
TESTING
This involves verifying that the software works as intended, and identifying and
fixing any errors or defects.
DEPLOYMENT
This involves installing the software in its intended environment and making it
available for use.
MAINTENANCE
This involves ongoing activities to ensure that the software continues to meet the
needs of the users and to address any issues that may arise.
ADVANTAGES OF SOFTWARE RE-ENGINEERING
IMPROVED CODE QUALITY
Re-engineering can help to improve the quality of the code by removing duplicated
code, simplifying complex code, and making the code more readable and maintainable.
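As a small, hypothetical illustration of this point (all names below are invented), the Python
sketch shows duplicated formatting logic in a legacy module being consolidated into one
helper during re-engineering; the external behaviour stays the same while readability and
maintainability improve.

# Hypothetical re-engineering sketch; names are illustrative only.

# Before re-engineering: the same logic is duplicated in two functions.
def student_report_old(name, marks):
    return name.strip().upper() + " : " + str(round(sum(marks) / len(marks), 2))

def employee_report_old(name, ratings):
    return name.strip().upper() + " : " + str(round(sum(ratings) / len(ratings), 2))

# After re-engineering: duplication removed, behaviour unchanged.
def _report(name, scores):
    # Single shared helper; a formatting fix now needs only one change.
    return name.strip().upper() + " : " + str(round(sum(scores) / len(scores), 2))

def student_report(name, marks):
    return _report(name, marks)

def employee_report(name, ratings):
    return _report(name, ratings)

if __name__ == "__main__":
    print(student_report(" asha ", [78, 82, 90]))    # ASHA : 83.33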
ENHANCED PERFORMANCE
Re-engineering can help to improve the performance of software systems by
optimising the code for better performance and scalability.
INCREASED MAINTAINABILITY
Re-engineering can help to make software systems more maintainable by making the
code easier to understand and modify, and by adding documentation and automated tests.
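The fragment below is a minimal sketch of the kind of automated test that is often added
during re-engineering, using Python's standard unittest module; the function under test and
its name are invented for illustration.

# Hypothetical automated test added during re-engineering.
import unittest

def apply_discount(price, percent):
    # Function under test (illustrative): reduce a price by a percentage.
    return round(price * (1 - percent / 100.0), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()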
ENHANCED FLEXIBILITY
Re-engineering can help to make software systems more flexible by making it easier
to add new features and capabilities, and by making it easier to adapt to changing
requirements and environments.
REDUCED RISK
Re-engineering can help to reduce the risk of software systems by identifying and
fixing potential problems and vulnerabilities, and by making the code more reliable and
robust.
1.BASELINE
A baseline is an approved set of Configuration Items, which has been formally reviewed and
agreed upon, and serves as the basis of further development.
2.VERSION CONTROL
Version control governs changes both before and after release to the customer. It combines
procedures and tools to manage the different versions of configuration objects created during
the software process. A variant is a different set of objects at the same revision level and
coexists with other variants. Software version control (SVC), also called revision control,
source control management, and versioning control, is a management strategy to track and
store changes to a software development document or set of files that follows the development
project from beginning to end-of-life.
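To make the idea of keeping and retrieving multiple versions concrete, here is a deliberately
simplified Python sketch (not a real version-control tool such as Git or SVN; the class and
method names are invented) that records every revision of one configuration object and can
return any earlier revision on request.

# Deliberately simplified version-tracking sketch; not a real VCS.

class VersionedObject:
    """Stores every revision of one configuration object's content."""

    def __init__(self, name, initial_content):
        self.name = name
        self._revisions = [initial_content]      # revision 0 is the baseline

    def commit(self, new_content):
        """Record a new revision and return its revision number."""
        self._revisions.append(new_content)
        return len(self._revisions) - 1

    def get(self, revision=None):
        """Return the requested revision (latest if none is given)."""
        if revision is None:
            revision = len(self._revisions) - 1
        return self._revisions[revision]

if __name__ == "__main__":
    srs = VersionedObject("SRS.txt", "v1: initial requirements")
    rev = srs.commit("v2: security requirements added")
    print(srs.get(0))       # baseline content
    print(srs.get(rev))     # latest content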
3.CHANGE CONTROL
Change management is the discipline that guides how we prepare, equip, and support
individuals to successfully adopt change in order to drive organizational success and
outcomes. A change request (CR) is submitted and evaluated to assess technical merit,
potential side effects, overall impact on other configuration objects and system functions, and
the projected cost of the change.
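As a hypothetical illustration of what a change request record might capture (the field names
and the approval rule below are invented for this sketch), the Python fragment models a CR
and a toy evaluation decision of the kind a change control board would make far more
carefully in practice.

# Hypothetical change-request record; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    cr_id: str
    description: str
    technical_merit: str                 # e.g. "high", "medium", "low"
    side_effects: str
    impacted_items: list = field(default_factory=list)  # affected configuration objects
    estimated_cost_hours: int = 0

def approve(cr, budget_hours):
    # Toy rule: approve only high-merit requests that fit within the budget.
    return cr.technical_merit == "high" and cr.estimated_cost_hours <= budget_hours

if __name__ == "__main__":
    cr = ChangeRequest(
        cr_id="CR-101",
        description="Add export-to-PDF option in the report module",
        technical_merit="high",
        side_effects="none identified",
        impacted_items=["report_module", "user_manual"],
        estimated_cost_hours=16,
    )
    print(approve(cr, budget_hours=40))   # True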
4.CONFIGURATION AUDITING
A software configuration audit complements the formal technical review of the
process and product. It assesses the modified configuration object for characteristics that are
generally not considered during the review, such as whether the change was carried out
according to SCM procedures and whether the related work products have been updated.
5.REPORTING
Providing accurate status and current configuration data to developers, testers, end
users, customers and stakeholders through admin guides, user guides, FAQs, release notes,
memos, installation guides, configuration guides, etc.
SALEM CHRISTIAN COLLEGE OF ARTS AND SCIENCE
ADVANCED SOFTWARE ENGINEERING
M.Sc-COMPUTER SCIENCE MAX.MARKS=75
MODEL QUESTION PAPER
PART-A (15*1=15)
Answer ALL Questions
1. Software is defined as ___________
a) set of programs documentation b) set of programs
c) documentation of data d) None of the mentioned
2. Who is the father of Software Engineering?
a) Margaret Hamilton b) Watts S. Humphrey
c) Alan Turing d) Boris Beizer
3. What are the features of Software Code?
a) Simplicity b) Accessibility c) Modularity d) All of the above
4. ____________ is a software development activity that is not a part of software
processes.
a) Validation b) Specification c) Development d) Dependence
5. Attributes of good software is ____________
a) Development b) Maintainability & functionality
c) Functionality d) Maintainability
6. The Cleanroom philosophy was proposed by _________
a) Linger b) Mills c) Dyer d) All of the Mentioned
7. Who proposed the spiral model?
a) Barry Boehm b) Pressman c) Royce d) IBM
8. Which of the following are CASE tools?
a) Central Repository b) Integrated Case Tools
c) Upper Case Tools d) All of the mentioned
9. Software patch is defined as ______________
a) Daily or routine Fix b) Required or Critical Fix
c) Emergency Fix d) None of the mentioned
10. Who proposed Function Points?
a) Albrecht b) Jacobson c) Boehm d) None of the mentioned
11. 4GT Model is a set of __________________
a) Programs b) CASE Tools
c) Software tools d) None of the mentioned
12. __________ is not suitable for accommodating any change?
a) RAD Model b) Waterfall Model
c) Build & Fix Model d) Prototyping Model
13. Which one of the following is not a software process quality?
a) Visibility b) Timeliness c) Productivity d) Portability
14. What is system software?
a) computer program b) Testing c) AI d) IOT
15. Quality Management is known as _______
a) SQI b) SQA c) SQM d) SQA and SQM
PART-B (2*5=10)
Answer any TWO Questions
16.Explain about Software Engineering Challenges.
17.Discuss about Types of Requirements.
18.Explain about Project Estimation Techniques.
19.Explain about Characteristics of Good Software Design.
20.Explain about Functional Testing.
PART-C (5*10=50)
Answer ALL Questions
21. a)Explain about Waterfall Model. (or)
b) Discuss about Software Engineering Approaches.
22. a)Explain about Requirement Documentation. (or)
b) Discuss about ISO 9000.
23. a)Explain about COCOMO. (or)
b) Discuss about Risk Management.
24. a)Explain about Cohesion. (or)
b) Discuss about Coupling.
25. a)Explain about Structural Testing (or)
b) Discuss about Re-engineering.
SALEM CHRISTIAN COLLEGE OF ARTS AND SCIENCE
ADVANCED SOFTWARE ENGINEERING
OBJECTIVE QUESTIONS
1. Software is defined as ___________
a) set of programs documentation b) set of programs
c) documentation of data d) None of the mentioned
2. Who is the father of Software Engineering?
a) Margaret Hamilton b) Watts S. Humphrey
c) Alan Turing d) Boris Beizer
3. What are the features of Software Code?
a) Simplicity b) Accessibility c) Modularity d) All of the above
4. ____________ is a software development activity that is not a part of software
processes.
a) Validation b) Specification c) Development d) Dependence
5. Attributes of good software is ____________
a) Development b) Maintainability & functionality
c) Functionality d) Maintainability
6. The Cleanroom philosophy was proposed by _________
a) Linger b) Mills c) Dyer d) All of the Mentioned
7. Who proposed the spiral model?
a) Barry Boehm b) Pressman c) Royce d) IBM
8. Which of the following are CASE tools?
a) Central Repository b) Integrated Case Tools
c) Upper Case Tools d) All of the mentioned
9. Software patch is defined as ______________
a) Daily or routine Fix b) Required or Critical Fix
c) Emergency Fix d) None of the mentioned
10. Who proposed Function Points?
a) Albrecht b) Jacobson c) Boehm d) None of the mentioned