SE Notes
Lecture Notes
on
SOFTWARE ENGINEERING
20A05403T
By
B. JAVEED BASHA
ASSISTANT PROFESSOR
DEPARTMENT OF CSE
2022-2023
Contents
1. Syllabus
2. Unit-I
3. Unit-II
4. Unit-III
5. Unit-IV
6. Unit-V
7. 2 M Questions
8. 5 M Questions
9. 10 M Questions
R20 Regulations
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, ANANTAPUR
(Established by Govt. of A.P., ACT No.30 of 2008)
ANANTHAPURAMU – 515 002 (A.P) INDIA
Computer Science & Engineering
Course Code 20A05403T
SOFTWARE ENGINEERING
(Common to CSE, IT, CSE (DS), CSE (IoT))
L T P C
3 0 0 3
Semester IV
Pre-requisite: NIL
Course Objectives:
• To learn the basic concepts of software engineering and life cycle models
• To explore the issues in software requirements specification and enable to write SRS documents for
software development problems
• To elucidate the basic concepts of software design and enable to carry out procedural and object
oriented design of software development problems
• To understand the basic concepts of black box and white box software testing and enable to design
test cases for unit, integration, and system testing
• To reveal the basic concepts in software project management
Textbooks:
1. Rajib Mall, “Fundamentals of Software Engineering”, 5th Edition, PHI, 2018.
2. Pressman R, “Software Engineering: A Practitioner's Approach”, McGraw Hill.
Reference Books:
1. Sommerville, “Software Engineering”, Pearson.
2. Richard Fairley, “Software Engineering Concepts”, Tata McGraw Hill.
3. Pankaj Jalote, “An Integrated Approach to Software Engineering”, Narosa.
UNIT – 1
Basic concepts in software engineering and software project management
Basic concepts: abstraction versus decomposition, evolution of software engineering
techniques, Software development life cycle (SDLC) models: Iterative waterfall model,
Prototype model, Evolutionary model, Spiral model, RAD model, Agile models, software
project management: project planning, project estimation, COCOMO, Halstead’s Software
Science, project scheduling, staffing, Organization and team structure, risk management,
configuration management.
Software consists of carefully organized instructions and code written by developers in any of various programming languages.
It includes computer programs and related documentation such as requirements, design models, and user manuals.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks,
processes, etc.
Software Engineering is an engineering branch concerned with the development of software products using well-defined scientific principles,
techniques, and procedures. The result of software engineering is an effective and reliable software product.
Need of Software Engineering:
o Huge Programming: It is simpler to build a wall than a house or a building; similarly, as the measure of programming
becomes extensive, engineering has to step in to give it a scientific process.
o Adaptability: If the software procedure were not based on scientific and engineering ideas, it would be simpler to re-create new
software than to scale an existing one.
o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has brought down the cost of computer and
electronic hardware. However, the cost of programming remains high if a proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of programming hugely depends upon the environment in
which the client works. If the nature of the software keeps changing, new upgrades need to be made to the existing one.
o Quality Management: Better procedure of software development provides a better and quality software product.
Characteristics of a good software engineer:
Good communication skills. These skills comprise oral, written, and interpersonal skills.
High motivation.
Intelligence.
1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering has a great
solution to reduce the complexity of any project. Software engineering divides big problems into various small issues and
then starts solving each small issue one by one. All these small problems are solved independently of each other.
2. To minimize software cost: Software needs a lot of hard work, and software engineers are highly paid experts. A lot of
manpower is required to develop software with a large number of lines of code. But in software engineering, programmers plan
everything and eliminate the things that are not needed. In turn, the cost of software production becomes less as
compared to any software that does not use a software engineering method.
3. To decrease time: Anything that is not made according to a plan always wastes time. If you are making large
software, you may need to write and run a lot of code before reaching the definitive running version. This is a very time-consuming procedure,
and if it is not well handled it can take a lot of time. So if you make your software according to the software
engineering method, it will save a lot of time.
4. Handling big projects: Big projects are not done in a couple of days; they need lots of patience, planning, and
management. Investing six or seven months of a company's effort requires plenty of planning, direction, testing, and
maintenance. No one can say that they have spent four months of a company's time and the project is still in its first stage,
because the company has committed many resources to the plan and it has to be completed. So, to handle a big project without
problems, the company has to follow a software engineering method.
5. Reliable software: Software should be reliable, meaning that once delivered it should work for at least its
given time or subscription period, and if any bugs appear in the software, the company is responsible for fixing them.
Because testing and maintenance are built into software engineering, there is no worry about reliability.
6. Effectiveness: Effectiveness comes when something is made according to standards. Software standards are a big target for
companies striving to make their software more effective. So software becomes more effective with the help of software engineering.
Abstraction:
It is the simplification of a problem by focusing on only one aspect of the problem while omitting all other aspects. When using the
principle of abstraction to understand a complex problem, we focus our attention on only one or two specific aspects of the problem and
ignore the rest.
Whenever we omit some details of a problem to construct an abstraction, we construct a model of the problem. In everyday life, we use
the principle of abstraction frequently to understand a problem or to assess a situation.
Decomposition:
Decomposition is a process of breaking down: it breaks down functions into smaller parts. It is another important
principle of software engineering used to handle problem complexity. This principle is made use of profusely by several
software engineering techniques to contain the exponential growth of the perceived problem complexity. The
decomposition principle is popularly known as the divide and conquer principle.
Functional Decomposition:
It is a term that engineers use to describe a set of steps in which they break down the overall function of a device, system, or process into
its smaller parts.
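As a small sketch of functional decomposition (the function names and the 10% deduction rate below are hypothetical, chosen only for illustration), an overall "compute net pay" function can be broken down into smaller functions, each handling one sub-task:

#include <iostream>

// Sub-function 1: gross pay from hours worked and hourly rate.
double computeGrossPay(double hours, double rate) {
    return hours * rate;
}

// Sub-function 2: deductions, assumed here to be a flat 10% of gross pay.
double computeDeductions(double grossPay) {
    return grossPay * 0.10;
}

// The overall function is composed from the smaller parts.
double computeNetPay(double hours, double rate) {
    double gross = computeGrossPay(hours, rate);
    return gross - computeDeductions(gross);
}

int main() {
    std::cout << computeNetPay(160, 12.5) << std::endl;  // prints 1800
    return 0;
}

Each sub-function can be understood, implemented, and tested independently, which is exactly the benefit the decomposition principle aims at.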
In the design phase of software engineering there are two main concepts, abstraction and decomposition; the difference between them
can be explained as follows.
Abstraction, in general, is the process of consciously ignoring some aspects of a subject under analysis in order to better understand other aspects of
it. In other words, it is a kind of simplification of the subject. In software in particular, analysis and design are all about abstraction.
When you model your DB, you ignore UI and behavior of your system and concentrate only on DB structure.
When you model your architecture, you concentrate on high-level modules and their relationships and ignore their internal structure
Each UML diagram, for example, gives a special, limited view of the system, focusing on a single aspect and ignoring all others
(sequence diagrams focus on objects and messages, deployment diagrams on networks and servers, use case diagrams on system users and their interactions
with the system, etc.).
Writing source code in any programming language also requires a lot of abstraction: programmers abstract the application's functionality using a limited set
of language constructs.
Decomposition is an application of the old good principle "divide and conquer" to software development. It is a technique of classifying,
structuring and grouping complex elements in order to end up with more atomic ones, organized in certain fashion and easier to manage. In all
phases there are lots of examples:
functional decomposition of a complex process to hierarchical structure of smaller sub-processes and activities
high-level structuring of a complex application into 3 tiers: UI, logic, and data.
Class structure of a complex domain
namespaces as a common concept of breaking a global scope into several local ones
UML packages are a direct use of decomposition on the model level - use packages to organize your model
Abstraction is a somewhat more generic principle than decomposition, a kind of "father of all principles".
Abstraction is one of the fundamental principles of object oriented programming. Abstraction allows us to name objects that are not directly
instantiated but serve as a basis for creating objects with some common attributes or properties. For example: in the context of computer
accessories Data Storage Device is an abstract term because it can either be a USB pen drive, hard disk, or RAM. But a USB pen drive or hard
disks are concrete objects because their attributes and behaviors are easily identifiable, which is not the case for Data Storage Device, being an
abstract object for computer accessories. So, abstraction is used to generalize objects into one category in the design phase. For example in a
travel management system you can use Vehicle as an abstract object or entity that generalizes how you travel from one place to another.
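A minimal C++ sketch of this idea (the class names and capacities are illustrative, not taken from these notes): DataStorageDevice is the abstract generalization, and the concrete devices supply the identifiable attributes and behaviour.

#include <iostream>
#include <memory>

// Abstract generalization: device-specific details are ignored here.
class DataStorageDevice {
public:
    virtual ~DataStorageDevice() = default;
    virtual double capacityGB() const = 0;   // common attribute, no implementation
};

// Concrete objects: their attributes and behaviour are easily identifiable.
class UsbPenDrive : public DataStorageDevice {
public:
    double capacityGB() const override { return 64.0; }
};

class HardDisk : public DataStorageDevice {
public:
    double capacityGB() const override { return 1024.0; }
};

int main() {
    // Client code works only with the abstraction.
    std::unique_ptr<DataStorageDevice> device = std::make_unique<HardDisk>();
    std::cout << device->capacityGB() << " GB" << std::endl;
    return 0;
}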
Decomposition is a way to break down your systems into modules in such a way that each module provides different functionality, but may
affect other modules also. To understand decomposition quite clearly, you should first understand the concepts of association, composition, and
aggregation.
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to
develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of
software and the overall development process.
The following figure is a graphical representation of the various stages of a typical SDLC.
Waterfall Model
Iterative Model
Spiral Model
V-Model
Big Bang Model
Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model, and Prototyping Models.
Prototype model:
The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall
objectives for the software, identify whatever requirements are known, and outline areas where further definition is
mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that
will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of
a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration
occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better
understand what needs to be done.
Ideally, the prototype serves as a mechanism for identifying software requirements. When a working prototype
is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators,
window managers) that enable working programs to be generated quickly.
The prototype can serve as "the first system." The one that Brooks recommends we throw away. But this
may be an idealized view. It is true that both customers and developers like the prototyping paradigm. Users
get a feel for the actual system and developers get to build something immediately. Yet, prototyping can also
be problematic for the following reason:
The customer sees what appears to be a working version of the software, unaware that the prototype is held
together "with chewing gum and baling wire," and that in the rush to get it working no one has considered
overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high
levels of quality can be maintained, the customer cries foul and demands that "a few fixes" be applied to make the
prototype a working product. Too often, software development management relents.
The developer often makes implementation compromises in order to get a prototype working quickly. An
inappropriate operating system or programming language may be used simply because it is available and known;
an inefficient algorithm may be implemented simply to demonstrate capability. After a time, the developer may
become familiar with these choices and forget all the reasons why they were inappropriate. The less-than-ideal
choice has now become an integral part of the system.
Although problems can occur, prototyping can be an effective paradigm for software engineering.
The key is to define the rules of the game at the beginning; that is, the customer and developer must
both agree that the prototype is built to serve as a mechanism for defining requirements. It is
then discarded (at least in part) and the actual software is engineered with an eye toward quality and
maintainability.
Evolutionary model:
Evolutionary model is also referred to as the successive versions model and sometimes as
the incremental model. In Evolutionary model, the software requirement is first broken down into
several modules (or functional units) that can be incrementally constructed and delivered (see Figure
5).
The development team first develops the core modules of the system. The core modules are those that do
not need services from the other modules. The initial product skeleton is refined into increasing
levels of capability by adding new functionalities in successive versions. Each evolutionary model
may be developed using an iterative waterfall model of development.
Advantages of the evolutionary model:
The user gets a chance to experiment with partially developed software much before the
complete version of the system is released.
The evolutionary model helps to accurately elicit user requirements during the delivery of different
versions of the software.
The core modules get tested thoroughly, thereby reducing the chances of errors in the core
modules of the final product.
The evolutionary model avoids the need to commit large resources in one go for the development of the
system.
Spiral model:-
Spiral model is one of the most important Software Development Life Cycle models, which
provides support for Risk Handling. In its diagrammatic representation, it looks like a spiral
with many loops. The exact number of loops of the spiral is unknown and can vary from
project to project. Each loop of the spiral is called a Phase of the software development
process. The exact number of phases needed to develop the product can be varied by the
project manager depending upon the project risks. As the project manager dynamically
determines the number of phases, so the project manager has an important role to develop a
product using the spiral model.
The radius of the spiral at any point represents the expenses (cost) of the project so far, and
the angular dimension represents the progress made so far in the current phase.
The below diagram shows the different phases of the Spiral Model: –
Each phase of the Spiral Model is divided into four quadrants as shown in the above figure.
The functions of these four quadrants are discussed below-
1. Objectives determination and identify alternative solutions: Requirements are
gathered from the customers and the objectives are identified, elaborated, and analyzed at
the start of every phase. Then alternative solutions possible for the phase are proposed in
this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution
are identified and the risks are resolved using the best possible strategy. At the end of this
quadrant, the Prototype is built for the best possible solution.
3. Develop next version of the Product: During the third quadrant, the identified features
are developed and verified through testing. At the end of the third quadrant, the next
version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the
so far developed version of the software. In the end, planning for the next phase is started.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a software
project. The most important feature of the spiral model is handling these unknown risks after
the project has started. Such risks are more easily resolved by developing a prototype. The
spiral model supports coping with risks by providing the scope to build a prototype at every
phase of the software development.
The Prototyping Model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project. In real life, project risks
may occur after the development work starts; in that case, we cannot use the Prototyping
Model. In each phase of the Spiral Model, the features of the product are elaborated and analyzed,
and the risks at that point in time are identified and resolved through prototyping. Thus,
this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model?
The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model. The spiral
model incorporates the stepwise approach of the Classical Waterfall Model. The spiral model
uses the approach of the Prototyping Model by building a prototype at the start of each phase
as a risk-handling technique. Also, the spiral model can be considered as supporting
the Evolutionary model – the iterations along the spiral can be considered as evolutionary
levels through which the complete system is built.
Advantages of Spiral Model:
Below are some advantages of the Spiral Model.
1. Risk Handling: For projects with many unknown risks that occur as the development
proceeds, the Spiral Model is the best development model to follow, due to the
risk analysis and risk handling at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
3. Flexibility in Requirements: Change requests in the requirements at a later phase can be
incorporated accurately by using this model.
4. Customer Satisfaction: Customers can see the development of the product at the early
phases of the software development and thus become habituated with the system by using it
before completion of the total product.
Disadvantages of Spiral Model:
Below are some main disadvantages of the spiral model.
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependency on Risk Analysis: The successful completion of the project
depends very much on risk analysis. Without highly experienced experts, developing a
project using this model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the
project, time estimation is very difficult.
RAD model:
Rapid application development (RAD) is an incremental software development process model that
emphasizes an extremely short development cycle. The RAD model is a "high speed" adaptation of
the linear sequential model in which rapid development is achieved by using component-based
construction. If requirements are well understood and project scope is constrained, the RAD process
enables a development team to create a "fully functional system" within very short time periods
(e.g., 60 to 90 days). Used primarily for information systems applications, the RAD approach
encompasses the following phases :
Business modeling The information flow among business functions is modeled in a way that
answers the following questions: What information drives the business process? What
information is generated? Who generates it? Where does the information go? Who processes it?
Data modeling The information flow defined as part of the business modeling phase is refined into
a set of data objects that are needed to support the business. The characteristics (called attributes)
of each object are identified and the relationships between these objects defined.
Process modeling The data objects defined in the data modeling phase are transformed to achieve
the information flow necessary to implement a business function. Processing descriptions are
created for adding, modifying, deleting, or retrieving a data object.
Application generation RAD assumes the use of fourth generation techniques. Rather than creating
software using conventional third generation programming languages, the RAD process works to
reuse existing program components (when possible) or create reusable components (when
necessary). In all cases, automated tools are used to facilitate construction of the software.
Testing and turnover Since the RAD process emphasizes reuse, many of the program
components have already been tested. This reduces overall testing time. However, new
components must be tested and all interfaces must be fully exercised.
If a business application can be modularized in a way that enables each major function to be
completed in less than three months (using the approach described previously), it is a candidate for
RAD. Each major function can be addressed by a separate RAD team and then integrated to form a
whole.
Drawbacks of the RAD approach:
For large but scalable projects, RAD requires sufficient human resources to create the right number of
RAD teams.
RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a
system complete in a much abbreviated time frame. If commitment is lacking from either constituency,
RAD projects will fail.
Not all types of applications are appropriate for RAD. If a system cannot be properly modularized,
building the components necessary for RAD will be problematic. If high performance is an issue and
performance is to be achieved through tuning the interfaces to system components, the RAD approach
may not work.
RAD is not appropriate when technical risks are high. This occurs when a new application makes heavy
use of new technology or when the new software requires a high degree of interoperability with existing
computer programs.
Agile models:
Agile Modeling (AM) suggests a wide array of "core" and "supplementary" modeling principles; those
that make AM unique are:
• Model with a purpose. A developer who uses AM should have a specific goal in mind before
creating the model. Once the goal for the model is identified, the type of notation to be used and
level of detail required will be more obvious.
• Use multiple models. There are many different models and notations that can be used to describe
software. Only a small subset is essential for most projects. AM suggests that to provide needed
insight, each model should present a different aspect of the system and only those models that
provide value to their intended audience should be used.
• Travel light. As software engineering work proceeds, keep only those models that will provide
long-term value and jettison the rest. Every work product that is kept must be maintained as
changes occur. This represents work that slows the team down. Ambler notes that "Every time you
decide to keep a model you trade off agility for the convenience of having that information."
Software Project Management (SPM) is a proper way of planning and leading software
projects. It is a part of project management in which software projects are planned,
implemented, monitored and controlled.
Need of Software Project Management:
Software is a non-physical product. Software development is a relatively new stream in business,
and there is very little experience in building software products. Most software
products are made to fit the client's requirements. Most importantly, the underlying
technology changes and advances so frequently and rapidly that experience with one
product may not be applicable to another one. Such business and environmental
constraints increase risk in software development; hence it is essential to manage
software projects efficiently.
It is necessary for an organization to deliver a quality product, keep the cost within the
client's budget constraints, and deliver the project as per schedule. Hence,
software project management is necessary to incorporate user requirements along with
budget and time constraints.
Software Project Management consists of several different types of management:
1. Conflict Management:
Conflict management is the process to restrict the negative features of conflict while
increasing the positive features of conflict. The goal of conflict management is to
improve learning and group results including efficacy or performance in an
organizational setting. Properly managed conflict can enhance group results.
2. Risk Management:
Risk management is the identification and analysis of risks, followed by the
coordinated and economical application of resources to minimize, monitor, and
control the probability or impact of unfortunate events or to maximize the realization of
opportunities.
3. Requirement Management:
It is the process of analyzing, prioritizing, tracing, and documenting requirements
and then supervising change and communicating with pertinent stakeholders. It is a
continuous process throughout a project.
4. Change Management:
Change management is a systematic approach for dealing with the transition or
transformation of an organization’s goals, processes or technologies. The purpose of
change management is to execute strategies for effecting change, controlling change
and helping people to adapt to change.
5. Release Management:
Release Management is the task of planning, controlling, and scheduling the build and
deployment of releases. Release management ensures that the organization delivers new
and enhanced services required by the customer, while protecting the integrity of
existing services.
Aspects of Software Project Management:
Project Planning:
Project Estimation:
Estimations of all kinds are entrenched in our day-to-day life. When we plan a trip, we usually
estimate expenses for accommodation, meals, and transportation. We also calculate how much
time we need to get to the hotel and the airport, figuring out the shortest route. We also set our
priorities and do our best to stick to them.
In software development, accurate estimation is far more vital, since the success of your effort is at
stake. As a project manager or business owner, your top priority is to meet deliverables' time-
frames and optimize the budget.
The biggest challenge with project estimation is that there's also a great deal of ambiguity when it
comes to software development. It can be hard to assume how much it'll cost on the first try, as a
lot of factors are at play. It can even take hours of preliminary research and coming up with
unconventional methods to tackle it. However, you can try to use the standard approaches we
have described below and see if they work.
First, what is project estimation?
Generally speaking, it's the process of analyzing available data to predict the time, cost, and
resources needed to complete a project. Typically, project estimation includes scope, time-frames,
budget, and risks.
Key Components of Project Estimation
Scope
CIO defines project scope as a detailed outline of all aspects of a project, including all related
activities, resources, timelines, and deliverables, as well as the project's boundaries. The project
scope also outlines key stakeholders, processes, assumptions, and constraints, as well as what the
project is about, what is included, and what isn't. All of this essential information is documented in
a scope statement.
The project statement of work (SoW) details all aspects of the project, from the software
development life cycle to key meetings and status updates. Accurately estimating the project's
scope means you can have a more precise understanding of the cost, time-frames, and potential
bottlenecks.
Time-frame
With a scope of work on hand, it's easier to estimate how long it’s going to take to achieve each
milestone. Make sure the time-frame allows time for management and administration. Also, make
sure you prioritize tasks, identifying those that need to be completed before others. Some factors
might also slow things down, like meetings, holidays, interruptions of all sorts, and rejections from
Quality Assurance.
A proper timeline estimation covering all elements of the project will show you how much time
will go into completing different parts and interdependent deliverables, and when each major
milestone will be achieved.
Resources
Defining the scope of work and timeline makes it easier to understand what resources the project
needs. Resources are staff, vendors, contractors as well as equipment. You should allocate
resources for the tasks in the scope of work. Before you do that, you need to know their availability
and schedule in advance. This way, you increase the reliability of a project.
Cost
Cost is an essential part of a project. Everyone wants to know how much it will cost to develop a
project before delving into it. To predict the project cost, you need to consider the
scope, timeline,
and resources. With these aspects mapped out, you can have a rough estimate of your project.
You can base your estimation on past project costs. If you don't have your own historical data, it's
best to ask someone who has already developed a similar project to advise budget-wise. The more
accurate information you have, the closer project estimates will be.
Risks
Every project comes with risks. However, it's possible to identify them and devise strategies to
handle them. An ideal project estimation document includes potential risks as a sort of insurance
against threats in the project. After the risks are identified, you need to prioritize them and
recognize their probability and impact.
How to Estimate a Project?
Here're common techniques you can use to estimate your project.
Top-Down estimating
It's a technique whereby the overall project is estimated as a whole, and then it's broken down into
individual phases. You can use this approach based on your historical data, adding estimation for
each project element. Top-down assessment isn't detailed, so it's only suitable for rough budget
estimation to see if it's viable.
Bottom-Up estimating
This method uses a detailed scope of work and suits projects you've decided to go with. The
bottom-up approach suggests estimating each task and adding the estimates to get a high-level
assessment. This way, you make a big picture from small pieces, which is more time-consuming
but guarantees more accurate results than the Top-Down Estimate.
Expert estimation
This kind of estimation is the quickest way to estimate a project. It involves an expert with relevant
experience who applies their knowledge and historical data to estimate a project. So, if you've
already executed a similar project, you can use a top-down or bottom-up estimation by an expert.
If it's not your typical project, you need to gather a tech and domain experts team to do the
assessment. This team of experts isn't necessarily a part of the project team. However, it's highly
recommended to engage an architect, tech lead, or systems analyst.
Analogous estimation
You can use it when the information about a project is limited and you've executed a similar one in
the past. Comparing a project with the previous one and leaning on historical data can give you
insight into a project you have on the anvil. For more precise estimation, you should choose a
project that's a likeness of your current one. Then, you'll be able to сompare the two projects to get
an estimation. The more similarities the projects have, the more precise estimation will be.
Parametric estimation
If you're looking for accurate estimation, a parametric approach is a good solution. It relies on
historical data to calculate estimation coupled with statistical data. Namely, it uses variables from
similar projects and applies them to a current one. The variables can be human resources,
materials, equipment, and more.
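A tiny sketch of the parametric idea (the productivity figures, unit names, and rates below are assumptions made for illustration, not standard values): a historical parameter such as hours per screen or per API endpoint is multiplied by the counted variable for the current project.

#include <iostream>

int main() {
    // Assumed historical parameters (hours per unit of work) -- illustrative only.
    const double hoursPerScreen   = 18.0;
    const double hoursPerEndpoint = 10.0;
    const double hourlyRate       = 40.0;   // assumed blended cost per hour

    // Counted variables for the current project.
    int screens = 12, endpoints = 20;

    // Parametric estimate: effort = sum over (parameter * count).
    double effortHours = screens * hoursPerScreen + endpoints * hoursPerEndpoint;
    double cost = effortHours * hourlyRate;

    std::cout << "Estimated effort: " << effortHours << " hours" << std::endl;  // 416 hours
    std::cout << "Estimated cost:   " << cost << std::endl;                     // 16640
    return 0;
}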
COCOMO Model:
Cocomo (Constructive Cost Model) is a regression model based on LOC, i.e., the number of
lines of code. It is a procedural cost estimate model for software projects and is often
used as a process of reliably predicting the various parameters associated with making
a project such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in
1981 and is based on the study of 63 projects, which makes it one of the best-
documented models.
The key parameters which define the quality of any software products, which are also an
outcome of the Cocomo are primarily Effort & Schedule:
Effort: Amount of labor that will be required to complete a task. It is measured in
person-months units.
Schedule: Simply means the amount of time required for the completion of the job,
which is, of course, proportional to the effort put in. It is measured in the units of time
such as weeks, months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of these
models can be applied to a variety of projects, whose characteristics determine the
value of constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below.
Boehm’s definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past
and also the team members have a nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the vital
characteristics such as team size, experience, knowledge of the various
programming environment lie in between that of organic and Embedded. The projects
classified as Semi-Detached are comparatively less familiar and difficult to develop
compared to the organic ones and require more experience and better guidance and
creativity. Eg: Compilers or different Embedded Systems can be considered of Semi-
Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity,
and experience requirement fall under this category. Such software requires a larger
team size than the other two models and also the developers need to be sufficiently
experienced and creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort
Calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed
and accurate forms. Any of the three forms can be adopted according to our
requirements. These are types of COCOMO model:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
The first level, Basic COCOMO, can be used for quick and slightly rough calculations of
software costs. Its accuracy is somewhat restricted due to the absence of
sufficient factor considerations.
Intermediate COCOMO takes these Cost Drivers into account, and Detailed
COCOMO additionally accounts for the influence of individual project phases, i.e., in
the case of the Detailed model it accounts for both the cost drivers and phase-wise calculations.
The Basic COCOMO equations take the form:
Effort = a (KLOC)^b person-months
Development Time (Tdev) = c (Effort)^d months
These formulas are used for the cost estimation of the Basic COCOMO
model and are also used in the subsequent models. The constant values a, b, c and
d for the Basic Model for the different categories of system are:
Software Projects    a      b      c      d
Organic              2.4    1.05   2.5    0.38
Semi-detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32
#include <bits/stdc++.h>
using namespace std;

// Rounds a floating point value to the nearest integer.
int fround(float x)
{
    int a;
    x = x + 0.5;
    a = x;
    return (a);
}

// Calculates effort, development time and average staff size
// for a project of the given size (in KLOC) using Basic COCOMO.
void calculate(float table[][4], int n, char mode[][15], int size)
{
    float effort, time, staff;
    int model;

    // Select the COCOMO mode from the project size (in KLOC).
    if (size >= 2 && size <= 50)
        model = 0;          // organic
    else if (size > 50 && size <= 300)
        model = 1;          // semi-detached
    else                    // size > 300
        model = 2;          // embedded

    cout << "The mode is " << mode[model] << endl;

    // Calculate Effort: E = a * (KLOC)^b
    effort = table[model][0] * pow(size, table[model][1]);

    // Calculate Development Time: T = c * (E)^d
    time = table[model][2] * pow(effort, table[model][3]);

    // Average staff required = Effort / Development Time
    staff = effort / time;

    cout << "Effort = " << effort << " Person-Month" << endl;
    cout << "Development Time = " << time << " Months" << endl;
    cout << "Average Staff Required = " << fround(staff) << " Persons" << endl;
}

int main()
{
    // Constants a, b, c, d for organic, semi-detached and embedded modes.
    float table[3][4] = {2.4, 1.05, 2.5, 0.38,
                         3.0, 1.12, 2.5, 0.35,
                         3.6, 1.20, 2.5, 0.32};
    char mode[][15] = {"Organic", "Semi-Detached", "Embedded"};
    int size = 4;
    calculate(table, 3, mode, size);
    return 0;
}
Output:
The mode is Organic
Effort = 10.289 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons
8. Intermediate Model –
The basic Cocomo model assumes that the effort is only a function of the number
of lines of code and some constants evaluated according to the different software
systems. However, in reality, no system’s effort and schedule can be solely
calculated on the basis of lines of code. For that, various other factors such as
reliability, experience, and capability must be considered. These factors are known as Cost Drivers, and
the Intermediate Model utilizes 15 such drivers for cost estimation.
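A minimal sketch of how the Intermediate Model adjusts the estimate: the selected cost-driver multipliers are multiplied together into an Effort Adjustment Factor (EAF) that scales the nominal effort. The organic-mode constants (a = 3.2, b = 1.05) and the three multiplier values below are taken from standard COCOMO tables and from the cost-driver table later in this section; treat the specific project size and driver selection as illustrative assumptions.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Nominal effort for an organic project: E = a * (KLOC)^b.
    double kloc = 30.0;
    double nominalEffort = 3.2 * pow(kloc, 1.05);

    // A few cost-driver multipliers (illustrative selection; drivers rated
    // Nominal contribute 1.00 and can be omitted without changing the product).
    double ratings[] = {
        1.11,  // run-time performance constraints rated High
        0.95,  // programming language experience rated High
        0.91   // application of software engineering methods rated High
    };

    // Effort Adjustment Factor = product of all selected multipliers.
    double eaf = 1.0;
    for (double r : ratings) eaf *= r;

    double adjustedEffort = nominalEffort * eaf;
    cout << "Nominal effort  = " << nominalEffort  << " person-months" << endl;
    cout << "EAF             = " << eaf << endl;
    cout << "Adjusted effort = " << adjustedEffort << " person-months" << endl;
    return 0;
}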
Classification of Cost Drivers and their attributes:
(i) Product attributes –
Required software reliability extent
Size of the application database
The complexity of the product
(ii) Hardware attributes –
Run-time performance constraints
Memory constraints
The volatility of the virtual machine environment
Required turnaround time
(iii) Personnel attributes –
Analyst capability
Cost Drivers                                   Very Low   Low     Nominal   High    Very High
Product Attributes
Hardware Attributes
Run-time performance constraints                   –        –       1.00     1.11      1.30
Personnel Attributes
Programming language experience                   1.14     1.07     1.00     0.95       –
Project Attributes
Application of software engineering methods       1.24     1.10     1.00     0.91      0.82
The constant values a and b for the Intermediate Model:
Software Projects    a      b
Organic              3.2    1.05
Semi-detached        3.0    1.12
Embedded             2.8    1.20
9. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with
an assessment of the cost driver’s impact on each step of the software
engineering process. The detailed model uses different effort multipliers for each
cost driver attribute. In detailed cocomo, the whole software is divided into
different modules and then we apply COCOMO in different modules to estimate
effort and then sum the effort.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
The effort is calculated as a function of program size and a set of cost drivers are
given according to each phase of the software lifecycle.
Halstead metrics –
Halstead Program Length – The total number of operator occurrences and the total
number of operand occurrences.
N = N1 + N2
The estimated program length is N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program
length:
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2n2 + n2 * log2n1
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2n) / 2
Halstead Vocabulary – The total number of unique operator and unique operand
occurrences.
n= n1 + n2
Program Volume – Proportional to program size, represents the size, in bits, of
space necessary for storing the program. This parameter is dependent on specific
algorithm implementation. The properties V, N, and the number of lines in the code
are shown to be linearly connected and equally valid for measuring relative program
size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size “bits”. It is the actual
size of a program if a uniform binary encoding for the vocabulary is used. And error =
Volume / 3000
Potential Minimum Volume – The potential minimum volume V* is defined as the
volume of the most succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
Program Level – To rank the programming languages, the level of abstraction
provided by the programming language, Program Level (L) is considered. The higher
the level of a language, the less effort it takes to develop a program using that
language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program
written at the highest possible level (i.e., with minimum size).
The estimated program level is L^ = (2 * n2) / (n1 * N2)
Program Difficulty – This parameter shows how difficult to handle the program is.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the program level
decreases and the difficulty increases. Thus, programming practices such as
redundant usage of operands, or the failure to use higher-level control constructs will
tend to increase the volume as well as the difficulty.
Programming Effort – Measures the amount of mental activity needed to translate
the existing algorithm into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume
Language Level – Shows the algorithm implementation program language level. The
same algorithm demands additional effort if it is written in a low-level program
language. For example, it is easier to program in Pascal than in Assembler.
L’ = V / D / D
lambda = L * V* = L2 * V
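A short sketch that turns the formulas above into code; the four input counts are made-up values for illustration, not the sort example that follows.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Made-up counts for illustration.
    double n1 = 14, n2 = 10;   // unique operators, unique operands
    double N1 = 53, N2 = 38;   // total operator and operand occurrences

    double N    = N1 + N2;                           // program length
    double n    = n1 + n2;                           // vocabulary
    double Nhat = n1 * log2(n1) + n2 * log2(n2);     // estimated length
    double V    = N * log2(n);                       // volume (bits)
    double Lhat = (2.0 * n2) / (n1 * N2);            // estimated program level
    double D    = (n1 / 2.0) * (N2 / n2);            // difficulty
    double E    = D * V;                             // programming effort
    double B    = V / 3000.0;                        // estimated delivered errors

    cout << "N = " << N << ", n = " << n << ", N^ = " << Nhat << endl;
    cout << "V = " << V << ", L^ = " << Lhat << ", D = " << D
         << ", E = " << E << ", B = " << B << endl;
    return 0;
}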
Example – Consider a function sort(x, n) that sorts the first n elements of an array x; its local declarations include int i, j, save, im1;. Counting the operators and operands of the complete function gives:
Operator   Occurrences   Operand   Occurrences
int         4             sort      1
()          5             x         7
,           4             n         3
[]          7             i         8
if          2             j         7
<           2             save      3
;           11            im1       3
for         2             2         2
=           6             1         3
-           1             0         1
<=          2             –         –
++          2             –         –
return      2             –         –
{}          3             –         –
Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x: the array holding the integers to be sorted; it is used both as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds
Shortcomings of Halstead's metrics:
It depends on the complete code.
It has no use as a predictive estimating model.
Project scheduling:
Project-task scheduling is a significant project planning activity. It comprises deciding which functions
would be taken up when. To schedule the project plan, a software project manager wants to do the
following:
1. Identify all the functions required to complete the project.
2. Break down large functions into small activities.
3. Determine the dependency among various activities.
4. Establish the most likely size for the time duration required to complete the activities.
5. Allocate resources to activities.
6. Plan the beginning and ending dates for different activities.
7. Determine the critical path. The critical path is the chain of activities that determines the duration of the
project.
The first step in scheduling a software project involves identifying all the activities required to
complete the project. A good understanding of the intricacies of the project and of the development process
helps the manager to identify the critical activities of the project effectively. Next, the large activities are
broken down into a valid set of small activities which can be assigned to various engineers. The work
breakdown structure formalism helps the manager to break down the activities systematically. After
the project manager has broken down the work and constructed the work breakdown structure, he
has to find the dependencies among the activities. Dependencies among the various activities determine
the order in which they will be carried out. If an activity A requires the results of
another activity B, then activity A must be scheduled after activity B. In general, the
dependencies define a partial ordering among activities, i.e., each activity may precede a subset of
other activities, but some activities might not have any precedence ordering defined between them
(these are called concurrent activities). The dependencies among the activities are represented in the form of an activity
network.
Once the activity network representation has been worked out, resources are allocated to every
activity. Resource allocation is usually done using a Gantt chart. After resource allocation is completed, a
PERT chart representation is developed. The PERT chart representation is useful for program monitoring
and control. For task scheduling, the project manager needs to decompose the project activities into a set
of tasks and determine the time frame in which each activity is to be performed. The end of
each activity is a milestone. The project manager tracks the progress of a project by monitoring the
timely completion of the milestones. If he observes that the milestones start getting delayed, then he
has to handle the activities carefully so that the overall deadline can still be met; a small sketch of this forward-pass computation follows.
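A minimal sketch of the scheduling idea on a toy activity network (activity names and durations are hypothetical): a forward pass over the dependencies gives each activity's earliest finish time, and the largest earliest finish is the project duration fixed by the critical path.

#include <bits/stdc++.h>
using namespace std;

struct Activity {
    string name;
    int duration;              // in days
    vector<int> predecessors;  // indices of activities that must finish first
};

int main() {
    // Toy activity network (hypothetical): activities 0 and 1 can start at once,
    // 2 depends on 0, 3 depends on 1, 4 depends on both 2 and 3.
    vector<Activity> acts = {
        {"Spec",      5, {}},
        {"Design UI", 4, {}},
        {"Code core", 8, {0}},
        {"Code UI",   6, {1}},
        {"Integrate", 3, {2, 3}},
    };

    // Forward pass (activities are listed in dependency order).
    vector<int> earliestFinish(acts.size(), 0);
    for (size_t i = 0; i < acts.size(); ++i) {
        int earliestStart = 0;
        for (int p : acts[i].predecessors)
            earliestStart = max(earliestStart, earliestFinish[p]);
        earliestFinish[i] = earliestStart + acts[i].duration;
    }

    // Project duration = latest earliest-finish time (end of the critical path).
    cout << "Project duration = "
         << *max_element(earliestFinish.begin(), earliestFinish.end())
         << " days" << endl;   // 16 days: Spec -> Code core -> Integrate
    return 0;
}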
Staffing:
Personnel Planning deals with staffing. Staffing deals with the appointment of personnel for the positions that are identified
by the organizational structure.
It involves:
For personnel planning and scheduling, it is helpful to have effort and schedule estimates for the subsystems
and basic components in the system.
At planning time, when the design of the system has not yet been done, the planner can only expect to know
about the large subsystems in the system and possibly the major modules in these subsystems.
Once the project effort is estimated, and the effort and schedule of the various phases and activities are known, staff
requirements can be determined.
From the cost and overall duration of the projects, the average staff size for the projects can be
determined by dividing the total efforts (in person-months) by the whole project duration (in months).
Typically the staff required for the project is small during requirement and design, the maximum during
implementation and testing, and drops again during the last stage of integration and testing.
Using the COCOMO model, the average staff requirement for the various phases can be calculated once the effort
and schedule for each phase are known.
When the schedule and average staff level for each activity are known, the overall personnel
allocation for the project can be planned.
This plan will indicate how many people will be required for different activities at different times for the
duration of the project.
The total effort for each month and the total effort for each step can easily be calculated from this plan.
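For example (the numbers are illustrative), if the total estimated effort for a project is 60 person-months and the planned duration is 12 months, the average staff size is 60 / 12 = 5 persons; the actual headcount is then ramped up and down around this average across the phases, as described above.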
Team Structure
Team structure addresses the issue of organization of the individual project teams. There are several
possible ways in which the different project teams can be organized. There are primarily three
formal team structures: chief programmer, ego-less or democratic, and the mixed (hierarchical) team
organization, although several other variations to these structures are possible. Problems of different
complexities and sizes often need different team structures for their effective solution.
The democratic (ego-less) structure allows input from all members, which can lead to better decisions on various
problems. This suggests that this structure is well suited for long-term, research-type projects that do not
have time constraints.
The chief programmer is essential for all major technical decisions of the project.
He does most of the designs, and he assigns coding of the different part of the design to the
programmers.
The backup programmer helps the chief programmer make technical decisions, and takes over as chief
programmer if the chief programmer falls sick or leaves.
This structure considerably reduces interpersonal communication. The communication paths, as shown
in fig:
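For example, a democratic team of n members has up to n(n-1)/2 communication paths (10 paths for a 5-member team), whereas a chief programmer team of the same size has only n-1 paths, since members communicate mainly through the chief programmer.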
The mixed (hierarchical) team organization consists of a project leader who has a group of senior programmers under him, while under every
senior programmer is a group of junior programmers.
The group of a senior programmer and his junior programmers behaves like an ego-less team, but
communication among different groups occurs only through the senior programmers of the groups.
Such a team has fewer communication paths than a democratic team but more paths compared to a
chief programmer team.
It limits the number of communication paths and still allows for the needed
communication.
It can be expanded over multiple levels.
It is well suited for the development of the hierarchical software products.
Large software projects may have several levels.
Limitations of hierarchical team organization :
The Chief programmer : It is the person who is actively involved in the planning,
specification and design process and ideally in the implementation process as well.
The project assistant : It is the closest technical co-worker of the chief programmer.
The project secretary : It relieves the chief programmer and all other programmers of
administrative tasks.
Specialists : These people select the implementation language, implement individual
system components, employ software tools, and carry out specialized tasks.
Advantages of Chief-programmer team organization :
Centralized decision-making
Reduced communication paths
Small teams are more productive than large teams
The chief programmer is directly involved in system development and can exercise
better control.
Disadvantages of Chief-programmer team organization :
Coordination
Final decisions, when consensus cannot be reached.
Advantages of Democratic Team Organization :
Risk management:
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem
that could cause some loss or threaten the progress of the project, but which has not
happened yet.
These potential issues might harm the cost, schedule, or technical success of the project,
the quality of our software product, or project team morale.
Risk Management is the process of identifying, addressing, and eliminating these
problems before they can damage the project.
We need to differentiate risks, as potential issues, from the current problems of the
project.
Different methods are required to address these two kinds of issues.
For example, a staff shortage, because we have not been able to recruit people with the
right technical skills, is a current problem, but the threat of our technical people being
hired away by the competition is a risk.
Risk Management
A software project can be affected by a large variety of risks. In order to be able to
systematically identify the significant risks which might affect a software project, it is
essential to classify risks into different classes. The project manager can then check
which risks from each class are relevant to the project.
There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel,
resource, and customer-related problems. A vital project risk is schedule slippage. Since
software is intangible, it is very difficult to monitor and control a software project. It is
very difficult to control something which cannot be seen. For any manufacturing
program, such as the manufacturing of cars, the project executive can see the
product taking shape.
3. Business risks: This type of risk includes the risks of building an excellent product that no
one needs, losing budgetary or personnel commitments, etc.
Risks can also be categorized as follows:
1. Known risks: Those risks that can be uncovered after careful assessment of the
project plan, the business and technical environment in which the project is being
developed, and other reliable data sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience
(e.g., past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely difficult to
identify in advance.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing
potential. For risk assessment, each risk should first be rated in two ways:
1. the likelihood of the risk becoming true (r), and
2. the consequence, i.e., the severity of the loss caused if the risk does become true (s).
Based on these two factors, the priority of each risk can be estimated as:
p = r * s
where p is the priority with which the risk must be controlled, r is the probability of the
risk becoming true, and s is the severity of loss caused due to the risk becoming true. If
all identified risks are prioritized, then the most likely and most damaging risks can be controlled
first, and more comprehensive risk abatement methods can be designed for these risks.
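A small sketch of this prioritization (the risks and their ratings below are made up for illustration):

#include <bits/stdc++.h>
using namespace std;

struct Risk {
    string description;
    double probability;  // r: likelihood of the risk becoming true (0..1)
    double severity;     // s: severity of loss if it becomes true (e.g., 1..10 scale)
    double priority() const { return probability * severity; }  // p = r * s
};

int main() {
    // Hypothetical risks for illustration.
    vector<Risk> risks = {
        {"Key developer leaves",    0.3, 9},
        {"Schedule slippage",       0.6, 6},
        {"Third-party API changes", 0.2, 4},
    };

    // Handle the most likely and most damaging risks first.
    sort(risks.begin(), risks.end(),
         [](const Risk& a, const Risk& b) { return a.priority() > b.priority(); });

    for (const Risk& r : risks)
        cout << r.description << " : p = " << r.priority() << endl;
    return 0;
}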
1. Risk Identification: The project manager needs to anticipate the risks in the project
as early as possible so that the impact of the risks can be reduced by making effective risk
management plans.
A project can be affected by a large variety of risks. To identify the significant risks that
might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that
are used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used
to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements
and the process of managing the requirements changes.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.
2. Risk Analysis: During the risk analysis process, you have to consider every identified
risk and make a judgment about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your own judgment and experience of
previous projects and the problems that arose in them.
It is not possible to make an exact numerical estimate of the probability and
seriousness of each risk. Instead, you should assign the risk to one of several bands:
1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be assessed as catastrophic (threatens the survival of the
project), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.
Risk Control
It is the process of managing risks to achieve the desired outcomes. After all the identified
risks of a project are assessed, plans must be made to contain the most harmful
and the most likely risks. Different risks need different containment methods. In fact,
most risks need the ingenuity of the project manager in tackling them.
1. Avoid the risk: This may take several forms, such as discussing with the client to change
the requirements to decrease the scope of the work, or giving incentives to the engineers
to avoid the risk of staff turnover, etc.
2. Transfer the risk: This method involves getting the risky component developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to a risk. For
instance, if there is a risk that some key personnel might leave, new recruitment can be
planned.
Risk Leverage: To choose between the various methods of handling a risk, the project
manager must consider the cost of handling the risk and the corresponding reduction
in risk. For this, the risk leverage of the various risks can be computed.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
(cost of reduction)
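For example (the figures are illustrative), if the risk exposure before reduction is 500,000, the exposure after reduction is 100,000, and the cost of the reduction measure is 80,000, then the risk leverage is (500,000 - 100,000) / 80,000 = 5; the larger the leverage, the more worthwhile the reduction measure.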
1. Risk planning: The risk planning process considers each of the key risks that have
been identified and develops ways to manage these risks.
For each of the risks, you have to think of the actions that you may take to minimize
the disruption to the project if the problem identified in the risk occurs.
You should also think about the information that you might need to collect while monitoring the
project so that problems can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies on
the judgment and experience of the project manager.
Configuration management:
Whenever software is built, there is always scope for improvement, and those
improvements bring changes into the picture. Changes may be required to modify or
update an existing solution or to create a new solution for a problem.
Requirements keep changing daily, so we need to keep
upgrading our systems based on the current requirements and needs to meet the
desired outputs. Changes should be analyzed before they are made to the
existing system, recorded before they are implemented, reported so that the details
before and after the change are known, and controlled in a manner that will improve quality and
reduce errors. This is where the need for System Configuration Management
comes in.
System Configuration Management (SCM) is a set of activities that controls change
by identifying the work products that are likely to change, establishing
relationships among them, defining mechanisms for
managing different versions of these work products, controlling the changes being made to the
current system, and auditing and reporting on the changes made. It
is essential to control the changes because, if the changes are not
tracked properly, they may end up undermining well-running
software. Thus, SCM is a fundamental part of all project
management activities.
Processes involved in SCM –
Configuration management provides a disciplined environment for smooth
control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from
products that compose baselines at given points in time (a baseline is a set
of mutually consistent Configuration Items, which has been formally
reviewed and agreed upon, and serves as the basis of further development).
Establishing relationships among the items, and creating a mechanism to manage
multiple levels of control and a procedure for the change management system.
2. Version control – Creating versions/specifications of the existing product to
build new products with the help of the SCM system. A description of versions is
given below:
Attributes of WebApps :
Network Intensiveness
Concurrency
Unpredictable load
Performance
Availability
Data driven
Content Sensitive
Continuous evolution
Immediacy
Security
Aesthetic
Network intensiveness.
A WebApp resides on a network and must serve the needs of a diverse community
of clients.
The network may enable worldwide access and communication (i.e., the Internet)
or more limited access and communication
(e.g., a corporate intranet).
Unpredictable load :
The number of users of the WebApp may vary by orders of magnitude from day to
day. One hundred users may show up on Monday; 10,000 may use the system on
Thursday.
Performance :
If a WebApp user must wait too long (for access, for server side processing, for
client-side formatting and display), he or she may decide to go elsewhere.
Availability :
Although expectation of 100 percent availability is unreasonable, users of popular
WebApps often demand access on a 24/7/365 basis.
Data driven :
The primary function of many WebApps is to use hypermedia to present text,
graphics, audio, and video content to the end user.
In addition, WebApps are commonly used to access information that exists on
databases that are not an integral part of the Web-based environment (e.g., e-
commerce or financial applications).
Content sensitive:
The quality and artistic nature of content remains an important
determinant of the quality of a WebApp.
Continuous evolution:
Unlike conventional application software that evolves over a series of planned,
chronologically spaced releases, Web applications evolve continuously.
It is not unusual for some WebApps (specifically, their content) to be updated on a
minute-by-minute schedule or for content to be independently computed for each
request.
Immediacy:
Although immediacy—the compelling (forceful) need to get software to market
quickly—is a characteristic of many application domains,
WebApps often exhibit a time-to-market that can be a matter of a few days or
weeks.
Security:
Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In
order to protect sensitive content and provide secure mode of data transmission,
strong security measures must be implemented.
Myth 2 :
The addition of the latest hardware will improve software development.
Fact:
The role of the latest hardware in standard software development is not very high;
Computer-Aided Software Engineering (CASE) tools are more important than
hardware for producing quality and productivity. Without the right process, additional
hardware resources are simply misused.
Myth 3 :
Managers think that adding more people and programmers to software
development can help meet project deadlines (if the project is lagging behind).
Fact :
Software development is not a production-line process; adding people at a late
stage can reduce the time spent on productive development, because the
newcomers take up the existing developers' time to learn the definitions and
understanding of the project. However, planned and well-organized additions
can help complete the project.
(ii) Customer Myths :
The customer may be the direct user of the software, the technical team,
the marketing/sales department, or another company. Customers often hold myths
that lead to false expectations, which in turn create dissatisfaction with the
developer.
Myth 1 :
A general statement of intent is enough to start writing plans (software
development), and the details of the objectives can be filled in over time.
Fact:
A formal and detailed description of the information domain, functions, behaviour,
performance, interfaces, design constraints, and validation criteria is essential.
These can be determined only through complete communication between the customer and
the developer.
Myth 2 :
Project requirements continually change, but change can be easily accommodated
because software is flexible.
Fact :
Changes can be made at any stage of software development, but the cost of
making those changes grows steeply in the later stages of
development. A detailed analysis of user needs should therefore be done to
minimize change requests. The figure shows the cost of change with
respect to the stage of development.
(iii) Practitioner’s Myths :
Myth 1 :
Practitioners believe that their work is completed once the program is written and
they have got it to work.
Fact:
In reality, about 60-80% of the total effort is expended after the software is
first delivered to the customer, i.e., in the maintenance phase.
Myth 2 :
There is no way to assess the quality of a system until it is up and running.
Fact:
Systematic (formal technical) review is an effective software quality
verification method. These reviews act as quality filters and can catch defects
earlier and more cheaply than testing.
Myth 3 :
A working system is the only work product that a successful project needs to
deliver.
Fact:
A working system is not enough; the right documentation (manuals, brochures, and
booklets) is also required to provide guidance and software support.
Myth 4 :
Software engineering will make us build voluminous and unnecessary
documentation and will invariably slow us down.
Fact :
Software engineering is not about producing documents; it is about producing
quality. Better quality leads to reduced rework, and reduced rework results in faster
product delivery.
We can use their feedback to continuously modify the prototype until the customer is
satisfied. Hence, the prototype helps the client to visualize the proposed system
and increases the understanding of the requirements. When developers and users are
not sure about some of the elements, a prototype may help both parties to take a
final decision.
Some projects are developed for the general market. In such cases, the prototype
should be shown to a representative sample of the population of potential
purchasers. Even though a person who tries out a prototype may not buy the final
system, their feedback may allow us to make the product more attractive to others.
The prototype should be built quickly and at a relatively low cost. Hence it will always
have limitations and would not be acceptable in the final system. This is an optional
activity.
(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships
between them. The graphical view may help to find incorrect, inconsistent, missing, and
superfluous requirements. Such models include the Data Flow diagram, Entity-
Relationship diagram, Data Dictionaries, State-transition diagrams, etc.
(iv) Finalise the requirements: After modeling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analyzed.
Elicitation and analysis activities have provided better insight into the system. Now we
finalize the analyzed requirements, and the next step is to document these requirements
in a prescribed format.
Software requirements specification:
The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS could be written by the client of a system.
Second, the SRS could be written by a developer of the system. The two methods create
entirely various situations and establish different purposes for the document altogether.
In the first case, the SRS is used to define the needs and expectations of the users. In the second
case, the SRS is written for various purposes and serves as a contract document between
the customer and the developer.
Traceability:
Traceability comprises two words, i.e., trace and ability. Trace means to find someone
or something, and ability means a skill, capability, or talent to do something.
Therefore, traceability simply means the ability to trace the requirement, to provide better
quality, to find any risk, to keep and verify the record of history and production of an
item or product by the means of documented identification. Due to this, it’s easy for the
suppliers to reduce any risk or any issue if found and to improve the quality of the item or
product. So, it’s important to have traceability rather than no traceability. Using
traceability, finding requirements, and any risk to improve the quality of the product
becomes very easy.
There are various types of traceability given below:
1. Source traceability –
These are basically the links from requirement to stakeholders who propose these
requirements.
2. Requirements traceability –
These are the links between dependent requirements.
3. Design traceability –
These are the links from requirement to design.
A traceability matrix is generally used to represent traceability information. For
small systems, traceability is usually recorded in a traceability matrix.
If one requirement is dependent upon another requirement, then 'D' is entered in the
corresponding row-column cell, and if there is a weak relationship between the requirements,
then the corresponding entry is denoted by 'R'. For example:
Requirement ID:  A  B  C  D  E  F
A:  D, R
B:  D
C:  R
D:  D, R
F:  R, D
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(2). Definition of the responses of the software to all realizable classes of input data in
all available categories of situations.
Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and
definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual
requirements described in it conflict. There are three types of possible conflict in the
SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but
in another as textual.
(b) One condition may state that all lights shall be green while another states that all
lights shall be blue.
(2). There may be a logical or temporal conflict between two specified actions.
For example,
(a) One requirement may determine that the program will add two inputs, and another
may determine that the program will multiply them.
(b) One condition may state that "A" must always follow "B," while another requires that "A"
and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different
terms for that object. For example, a program's request for user input may be called a
"prompt" in one requirement's and a "cue" in another. The use of standard terminology
and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. If a term used in the
SRS has multiple possible meanings, the SRS should define the intended meaning so that the
document is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and
stability if each requirement in it has an identifier to indicate either the significance or
stability of that particular requirement.
Typically, all requirements are not equally important. Some requirements may be
essential, especially for life-critical applications, while others may be desirable. Each
element should be identified to make these differences clear and explicit. Another way
to rank requirements is to distinguish classes of items as essential, conditional, and
optional.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked by a
cost-effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear
and if it facilitates the referencing of each condition in future development or
enhancement documentation.
2. Forward Traceability: This depends upon each element in the SRS having a unique
name or reference number.
The forward traceability of the SRS is especially crucial when the software product enters
the operation and maintenance phase. As code and design documents are modified, it is
necessary to be able to ascertain the complete set of requirements that may be
affected by those modifications.
10. Testability: An SRS should be written in such a method that it is simple to generate
test cases and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her explicit
domain but might not be trained in computer science. Hence, the use of formal
notations and symbols should be avoided as far as possible. The language
should be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly. Whereas, for a feasibility study, less detail can
be provided. Hence, the level of abstraction varies according to the objective of the SRS.
Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Verbose and irrelevant descriptions decrease readability and
also increase error possibilities.
Structured: It should be well-structured. A well-structured document is simple to
understand and modify. In practice, the SRS document undergoes several revisions to
cope with the user requirements. Often, user requirements evolve over a period of
time. Therefore, to make the modifications to the SRS document easy, it is vital to make
the report well-structured.
Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the external
behavior of the system and not discuss the implementation issues. The SRS report
should view the system to be developed as a black box and should define the externally
visible behavior of the system. For this reason, the SRS report is also known as the black-
box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: It should characterize acceptable responses to
unwanted events. These are called the system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should
be correct. This means that it should be possible to decide whether or not requirements
have been met in an implementation.
This recommended practice is aimed at specifying requirements of software to be developed but also can be
applied to assist in the selection of in-house and commercial software products. However, application to
already-developed software could be counterproductive.
When software is embedded in some larger system, such as medical equipment, then issues beyond those
identified in this recommended practice may have to be addressed.
This recommended practice describes the process of creating a product and the content of the product. The
product is an SRS. This recommended practice can be used to create such an SRS directly or can be used as a
model for a more specific standard.
This recommended practice does not identify any specific method, nomenclature, or tool for preparing an SRS.
Document History
IEEE 830
June 25, 1998
Recommended Practice for Software Requirements Specifications
This is a recommended practice for writing software requirements specifications. It describes the content and qualities of a good
software requirements specification (SRS) and presents several sample...
IEEE 830
January 1, 1993
Recommended Practice for Software Requirements Specifications
This is a recommended practice for writing software requirements specifications. It describes the content and qualities of a good
software requirements specification (SRS) and presents several sample...
IEEE 830
January 1, 1984
GUIDE TO SOFTWARE REQUIREMENTS SPECIFICATIONS; (IEEE COMPUTER SOCIETY DOCUMENT)
Decision:
Once the 'new member' option is chosen, the software system asks for
details concerning the member, such as the member's name, address,
phone number, etc.
Action:
If correct information is entered, then a membership record for the member is created
and a bill is printed for the annual membership charge and the security
deposit payable.
Renewal Option:
Decision:
If the 'renewal' option is chosen, the LMS asks for the member's name
and his membership number to check whether or not he is a valid member.
Action:
If the membership is valid, then the membership expiry date is updated and
the annual membership bill is printed; otherwise, an error message
is displayed.
Cancel Membership Option:
Decision:
If the 'cancel membership' option is chosen, then the software system
asks for the member's name and his membership number.
Action:
The membership is cancelled, a cheque for the balance amount due to the
member is printed, and finally the membership record is deleted from the
database.
S.No. | Decision Table | Decision Tree
2. | We can derive a decision table from a decision tree. | We cannot derive a decision tree from a decision table.
5. | It is used when there are a small number of properties. | It is used when there are a larger number of properties.
Some may argue about whether all these steps usually take place, but they
must, at least to some extent, for usable software that is intended for
longer-term use. Some of the earlier steps, particularly the
design stages, may bring a sense of uncertainty in terms of
unforeseen problems later in the process. The reasons could be:
1. Lack of grasp of the problem as a whole
2. Dispersed engineering teams have different perceptions of the end-product
3. Lack of domain knowledge
4. Inconsistent requirements
5. Yet-to-be discovered areas of expertise
Z NOTATION
Z notation is a model-based, abstract formal specification technique
most compatible with object-oriented programming. Z defines system
models in the form of states where each state consists of variables,
values and operations that change from one state to another.
As opposed to the usability of B, which is involved in full development
life-cycle, Z formalises a specification of the system at the design
level.
EVENT-B
Event-B is an advanced implementation of the B method. Using this
approach, formal software specification is the process of creating a
discrete model that represents a specific state of the system. The
state is an abstract representation of constants, variables and
transitions (events). Part of an event is the guard that determines the
condition for the transition to another state to take place.
Constructed models (blueprints) are a further subject of refinement,
proof obligation and decomposition for the correctness of verification.
Evaluation
Before deciding on the use of formal methods, each architect must list
the pros and cons against resources available, as well as the system’s
needs.
BENEFITS
1. Significantly improves reliability at the design level decreasing the cost of testing
2. Improves system cohesion, reliability, and safety-critical components by fault detection in early
phases of the development cycle
3. Validated models present deterministic system behaviour
CRITICISMS
1. Requires qualified professionals competent in either mathematics (mathematical expressions, set
theory and predicate logic) or software engineering. Systems once modelled may be difficult to
implement by unaccustomed programmers. “People are quite reluctant to use such methods
mostly because it necessitates modifying the development process in a significant fashion,”
Abrial, the author of the B method, once said.
2. Design proof-validation may introduce additional effort/cost to overall project estimation.
Axiomatic Specification:
In the Axiomatic Specification of a system, first-order logic is used to write the pre-and post-
conditions to specify the operations of the system in the form of axioms. The pre-conditions
capture the conditions that must be satisfied before an operation can successfully be invoked.
In essence, the pre-conditions capture the requirements on the input parameters of a function.
The post-conditions are the conditions that must be satisfied when a function completes;
post-conditions are essentially constraints on the results produced for the function execution
to be considered successful.
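A minimal sketch (an assumed illustration, not taken from the textbook) of an axiomatic-style specification written in Java: the pre- and post-conditions of an integer square root function are recorded as comments and checked with assertions (assertions are only evaluated when the JVM is run with the -ea flag).
Java
class AxiomaticExample {
    // Pre-condition:  x >= 0
    // Post-condition: result * result <= x  and  (result + 1) * (result + 1) > x
    static int isqrt(int x) {
        assert x >= 0 : "pre-condition violated: x must be non-negative";
        int r = 0;
        while ((r + 1) * (r + 1) <= x) {
            r++;
        }
        assert r * r <= x && (r + 1) * (r + 1) > x : "post-condition violated";
        return r;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(10)); // prints 3
    }
}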
Algebraic Specification:
In the Algebraic Specification technique, an object class or type is specified in terms of
relationships existing between the operations defined on that type. It was first brought into
prominence by Guttag (1980-1985) in the specification of abstract data types. Various
notations of algebraic specifications have evolved, including those based on OBJ and Larch
languages.
Essentially, algebraic specifications define a system as a heterogeneous algebra. A
heterogeneous algebra is a collection of different sets on which several operations are
defined. Traditional algebras are homogeneous. A
homogeneous algebra consists of a single set and several operations defined in this set such
as – { +, -, *, / }.
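As a small illustration (an assumed example in the spirit of the stack ADT, not copied from the textbook), an algebraic specification of a stack can relate its operations create, push, pop, top and empty through axioms such as:
empty(create()) = true
empty(push(s, x)) = false
pop(push(s, x)) = s
top(push(s, x)) = x
Here s stands for any stack and x for any item; the axioms define the behaviour of pop, top and empty purely in terms of their relationships with push and create.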
UNIT – III
Good Software Design, Cohesion and coupling, Control Hierarchy: Layering, Control
Abstraction, Depth and width, Fan-out, Fan-in, Software design approaches, object
oriented vs. function oriented design. Overview of SA/SD methodology, structured
analysis, Data flow diagram, Extending DFD technique to real life systems, Basic Object
oriented concepts, UML Diagrams, Structured design, Detailed design, Design review,
Characteristics of a good user interface, User Guidance and Online Help, Mode-based vs
Mode-less Interface, Types of user interfaces, Component-based GUI development, User
interface design methodology: GUI design methodology.
Software Design
Good Software Design :
Software design is a mechanism to transform user requirements into some suitable form, which helps
the programmer in software coding and implementation. It deals with representing the client's
requirement, as described in the SRS (Software Requirement Specification) document, into a form that is easily
implementable using a programming language.
The software design phase is the first step in the SDLC (Software Development Life Cycle) that moves the
concentration from the problem domain to the solution domain. In software design, we consider the
system to be a set of components or modules with clearly defined behaviors & boundaries.
Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data
coupling, the components are independent of each other and communicate through data.
Module communications don’t contain tramp data. Example-customer billing system.
Stamp Coupling In stamp coupling, the complete data structure is passed from one
module to another module. Therefore, it involves tramp data. It may be necessary due to
efficiency factors- this choice was made by the insightful designer, not a lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they
are said to be control coupled. It can be bad if parameters indicate completely different
behavior and good if parameters allow factoring and reuse of functionality. Example- sort
function that takes comparison function as an argument.
External Coupling: In external coupling, the modules depend on other modules, external
to the software being developed or to a particular type of hardware. Ex- protocol, external
file, device format, etc.
Common Coupling: The modules have shared data such as global data structures. The
changes in global data mean tracing back to all modules which access that data to
evaluate the effect of the change. So it has got disadvantages like difficulty in reusing
modules, reduced ability to control data accesses, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of another
module, or control flow is passed from one module to the other module. This is the worst
form of coupling and should be avoided.
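A minimal Java sketch (an assumed illustration, not from the textbook) contrasting two of the coupling types described above: data coupling, where only the required data items are passed, and control coupling, where a flag passed by the caller selects the behaviour of the called module.
Java
class CouplingExample {
    // Data coupling: the caller passes only the data needed for the computation.
    static double computeBillAmount(double units, double ratePerUnit) {
        return units * ratePerUnit;
    }

    // Control coupling: the 'ascending' flag controls how this module behaves.
    static void sort(int[] values, boolean ascending) {
        java.util.Arrays.sort(values);
        if (!ascending) {                       // reverse the sorted array in place
            for (int i = 0; i < values.length / 2; i++) {
                int tmp = values[i];
                values[i] = values[values.length - 1 - i];
                values[values.length - 1 - i] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(computeBillAmount(120, 5.5));   // data-coupled call
        int[] v = {3, 1, 2};
        sort(v, false);                                     // control-coupled call
        System.out.println(java.util.Arrays.toString(v));
    }
}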
Cohesion: Cohesion is a measure of the degree to which the elements of the module are
functionally related. It is the degree to which all elements directed towards performing a single
task are contained in the component. Basically, cohesion is the internal glue that keeps the
module together. A good software design will have high cohesion.
Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is contained in
the component. A functional cohesion performs the task and functions. It is an ideal
situation.
Sequential Cohesion: An element outputs some data that becomes the input for other
element, i.e., data flow between the parts. It occurs naturally in functional programming
languages.
Communicational Cohesion: Two elements operate on the same input data or contribute
towards the same output data. Example- update record in the database and send it to the
printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution.
Actions are still weakly connected and unlikely to be reusable. Ex- calculate student GPA,
print student record, calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by the timing of their execution. In a module
with temporal cohesion, all the tasks must be executed in the same time span.
A typical example is a module that contains the code for initializing all the parts of the
system: many different activities occur, all at the same time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A
component reads inputs from tape, disk, and network. All the code for these functions is in
the same component. Operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are not related(unrelated). The elements have no
conceptual relationship other than location in source code. It is accidental and the worst
form of cohesion. Ex- print next line and reverse the characters of a string in a single
component.
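As a brief Java sketch (an assumed example, not from the textbook, echoing the GPA and print/reverse examples above), the first method shows functional cohesion, since every statement contributes to one computation, while the second shows coincidental cohesion, since its operations are unrelated.
Java
class CohesionExample {
    // Functional cohesion: every element contributes to computing the GPA.
    static double computeGpa(double[] gradePoints) {
        double sum = 0;
        for (double g : gradePoints) {
            sum += g;
        }
        return sum / gradePoints.length;
    }

    // Coincidental cohesion: the two operations share a module but nothing else.
    static void miscellaneous(String line) {
        System.out.println(line);                               // print next line
        System.out.println(new StringBuilder(line).reverse());  // reverse a string
    }

    public static void main(String[] args) {
        System.out.println(computeGpa(new double[]{8.0, 9.0, 10.0}));
        miscellaneous("hello");
    }
}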
Control Hierarchy:
Control hierarchy, also called program structure, represents the organization of program components (modules) and
implies a hierarchy of control. It does not represent procedural aspects of software such as sequence of processes,
occurrence or order of decisions, or repetition of operations; nor is it necessarily applicable to all architectural styles.
Different notations are used to represent control hierarchy for those architectural styles that are amenable to this
representation. The most common is the treelike diagram that represents hierarchical control for call and return
architectures. However, other notations, such as Warnier-Orr and Jackson diagrams may also be used with equal
effectiveness. In order to facilitate later discussions of structure, we define a few simple measures and terms. Referring
to figure , depth and width provide an indication of the number of levels of control and overall span of control,
respectively. Fan-out is a measure of the number of modules that are directly controlled by another module. Fan-in
indicates how many modules directly control a given module.
The control relationship among modules is expressed in the following way: A module that controls another module is
said to be superordinate to it, and conversely, a module controlled by another is said to be subordinate to the controller
. For example, referring to figure, module M is superordinate to modules a, b, and c. Module h is subordinate to
module e and is ultimately subordinate to module M. Width-oriented relationships (e.g., between modules d and e)
although possible to express in practice, need not be defined with explicit terminology.
The control hierarchy also represents two subtly different characteristics of the software architecture: visibility and
connectivity. Visibility indicates the set of program components that may be invoked or used as data by a given
component, even when this is accomplished indirectly. For example, a module in an object-oriented system may have
access to a wide array of data objects that it has inherited, but makes use of only a small number of these data objects.
All of the objects are visible to the module. Connectivity indicates the set of components that are directly invoked or
used as data by a given component. For example, a module that directly causes another module to begin execution is
connected to it.
Structural Partitioning
If the architectural style of a system is hierarchical, the program structure can be partitioned both horizontally and
vertically. Referring to figure (a), horizontal partitioning defines separate branches of the modular hierarchy for each
major program function. Control modules, represented in a darker shade, are used to coordinate communication
between and execution of the functions. The simplest approach to horizontal partitioning defines three partitions—
input, data transformation (often called processing) and output. Partitioning the architecture horizontally provides a
number of distinct benefits:
• software that is easier to test
• software that is easier to maintain
• propagation of fewer side effects
• software that is easier to extend
Because major functions are decoupled from one another, change tends to be less complex and extensions to the
system (a common occurrence) tend to be easier to accomplish without side effects. On the negative side, horizontal
partitioning often causes more data to be passed across module interfaces and can complicate the overall control of
program flow (if processing requires rapid movement from one function to another).
Vertical partitioning (Figure (b)), often called factoring, suggests that control (decision making) and work should be
distributed top-down in the program structure. Toplevel modules should perform control functions and do little actual
processing work. Modules that reside low in the structure should be the workers, performing all input, computation,
and output tasks.
The nature of change in program structures justifies the need for vertical partitioning. Referring to figure(b), it can be
seen that a change in a control module (high in the structure) will have a higher probability of propagating side effects
to modules that are subordinate to it. A change to a worker module, given its low level in the structure, is less likely to
cause the propagation of side effects. In general, changes to computer programs revolve around changes to input,
computation or transformation, and output. The overall control structure of the program (i.e., its basic behavior) is far
less likely to change. For this reason, vertically partitioned structures are less likely to be susceptible to side effects
when changes are made and will therefore be more maintainable, which is a key quality factor.
Layering:
Software engineering is fully a layered technology; to develop software, we need to go from
one layer to another. All the layers are connected, and each layer demands the fulfillment of
the previous layer.
Our aim is to understand and implement Control Abstraction in Java. Before jumping right into
control abstraction, let us understand what is abstraction.
Abstraction: To put it in simple terms, abstraction is anything but displaying only the
essential features of a system to a user without getting into its details. For example, a car and
its functions are described to the buyer and the driver also learns how to drive using the
steering wheel and the accelerators but the inside mechanisms of the engine are not
displayed to the buyer.
In abstraction, there are two types: Data abstraction and Control abstraction.
Data abstraction, in short, means creating complex data types but exposing only the
essential operations.
Control Abstraction: This refers to the software part of abstraction wherein the program is
simplified and unnecessary execution details are removed.
Here are the main points about control abstraction:
Control Abstraction follows the basic rule of DRY code which means Don’t Repeat Yourself
and using functions in a program is the best example of control abstraction.
Control Abstraction can be used to build new functionalities and combines control
statements into a single unit.
It is a fundamental feature of all higher-level languages and not just java.
Higher-order functions, closures, and lambdas are a few preconditions for control
abstraction.
It highlights how a particular functionality can be achieved rather than describing
each detail.
Forms the main unit of structured programming.
A simple algorithm of control flow:
The resource is obtained first
Then, the block is executed.
As soon as control leaves the block, the resource is closed
Example:
Java
// Abstract class
abstract class Vehicle {
    // Abstract method: each subclass supplies its own sound
    abstract void VehicleSound();
    // Regular (concrete) method
    void honk() { System.out.println("honk honk"); }
}
class Car extends Vehicle {
    void VehicleSound() { System.out.println("kon kon"); }
}
class Main {
    public static void main(String[] args) {
        Vehicle myCar = new Car();
        myCar.VehicleSound();
        myCar.honk();
    }
}
Output
kon kon
honk honk
The greatest advantage of control abstraction is that it makes code a lot cleaner and also
more secure.
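The resource-handling flow described above (obtain the resource, execute the block, close the resource when control leaves the block) is the pattern that Java's try-with-resources statement abstracts away. A minimal sketch, assuming a file named data.txt exists in the working directory:
Java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class ReadFirstLine {
    public static void main(String[] args) throws IOException {
        // The reader (resource) is obtained here, the block is executed,
        // and the reader is closed automatically when control leaves the block.
        try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
            System.out.println(reader.readLine());
        }
    }
}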
In Breadth Testing the full functionality of a product(all the features) are tested but the features are not
tested in detail.
There are some scenarios where Breadth testing takes precedence and others where Depth testing takes
precedence. In practice though, a combination of both is used. These techniques help prioritize tests and in
times of schedule crunches, decide on the optimal use of time to pick the best areas to concentrate on.
1. Integration Testing: during Top-Down integration, developers can decide whether to use a breadth first or
depth first strategy.
2. Sanity testing: We use breadth first usually to ensure the full functionality is working.
3. Functional/System Test: A combination of both – breadth and depth is used i.e. the full functionality and in
depth testing of features is used.
4. Automation: to decide whether we want to automate the end to end or a particular feature in depth or detail.
5. Test coverage metrics: How many features have been covered vs how much in depth have they been
covered.
6. Regression: Breadth testing first followed by Depth testing of the changed functionality.
7. Test Data: Breadth refers to the variation in test data values and categories whereas Depth refers to the
volume or size of databases.
Fan-out,Fan-in:-
The fan-out of a module is the number of its immediately subordinate modules. As a rule
of thumb, the optimum fan-out is seven, plus or minus 2. This rule of thumb is based on
the psychological study conducted by George Miller during which he determined that the
human mind has difficulty dealing with more than seven things at once.
The fan-in of a module is the number of its immediately superordinate (i.e., parent or boss)
modules. The designer should strive for high fan-in at the lower levels of the
hierarchy. This simply means that normally there are common low-level functions that
exist that should be identified and made into common modules to reduce redundant code
and increase maintainability. High fan-in can also increase portability if, for example, all
I/O handling is done in common modules.
Object-Oriented Considerations
Strengths of Fan-In
High fan-in reduces redundancy in coding. It also makes maintenance easier. Modules
developed for fan-in must have good cohesion, preferably functional. Each interface to a
fan-in module must have the same number and types of parameters.
The designer should strive for software structure with moderate fan-out in the upper levels
of the hierarchy and high fan-in in the lower levels of the hierarchy. Some examples of
common modules which result in high fan-in are: I/O modules, edit modules, modules
simulating a high level command (such as calculating the number of days between two
dates).
Use factoring to solve the problem of excessive fan-out. Create an intermediate module to
factor out modules with strong cohesion and loose coupling.
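As an illustration (an assumed example, not from the textbook), the date-difference calculation mentioned above can be factored into a single shared module; several higher-level modules then call it, giving it high fan-in and removing redundant code.
Java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// A common low-level module reused by several callers (high fan-in).
class DateUtil {
    static long daysBetween(LocalDate from, LocalDate to) {
        return ChronoUnit.DAYS.between(from, to);
    }
}

class BillingModule {
    static long overdueDays(LocalDate dueDate) {
        return DateUtil.daysBetween(dueDate, LocalDate.now());
    }
}

class MembershipModule {
    static long membershipAgeInDays(LocalDate joinedOn) {
        return DateUtil.daysBetween(joinedOn, LocalDate.now());
    }
}

class FanInDemo {
    public static void main(String[] args) {
        System.out.println(BillingModule.overdueDays(LocalDate.of(2023, 1, 1)));
        System.out.println(MembershipModule.membershipAgeInDays(LocalDate.of(2022, 6, 1)));
    }
}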
There are two fundamentally different approaches to software design that are in use today—
function-oriented design, and object-oriented design. Though these two design approaches are
radically different, they are complementary rather than competing techniques. The object-oriented
approach is a relatively newer technology and is still evolving. For the development of large
programs, the object-oriented approach is becoming increasingly popular due to certain
advantages that it offers. On the other hand, function-oriented design is a mature technology
and has a large following. The salient features of these two approaches are discussed below.
Function-oriented Design:-
The following are the salient features of the function-oriented design approach: Top-down decomposition: A
system, to start with, is viewed as a black box that provides certain services (also known as high-level functions)
to the users of the system. In top-down decomposition, starting at a high-level view of the system, each high-level
function is successively refined into more detailed functions. For example, consider a function create-new-library-
member which essentially creates the record for a new member, assigns a unique membership number to him,
and prints a bill towards his membership charge. This high-level function may be refined into the following
subfunctions:
• assign-membership-number
• create-member-record
• print-bill
Each of these subfunctions may be split into more detailed subfunctions and so on. Centralised system state: The
system state can be defined as the values of certain data items that determine the response of the system to a user
action or external event. For example, the set of books (i.e. whether borrowed by different users or available for
issue) determines the state of a library automation system. Such data in procedural programs usually have global
scope and are shared by many modules. The system state is centralised and shared among different functions. For
example, in the library management system, several functions such as the following share data such as member-
records for reference and updation:
• create-new-member
• delete-member
• update-member-record
A large number of function-oriented design approaches have been proposed in the past. A few of
the well-established function-oriented design approaches are as follows:
• Structured design by Constantine and Yourdon [1979]
• Jackson’s structured design by Jackson [1975]
• Warnier-Orr methodology [1977, 1981]
• Step-wise refinement by Wirth [1971]
• Hatley and Pirbhai’s methodology [1987]
5.5.2 Object-oriented Design In the object-oriented design (OOD) approach, a system is viewed as being made up
of a collection of objects (i.e. entities). Each object is associated with a set of functions that are called its methods.
Each object contains its own data and is responsible for managing it. The data internal to an object cannot be
accessed directly by other objects; it can be accessed only through invocation of the methods of the object. The system state is
decentralised since there is no globally shared data in the system and data is stored in each object. For example, in
a library automation software, each library member may be a separate object with its own data and functions to
operate on the stored data. The methods defined for one object cannot directly refer to or change the data of other
objects. The object-oriented design paradigm makes extensive use of the principles of abstraction and
decomposition as explained below. Objects decompose a system into functionally independent modules. Objects
can also be considered as instances of abstract data types (ADTs). The ADT concept did not originate from the
object-oriented approach. In fact, ADT concept was extensively used in the ADA programming language
introduced in the 1970s. ADT is an important concept that forms an important pillar of object-orientation. Let us
now discuss the important concepts behind an ADT. There are, in fact, three important concepts associated with
an ADT: data abstraction, data structure, and data type. We discuss these in the following subsections.
Data abstraction: The principle of data abstraction implies that how data is exactly stored is abstracted away. This
means that any entity external to the object (that is, an instance of an ADT) would have no knowledge about how
data is exactly stored, organised, and manipulated inside the object. The entities external to the object can access
the data internal to an object only by calling certain well-defined methods supported by the object. Consider an
ADT such as a stack. The data of a stack object may internally be stored in an array, a linearly linked list, or a
bidirectional linked list. The external entities have no knowledge of this and can access data of a stack object only
through the supported operations such as push and pop.
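A minimal Java sketch (an assumed illustration, not from the textbook) of the stack ADT just described: the internal array is hidden, so external entities can operate on the data only through the supported operations push and pop.
Java
// The internal storage is private; callers can use only push and pop.
class Stack {
    private int[] items = new int[10];
    private int top = 0;

    void push(int value) {
        if (top == items.length) {                       // grow the hidden array when full
            items = java.util.Arrays.copyOf(items, items.length * 2);
        }
        items[top++] = value;
    }

    int pop() {
        if (top == 0) {
            throw new IllegalStateException("stack is empty");
        }
        return items[--top];
    }
}

class StackDemo {
    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        System.out.println(s.pop()); // prints 2
    }
}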
Data structure: A data structure is constructed from a collection of primitive data items. Just as a civil engineer
builds a large civil engineering structure using primitive building materials such as bricks, iron rods, and cement;
a programmer can construct a data structure as an organised collection of primitive data items such as integer,
floating point numbers, characters, etc.
Data type: A type is a programming language terminology that refers to anything that can be instantiated. For
example, int, float, char etc., are the basic data types supported by C programming language. Thus, we can say
that ADTs are user defined data types. In object-orientation, classes are ADTs. But, what is the advantage of
developing an application using ADTs? Let us examine the three main advantages of using ADTs in programs:
The data of objects are encapsulated within the methods. The encapsulation principle is also known as data
hiding. The encapsulation principle requires that data can be accessed and manipulated only through the methods
supported by the object and not directly. This localises the errors. The reason for this is as follows. No program
element is allowed to change a data item, except through invocation of one of the methods. So, any error can easily be
traced to the code segment changing the value. That is, the method that changes a data item, making it erroneous
can be easily identified.
An ADT-based design displays high cohesion and low coupling. Therefore, object- oriented designs are
highly modular.
Since the principle of abstraction is used, it makes the design solution easily understandable and helps to
manage complexity. Similar objects constitute a class. In other words, each object is a member of some class.
Classes may inherit features from a super class. Conceptually, objects communicate by message passing. Objects
have their own internal data. Thus an object may exist in different states depending on the values of the internal data.
In different states, an object may behave differently
1. System
2. Process
3. Technology
1. Analysis Phase:
Analysis Phase involves data flow diagram, data dictionary, state transition diagram, and
entity-relationship diagram.
1. Data Flow Diagram:
In the data flow diagram, the model describes how the data flows through the system. We
can incorporate the Boolean operators and & or link data flow when more than one data
flow may be input or output from a process.
For example, if we have to choose between two paths of a process we can add an
'or' operator, and if two data flows are necessary for a process we can add an 'and' operator. The
input of the process “check-order” needs the credit information and order information
whereas the output of the process would be a cash-order or a good-credit-order.
2. Data Dictionary:
The content that is not described in the DFD is described in the data dictionary. It defines
the data store and relevant meaning. A physical data dictionary for data elements that flow
between processes, between entities, and between processes and entities may be
included. This would also include descriptions of data elements that flow external to the
data stores.
A logical data dictionary may also be included for each such data element. All system
names, whether they are names of entities, types, relations, attributes, or services, should
be entered in the dictionary.
4. ER Diagram:
The ER diagram specifies the relationships between data stores. It is basically used in database
design and describes the relationships between different entities.
2. Design Phase:
Design Phase involves structure chart and pseudocode.
1. Structure Chart:
It is derived from the data flow diagram. A structure chart specifies how the DFD’s processes are
grouped into tasks and allocated to the CPU. The structured chart does not show the
working and internal structure of the processes or modules and does not show the
relationship between data or data-flows. Similar to other SASD tools, it is time and cost-
independent and there is no error-checking technique associated with this tool.
The modules of a structured chart are arranged arbitrarily and any process from a DFD
can be chosen as the central transform depending on the analysts’ own perception. The
structured chart is difficult to amend, verify, maintain, and check for completeness and
consistency.
2. Pseudo Code:
It describes the implementation of the system in a form close to the actual code. It is an informal way of programming that
doesn’t require any specific programming language or technology.
A Data Flow Diagram (DFD) is a traditional visual representation of the information flows within a
system. A neat and clear DFD can depict the right amount of the system requirement graphically. It can
be manual, automated, or a combination of both.
It shows how data enters and leaves the system, what changes the information, and where data is
stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a
communication tool between a system analyst and any person who plays a part in the system, and it acts as
a starting point for redesigning a system. The DFD is also called a data flow graph or bubble chart.
1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events; arrows
in a DFD represent flowing data. A DFD does not involve any order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress
that urge! A diamond-shaped box is used in flow charts to represent decision points with multiple exit
paths of which only one is taken. This implies an ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until the end of the
analysis.
Standard symbols for DFDs are derived from the electric circuit diagram analysis and are shown in fig:
Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data store.
Data Store: A set of parallel lines shows a place for the collection of data items. A data store indicates
that the data is stored which can be used at a later stage or by the other processes in a different order.
The data store can have an element or group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system inputs or sink of
system outputs.
0-level DFD
It is also known as the fundamental system model or context diagram. It represents the entire software
requirement as a single bubble with input and output data denoted by incoming and outgoing arrows.
Then the system is decomposed and described as a DFD with multiple bubbles. Parts of the system
represented by each of these bubbles are then decomposed and documented as more and more
detailed DFDs. This process may be repeated at as many levels as necessary until the program at hand is
well understood. It is essential to preserve the number of inputs and outputs between levels, this
concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y,
then the expanded DFD, that represents "A" should have exactly two external inputs and one external
output as shown in fig:
The Level-0 DFD, also called context diagram of the result management system is shown in fig. As the
bubbles are decomposed into less and less abstract bubbles, the corresponding data flow may also be
needed to be decomposed.
1-level DFD
In 1-level DFD, a context diagram is decomposed into multiple bubbles/processes. In this level, we
highlight the main objectives of the system and breakdown the high-level process of 0-level DFD into
subprocesses.
2-Level DFD
2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to project or record the
specific/necessary detail about the system's functioning.
Basic Object oriented concepts:
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The
state is distributed among the objects, and each object handles its state data. For example, in a Library
Automation Software, each library representative may be a separate object with its data and functions
to operate on these data. The methods defined for one object cannot directly refer to or change the data of other
objects. Objects have their internal data which represent their state. Similar objects create a class. In
other words, each object is a member of some class. Classes may inherit features from the superclass.
UML Diagrams:
Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed. It is
quite similar to blueprints used in other fields of engineering.
UML is not a programming language, it is rather a visual language. We use UML diagrams
to portray the behavior and structure of a system. UML helps software engineers,
businessmen and system architects with modelling, design and analysis. The Object
Management Group (OMG) adopted Unified Modelling Language as a standard in 1997. It has
been managed by OMG ever since. The International Organization for Standardization (ISO)
published UML as an approved standard in 2005. UML has been revised over the years and
is reviewed periodically.
Do we really need UML?
Complex applications need collaboration and planning from multiple teams and hence
require a clear and concise way to communicate amongst them.
Businessmen do not understand code. So UML becomes essential to communicate with
non programmers essential requirements, functionalities and processes of the system.
A lot of time is saved down the line when teams are able to visualize processes, user
interactions and static structure of the system.
UML is linked with object oriented design and analysis. UML makes the use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as:
1. Structural Diagrams – Capture static aspects or structure of a system. Structural
Diagrams include: Component Diagrams, Object Diagrams, Class Diagrams and
Deployment Diagrams.
2. Behavior Diagrams – Capture dynamic aspects or behavior of the system. Behavior
diagrams include: Use Case Diagrams, State Diagrams, Activity Diagrams and Interaction
Diagrams.
The image below shows the hierarchy of diagrams according to UML 2.2
1. Class – A class defines the blue print i.e. structure and functions of an object.
2. Objects – Objects help us to decompose large systems and help us to modularize our
system. Modularity helps to divide our system into understandable components so that we
can build our system piece by piece. An object is the fundamental unit (building block) of a
system which is used to depict an entity.
3. Inheritance – Inheritance is a mechanism by which child classes inherit the properties of
their parent classes.
4. Abstraction – Mechanism by which implementation details are hidden from user.
5. Encapsulation – Binding data together and protecting it from the outer world is referred to
as encapsulation.
6. Polymorphism – Mechanism by which functions or entities are able to exist in different
forms.
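A minimal Java sketch (an assumed illustration, independent of the UML notation itself) showing how several of these building blocks fit together: Shape is an abstraction and the parent class, Circle and Square inherit from it, their fields are encapsulated, and area() behaves polymorphically.
Java
// Abstraction: the parent class exposes only what all shapes share.
abstract class Shape {
    abstract double area();                 // polymorphic behaviour
}

class Circle extends Shape {                // inheritance
    private double radius;                  // encapsulation: field hidden behind methods

    Circle(double radius) { this.radius = radius; }

    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private double side;

    Square(double side) { this.side = side; }

    double area() { return side * side; }
}

class ShapesDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());   // each object answers with its own area()
        }
    }
}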
Additions in UML 2.0 –
Software development methodologies like agile have been incorporated and scope of
original UML specification has been broadened.
Originally UML specified 9 diagrams. UML 2.x has increased the number of diagrams from
9 to 13. The four diagrams that were added are : timing diagram, communication diagram,
interaction overview diagram and composite structure diagram. UML 2.x renamed
statechart diagrams to state machine diagrams.
UML 2.x added the ability to decompose software system into components and sub-
components.
1. Class Diagram – The most widely used UML diagram is the class diagram. It is the building
block of all object oriented software systems. We use class diagrams to depict the static
structure of a system by showing system’s classes,their methods and attributes. Class
diagrams also help us identify relationship between different classes or objects.
2. Composite Structure Diagram – We use composite structure diagrams to represent the
internal structure of a class and its interaction points with other parts of the system. A
composite structure diagram represents relationship between parts and their configuration
which determine how the classifier (class, a component, or a deployment node) behaves.
They represent internal structure of a structured classifier making the use of parts, ports,
and connectors. We can also model collaborations using composite structure diagrams.
They are similar to class diagrams except they represent individual parts in detail as
compared to the entire class.
3. Object Diagram – An Object Diagram can be referred to as a screenshot of the instances
in a system and the relationship that exists between them. Since object diagrams depict
behaviour when objects have been instantiated, we are able to study the behaviour of the
system at a particular instant. An object diagram is similar to a class diagram except it
shows the instances of classes in the system. We depict actual classifiers and their
relationships making the use of class diagrams. On the other hand, an Object Diagram
represents specific instances of classes and relationships between them at a point of time.
4. Component Diagram – Component diagrams are used to represent how the physical components in a system have been organized. We use them for modelling implementation
details. Component Diagrams depict the structural relationship between software system
elements and help us in understanding if functional requirements have been covered by
planned development. Component Diagrams become essential to use when we design
and build complex systems. Interfaces are used by components of the system to
communicate with each other.
5. Deployment Diagram – Deployment Diagrams are used to represent system hardware and its software. They tell us what hardware components exist and what software components run on them. We illustrate system architecture as the distribution of software artifacts over distributed targets. An artifact is the information that is generated by system software.
They are primarily used when a software is being used, distributed or deployed over
multiple machines with different configurations.
6. Package Diagram – We use Package Diagrams to depict how packages and their
elements have been organized. A package diagram simply shows us the dependencies
between different packages and internal composition of packages. Packages help us to
organise UML diagrams into meaningful groups and make the diagram easy to
understand. They are primarily used to organise class and use case diagrams.
Behavior Diagrams –
1. State Machine Diagrams – A state diagram is used to represent the condition of the
system or part of the system at finite instances of time. It’s a behavioral diagram and it
represents the behavior using finite state transitions. State diagrams are also referred to as state machines or state-chart diagrams; these terms are often used interchangeably. Simply put, a state diagram is used to model the dynamic behavior of a class in response to time and changing external stimuli.
2. Activity Diagrams – We use Activity Diagrams to illustrate the flow of control in a system.
We can also use an activity diagram to refer to the steps involved in the execution of a use
case. We model sequential and concurrent activities using activity diagrams. So, we basically depict workflows visually using an activity diagram. An activity diagram focuses on the conditions of flow and the sequence in which it happens. We describe or depict what causes a particular event using an activity diagram.
3. Use Case Diagrams – Use Case Diagrams are used to depict the functionality of a system
or a part of a system. They are widely used to illustrate the functional requirements of the
system and its interaction with external agents (actors). A use case is basically a diagram
representing different scenarios where the system can be used. A use case diagram gives
us a high level view of what the system or a part of the system does without going into
implementation details.
4. Sequence Diagram – A sequence diagram simply depicts interaction between objects in a
sequential order, i.e. the order in which these interactions take place. We can also use the terms event diagrams or event scenarios to refer to a sequence diagram. Sequence
diagrams describe how and in what order the objects in a system function. These diagrams
are widely used by businessmen and software developers to document and understand
requirements for new and existing systems.
5. Communication Diagram – A Communication Diagram (known as Collaboration Diagram in UML 1.x) is used to show sequenced messages exchanged between objects. A communication diagram focuses primarily on objects and their relationships. We can represent similar information using sequence diagrams; however, communication diagrams represent objects and links in a free form.
6. Timing Diagram – Timing Diagrams are a special form of sequence diagrams which are
used to depict the behavior of objects over a time frame. We use them to show time and
duration constraints which govern changes in states and behavior of objects.
7. Interaction Overview Diagram – An Interaction Overview Diagram models a sequence of
actions and helps us simplify complex interactions into simpler occurrences. It is a mixture
of activity and sequence diagrams.
Structured design:
Detailed design, Design review, Characteristics of a good user interface, User Guidance and Online Help,
Mode-based vs Mode-less Interface, Types of user interfaces, Component-based GUI development, User
interface design methodology: GUI design methodology.
Detailed design:
The design phase of software development deals with transforming the customer requirements as
described in the SRS documents into a form implementable using a programming language.
The software design process can be divided into the following three levels, or phases, of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design:
Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design the internals of the system are completely ignored and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other
systems with which it interacts. The design problem statement produced during the problem analysis step
should identify the people, other systems, and devices which are collectively called agents.
Interface design should include the following details:
Precise description of events in the environment, or messages from agents to which the system must
respond.
Precise description of the events or messages that the system must produce.
Specification on the data, and the formats of the data coming into and going out of the system.
Specification of the ordering and timing relationships between incoming events or messages, and
outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their responsibilities,
properties, interfaces, and the relationships and interactions between them. In architectural design, the
overall structure of the system is chosen, but the internal details of major components are ignored.
Issues in architectural design include:
Gross decomposition of the systems into major components.
Allocation of functional responsibilities to components.
Component Interfaces
Component scaling and performance properties, resource consumption properties, reliability
properties, and so forth.
Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the
internals of the major components is ignored until the last phase of the design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system components, their properties,
relationships, processing, and often their algorithms and the data structures.
The detailed design may include:
Decomposition of major system components into program units.
Allocation of functional responsibilities to units.
User interfaces
Unit states and state changes
Data and control interaction between units
Data packaging and implementation, including issues of scope and visibility of program elements
Algorithms and data structures
Design Review:
Software design reviews are a systematic, comprehensive, and well-documented inspection of
design that aims to check whether the specified design requirements are adequate and the design
meets all the specified requirements. In addition, they also help in identifying the problems (if any) in
the design process. IEEE defines software design review as ‘a formal meeting at which a system’s
preliminary or detailed design is presented to the user, customer, or other interested parties for
comment and approval.’ These reviews are held at the end of the design phase to resolve issues (if
any) related to software design decisions, that is, architectural design and detailed design (component-
level and interface design) of the entire software or a part of it (such as a database).
The elements that should be examined while reviewing the software design include requirements and design specifications, verification and validation results produced from each phase of the SDLC, testing and development plans, and all other project-related documents and activities. Note that design reviews are considered the best mechanism to ensure product quality and to reduce the risk of failing to meet schedules and requirements.
Generally, the review process is carried out in three steps, which correspond to the steps involved in
the software design process. First, a preliminary design review is conducted with the customers and
users to ensure that the conceptual design (which gives an idea to the user of what the system will
look like) satisfies their requirements. Next, a critical design review is conducted with analysts and
other developers to check the technical design (which is used by the developers to specify how the
system will work) in order to critically evaluate technical merits of the design. Next, a program design
review is conducted with the programmers in order to get feedback before the design is implemented.
Preliminary Design Review
A preliminary design review is conducted to serve the following purposes:
To ensure that the software requirements are reflected in the software architecture
To specify whether effective modularity is achieved
To define interfaces for modules and external system elements
To ensure that the data structure is consistent with the information domain
To ensure that maintainability has been considered
To assess the quality factors.
In this review, it is verified that the proposed design includes the required hardware and interfaces with
the other parts of the computer-based system. To conduct a preliminary design review, a review team
is formed where each member acts as an independent person authorized to make necessary
comments and decisions. This review team comprises the following individuals.
If errors are noted in the review process then the faults are assessed on the basis of their severity.
That is, if there is a minor fault it is resolved by the review team. However, if there is a major fault, the
review team may agree to revise the proposed conceptual design. Note that preliminary design review
is again conducted to assess the effectiveness of the revised (new) design.
Critical Design Review
Once the preliminary design review is successfully completed and the customer(s) is satisfied with the
proposed design, a critical design review is conducted. This review is conducted to serve the following
purposes.
To assure that there are no defects in the technical and conceptual designs
To verify that the design being reviewed satisfies the design requirements established in the architectural
design specifications
To assess the functionality and maturity of the design critically
To justify the design to the outsiders so that the technical design is more clear, effective and easy to
understand
In this review, diagrams and data (sometimes both) are used to evaluate alternative design strategies
and how and why the major design decisions have been taken. Just like for the preliminary design
review, a review team is formed to carry out a critical design review. In addition to the team members
involved in the preliminary design review, this team comprises the following individuals.
System tester: Understands the technical issues of the design and compares them with designs created for similar projects.
Analyst: Responsible for writing system documentation.
Program designer for this project: Understands the design in order to derive detailed program designs.
Similar to a preliminary design review, if discrepancies are noted in the critical design review process
the faults are assessed on the basis of their severity. A minor fault is resolved by the review team. If
there is a major fault, the review team may agree to revise the proposed technical design. Note that a
critical design review is conducted again to assess the effectiveness of the revised (new) design.
Design reviews are considered important as in these reviews the product is logically viewed as the
collection of various entities/components and use-cases. These reviews are conducted at all software
design levels and cover all parts of the software units. Generally, the review process comprises three
criteria, as listed below.
The software design review process is beneficial for everyone as the faults can be detected at an early
stage, thereby reducing the cost of detecting errors and reducing the likelihood of missing a critical
issue. Every review team member examines the integrity of the design and not the persons involved in it (that is, the designers), which in turn emphasizes that the common objective of developing a highly rated design is achieved. To check the effectiveness of the design, the review team members should
address the following questions.
In addition to these questions, if the proposed system is developed using a phased development (like
waterfall and incremental model), then the phases should be interfaced sufficiently so that an easy
transition can take place from one phase to the other.
Component-based interface –
It is easier for a user to learn a new interface if its interactive style is very similar to the interfaces of other applications with which the user is already familiar. This is possible only if the different interactive user interfaces are developed using some standard set of interface components.
Speed of use:
The speed of use of a user interface is determined by the time and effort needed to initiate and execute different commands. It is sometimes referred to as productivity support, i.e. how quickly a user can perform a task. Initiating and executing commands should require as little user time and effort as possible, and this can be achieved only through a properly designed user interface.
Speed of recall:
After using the interface many times, the speed of recalling commands increases automatically. The speed with which users recall the command issue procedure should be maximized. Recall can be improved in many ways, for example by using metaphors, symbolic command issue procedures, and intuitive command names.
Error prevention:
As we know, prevention is better than cure: rather than correcting mistakes, it is more useful to prevent them. A good user interface should minimize the scope for committing mistakes while issuing commands. The error rate can easily be determined by monitoring the mistakes committed by typical users. The user interface code can be instrumented with monitoring code that records the frequency and types of errors and later displays statistics of the errors committed by users.
Aesthetic and attractive:
As we all know, attractive things gain more attention. Thus, a good user interface should be attractive to use, which is why graphics-based user interfaces are in greater demand than text-based interfaces.
Feedback:
Providing feedback on the user's actions helps the user understand what the system is doing. If a request takes more than a few seconds, the user starts to wonder what is happening; with proper feedback, the user knows the status of the action. Thus, a good user interface must provide feedback about ongoing processing.
Error recovery:
Errors are very common; anyone, even an expert, can commit an error. Therefore, it is also the responsibility of a good user interface to offer an undo facility so that users can recover from their mistakes while using the interface. If mistakes cannot be recovered from, users feel irritated, helpless, and frustrated.
User guidance and online help:
A good user interface is one that also offers help to its users when they forget something, such as a command, or when they are unaware of some features of the software. This is done by providing good user guidance and online help to users when they need it.
User guidance and online Help:
Putting together a user's guide can become a problem that no one wants to deal with. Freelance technical author David
Farbey asks who's going to write the user's guide and considers why it's important to have one.
There comes a time when every software development manager has to find an answer to the question 'who's going
to write the user's guide?' In major corporations, where there is a well-established technical publications
department, the responsibility for the user's guide would have been allocated early on, as soon as the design
specifications were approved for development.
However in many small and medium-sized enterprises the user's guide question often becomes a problem that
everyone tries to ignore.
In the last dozen years as a technical writer in the software industry I have heard a whole range of excuses as to
why the user's guide question is not important. None of the arguments put forward stand up to close scrutiny.
However it is often the case that something obvious to a developer is less obvious to the user. GUI metaphors that
are second nature to an experienced developer may be totally strange for a user for whom this may be one of the
first applications they ever use.
Even an experienced user can have problems when a new version differs significantly from an older version of the
same product, or when a newly-launched application differs from a comparable application from a rival vendor.
Users need some reference information to help them, and a user's guide is the natural place for them to look for it.
QA testing doesn't look at whether a user would understand that in order to display dialog B they needed to click
button A, nor whether the user would understand the importance of completing the fields in dialog A in the first
place. In the absence of usability testing (even rarer, unfortunately, than user documentation) a well-written user's
guide can explain what the user can see on the screen, and map the screen buttons and dialogs to the user's tasks.
The developer's view of software is, naturally, code-centric, while users see a software product as just another
tool, like a calculator, or a pen and paper. The user needs to get their job done quickly and efficiently, and will use
the best tools they can find. They are not interested in the internal logic of the code behind the application nor in
the table structure of the underlying database, any more than DIY enthusiasts are interested in exactly how their
hammers are made.
They are likely to select the most interesting and attractive snippets of technical information, and whatever jargon
they feel is most likely to make the product seem new and innovative. Once a customer has bought and paid for
your product, the marketing department is no longer interested in them. Quite rightly, they are busy going after
the next customer.
Even if the technical marketing literature does describe your product's features well, there is no guarantee that the
people who are going to use the products even saw the marketing brochures. It is very likely that they, the users,
aren't the people who are authorized to make purchasing decisions.
In any case commercial viability requires that you sell beyond this initial market segment and into a more general
one, where by definition the average user is going to be less sophisticated. These are the people for whom a well-
written user's guide can be really valuable.
Your support staff themselves would benefit from having a clear and comprehensive User's Guide in front of
them, and from knowing that their customers had copies of the Guide as well. The customer support log would
then be able to show which topics were not yet adequately covered in the user's guide, so that the next edition
would be even better.
We'd love to have a user's guide but we just don't have a budget
It is difficult for companies, particularly start-ups and smaller enterprises, to have the same level of investment in
peripheral activities as major corporations. It is particularly difficult to fund activities that appear to create no
value for the company. But a good User's Guide is really an essential part of a software product, just as much as
the GUI, the installation package or the code itself.
Printing costs can be high, but a User's Guide can just as easily be delivered as a PDF file or even better, as an
online help file. The minimum cost of developing a User's Guide is the cost of the technical writer, who could
easily be a contractor if your company is concerned about headcount.
The costs involved in providing a User's Guide need to be set against the benefits. Your products become easier to
learn and to use, creating the kind of grass-roots customer loyalty that is difficult to buy. Frivolous calls to
customer support are reduced, giving your staff adequate time to deal quickly with more significant problems.
Your company's reputation is also enhanced when you provide good user documentation, leading to better market
recognition and enhanced sales. And hiring a technical writer means that you ensure that your programming staff
can devote all their time to what they do best - writing code.
Different modules specified in the design document are coded in the Coding phase according to the
module specification. The main goal of the coding phase is to code from the design document prepared
after the design phase through a high-level language and then to unit test this code.
Good software development organizations want their programmers to adhere to some well-defined and standard style of coding, called coding standards.
guidelines depending on what suits their organization best and based on the types of software they
develop. It is very important for the programmers to maintain the coding standards otherwise the code
will be rejected during code review.
Purpose of Having Coding Standards:
A coding standard gives a uniform appearance to the code written by different engineers.
It improves the readability and maintainability of the code and reduces complexity.
It helps in code reuse and helps to detect errors easily.
It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.
2. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
Meaningful and understandable variable names help anyone understand the reason for using them.
Local variables should be named using camel case lettering starting with small letter
(e.g. localData) whereas Global variables names should start with a capital letter
(e.g. GlobalData). Constant names should be formed using capital letters only
(e.g. CONSDATA).
It is better to avoid the use of digits in variable names.
Function names should be written in camel case starting with a lowercase letter. The name of a function must describe the purpose of the function clearly and briefly.
3. Indentation:
Proper indentation is very important to increase the readability of the code. For making the code readable, programmers should use white spaces properly. Some of the spacing conventions are given below (a short snippet illustrating these conventions follows the list):
There must be a space after giving a comma between two function arguments.
Each nested block should be properly indented and spaced.
Proper Indentation should be there at the beginning and at the end of each block in
the program.
All braces should start from a new line, and the code following the closing brace should also start from a new line.
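The short Python snippet below is purely illustrative (the names MAX_RETRIES, GlobalCounter and computeTotal are hypothetical) and simply follows the naming and indentation rules listed above; note that these are the conventions stated in these notes, not Python's own PEP 8 style:

MAX_RETRIES = 3                            # constant: capital letters only
GlobalCounter = 0                          # global variable: starts with a capital letter

def computeTotal(priceList, taxRate):      # function name in camel case, starting with a small letter
    localTotal = 0                         # local variable in camel case, starting with a small letter
    for price in priceList:               # each nested block is indented one level
        localTotal = localTotal + price
    return localTotal * (1 + taxRate)

print(computeTotal([100, 250], 0.05))      # space after the comma between the two arguments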
The software development process refers to implementing the design and operation of the software, and it ultimately delivers the product. Several questions arise after this process: Is the code secure? Is it well designed? Is the code free of errors? According to surveys, on average programmers make a mistake once every five lines of code. Code review comes into the picture to rectify these bugs. Reviewing code typically means checking whether the code passes the test cases and looking for bugs, repeated lines, and other possible errors that could reduce the efficiency and quality of the software. Reviews can be good or bad: good ones lead to greater usage, growth, and popularity of the software, whereas bad ones degrade its quality.
We will discuss the five steps of a complete code review. So let's get started.
For web development, several files and folders are incorporated. All the files contain thousands of lines
of code. When you start reviewing them, this might look dense and confusing. So, the first step of code
review must be splitting the code into sections. This gives a clear understanding of the code flow.
Suppose there are 9 folders and each folder contains 5 files. Divide them into sections. Set a goal to review at least 5 files of the first folder in a given number of days, and once you complete reviewing it, go for the
next folder. Like this, when you assign yourself a task for some time, you’ll get sufficient time to review,
and thus, you’ll not feel bored or disinterested.
This is the second step of the code review process. You must seek advice or help from fellow developers
as everyone’s contribution is equally important. Experienced ones can identify the mistakes within a
second and rectify them but the young minds come up with more simple ways to implement a task. So,
ask your juniors as they have the curiosity to learn more. To make it perfect, they find other ways which
will benefit in two ways –
a) They’ll get deeper knowledge.
b) Solution can be more precise.
The below quote states the best outcome of teamwork. Thus, teamwork improves the performance of
software and fosters a positive environment.
“Alone, we can do so little. Together, we can do so much”
– Helen Keller
Functions are reusable blocks of code: a piece of code that does a single task and can be called whenever required. Avoid repetition of code. If you have to repeat code for different tasks again and again, use functions to reduce the repetition. This practice keeps the codebase maintainable.
For example, if you’re building a website. Several components are made in which basic functionalities
are defined. If a block of code is being repeated so many times, copy that block of code or function to a
file that can be invoked (reused) wherever and whenever required. This also reduces the complexity level
and lengthiness of the codebase.
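A minimal sketch of this idea in Python (the function and field names, such as format_price, are hypothetical): the repeated formatting logic is moved into one reusable function that every component calls:

def format_price(amount, currency="INR"):
    # Single reusable block: called wherever a price must be displayed,
    # instead of copying the formatting logic into every component.
    return f"{currency} {amount:,.2f}"

def cart_summary(items):
    return "Cart total: " + format_price(sum(items))

def invoice_line(item_price):
    return "Item: " + format_price(item_price)

print(cart_summary([199.0, 350.5]))
print(invoice_line(350.5))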
5. Check Test Cases and Re-Build
This is the final step of the code review process. When you have rectified all the possible errors while
reviewing, check if all the test cases are passed, all the conditions are satisfied. There are various tests
such as functionality, usability, interface, performance, and security testing.
Functionality: These tests include working of external and internal links, APIs, test forms.
Usability: Checking design, menus, buttons, or links to different pages should be easily visible
and consistent on all web pages.
Interface: It shows how interactive the website is.
Performance: It shows the load time of a website, tests if there’s a crash in a website due to
peak load.
Security: Test unauthorized access to the website.
Once all the test cases are passed, re-build the entire code. After this process is done, go for a look over
the website. Examine all the working like buttons, arrow keys, etc.
Go For a Demo Presentation
When all the steps of the Code Review process stated above are done, go for a demo presentation.
Schedule a flexible meeting and give a presentation to the team demonstrating the working of the
software. Go through the operations of every part of a website. Tell them about the changes made.
Validate your points as to why these changes have been done. See if all requirements are fulfilled and
also the website doesn’t look too bulky. Make sure it is simple and at the complete working stage.
Things to avoid while reviewing code
1. Don’t take too many files at a time to review.
2. Don’t go for continuous reviewing, take breaks.
3. Avoid too many nested loops.
4. Avoid using too many variables.
5. No negative comments to anyone in a team.
6. Don’t make the website look too complex.
So till now you must have got the complete picture of the Code Review process. It is a very tedious
process in any modern development team’s workflow. It helps in giving a fresh start to identify bugs and
simple coding errors before your product gets to the next step or deployment, making the process for
getting the software to the customer more efficient. Before getting your prototype turned into a product,
do a proper code review or scrutiny to get the best version of it.
Overview Software Documentation
Software documentation is a written piece of text that is often accompanied with a software program.
This makes the lives of all the members associated with the project easier. It may contain anything from API documentation and build notes to plain help content. Documentation is a very critical process in software development and an integral part of any software development method. Moreover, software practitioners are typically concerned with the value, degree of usage, and quality of the documentation throughout development and its maintenance throughout the whole process.
Motivated by the requirements of NovAtel Inc., a world-leading company developing software in support of global navigation satellite systems, and based on the results of an earlier systematic mapping study, studies have aimed at a better understanding of the usage and quality of the various technical documents produced throughout software development and their maintenance.
For example, before development of any software product, the requirements are documented in what is called a Software Requirements Specification (SRS). Requirements gathering is considered a stage of the Software Development Life Cycle (SDLC).
Another example is a user manual that a user refers to for installing, using, and maintaining the software application or product.
Types Of Software Documentation :
1. Requirement Documentation :
It is the description of how the software shall perform and which environment setup would be
appropriate to have the best out of it. These are generated while the software is under
development and is supplied to the tester groups too.
2. Architectural Documentation :
Architecture documentation is a special type of documentation that concerns the design. It
contains very little code and is more focused on the components of the system, their roles and
working. It also shows the data flows throughout the system.
3. Technical Documentation :
These contain the technical aspects of the software, such as APIs, algorithms, etc. They are prepared mostly for the software developers.
4. End-user Documentation :
As the name suggests these are made for the end user. It contains support resources for the end
user.
Purpose of Documentation :
Due to the growing importance of software requirements, the process of determining them needs to be effective in order to achieve the desired results. The determination of requirements is usually governed by certain regulations and guidelines that are central to attaining a given goal.
All this implies that software requirements are expected to change because of ever-changing technology in the world. Moreover, since software knowledge is obtained through development, changes in the needs of users and transformations of the environment are inevitable.
Furthermore, software requirements ensure that there is verification and testing, along with prototyping, meetings, focus groups, and observations.
For a software engineer, reliable documentation is often a must. The presence of documentation helps keep track of all aspects of an application and improves the quality of the product. Its main focus areas are development, maintenance, and knowledge transfer to other developers. Productive documentation makes information easily accessible, provides a limited number of user entry points, helps new users learn quickly, simplifies the product, and helps cut costs.
Importance of software documentation :
For a programmer, reliable documentation is always a must. Its presence keeps track of all aspects of an application and helps in keeping the software updated.
Advantage of software documentation :
The presence of documentation helps in keeping track of all aspects of an application and also improves the quality of the software product.
The main focus areas are development, maintenance, and knowledge transfer to other developers.
Helps development teams during development.
Helps end-users in using the product.
Improves overall quality of software product
It cuts down duplicative work.
Makes the code easier to understand.
Helps in establishing internal co-ordination in work.
Disadvantage of software documentation :
Documenting code is time-consuming.
The software development process often takes place under time pressure, due to which the documentation updates often do not match the updated code.
The documentation has no influence on the performance of an application.
Documenting is not much fun; it can be boring to a certain extent.
The agile methodology encourages engineering teams to always concentrate on delivering value to their customers. This principle should be kept in mind in the process of producing software documentation: good documentation should be provided, whether it is a software specification document for programmers and testers or a software manual for end users.
Software testing can be stated as the process of verifying and validating that software or application is
bug-free, meets the technical requirements as guided by its design and development, and meets the user
requirements effectively and efficiently with handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at finding
measures to improve the software in terms of efficiency, accuracy, and usability. It mainly aims at
measuring the specification, functionality, and performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that software correctly implements a specific
function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Black box testing vs. white box testing:
Black box testing is also known as closed-box or data-driven testing; white box testing is also known as clear-box or structural testing.
Black box testing can be performed by end users, testers, and developers; white box testing is normally done by testers and developers.
Note: Software testing is a very broad and vast topic and is considered to be an integral and very
important part of software development and hence should be given its due importance.
Black box testing is a type of software testing in which the internal structure and implementation of the software are not known to the tester. The testing is done without internal knowledge of the product, focusing only on its externally visible functionality.
Black box testing can be done in following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or languages that can be represented by a context-free grammar. In this technique, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of testing all of them separately we can group them together and test only one input from each group. The idea is to partition the input domain of the system into a number of equivalence classes such that each member of a class works in a similar way, i.e., if a test case in one class results in some error, the other members of the class would result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) To each valid and invalid class of input assign unique identification number.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
Whole number which is a perfect square- output will be an integer.
Whole number which is not a perfect square- output will be decimal number.
Positive decimals
(b) Invalid inputs:
Negative numbers(integer or decimal).
Characters other than numbers, like “a”, “!”, “;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test for 10 and 100 in addition to other valid and invalid inputs (a short sketch combining equivalence classes and boundary values follows below).
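The following hedged Python sketch combines the two techniques: one representative test per equivalence class of the square-root example above, plus tests at the boundaries of an assumed valid range of 0 to 100 (the function name sqrt_under_test and the range are assumptions for illustration):

import math

def sqrt_under_test(x):
    # Stand-in for the program being tested; rejects invalid input.
    if not isinstance(x, (int, float)) or x < 0:
        raise ValueError("invalid input")
    return math.sqrt(x)

# One representative input from each equivalence class.
assert sqrt_under_test(49) == 7                          # perfect square -> integer result
assert sqrt_under_test(2) != int(sqrt_under_test(2))     # non-perfect square -> decimal result
assert sqrt_under_test(6.25) == 2.5                      # positive decimal
for bad in (-4, "a"):                                    # invalid classes: negatives, characters
    try:
        sqrt_under_test(bad)
        raise AssertionError("invalid input was accepted")
    except ValueError:
        pass

# Boundary value analysis: test at the edges of the assumed valid range 0..100.
for boundary in (0, 100):
    sqrt_under_test(boundary)
print("all sample test cases passed")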
4. Cause-effect graphing – This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs.
The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop cause effect graph.
3. Transform the graph into decision table.
4. Convert decision table rules to test cases.
For example, consider the cause-and-effect relationships sketched below (the original graph figure is not reproduced here):
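The following hedged sketch uses a hypothetical login example (the causes, effects and values are illustrative, not from the notes) to show how the decision-table rules derived from a cause-effect graph become test cases:

# Hypothetical causes: C1 = "username is valid", C2 = "password is valid"
# Hypothetical effects: E1 = "login succeeds", E2 = "error message shown"
# Each decision-table rule (a combination of cause values) becomes one test case.
decision_table = [
    {"C1": True,  "C2": True,  "expected_effect": "E1: login succeeds"},
    {"C1": True,  "C2": False, "expected_effect": "E2: error message shown"},
    {"C1": False, "C2": True,  "expected_effect": "E2: error message shown"},
    {"C1": False, "C2": False, "expected_effect": "E2: error message shown"},
]
for i, rule in enumerate(decision_table, start=1):
    print(f"Test case {i}: causes={rule['C1'], rule['C2']} -> {rule['expected_effect']}")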
White box testing techniques analyze the internal structures of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just its functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing.
Working process of white box testing:
Input: Requirements, Functional specifications, design documents, source code.
Processing: Performing risk analysis for guiding through the entire process.
Proper test planning: Designing test cases so as to cover entire code. Execute rinse-repeat
until error-free software is reached. Also, the results are communicated.
Output: Preparing final report of the entire testing process.
Testing techniques:
Statement coverage: In this technique, the aim is to execute every statement at least once; hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.
(Statement coverage example figure omitted.)
Branch Coverage: In this technique, test cases are designed so that each branch from all decision points is traversed at least once. In a flowchart, all edges must be traversed at least
once.
(Figure omitted: four test cases are required so that all branches of all decision points, i.e. all edges of the flowchart, are covered.)
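A hedged Python sketch (the function absolute is a hypothetical stand-in for the missing figure) showing the difference between statement coverage and branch coverage:

def absolute(x):
    if x < 0:        # decision point
        x = -x
    return x

# Statement coverage: the single test absolute(-3) executes every statement
# (the if header, the assignment x = -x, and the return).
print(absolute(-3))
# Branch coverage additionally needs a case in which the condition is false,
# e.g. absolute(3), so that the "false" edge of the decision is also traversed.
print(absolute(3))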
Condition Coverage: In this technique, all individual conditions must be covered as shown in
the following example:
0. READ X, Y
1. IF(X == 0 || Y == 0)
2. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. The tests should make each of these conditions take both TRUE and FALSE values. One possible set of test cases would be:
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
Multiple Condition Coverage: In this technique, all the possible combinations of the possible
outcomes of conditions are tested at least once. Let’s consider the following example:
0. READ X, Y
1. IF(X == 0 || Y == 0)
2. PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions, then 2^n test cases would be required.
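A small Python sketch of the same example (the function name prints_zero is an assumption), showing the two test cases needed for condition coverage and the four needed for multiple condition coverage:

def prints_zero(x, y):
    # Mirrors the pseudocode above: IF (X == 0 || Y == 0) PRINT '0'
    if x == 0 or y == 0:
        return "0"
    return ""

# Condition coverage: TC1 and TC2 make each individual condition take both TRUE and FALSE values.
print(prints_zero(0, 55), prints_zero(5, 0))
# Multiple condition coverage: all 2^2 = 4 combinations of the two conditions.
for x, y in [(0, 0), (0, 5), (55, 0), (55, 5)]:
    print((x, y), "->", repr(prints_zero(x, y)))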
Basis Path Testing: In this technique, control flow graphs are made from code or flowchart
and then Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.
Cyclomatic Complexity: It is a measure of the logical complexity of the software and is used
to define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = Number of non-overlapping regions in the graph
Example:
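Since the original example figure is not reproduced here, the following hedged worked example (the function classify is hypothetical) shows all three ways of calculating V(G) agreeing on the same small program:

def classify(x):
    if x >= 0:                     # the only predicate node
        label = "non-negative"
    else:
        label = "negative"
    return label

# Flow graph sketch: node 1 = the if condition, node 2 = the "then" branch,
# node 3 = the "else" branch, node 4 = the return statement.
# Edges: 1->2, 1->3, 2->4, 3->4, so E = 4 and N = 4.
# V(G) = E - N + 2 = 4 - 4 + 2 = 2
# V(G) = P + 1     = 1 + 1     = 2      (one predicate node)
# V(G) = number of non-overlapping regions = 2 (inside the decision diamond, plus the outer region)
# Two independent paths, hence two test cases, e.g. classify(5) and classify(-5).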
Debugging
Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the software. In other
words, it refers to identifying, analyzing and removing errors. This activity begins after the software fails
to execute properly and concludes by solving the problem and successfully testing the software. It is
considered to be an extremely complex and tedious task because errors need to be resolved at all stages
of debugging.
Debugging Process: Steps involved in debugging are:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is genuine.
Defect Analysis using modeling, documentations, finding and testing candidate flaws, etc.
Defect Resolution by making required changes to the system.
Validation of corrections.
Debugging Strategies:
1. Study the system for a longer duration in order to understand it. This helps the debugger construct different representations of the system being debugged, depending on the need. The system is also studied actively to find recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program backward from the location of the failure message in order to identify the region of faulty code. A detailed study of the region is conducted to find the cause of the defects.
3. Forward analysis of the program involves tracing the program forwards using breakpoints or
print statements at different points in the program and studying the results. The region where
the wrong outputs are obtained is the region that needs to be focused to find the defect.
4. Using past experience of debugging software with problems similar in nature. The success of this approach depends on the expertise of the debugger.
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other programs. A lot of public
domain software like gdb and dbx are available for debugging. They offer console-based command line
interfaces. Examples of automated debugging tools include code based tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc whereas debugging starts
after a bug has been identified in the software. Testing is used to ensure that the program does what it was supposed to do, with a certain minimum success rate.
several different types of testing like unit testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by some automated
tools available but is more of a manual process as every bug is different and requires a different
technique, unlike a pre-defined testing mechanism.
Integration Testing
Integration testing is the process of testing the interface between two software units or modules. It focuses on determining the correctness of the interface. The purpose of integration testing is to expose
faults in the interaction between integrated units. Once all the modules have been unit tested, integration
testing is performed.
Integration test approaches – There are four types of integration testing approaches. Those approaches
are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules are combined and the functionality is verified after the completion of individual module testing. In simple words, all the modules of the system are simply put together and tested. This approach is practicable only for very small systems. Once an error is found during the integration testing, it is very difficult to
localize the error as the error may potentially belong to any of the modules being integrated. So,
debugging errors reported during big bang integration testing are very expensive to fix.
Advantages:
It is convenient for small systems.
Disadvantages:
There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
High risk critical modules are not isolated and tested on priority since all modules are tested at
once.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at lower levels is tested with
higher-level modules until all modules are tested. The primary purpose of this integration testing is to test, for each subsystem, the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules. Advantages:
In bottom-up testing, no stubs are required.
A principal advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
Disadvantages:
Driver modules must be produced.
Testing becomes complex when the system is made up of a large number of small subsystems.
3. Top-Down Integration Testing – In top-down integration testing, stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated. Testing takes place from top to bottom: first the high-level modules are tested, then the low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended. Advantages:
Separately debugged module.
Few or no drivers needed.
It is more stable and accurate at the aggregate level.
Disadvantages:
Needs many Stubs.
Modules at lower level are tested inadequately.
4. Mixed Integration Testing – A mixed integration testing is also called sandwiched integration
testing. A mixed integration testing follows a combination of top down and bottom-up testing
approaches. In top-down approach, testing can start only after the top-level module have been coded and
unit tested. In bottom-up approach, testing can start only after the bottom level modules are ready. This
sandwich or mixed approach overcomes this shortcoming of the top-down and bottom-up approaches.
Advantages:
Mixed approach is useful for very large projects having several sub projects.
This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
Disadvantages:
For mixed integration testing, require very high cost because one part has Top-down approach
while another part has bottom-up approach.
This integration testing cannot be used for smaller systems with huge interdependence between the different modules.
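The stubs and drivers mentioned above can be illustrated with a short hedged Python sketch (the module names billing_module and tax_calculator are hypothetical, not from the notes):

def tax_calculator_stub(amount):
    # Stub used in top-down testing: stands in for a lower-level module that is
    # not yet integrated and returns a fixed, predictable value.
    return 10.0

def billing_module(amount, tax_fn):
    # Higher-level module under test; its interface to the tax module is what is exercised.
    return amount + tax_fn(amount)

def tax_calculator(amount):
    # The real lower-level module, once it becomes available.
    return round(amount * 0.10, 2)

def driver_for_tax_calculator():
    # Driver used in bottom-up testing: calls the lower-level module directly and checks its result.
    assert tax_calculator(100) == 10.0
    print("tax_calculator passed the driver's checks")

print(billing_module(100, tax_calculator_stub))   # top-down style, using the stub
driver_for_tax_calculator()                       # bottom-up style, using the driver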
Program Analysis Tool is an automated tool whose input is the source code or the executable code of a
program and the output is the observation of characteristics of the program. It gives various
characteristics of the program such as its size, complexity, adequacy of commenting, adherence to
programming standards and many other characteristics.
System Testing
System testing verifies that an application performs tasks as designed. This step, a kind
of black box testing, focuses on the functionality of an application. System testing, for
example, might check that every kind of user input produces the intended output across the
application.
With system testing, a QA team gauges if an application meets all of its requirements, which
includes technical, business and functional requirements. To accomplish this, the QA team
might utilize a variety of test types, including performance, usability, load testing and
functional tests.
With system testing, a QA team determines whether a test case corresponds to each of an
application's most crucial requirements and user stories. These individual test cases
determine the overall test coverage for an application, and help the team catch critical
defects that hamper an application's core functionalities before release. A QA team can log
and tabulate each defect per requirement.
Additionally, each individual type of system test reports relevant metrics of a piece of
software, including:
Usability testing: user error rates, task success rate, time to complete a task, user
satisfaction.
Phases of system testing
System testing examines every component of an application to make sure that they work as
a complete and unified whole. A QA team typically conducts system testing after it checks
individual modules with functional or user-story testing and then each component
through integration testing.
If a software build achieves the desired results in system testing, it gets a final check
via acceptance testing before it goes to production, where users consume the software. An
app-dev team logs all defects, and establishes what kinds and amount of defects are
tolerable.
Commercial system testing tools include froglogic's Squish and Inflectra's SpiraTest, while
open source tools include Robotium and SmartBear's SoapUI.
Performance Testing
Performance Testing is a type of software testing that ensures software applications to perform properly
under their expected workload. It is a testing technique carried out to determine system performance in
terms of sensitivity, reactivity and stability under a particular workload.
Performance Testing is the process of analyzing the quality and capability of a product. It is a testing
method performed to determine the system performance in terms of speed, reliability and stability under
varying workload. Performance testing is also known as Perf Testing.
Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software after
the modifications have been made. Regression means return of something and in the software field, it
refers to the return of a bug.
Software typically undergoes many levels of testing, from unit testing to system or acceptance testing.
Typically, in unit testing, small “units” or modules of the software are tested separately, with the focus on testing the code of that module. In higher-order testing (e.g., acceptance testing), the entire system (or a subsystem) is tested with the focus on testing the functionality or external behavior of the system.
As information systems are becoming more complex, the object-oriented paradigm is gaining popularity
because of its benefits in analysis, design, and coding. Conventional testing methods cannot be applied
for testing classes because of the problems involved in testing classes, abstract classes, inheritance, dynamic binding, message passing, polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A function (or a procedure)
has a clearly defined input-output behavior, while a class does not have an input-output behavior
specification. We can test a method of a class using approaches for testing functions, but we cannot test
the class using these
approaches.
According to Davis the dependencies occurring in conventional systems are:
Data dependencies between variables
Calling dependencies between modules
Functional dependencies between a module and the variable it computes
Definitional dependencies between a variable and its types.
Reliability Testing is a testing technique that relates to testing the ability of the software to function under given environmental conditions; it helps in uncovering issues in the software design and functionality.
It is defined as a type of software testing that determines whether the software can perform a failure free
operation for a specific period of time in a specific environment. It ensures that the product is fault free
and is reliable for its intended purpose.
Objective of Reliability Testing:
The objective of reliability testing is:
To find the pattern of repeating failures.
To find the number of failures occurring in a specific period of time.
To discover the main cause of failure.
To conduct performance testing of various modules of software product after fixing defects.
Types of Reliability Testing:
There are three types of reliability testing:-
1. Feature Testing:
Following three steps are involved in this testing:
Each function in the software should be executed at least once.
Interaction between two or more functions should be reduced.
Each function should be properly executed.
2. Regression Testing:
Regression testing is basically performed whenever any new functionality is added, old functionalities
are removed or the bugs are fixed in an application to make sure with introduction of new
functionality or with the fixing of previous bugs, no new bugs are introduced in the application.
3. Load Testing:
Load testing is carried out to determine whether the application is supporting the required load
without getting breakdown. It is performed to check the performance of the software under maximum
work load.
The study of reliability testing can be divided into three categories:-
1. Modelling
2. Measurement
3. Improvement
Measurement of Reliability Testing:
Mean Time To Failure (MTTF):
The average time the software operates before a failure occurs is called the mean time to failure (MTTF).
Mean Time To Repair (MTTR):
The average time taken to fix a failure is known as the mean time to repair (MTTR).
Mean Time Between Failures (MTBF):
The average time between two consecutive failures, i.e. the operating time plus the repair time, is the mean time between failures (MTBF). Measurement of reliability testing is done in terms of MTBF.
MTBF = MTTF + MTTR
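As a simple illustrative calculation (the numbers are assumptions, not from the notes): if a system operates on average for 1,000 hours before a failure occurs (MTTF = 1,000 hours) and an average repair takes 2 hours (MTTR = 2 hours), then MTBF = MTTF + MTTR = 1,000 + 2 = 1,002 hours.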
Statistical Testing
Statistical testing is a testing method whose objective is to determine the reliability of the software product rather than to discover errors. Test cases are designed for statistical testing with an entirely different objective from that of conventional testing.
Operation Profile:
Different classes of users may use a software product for different purposes. For instance, a librarian might use a library automation package to create member records, add books to the library, etc., whereas a library member might use the package to query the availability of a book or to issue and return books. Formally, the operation profile of a software product can be defined as the probability distribution of the inputs of an average user. If the inputs are divided into a number of classes {Ci}, the probability value of a class represents the probability of an average user selecting his or her next input from that class. Thus, the operation profile assigns a probability value Pi to each input class Ci.
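A hedged sketch of an operation profile for the library example above (the probability values are illustrative assumptions, not from the notes); each input class Ci is assigned a probability Pi, and the values sum to 1:

operation_profile = {
    "create_member_record": 0.05,      # librarian operations
    "add_book": 0.15,
    "query_book_availability": 0.50,   # member operations dominate typical use
    "issue_or_return_book": 0.30,
}
assert abs(sum(operation_profile.values()) - 1.0) < 1e-9
# Statistical test cases are then drawn from the input classes in these proportions.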
Software Quality
Traditionally, a high-quality product is defined in terms of its fitness of purpose. That is, a high-quality product does exactly what the users want it to do. For software products, fitness of purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although “fitness of purpose” is a satisfactory definition of quality for many products, such as a car, a table fan, or a grinding machine, for software products “fitness of purpose” is not a wholly satisfactory definition of quality. To give an example, consider a software product that is functionally correct.
It performs all the functions specified in the SRS document, but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a high-quality product. Another example is a product that does everything the users want but has almost incomprehensible and unmaintainable code. Therefore, the traditional concept of quality as “fitness of purpose” is not fully satisfactory for software products.
The modern view of quality associates several quality factors with a software product, such as the following:
Portability:
A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, and with other software products, etc.
Usability:
A software product has good usability if different categories of users (i.e. both expert and novice users) can easily invoke the functions of the product.
Reusability:
A software product has good reusability if different modules of the product can easily be reused to develop new products.
Correctness:
A software product is correct if the different requirements specified in the SRS document have been correctly implemented.
Maintainability:
A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified, etc.
Quality Management
Software Quality Management ensures that the required level of quality is achieved by building improvements into the product development process. It aims to develop a culture within the team in which quality is seen as everyone's responsibility.
Software quality management should be independent of project management so that quality is not compromised by cost and schedule pressures. It directly affects the process quality and indirectly affects the product quality.
Activities of Software Quality Management:
Quality Assurance - QA aims at developing Organizational procedures and standards for quality at
Organizational level.
Quality Planning - Select applicable procedures and standards for a particular project and modify as
required to develop a quality plan.
Quality Control - Ensure that best practices and standards are followed by the software development
team to produce quality products.
Capability Maturity Model (CMM)
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in 1987.
It is not a software process model. It is a framework that is used to analyze the approach and
techniques followed by any organization to develop software products.
It also provides guidelines to further enhance the maturity of the process used to develop those
software products.
It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
This model describes a strategy for software process improvement that should be followed by moving
through 5 different levels.
Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).
Shortcomings of SEI/CMM:
It encourages the achievement of a higher maturity level in some cases by displacing the true mission,
which is improving the process and overall software quality.
It only helps if it is put into place early in the software development process.
It has no formal theoretical basis and in fact is based on the experience of very knowledgeable people.
It does not have good empirical support and this same empirical support could also be constructed to
support other models.
Level-1: Initial –
No KPA’s defined.
Processes followed are ad hoc, immature, and not well defined.
Unstable environment for software development.
No basis for predicting product quality, time for completion, etc.
Level-2: Repeatable –
Focuses on establishing basic project management policies.
Experience with earlier projects is used for managing new similar natured projects.
Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It
presents a detailed plan to be followed systematically for the successful completion of good quality
software.
Configuration Management- The focus is on maintaining the integrity and consistency of the software product, including all its components, for the entire lifecycle.
Requirements Management- It includes the management of customer reviews and feedback which
result in some changes in the requirement set. It also consists of accommodation of those modified
requirements.
Subcontract Management- It focuses on the effective management of qualified software contractors
i.e. it manages the parts of the software which are developed by third parties.
Software Quality Assurance- It guarantees a good quality software product by following certain
rules and quality standard guidelines while developing.
Level-3: Defined –
At this level, documentation of the standard guidelines and procedures takes place.
It is a well-defined integrated set of project-specific software engineering and management processes.
Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
Intergroup Coordination- It consists of planned interactions between different development teams
to ensure efficient and proper fulfillment of customer needs.
Organization Process Definition- Its key focus is on the development and maintenance of the
standard development processes.
Organization Process Focus- It includes activities and practices that should be followed to improve
the process capabilities of an organization.
Training Programs- It focuses on the enhancement of knowledge and skills of the team members
including the developers and ensuring an increase in work efficiency.
Level-4: Managed –
At this stage, quantitative quality goals are set for the organization for software products as well as
software processes.
The measurements made help the organization to predict the product and process quality within some
limits defined quantitatively.
Software Quality Management- It includes the establishment of plans and strategies to develop
quantitative analysis and understanding of the product’s quality.
Quantitative Management- It focuses on controlling the project performance in a quantitative
manner.
Level-5: Optimizing –
This is the highest level of process maturity in CMM and focuses on continuous process improvement
in the organization using quantitative feedback.
Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence of
known defects.
Process Change Management- Its focus is on the continuous improvement of the organization’s
software processes to improve productivity, quality, and cycle time for the software product.
Technology Change Management- It consists of the identification and use of new technologies to
improve product quality and decrease product development time.
Defect Prevention- It focuses on the identification of causes of defects and prevents them from
recurring in future projects by improving project-defined processes.
Software Quality Metrics
In software engineering, software measurement is done using software metrics, which measure various characteristics of a software product.
Software Quality Assurance (SQA) assures the quality of the software; SQA activities are applied continuously throughout the software process. Software quality is measured using software quality metrics.
A number of metrics are available for measuring software quality. Among them, the following are the most useful and most essential for software quality measurement:
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of code used for the software project
development. Maintaining the software code quality by writing Bug-free and semantically correct code is
very important for good software project development. In code quality both Quantitative metrics like the
number of lines, complexity, functions, rate of bugs generation, etc, and Qualitative metrics like
readability, code clarity, efficiency, maintainability, etc are measured.
2. Reliability – Reliability metrics express the reliability of software under different conditions. They check whether the software is able to provide the correct service at the right time. Reliability can be measured using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of the software. Each
software has been developed for some specific purposes. Performance metrics measure the performance
of the software by determining whether the software is fulfilling the user requirements or not, by
analyzing how much time and resource it is utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly. Every software product is used by end users, so it is important to measure whether the end users are satisfied with the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the system or software works correctly, without errors, and satisfies the user. Correctness gives the degree to which each function performs as specified.
6. Maintainability – Every software product requires maintenance and upgrades. Maintenance is an expensive and time-consuming process, so if a software product is easy to maintain we can say its quality is up to the mark. Maintainability metrics include the time required to adapt to new features/functionality, Mean Time to Change (MTTC), performance in changing environments, etc.
7. Integrity – Software integrity concerns how easily the software can be integrated with other required software, which increases its functionality, and how well integration with unauthorized software is controlled, since such integration increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of cyber terrorism, security is an essential part of every software product. Security assures that there are no unauthorized changes and no exposure to cyberattacks while the software product is in use by the end user.
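As a hedged illustration of how two of the metrics above might be quantified, the sketch below computes a simple defect density (a common code-quality measure) and a Mean Time to Change (MTTC) figure; the project figures and variable names are illustrative assumptions, not data from these notes.

# Illustrative project data (assumed values).
defects_found = 18                              # defects reported against a release
size_kloc = 12.5                                # release size in thousands of lines of code
change_durations_hours = [4.0, 9.5, 2.0, 6.5]   # time taken for each change request

# Code quality: defect density = defects per KLOC.
defect_density = defects_found / size_kloc

# Maintainability: Mean Time to Change (MTTC) = average time to implement a change.
mttc = sum(change_durations_hours) / len(change_durations_hours)

print(f"Defect density = {defect_density:.2f} defects/KLOC")
print(f"MTTC = {mttc:.1f} hours")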
Computer-Aided Software Engineering (CASE)
A CASE (Computer-Aided Software Engineering) tool is a generic term used to denote any form of automated support for software engineering. In a more restrictive sense, a CASE tool means any tool used to automate some activity associated with software development.
Several CASE tools are available. Some of these CASE tools assist in phase-related tasks such as specification, structured analysis, design, coding, testing, etc., while others assist in non-phase activities such as project management and configuration management.
Reasons for using CASE tools:
The primary reasons for using a CASE tool are:
to increase productivity
to help produce better quality software at a lower cost
CASE environment:
Although individual CASE tools are useful, the true power of a toolset is realized only when this set of tools is integrated into a common framework or environment. CASE tools are characterized by the stage or stages of the software development life cycle on which they focus. Since different tools covering different stages share common information, they need to be integrated through some central repository so that they have a consistent view of the information associated with the software development artifacts. This central repository is usually a data dictionary containing the definitions of all composite and elementary data items.
Through the central repository, all the CASE tools in a CASE environment share common information among themselves. Thus, a CASE environment facilitates the automation of step-wise methodologies for software development. A schematic illustration of a CASE environment is shown in the diagram below:
Note: A CASE environment is different from a programming environment.
A CASE environment facilitates the automation of step-wise methodologies for software development. In contrast, a programming environment is an integrated collection of tools that supports only the coding phase of software development.
Computer-aided software engineering (CASE) is the implementation of computer-facilitated tools and methods in software development. CASE is used to ensure high-quality and defect-free software.
CASE ensures a check-pointed and disciplined approach and helps designers, developers, testers,
managers and others to see the project milestones during development.
CASE can also help as a warehouse for documents related to projects, like business plans, requirements
and design specifications. One of the major advantages of using CASE is the delivery of the final
product, which is more likely to meet real-world requirements as it ensures that customers remain part of
the process.
CASE covers a wide set of labor-saving tools that are used in software development. It provides a framework for organizing projects and helps enhance productivity. There was more interest in the concept of CASE tools years ago, but less so today, as the tools have morphed into different functions, often in reaction to software developers' needs. The concept of CASE also received a heavy dose of criticism after its release.
CASE Tools:
The essential idea of CASE tools is that built-in programs can help to analyze developing systems in order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became part of the software lexicon, and big companies like IBM were using these kinds of tools to help create software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different
stages and milestones in a software development life cycle.
Types of CASE Tools:
1. Diagramming Tools:
It helps in diagrammatic and graphical representations of the data and system processes. It represents
system elements, control flow and data flow among different software components and system
structure in a pictorial form.
For example, Flow Chart Maker tool for making state-of-the-art flowcharts.
2. Analysis Tools:
These focus on inconsistent or incorrect specifications involved in the diagrams and data flow. They help in collecting requirements and automatically check for any irregularity or imprecision in the diagrams and data flows.
3. Central Repository:
It provides a single point of storage for data, diagrams, reports and documents related to project management.
4. Documentation Generators:
These help in generating user and technical documentation as per standards, creating documents for both technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
5. Code Generators:
These aid in the automatic generation of code, including definitions, with the help of the designs, documents and diagrams.
Advantages of the CASE approach:
As special emphasis is placed on redesign as well as testing, the servicing cost of a product over its expected lifetime is considerably reduced.
The overall quality of the product is improved as an organized approach is undertaken during the
process of development.
Chances to meet real-world requirements are more likely and easier with a computer-aided software
engineering approach.
CASE indirectly provides an organization with a competitive advantage by helping ensure the
development of high-quality products.
Software Maintenance
Software Maintenance is the process of modifying a software product after it has been delivered to the
customer. The main purpose of software maintenance is to modify and update software applications after
delivery to correct faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
Correct faults.
Improve the design.
Implement enhancements.
Interface with other systems.
Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
Migrate legacy software.
Retire software.
Challenges in Software Maintenance:
The various challenges in software maintenance are given below:
The typical lifetime of a software product is considered to be ten to fifteen years, but software maintenance is open-ended and may continue for decades, making it very expensive.
Older software, which was intended to work on slow machines with less memory and storage capacity, cannot compete with newer, enhanced software running on modern hardware.
Changes are frequently left undocumented, which may cause more conflicts in the future.
As technology advances, it becomes costly to maintain old software.
Changes that are made can easily harm the original structure of the software, making subsequent changes difficult.
Categories of Software Maintenance –
Maintenance can be divided into the following:
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed
while the system is in use, or to enhance the performance of the system.
2. Adaptive maintenance:
This includes modifications and updations when the customers need the product to run on new
platforms, on new operating systems, or when they need the product to interface with new hardware
and software.
3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change
different types of functionalities of the system according to the customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future problems in the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.
Reverse Engineering –
Reverse Engineering is the process of extracting knowledge or design information from anything man-made and reproducing it based on the extracted information. It is also called back engineering.
Software Reverse Engineering –
Software Reverse Engineering is the process of recovering the design and the requirements specification of a product from an analysis of its code. Reverse engineering is becoming important, since many existing software products lack proper documentation, are highly unstructured, or have structures that have degraded through a series of maintenance efforts.
Why Reverse Engineering?
Providing proper system documentation.
Recovery of lost information.
Assisting with maintenance.
Facility of software reuse.
Discovering unexpected flaws or faults.
Uses of Software Reverse Engineering –
Software reverse engineering is used in software design: it enables the developer or programmer to add new features to existing software with or without knowing the source code.
Reverse engineering is also useful in software testing: it helps testers study virus and other malware code.
Reverse Engineering
Software Reverse Engineering is a process of recovering the design, requirement specifications and
functions of a product from an analysis of its code. It builds a program database and generates
information from this.
The purpose of reverse engineering is to facilitate the maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy system.
Reverse Engineering Goals:
1. Collection Information:
This step focuses on collecting all possible information (i.e., source design documents etc.) about the
software.
8. Generate documentation:
Finally, in this step, the complete documentation including SRS, design document, history, overview,
etc. are recorded for future use.
Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify and update a software application after delivery to correct errors and to improve performance. Software is a model of the real world; when the real world changes, the software requires alteration wherever possible.
o Correct errors
o Change in user requirement with time
o Changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.
Thus the maintenance is required to ensure that the system continues to satisfy user requirements.
1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless of whether they occur in the specifications, design, coding, testing, or documentation.
2. Adaptive Maintenance
It contains modifying the software to match changes in the ever-changing environment.
3. Preventive Maintenance
It is the process by which we prevent our system from being obsolete. It involves the concept of
reengineering & reverse engineering in which an old system with old technology is re-engineered using
new technology. This maintenance prevents the system from dying out.
4. Perfective Maintenance
It means improving processing efficiency or performance, or restructuring the software to enhance changeability. This may include enhancement of existing system functionality, improvement in computational efficiency, etc.
Software maintenance is a very broad activity that takes place once the software is in operation. It optimizes the software performance by reducing errors, eliminating useless lines of code and applying improved development practices. It can take 1-2 years to build a software system, while its maintenance and modification can be an ongoing activity for 15-20 years.
Categories of Software Maintenance:
1. Corrective Maintenance
2. Adaptive Maintenance
3. Perfective Maintenance
4. Preventive Maintenance
The cost of system maintenance represents a large proportion of the budget of most organizations that use software systems. More than 65% of the software lifecycle cost is expended on maintenance activities.
The cost of software maintenance can be controlled by postponing maintenance work, but this will cause the following intangible costs:
Customer dissatisfaction when requests for repair or modification cannot be addressed in a timely
manner.
Reduction in overall software quality as a result of changes that introduce hidden errors in maintained
software.
Software maintenance cost factors:
The key factors that distinguish development and maintenance and which lead to higher maintenance
cost are divided into two subcategories:
1. Non-Technical factors
2. Technical factors
Non-Technical factors:
The Non-Technical factors include:
1. Application Domain
2. Staff stability
3. Program lifetime
4. Dependence on External Environment
5. Hardware stability
Technical factors:
Technical factors include the following:
1. module independence
2. Programming language
3. Programming style
4. Program validation and testing
5. Documentation
6. Configuration management techniques
Effort expended on maintenance may be divided into productive activities (for example, analysis and evaluation, design modification, and coding). The following expression provides a model of the maintenance effort:
M = P + K(C - D)
where,
M: Total effort expended on maintenance.
P: Productive effort.
K: An empirical constant.
C: A measure of complexity that can be attributed to a lack of good design and documentation.
D: A measure of the degree of familiarity with the software.
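A minimal sketch evaluating the maintenance-effort expression above exactly as written; the numeric values are illustrative assumptions (some textbooks present a similar maintenance-effort model with K raised to the power (C - D) rather than multiplied by it).

def maintenance_effort(p: float, k: float, c: float, d: float) -> float:
    """Total maintenance effort M = P + K(C - D), as given in the notes.

    p: productive effort, k: empirical constant,
    c: complexity attributed to a lack of good design and documentation,
    d: degree of familiarity with the software.
    """
    return p + k * (c - d)

# Illustrative values: greater familiarity (d) reduces the penalty driven by complexity (c).
print(maintenance_effort(p=60.0, k=5.0, c=8.0, d=3.0))   # 60 + 5 * (8 - 3) = 85.0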