Unit III Software Design
INTRODUCTION
The design phase of software development deals with transforming the customer requirements, as described in the SRS document, into a form implementable using a programming language. To be easily implementable in a conventional programming language, the items listed later in this section must be designed during the design phase.
Thus, the goal of the design phase is to take the SRS document as the input and to produce these items by the completion of the design phase.
Software design is defined as a process of problem solving and planning for a software solution.
OR
Software design in software engineering is the process of planning how a software system will work. It
includes the concepts and documentation that result from this process.
Software design is the process of transforming user requirements into a suitable form, which helps
the programmer in software coding and implementation. During the software design phase, the
design document is produced, based on the customer requirements as documented in the SRS
document. Hence, this phase aims to transform the SRS document into a design document.
The following items are designed and documented during the design phase:
1. The different modules required.
2. The control relationships among the modules.
3. The interfaces among the different modules.
4. The data structures of the individual modules.
5. The algorithms required to implement the individual modules.
Objectives of Software Design
1. Correctness: A good design should be correct i.e., it should correctly implement all the
functionalities of the system.
2. Efficiency: A good software design should address the resources, time, and cost optimization
issues.
3. Flexibility: A good software design should have the ability to adapt and accommodate changes
easily. This includes designing the software in a way that allows modifications, enhancements,
and scalability without requiring significant rework or causing major disruptions to the existing
functionality.
4. Understandability: A good design should be easily understandable; it should be modular, with
all the modules arranged in layers.
5. Completeness: The design should have all the components like data structures,
modules, external interfaces, etc.
6. Maintainability: A good software design aims to create a system that is easy to understand,
modify, and maintain over time.
Software Design Concepts
A concept is a principal idea that comes to mind in order to understand something. A software
design concept is simply the idea or principle behind the design. It describes how you plan to
solve the problem of designing software, and the logic or thinking behind how you will design it.
It allows the software engineer to create a model of the system or product to be developed.
Software design concepts provide a supporting and essential structure or model for developing the
right software. There are many software design concepts, and some of them are given below.
Software designs and problems are often complex, and many aspects of a software system must be modeled.
Static Model
A static model describes the static structure of the system being modeled, which is considered less likely to
change than the functions of the system. In particular, a static model defines the classes in the system, the
attributes of the classes, the relationships between classes, and the operations of each class.
Dynamic Model
Dynamic modeling is used to represent the behavior of the static constituents of the software; here, static
constituents include classes, objects, their relationships, and interfaces. Dynamic modeling is also used to
represent the interaction, workflow, and different states of the static constituents of the software.
The software design process can be divided into the following three levels of phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design:
Interface design is the specification of the interaction between a system and its environment. This phase
proceeds at a high level of abstraction with respect to the inner workings of the system; i.e., during interface
design, the internals of the system are completely ignored and the system is treated as a black box. Attention is
focused on the dialogue between the target system and the users, devices, and other systems with which it
interacts. The design problem statement produced during the problem analysis step should identify the people,
other systems, and devices, which are collectively called agents. Interface design includes:
Precise description of events in the environment, or messages from agents, to which the system must
respond.
Precise description of the events or messages that the system must produce.
Specification of the data, and the formats of the data, coming into and going out of the system.
Specification of the ordering and timing relationships between incoming events or messages and
outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their responsibilities, properties,
interfaces, and the relationships and interactions between them. In architectural design, the overall structure of
the system is chosen, but the internal details of major components are ignored.
Issues in architectural design include:
Gross decomposition of the system into major components.
Allocation of functional responsibilities to components.
Component Interfaces
Component scaling and performance properties, resource consumption properties, reliability properties,
and so forth.
Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the internals of
the major components is ignored until the last phase of the design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system components: their properties,
relationships, processing, and often their algorithms and data structures.
The detailed design may include:
Decomposition of major system components into program units.
Allocation of functional responsibilities to units.
User interfaces
Unit states and state changes
Data and control interaction between units
Data packaging and implementation, including issues of scope and visibility of program elements
Algorithms and data structures
Structure Charts
A structure chart is a chart derived from the data flow diagram (DFD). It represents the system in more detail
than the DFD: it breaks down the entire system into the lowest-level functional modules and describes the
functions and sub-functions of each module of the system in greater detail. A structure chart represents the
hierarchical structure of modules; at each layer, a specific task is performed.
Here are the symbols used in the construction of structure charts:
1. Module: It represents a process, subroutine, or task. A control module branches to more than one
sub-module. Library modules are reusable and can be invoked from any module.
2. Conditional Call: It represents that a control module can select any of its sub-modules on the
basis of some condition.
3. Control Flow: It represents the flow of control between the modules. It is represented by a
directed arrow with a filled circle at the end.
4. Physical Storage: It is where all the information is stored.
Example
Structure chart for an Email server
Types of Structure Chart
1. Transform-Centered Structure: This type of structure chart is designed for systems that
receive an input which is transformed by a sequence of operations, each carried out by one
module.
2. Transaction-Centered Structure: This type of structure chart describes a system that
processes a number of different types of transactions.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules may
work as basic constructs for the entire software. Designers tend to design modules such that they can be
executed and/or compiled separately and independently.
Modular design naturally follows the rules of the 'divide and conquer' problem-solving strategy, and
many other benefits are attached to the modular design of software as well.
Advantage of modularization:
Smaller components are easier to maintain
Program can be divided based on functional aspects
Desired level of abstraction can be brought in the program
Components with high cohesion can be re-used
Concurrent execution can be made possible
Desirable from a security aspect
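As a hypothetical illustration of modularization, the small statistics program below is divided into independent, single-purpose modules (plain functions here; all names are invented for this sketch). Each piece can be developed, tested, and reused separately, which is exactly the advantage the list above describes.

```python
# Hypothetical sketch: one program divided into discrete, independent modules.

def read_scores(raw):
    """Input module: parse a comma-separated string into numbers."""
    return [float(tok) for tok in raw.split(",") if tok.strip()]

def mean(values):
    """Computation module: arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def format_report(avg):
    """Output module: format the result for display."""
    return f"Average score: {avg:.2f}"

def main(raw):
    """Control module: coordinates the sub-modules."""
    return format_report(mean(read_scores(raw)))
```

Because each function has a single task, `mean` can be reused by other programs, and `read_scores` can be replaced (say, to read from a file) without touching the rest.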
Modularity
Modularity refers to the division of software into separate modules, which are differently named and
addressed and are later integrated to obtain the completely functional software. It is the one
property that allows a program to be intellectually manageable. Single large programs are difficult to
understand and read due to the large number of reference variables, control paths, global variables, etc.
The desirable properties of a modular system are:
Each module is a well-defined system that can be used with other applications.
Each module has single specified objectives.
Modules can be separately compiled and saved in the library.
Modules should be easier to use than to build.
Modules are simpler from outside than inside.
Advantages and Disadvantages of Modularity
Advantages of Modularity
There are several advantages of Modularity
It allows large programs to be written by several or different people
It encourages the creation of commonly used routines to be placed in the library and used by
other programs.
It simplifies the overlay procedure of loading a large program into main storage.
It provides more checkpoints to measure progress.
It provides a framework for complete testing, more accessible to test
It produces well-designed and more readable programs.
Disadvantages of Modularity
There are several disadvantages of Modularity
Execution time may be, but is not certainly, longer
Storage size may be, but is not certainly, increased
Compilation and loading time may be longer
Inter-module communication problems may be increased
More linkage required, run-time may be longer, more source lines must be written, and more
documentation has to be done
Modular Design
Modular design reduces design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. The following are different aspects of
modular design:
1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules. Independence
is important because it makes implementation more accessible and faster. The independent
modules are easier to maintain, test, and reduce error propagation and can be reused in other
programs as well. Thus, functional independence is a good design feature which ensures
software quality.
2. Information hiding: The principle of information hiding suggests that modules should be
characterized by design decisions that each hides from the others; in other words, modules
should be specified such that the data included within a module is inaccessible to other modules
that have no need for such information.
Functional independence is measured using two criteria:
Cohesion: It measures the relative function strength of a module.
Coupling: It measures the relative interdependence among modules.
The use of information hiding as a design criterion for modular systems provides the most significant benefits
when modifications are required during testing and, later, during software maintenance. This is because, as
most data and procedures are hidden from other parts of the software, inadvertent errors introduced
during modifications are less likely to propagate to other locations within the software.
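A minimal sketch of information hiding (the `Stack` class and its names are illustrative, not from the text): the internal data lives behind a small public interface, so other modules never touch the underlying list directly.

```python
class Stack:
    """Illustrative module whose internal data is hidden behind a narrow
    interface; client modules use only push/pop/is_empty."""

    def __init__(self):
        # Leading underscore: internal detail, not part of the interface.
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

Because clients depend only on `push`/`pop`, the hidden list could later be replaced by another structure without the change propagating to other modules, which is the maintenance benefit described above.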
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop
and, later, to change. Structured design methods help developers deal with the size and complexity of
programs. Analysts generate instructions for the developers about how code should be composed and how
pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components and then
decomposing them into their more detailed sub-components. We know that a system is composed of more
than one sub-system and contains a number of components. Further, these sub-systems and
components may have their own sets of sub-systems and components, creating a hierarchical structure in the
system. Top-down design takes the whole software system as one entity and then decomposes it to achieve
more than one sub-system or component based on some characteristics. Each subsystem or component is
then treated as a system and decomposed further. This process keeps on running until the lowest level of
system in the top-down hierarchy is achieved. Top-down design starts with a generalized model of system
and keeps on defining the more specific part of it. When all components are composed the whole system
comes into existence. Top-down design is more suitable when the software solution needs to be designed
from scratch and specific details are unknown.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the
hierarchy, as shown in the figure. This approach is suitable in the case of an existing system. The bottom-up
design model starts with the most specific and basic components. It proceeds by composing higher levels of
components from basic or lower-level components, and it keeps creating higher-level components until the
desired system evolves as one single component. With each higher level, the amount of abstraction
increases. The bottom-up strategy is more suitable when a system needs to be created from some existing
system, where the basic primitives can be used in the newer system.
Both, top-down and bottom-up approaches are not practical individually. Instead, a good combination of
both is used.
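The top-down strategy can be sketched in code: a hypothetical payroll system is first expressed as one top-level function, which is then decomposed into more specific sub-functions (all names and the flat tax rate are invented for illustration).

```python
# Top-down sketch: the whole system first, then progressively more
# specific sub-components.

def gross_pay(hours, rate):
    """Lowest-level component: pay before deductions."""
    return hours * rate

def tax(amount, tax_rate=0.2):
    """Lowest-level component: assumed flat tax for the sketch."""
    return amount * tax_rate

def net_pay(hours, rate):
    """Intermediate component, composed from lower-level ones."""
    g = gross_pay(hours, rate)
    return g - tax(g)

def run_payroll(employees):
    """Top-level 'whole system' entity: maps names to net pay."""
    return {name: net_pay(h, r) for name, (h, r) in employees.items()}
```

Read top to bottom in reverse order of definition, `run_payroll` is the generalized model, and each call below it is one level of the decomposition hierarchy.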
Module Coupling
Coupling measures the degree of interdependence among modules. The types of coupling are:
1. No Direct Coupling: In this case, modules are subordinates of different modules; therefore,
there is no direct coupling between them.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or entire
structure to another module, they are said to be stamp coupled. For example, passing structure
variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in the other.
5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication to
external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content coupling exists between two modules if they share
code, e.g., a branch from one module into another module.
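The difference between the loosest and one of the tightest forms above can be made concrete with a small hypothetical sketch: the first pair of functions exchanges only the data they need (data coupling), while the second pair communicates through a shared global item (common coupling).

```python
# Data coupling: modules exchange only the elementary data they need,
# passed explicitly as parameters.
def compute_area(width, height):
    return width * height

def describe(area):
    return f"area = {area}"

# Common coupling: modules communicate through a shared global item,
# so a change to its meaning affects every module that touches it.
shared_total = 0  # global data item shared by the two functions below

def add_sale(amount):
    global shared_total
    shared_total += amount       # writes the global

def report_sales():
    return f"total = {shared_total}"  # reads the global
```

The data-coupled pair can be tested and reused independently; the common-coupled pair cannot, because both depend on `shared_total`.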
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or
"low cohesion."
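A hedged sketch of the distinction (all names are invented): the first group of functions is highly cohesive, since every element concerns calendar arithmetic, while the last function mixes unrelated responsibilities and would be described as having low cohesion.

```python
# High cohesion: all elements serve one purpose (calendar logic).
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_february(year):
    return 29 if is_leap_year(year) else 28

# Low cohesion (anti-example): unrelated tasks grouped together.
def misc_utilities(year, text):
    """Mixes calendar logic with string formatting: hard to name,
    hard to reuse, a sign of weak (coincidental) cohesion."""
    return days_in_february(year), text.upper()
```

A reader can state the purpose of the cohesive functions in one phrase each; the fact that `misc_utilities` needs an "and" to describe is the usual warning sign.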
Pseudo code
Pseudocode is a set of statements whose aim is to describe a process without obscuring its function with the
syntax and semantics of a particular programming language. We have already seen some examples of
pseudocode in the previous section, where it was introduced to present the principle of procedures. In general,
the syntax used for pseudocode is arbitrary and user-dependent, and typically reflects the programming
language the user is most familiar with. The key to using pseudocode is to convey the process clearly and
accurately, in a way that real code in some programming language cannot necessarily do as well; otherwise,
one might as well write out the code directly, as many programmers do! The following example illustrates the
use of pseudocode.
Example 1: Pseudocode to read in a number from the keyboard, square it, and write out the result to the VDU.
output("Input number")
input(number)
number = number * number
output("Number squared is", number)
Here, the data I/O is assumed to be controlled by the functions output and input.
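For comparison, here is the same pseudocode translated into real code (Python, as one possible target language). The value is passed as a parameter rather than read interactively so the logic can be checked directly.

```python
def square_number(number):
    """Direct translation of the pseudocode above: square the value
    and produce the output line."""
    number = number * number
    return f"Number squared is {number}"

# In an interactive program the value would come from input() and the
# result would go to print(); returning a string keeps the sketch testable.
```

Note how the pseudocode's `output`/`input` statements map onto the language's own I/O once a concrete language is chosen; the process itself is unchanged.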
Function Oriented Design
Function oriented design is an approach to software design where the design is decomposed into a set of
interacting units where each unit has a clearly defined function.
Function Oriented Design Strategies:
Function Oriented Design Strategies are as follows:
Data Flow Diagram (DFD): A data flow diagram (DFD) maps out the flow of information for any process or
system. It uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data inputs,
outputs, storage points and the routes between each destination.
Data Dictionaries:
Data dictionaries are simply repositories that store information about all data items defined in DFDs. At the
requirements stage, data dictionaries contain data items. A data dictionary entry includes the name of the item,
aliases (other names for the item), description/purpose, related data items, range of values, and data structure
definition/form.
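A data dictionary entry can be represented concretely. The sketch below records the fields listed above for a single hypothetical data item (`customer_id` and all its details are invented for illustration), plus a small lookup that also matches aliases.

```python
# Hypothetical data dictionary entry for one item appearing in a DFD.
customer_id_entry = {
    "name": "customer_id",
    "aliases": ["cust_id", "client_id"],
    "description": "Unique identifier assigned to each customer",
    "related_items": ["customer_name", "order_id"],
    "range_of_values": "1 to 999999",
    "structure": "6-digit integer",
}

def lookup(entries, name):
    """Find an entry by its name or by any of its aliases."""
    for entry in entries:
        if name == entry["name"] or name in entry["aliases"]:
            return entry
    return None
```

Resolving aliases this way is the practical point of recording them: every document and DFD can be checked against one authoritative definition per item.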
Structure Charts: It is the hierarchical representation of system which partitions the system into black boxes
(functionality is known to users but inner details are unknown). Components are read from top to bottom and
left to right. When a module calls another, it views the called module as black box, passing required parameters
and receiving results.
Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The state
is distributed among the objects, and each object handles its state data. For example, in a Library Automation
Software, each library representative may be a separate object with its data and functions to operate on these
data. The tasks defined for one object cannot refer to or change the data of other objects. Objects have their
own internal data, which represents their state. Similar objects form a class; in other words, each object is a
member of some class. Classes may inherit features from a superclass.
The different terms related to object design are:
1. Objects: All entities involved in the solution design are known as objects. For example, persons, banks,
companies, and users are considered objects. Every entity has some attributes associated with it and has
some methods to perform on those attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines
all the attributes that an object can have and the methods that represent the functionality of the object.
3. Messages: Objects communicate by message passing. A message consists of the identity of the target object,
the name of the requested operation, and any other information needed to perform the function. Messages are
often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal
of the irrelevant and the amplification of the essential.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and operations are linked
to a single unit. Encapsulation not only bundles essential information of an object together but also restricts
access to the data and methods from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower or subclasses
can import, implement, and re-use allowed variables and functions from their immediate superclasses. This
property of OOD is called an inheritance. This makes it easier to define a specific class and to create generalized
classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing similar tasks but varying in
arguments can be assigned the same name. This is known as polymorphism, and it allows a single interface to
perform functions for different types. Depending upon how the service is invoked, the respective portion
of the code gets executed.
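The library example mentioned above can be sketched to show several of these terms at once: a class and its objects, encapsulated state, inheritance from a superclass, and polymorphism. All names are invented for illustration.

```python
class LibraryItem:
    """Class: a generalized description; each instance is an object."""

    def __init__(self, title):
        self._title = title          # encapsulated internal state

    def loan_period_days(self):
        """An operation invoked by message passing (a method call)."""
        return 21

class Journal(LibraryItem):          # inheritance from the superclass
    def loan_period_days(self):      # polymorphism: same message name,
        return 7                     # different behavior for this class

def due_in(item):
    """Works with any LibraryItem through the common interface; the
    respective version of loan_period_days gets executed."""
    return item.loan_period_days()
```

`due_in` never checks what kind of item it received; the single interface dispatches to the right code, which is the practical payoff of polymorphism described above.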
Agility:
Agile software engineering has a set of development guidelines.
The mechanism encourages customer satisfaction and early, incremental delivery of software.
-> Software engineers and other project stakeholders (managers, customers, end users) work
together on an agile team.
-> Agile software engineering represents a reasonable alternative to conventional software
engineering.
Agility means effective (rapid and adaptive) response to change and effective communication among all
stakeholders. It means drawing the customer onto the team and organizing the team so that it is in control
of the work performed. The agile process uses lightweight, people-based rather than plan-based methods.
The agile process forces the development team to focus on the software itself rather than on design and
documentation.
The agile process believes in iterative methods.
The aim of the agile process is to deliver a working model of the software quickly to the customer. For
example, Extreme Programming (XP) is the best known agile process.
AGILITY AND THE COST OF CHANGE
The cost of change increases as the project progresses (Figure 3.1, solid black curve). It is relatively easy to
accommodate a change when a software team is gathering requirements (early in a project). Suppose, however,
that the team is in the middle of validation testing (something that occurs relatively late in the project) and a
stakeholder requests a major functional change. The change requires a modification to the architectural
design of the software, the design and construction of three new components, modifications to another five
components, the design of new tests, and so on. Costs and time increase quickly. A well-designed agile process
"flattens" the cost-of-change curve (Figure 3.1, shaded solid curve), allowing a software team to accommodate
changes late in a software project without dramatic cost and time impact, because the agile process
encompasses incremental delivery.
Any agile software process is characterized in a manner that addresses a number of key assumptions about
the majority of software projects:
1. It is difficult to predict(estimate for future change) in advance which software requirements will persist
and which will change.
2. For many types of software, design and construction are performed simultaneously. It is difficult to predict
how much design is necessary before construction is used to prove the design.
3. Analysis, design, construction, and testing are not as predictable (from a planning point of view) as we
might like.
Agility Principles
12 agility principles for those who want to achieve agility:
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of months.
4. Stakeholders and software engineers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust
them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is
face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development.
9. Continuous attention to technical excellence and good design enhances (increases) agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self– organizing teams.
12. At regular intervals, the team reflects on how to become more effective and then adjusts its behavior
accordingly.
Not every agile process model applies these 12 principles with equal weight, and some models choose to
ignore (or at least downplay) the importance of one or more of the principles.
The Politics of Agile Development:
There is debate about the benefits and applicability of agile software development as opposed to more
conventional software engineering processes (which produce documents rather than working products).
Even within the agile community, there are many proposed process models, each with a different approach
to agility.
Human Factors
Agile software development stresses the importance of "people factors". A number of different talents must
exist among the people on an agile team and within the team itself:
Competence: "Competence" encompasses talent, specific software-related skills, and overall knowledge of
the process.
Common focus: Members of the agile team may perform different tasks, but all should be focused on one
goal: to deliver a working software increment to the customer within the time promised.
Collaboration: Team members must collaborate with one another and with all other stakeholders to
complete their task.
Decision-making ability: Any good software team (including agile teams) must be allowed the freedom to
control its own destiny.
Fuzzy problem-solving ability: The agile team will continually have to deal with ambiguities(confusions or
doubts).
Mutual trust and respect: The agile team exhibits trust and respect among its members.
Self-organization:
(1) the agile team organizes itself for the work to be done
(2) the team organizes the process to best accommodate its local environment
(3) the team organizes the work schedule to best achieve delivery of the software increment.
Extreme Programming:
Extreme Programming (XP) is the most widely used approach to agile software development. XP was
proposed by Kent Beck during the late 1980s.
XP Values
Beck defines a set of five values: communication, simplicity, feedback, courage, and respect. Each of these
values is used in XP activities, actions, and tasks.
To achieve simplicity, XP restricts developers to designing only for immediate needs, rather than future needs.
Feedback is derived from three sources: the implemented software itself, the customer, and other software
team members.
Courage (discipline): An agile XP team must have the discipline (courage) to design for today, recognizing that
future requirements may change dramatically.
The agile team inculcates respect among its members and between other stakeholders and team members.
The XP(Extreme Programming) Process
The XP process has four framework activities: planning, design, coding, and testing.
Planning: The planning activity begins with listening, a requirements-gathering activity.
Listening leads to the creation of a set of “stories” (also called user stories) that describe required output,
features, and functionality for software to be built.
Each story is written by the customer and is placed on an index card. The customer assigns a value (i.e., a
priority) to the story based on the overall business value of the feature or function.
Members of the XP team then assess each story and assign a cost— measured in development weeks—
to it.
If a story is estimated to require more than three development weeks, the customer is asked to split it into
smaller stories, and the assignment of value and cost occurs again. It is important to note that new stories
can be written at any time.
The stories with the highest value are moved up in the schedule and implemented first.
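The planning steps above (customer-assigned values, team-assigned costs in development weeks, splitting oversized stories, implementing the highest-value stories first) can be sketched as follows. The stories and numbers are invented for illustration.

```python
# Hypothetical XP planning sketch: each story carries a customer-assigned
# value and a team-assigned cost in development weeks.
stories = [
    {"story": "search catalog", "value": 8, "cost_weeks": 2},
    {"story": "export report",  "value": 3, "cost_weeks": 1},
    {"story": "user login",     "value": 9, "cost_weeks": 2},
]

def needs_split(story, limit_weeks=3):
    """A story costing more than ~3 development weeks is split into
    smaller stories and re-estimated."""
    return story["cost_weeks"] > limit_weeks

def schedule(stories):
    """Highest-value stories are moved up and implemented first."""
    return sorted(stories, key=lambda s: s["value"], reverse=True)
```

In real XP the value judgment stays with the customer and the cost estimate with the team; the code only makes the ordering rule explicit.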
Testing:
As the individual unit tests are organized into a "universal testing suite", integration and
validation testing of the system can occur on a daily basis.
XP acceptance tests, also called customer tests, are specified by the customer and focus on
overall system features and functionality. Acceptance tests are derived from user stories that
have been implemented as part of a software release.
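A minimal sketch of how individual unit tests are collected into a suite that can run daily, using Python's built-in unittest module. The function under test (`apply_discount`) is invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    """Individual unit tests for one small piece of functionality."""

    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

# The individual tests are organized into one suite so that integration
# and validation testing can run automatically each day.
suite = unittest.TestLoader().loadTestsFromTestCase(TestDiscount)
```

Acceptance (customer) tests would sit at a higher level, exercising whole user stories rather than single functions, but the mechanism of collecting them into a runnable suite is the same.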
Industrial XP
Joshua Kerievsky describes Industrial Extreme Programming (IXP). IXP has six new practices that are designed
to help the XP process work successfully for projects within a large organization.
Readiness assessment: The organization should conduct a readiness assessment to ascertain that:
(1) Development environment exists to support IXP(Industrial Extreme Programming).
(2) the team will be populated by the proper set of stakeholders.
(3) the organization has a distinct quality program and supports continuous improvement.
(4) the organizational culture will support the new values of an agile Team.
Project community.
Classic XP suggests that the right people be used to populate the agile team to ensure success.
The people on the team must be well-trained, adaptable, and skilled.
The community includes the team members and the customers who are central to the success of the project.
Project chartering:
Chartering examines the context of the project to determine how it complements, extends, or replaces
existing systems or processes.
Test-driven management.
Test-driven management establishes a series of measurable “destinations” and then defines mechanisms
for determining whether or not these destinations have been reached.
Retrospectives.
An IXP team conducts a specialized technical review, called a retrospective, after a software increment
is delivered.
The review examines “issues, events, and lessons-learned” across a software increment and/or the entire
software release.
Continuous learning.
Learning is a vital part of continuous process improvement, so members of the XP team are encouraged to
learn new methods and techniques that can lead to a higher-quality product.
The XP Debate
Requirements volatility: Because the customer is an active member of the XP team, changes to requirements
are requested informally. As a consequence, the scope of the project can change, and earlier work may have
to be modified to accommodate current needs.
Conflicting customer needs: Many projects have multiple customers, each with his or her own set of needs.
• Requirements are expressed informally: User stories and acceptance tests are the only explicit
manifestation of requirements in XP. A more formal specification is often needed to remove inconsistencies
and errors before the system is built.
• Lack of formal design: When complex systems are built, the design must establish the overall structure of
the software if the software is to exhibit quality.
Other Agile Process Models:
The most widely used of all agile process model is Extreme Programming (XP). But many other agile process
models have been proposed and are in use across the industry. Among the most common are:
• Adaptive Software Development (ASD)
• Scrum
• Dynamic Systems Development Method (DSDM)
• Crystal
• Feature Driven Development (FDD)
• Lean Software Development (LSD)
• Agile Modeling (AM)
• Agile Unified Process (AUP)
Adaptive Software Development (ASD)
Adaptive Software Development (ASD) has been proposed by Jim Highsmith as a technique for building
complex systems. ASD focuses on human collaboration and team self-organization.
The ASD "life cycle" (Figure 3.3) has three phases: speculation, collaboration, and learning.
Speculation: The project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning
uses project initiation information (the customer's statement, project constraints such as delivery dates
or user descriptions, and basic requirements) to define the set of release cycles (software increments)
that will be required for the project.
Scrum
Scrum meetings are short (typically 15 minutes) meetings held daily by the Scrum team. Three
key questions are asked and answered by all team members:
• What did you do since the last team meeting?
• What obstacles are you encountering?
• What do you plan to accomplish by the next team meeting?
A team leader, called a Scrum master, leads the meeting and assesses the responses from each person.
Demos deliver the software increment to the customer so that functionality that has been implemented can be
demonstrated and evaluated by the customer.
Dynamic Systems Development Method (DSDM)
The Dynamic Systems Development Method (DSDM) is an agile software development
approach.
DSDM follows a Pareto-like principle: 80 percent of an application can be delivered in 20 percent of the time it
would take to deliver the complete (100 percent) application.
DSDM is an iterative software process in which each iteration follows the 80 percent rule.
That is, only enough work is required for each increment to facilitate movement to the
next increment.
The remaining detail can be completed later when more business requirements are
known or changes have been requested and accommodated.
The activities of DSDM are:
Feasibility study—establishes the basic business requirements and constraints associated
with the application to be built and then assesses whether the application is a viable
candidate for the DSDM process.
Business study—establishes the functional and information requirements that will allow
the application to provide business value.
Functional model iteration—produces a set of incremental prototypes that demonstrate
functionality for the customer.
The intent during this iterative cycle is to gather additional requirements by
eliciting feedback from users as they exercise the prototype.
Design and build iteration: revisits prototypes built during functional model iteration to ensure that each has been
engineered in a manner that will provide operational business value for end users.
Implementation—places the latest software increment (an “operationalized” prototype) into the operational
environment. It should be noted that (1) the increment may not be 100 percent complete or (2) changes may be
requested as the increment is put into place.
Crystal
Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods.
Cockburn characterizes software development as “a resource limited, cooperative game of invention and communication, with a
primary goal of delivering useful, working software and a secondary goal of setting up for the next game”.
Cockburn and Highsmith have defined a set of methodologies, each with core elements that are common
to all, and roles, process patterns, work products, and practices that are unique to each.
The Crystal family is actually a set of example agile processes that have been proven effective for different
types of projects.
Feature Driven Development (FDD)
Feature Driven Development (FDD) was originally conceived by Peter Coad and his colleagues as a process
model for object-oriented software engineering.
A feature “is a client-valued function that can be implemented in two weeks or less”.
The definition of features provides the following benefits:
• Because features are small blocks of deliverable functionality, users can describe them more easily.
• Since a feature is the FDD deliverable software increment, the team develops operational features
every two weeks.
• Because features are small, their design and code representations are easier to review.
FDD suggests the following template for defining a feature:
<action> the <result> <by|for|of|to> a(n) <object>
where <object> is “a person, place, or thing.”
Examples of features for an e-commerce application might be:
1) Add the product to shopping cart
2) Store the shipping-information for the customer
A feature set groups related features into business-related categories.
For example, making a product sale is a feature set that would encompass the features noted earlier and others.
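As a small illustration of the template above, the sketch below composes feature descriptions and groups them into a feature set; the helper function and the feature-set name are invented for this example, not part of FDD itself.

```python
# Hypothetical helper: build a feature description following the FDD template
#   <action> the <result> <by|for|of|to> a(n) <object>
def feature_name(action: str, result: str, preposition: str, obj: str) -> str:
    return f"{action} the {result} {preposition} a(n) {obj}"

# A feature set groups related, client-valued features into a business category.
feature_sets = {
    "making a product sale": [
        feature_name("Add", "product", "to", "shopping cart"),
        feature_name("Store", "shipping-information", "for", "customer"),
    ]
}

print(feature_sets["making a product sale"][0])
# -> Add the product to a(n) shopping cart
```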
The FDD approach defines five “collaborating” framework activities as shown in Figure 3.5.
It is essential for developers, their managers, and other stakeholders to understand project status.
To that end, FDD defines six milestones during the design and implementation of a feature: “design
walkthrough, design, design inspection, code, code inspection, promote to build”.
Lean Software Development (LSD)
Lean Software Development (LSD) has adapted the principles of lean manufacturing to the
world of software engineering.
LSD process can be summarized as eliminate waste, build quality in, create knowledge, defer
commitment, deliver fast, respect people, and optimize the whole.
For example, eliminating waste within the context of an agile software project can mean:
(1) adding no extraneous features or functions,
(2) assessing the cost and schedule impact of any newly requested requirement,
(3) removing any superfluous process steps,
(4) establishing mechanisms to improve the way team members find information, and
(5) ensuring that testing finds as many errors as possible.
Agile Modeling (AM)
Agile Modeling (AM) suggests a wide array of “core” and “supplementary” modeling principles, among them:
Model with a purpose. A developer who uses AM should have a specific goal (e.g., to
communicate information to the customer or to help better understand some aspect of the software)
in mind before creating the model.
Use multiple models. There are many different models and notations that can be used to
describe software.
Each model should present a different aspect of the system and only those models that provide
value to their intended audience should be used.
Travel light. As software engineering work proceeds, keep only those models that will provide
long-term value .
Content is more important than representation. Modeling should impart information to its
intended audience.
Know the models and the tools you use to create them. Understand the strengths and
weaknesses of each model and the tools that are used to create it.
Adapt locally. The modeling approach should be adapted to the needs of the agile team.
Agile Unified Process (AUP)
The Agile Unified Process (AUP) adopts a “serial in the large” and “iterative in the small”
philosophy for building computer-based systems.
Each AUP iteration addresses the following activities:
•Modeling. UML representations of the business and problem domains are created.
• Implementation. Models are translated into source code.
•Testing. Like XP, the team designs and executes a series of tests to uncover errors and ensure
that the source code meets its requirements.
• Deployment. Focuses on the delivery of a software increment and the acquisition of feedback
from end users.
• Configuration and project management. Configuration management addresses change
management, risk management, and the control of any persistent work products that are
produced by the team.
Project management tracks and controls the progress of the team and coordinates team
activities.
• Environment management. Environment management coordinates a process infrastructure
that includes standards, tools, and other support technology available to the team.
A tool set for agile development
Automated software tools (e.g., design tools) should be viewed as a minor supplement to the
team's activities, not as critical to the team's success.
Collaborative and communication “tools” are generally low technology and incorporate any
mechanism (e.g., whiteboards, poster sheets) that provides information and coordination among
agile developers.
Other agile tools are used to optimize the environment in which the agile team works, improve
the team culture through social interactions, provide physical devices (e.g., electronic whiteboards), and
enhance the process (e.g., pair programming or time-boxing).
Contents:
Function-Oriented Software Design: Overview of SA/SD Methodology, Structured Analysis, Structured
Design, Detailed Design, Design Review.
● The structured analysis activity transforms the SRS document into a graphic model called
the DFD model. During structured analysis, functional decomposition of the system is
achieved. The purpose of structured analysis is to capture the detailed structure of the
system as perceived by the user
● During structured design, all functions identified during structured analysis are mapped
to a module structure. This module structure is also called the high level design or the
software architecture for the given problem. This is represented using a structure chart.
The purpose of structured design is to define the structure of the solution that is suitable
for implementation
● The high-level design stage is normally followed by a detailed design stage.
Structured Analysis
During structured analysis, the major processing tasks (high-level functions) of the system are
analyzed, and the data flow among these processing tasks is represented graphically.
The structured analysis technique is based on the following underlying principles:
● Top-down decomposition approach.
● Application of the divide and conquer principle. Through this, each high-level function is
independently decomposed into detailed functions.
● Graphical representation of the analysis results using data flow diagrams (DFDs).
What is DFD?
● A DFD is a hierarchical graphical model of a system that shows the different processing
activities or functions that the system performs and the data interchange among those
functions.
● Though extremely simple, it is a very powerful tool to tackle the complexity of industry
standard problems.
● A DFD model only represents the data flow aspects and does not show the sequence of
execution of the different functions and the conditions based on which a function may or
may not be executed.
● In DFD terminology, each function is called a process or a bubble. Each function can be viewed as a
processing station (or process) that consumes some input data and produces some
output data.
● DFD is an elegant modeling technique not only to represent the results of structured
analysis but also useful for several other applications.
● Starting with a set of high-level functions that a system performs, a DFD model represents
the subfunctions performed by the functions using a hierarchy of diagrams.
Data dictionary
● Every DFD model of a system must be accompanied by a data dictionary. A data dictionary
lists all data items that appear in a DFD model.
● A data dictionary lists the purpose of all data items and the definition of all composite data
items in terms of their component data items.
● It includes all data flows and the contents of all data stores appearing on all the DFDs in a
DFD model.
● For the smallest units of data items, the data dictionary simply lists their name and their
type.
● Composite data items are expressed in terms of the component data items using certain
operators.
● The dictionary plays a very important role in any software development process, especially
for the following reasons:
○ A data dictionary provides a standard terminology for all relevant data for use by the
developers working in a project.
○ The data dictionary helps the developers to determine the definition of different
data structures in terms of their component elements while implementing the
design.
○ The data dictionary helps to perform impact analysis. That is, it is possible to
determine the effect of some data on various processing activities and vice versa.
● For large systems, the data dictionary can become extremely complex and voluminous.
● Computer-aided software engineering (CASE) tools come handy to overcome this problem.
● Most CASE tools usually capture the data items appearing in a DFD as the DFD is drawn,
and automatically generate the data dictionary.
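As a minimal sketch of the ideas above, a data dictionary can be represented as a mapping from data-item names to either a primitive type or a definition in terms of component items, using the usual structured-analysis operators ('+' for composition, '[a|b]' for selection, '{x}' for iteration). The item names and the `components` helper are invented for illustration; a CASE tool would maintain such a dictionary automatically.

```python
import re

# Illustrative data dictionary for a DFD model (names are invented).
data_dictionary = {
    # primitive data items: name -> type
    "item-name": "string",
    "unit-price": "number",
    "quantity": "integer",
    # composite data items: name -> definition in terms of component items
    "order-item": "item-name + quantity",
    "order": "customer-id + {order-item}",
    "payment-mode": "[cash | card]",
}

def components(item: str) -> list[str]:
    """Rudimentary impact analysis: list the items a composite item is built from."""
    definition = data_dictionary[item]
    return re.findall(r"[A-Za-z][A-Za-z-]*", definition)

print(components("order"))  # -> ['customer-id', 'order-item']
```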
Self Study:
Data Definition
Level 1 DFD:
● The level 1 DFD usually contains three to seven bubbles.
● The system is represented as performing three to seven important functions.
● To develop the level 1 DFD, examine the high-level functional requirements in the SRS
document.
● If there are three to seven high level functional requirements, then each of these can be
directly represented as a bubble in the level 1 DFD.
● Next, examine the input data to these functions and the data output by these functions as
documented in the SRS document and represent them appropriately in the diagram.
Decomposition:
● Each bubble in the DFD represents a function performed by the system.
● The bubbles are decomposed into subfunctions at the successive levels of the DFD model.
Decomposition of a bubble is also known as factoring or exploding a bubble.
● Each bubble at any level of DFD is usually decomposed to anything from three to seven
bubbles. Decomposition of a bubble should be carried on until a level is reached at which
the function of the bubble can be described using a simple algorithm.
RMS Calculator
Trading House Automation System
Structured Design
● The aim of structured design is to transform the results of the structured analysis into a
structure chart.
● A structure chart represents the software architecture.
● The various modules making up the system, the module dependency (i.e. which module
calls which other modules), and the parameters that are passed among the different
modules.
● The structure chart representation can be easily implemented using some programming
language.
● The basic building blocks using which structure charts are designed are as following:
○ Rectangular boxes: A rectangular box represents a module.
○ Module invocation arrows: An arrow connecting two modules implies that during program
execution control is passed from one module to the other in the direction of the connecting
arrow.
○ Data flow arrows: These are small arrows appearing alongside the module invocation
arrows. They represent the fact that the named data passes from one module to the other in
the direction of the arrow.
○ Library modules: A library module is usually represented by a rectangle with double
edges. Libraries comprise the frequently called modules.
○ Selection: The diamond symbol represents the fact that one module of several modules
connected with the diamond symbol is invoked depending on the outcome of the
condition attached with the diamond symbol.
○ Repetition: A loop around the control flow arrows denotes that the respective modules
are invoked repeatedly.
● In any structure chart, there should be one and only one module at the top, called the root.
● There should be at most one control relationship between any two modules in the structure
chart. This means that if module A invokes module B, module B cannot invoke module A.
Transform Analysis:
● Transform analysis identifies the primary functional components (modules) and the input
and output data for these components.
● The first step in transform analysis is to divide the DFD into three types of parts:
○ Input (afferent branch)
○ Processing (central transform)
○ Output (efferent branch)
● In the next step of transform analysis, the structure chart is derived by drawing one
functional component each for the central transform, the afferent and efferent branches.
● These are drawn below a root module, which would invoke these modules.
● In the third step of transform analysis, the structure chart is refined by adding sub functions
required by each of the high-level functional components.
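As a sketch, the three steps above can be applied to the RMS calculator example mentioned earlier: reading the numbers forms the afferent branch, computing the RMS value is the central transform, and displaying the result is the efferent branch. The module names below are illustrative, not prescribed by the method.

```python
import math

# Afferent branch: reads and validates the input data.
def read_numbers() -> list[float]:
    return [float(tok) for tok in input("Enter numbers: ").split()]

# Central transform: the core computation of the system.
def compute_rms(numbers: list[float]) -> float:
    return math.sqrt(sum(x * x for x in numbers) / len(numbers))

# Efferent branch: formats and outputs the result.
def display_result(rms: float) -> None:
    print(f"RMS = {rms:.3f}")

# Root module: invokes the afferent, central, and efferent modules in turn,
# passing data along the arrows of the structure chart.
def main() -> None:
    numbers = read_numbers()
    display_result(compute_rms(numbers))
```

In step three, each of these three modules would be further refined; for example, the afferent module could be decomposed into separate read and validate submodules.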
Transaction Analysis:
● Transaction analysis is an alternative to transform analysis and is useful while
designing transaction processing programs.
● A transaction allows the user to perform some specific type of work by using the software.
● For example, ‘issue book’, ‘return book’, ‘query book’, etc., are transactions.
● As in transform analysis, first all data entering into the DFD need to be identified.
● In a transaction-driven system, different data items may pass through different
computation paths through the DFD.
● This is in contrast to a transform centered system where each data item entering the DFD
goes through the same processing steps.
● Each different way in which input data is processed is a transaction. For each
identified transaction, trace the input data to the output.
● All the traversed bubbles belong to the transaction.
● These bubbles should be mapped to the same module on the structure chart.
● In the structure chart, draw a root module and below this module draw each identified
transaction as a module.
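The resulting transaction-centered structure can be sketched as follows, using the library transactions named above; the handler bodies and return strings are invented placeholders for the real processing in each branch.

```python
# Each transaction identified in the DFD becomes a module below the root.
def issue_book(book_id: str) -> str:
    return f"issued {book_id}"

def return_book(book_id: str) -> str:
    return f"returned {book_id}"

def query_book(book_id: str) -> str:
    return f"status of {book_id}: available"

# Root module: examines the transaction type carried by the input data and
# routes the data down the matching branch of the structure chart.
TRANSACTIONS = {
    "issue": issue_book,
    "return": return_book,
    "query": query_book,
}

def root(transaction_type: str, book_id: str) -> str:
    return TRANSACTIONS[transaction_type](book_id)

print(root("issue", "B-101"))  # -> issued B-101
```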
Detailed Design:
● During detailed design the pseudo code description of the processing and the different data
structures are designed for the different modules of the structure chart.
● These are usually described in the form of module specifications (MSPEC). MSPEC is
usually written using structured English.
● The MSPEC for the non-leaf modules describe the different conditions under which the
responsibilities are delegated to the lower level modules.
● The MSPEC for the leaf-level modules should describe in algorithmic form how the
primitive processing steps are carried out.
● To develop the MSPEC of a module, it is usually necessary to refer to the DFD model and
the SRS document to determine the functionality of the module.
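As an illustration, a leaf-level MSPEC written in structured English maps almost directly to code; the module, marks thresholds, and grades below are invented for this example, not taken from any particular system.

```python
# MSPEC (structured English), invented for illustration:
#   MODULE compute-grade
#     IF marks >= 80 THEN grade is 'A'
#     ELSE IF marks >= 60 THEN grade is 'B'
#     ELSE grade is 'C'
def compute_grade(marks: int) -> str:
    if marks >= 80:
        return "A"
    elif marks >= 60:
        return "B"
    return "C"

print(compute_grade(75))  # -> B
```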
Design Review:
● After a design is complete, the design is required to be reviewed.
● The review team usually consists of members with design, implementation, testing, and
maintenance perspectives.
● The review team checks the design documents especially for the following aspects:
○ Traceability: Whether each bubble of the DFD can be traced to some module in the
structure chart and vice versa.
○ Correctness: Whether all the algorithms and data structures of the detailed design are
correct.
○ Maintainability: Whether the design can be easily maintained in future.
○ Implementation: Whether the design can be easily and efficiently implemented.
● After the points raised by the reviewers are addressed by the designers, the design
document becomes ready for implementation.
User Interface Design:
● The user interface portion of a software product is responsible for all interactions with the
user. Almost every software product has a user interface.
● The user interface part of any software product is of direct concern to the end-users.
● No wonder then that many users often judge a software product based on its user interface.
● An interface that is difficult to use leads to higher levels of user errors and, ultimately, to
user dissatisfaction.
● Sufficient care and attention should be paid to the design of the user interface of
any software product.
● Systematic development of the user interface is also important.
● Development of a good user interface usually takes a significant portion of the total system
development effort.
● For many interactive applications, as much as 50 per cent of the total development effort is
spent on developing the user interface part.
● Unless the user interface is designed and developed in a systematic manner, the total
effort required to develop the interface will increase tremendously.
Characteristics of a good user interface:
● Speed of learning:
○ A good user interface should be easy to learn.
○ A good user interface should not require its users to memorize commands.
○ Neither should the user be asked to remember information from one screen to another.
■ Use of metaphors and intuitive command names: Speed of learning an interface is greatly
facilitated if commands are based on some day-to-day real-life examples or some physical
objects with which the users are familiar. The abstractions of real-life objects
or concepts used in user interface design are called metaphors.
■ Consistency: Once a user learns about a command, he should be able to use the
similar commands in different circumstances for carrying out similar actions.
■ Component-based interface: Users can learn an interface faster if the interaction style
of the interface is very similar to that of other applications with which the
user is already familiar.
○ The speed of learning characteristic of a user interface can be determined by measuring
the training time and practice that users require before they can effectively use the
software.
● Speed of use:
○ Speed of use of a user interface is determined by the time and user effort necessary to
initiate and execute different commands.
○ It indicates how fast the users can perform their intended tasks.
○ The time and user effort necessary to initiate and execute different commands should
be minimal.
○ This can be achieved through careful design of the interface.
○ The most frequently used commands should have the smallest length or be available at
the top of a menu.
● Speed of recall:
○ Once users learn how to use an interface, the speed with which they can recall the
command issue procedure should be maximized.
○ This characteristic is very important for intermittent users.
○ Speed of recall is improved if the interface is based on some metaphors, symbolic
command issue procedures, and intuitive command names.
● Error prevention:
○ A good user interface should minimize the scope of committing errors while initiating
different commands.
○ The error rate of an interface can be easily determined by monitoring the errors
committed by an average user while using the interface.
○ The interface should prevent the user from entering wrong values.
● Aesthetic and attractive:
○ A good user interface should be attractive to use.
○ An attractive user interface catches user attention and fancy.
○ In this respect, graphics-based user interfaces have a definite advantage over
text-based interfaces.
● Consistency:
○ The commands supported by a user interface should be consistent.
○ The basic purpose of consistency is to allow users to generalize the knowledge about
aspects of the interface from one part to another.
● Feedback:
○ A good user interface must provide feedback to various user actions.
○ Especially, if any user request takes more than a few seconds to process, the user
should be informed about the state of the processing of his request.
○ In the absence of any response from the computer for a long time, a novice user might
even start recovery/shutdown procedures in panic.
Basic Concepts:
User Guidance and Online Help:
1. Online help system:
● Users may seek help about the operation of the software at any time while using the software.
This is provided by the online help system.
2. Guidance messages:
● The guidance messages should be carefully designed to prompt the user about the
next actions he might pursue, the current status of the system, the progress so far
made in processing his last command, etc.
● A good guidance system should have different levels of sophistication.
3. Error messages:
● Error messages are generated by a system either when the user commits some error
or when some errors are encountered by the system during processing due to
exceptional conditions, such as running out of memory or a broken communication link.
● Users do not like error messages that are either ambiguous or too general, such as
“invalid input” or “system error”. Error messages should be polite.
Menu-Based Interfaces:
● An important advantage of a menu-based interface over a command language-based
interface is that a menu-based interface does not require the users to remember the exact
syntax of the commands.
● A menu-based interface is based on recognition of the command names, rather than
recollection.
● In a menu-based interface the typing effort is minimal.
● A major challenge in the design of a menu-based interface is to structure a large number of
menu choices into manageable forms.
● Techniques available to structure a large number of menu items:
○ Scrolling menu:
■ When the full choice list is large and cannot be displayed within the
menu area, scrolling of the menu items is required.
■ In a scrolling menu all the commands should be highly correlated, so that the user
can easily locate a command that he needs.
■ This is important since the user cannot see all the commands at any one time.
○ Walking menu:
■ Walking menu is very commonly used to structure a large collection of menu
items.
■ When a menu item is selected, it causes further menu items to be displayed
adjacent to it in a sub-menu.
■ A walking menu can successfully be used to structure commands only if there are
tens rather than hundreds of choices.
○ Hierarchical menu:
■ This type of menu is suitable for small screens with limited display area such as
that in mobile phones.
■ The menu items are organized in a hierarchy or tree structure.
■ Selecting a menu item causes the current menu display to be replaced by an
appropriate sub-menu.
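The hierarchical menu idea can be sketched as a tree in which selecting an inner item replaces the current display with its sub-menu; the menu contents below are invented for this example.

```python
# A hierarchical menu modeled as a nested mapping (tree). Leaf items map to
# None; inner items map to their sub-menus.
menu = {
    "File": {"New": None, "Open": None, "Save": None},
    "Edit": {"Cut": None, "Copy": None, "Paste": None},
}

def current_display(path: list[str]) -> list[str]:
    """Return the items shown after selecting each item along `path`."""
    node = menu
    for item in path:
        node = node[item]  # descend: the sub-menu replaces the current menu
    return list(node)

print(current_display([]))        # -> ['File', 'Edit']
print(current_display(["File"]))  # -> ['New', 'Open', 'Save']
```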
Component-based GUI development:
1. Modularity: Breaking down the GUI into smaller, manageable components that can be
developed and tested independently.
2. Separation of Concerns: Each component manages its own state and behavior, while
the overall application logic is kept separate.
This approach enhances maintainability, scalability, and makes UI development faster and
more organized.
Metaphor selection:
● The first place one should look for while trying to identify the candidate metaphors is the
set of parallels to objects, tasks, and terminologies of the use cases.
● If no obvious metaphors can be found, then the designer can fall back on the metaphors of
the physical world of concrete objects.
● The appropriateness of each candidate metaphor should be tested by restating the objects
and tasks of the user interface model in terms of the metaphor.